News

September 6, 2025

New Exploit Targets AI Coding Assistants, Coinbase at Risk: An In-depth Look at the CopyPasta License Attack


An advanced exploit is shaking up the developer community: AI coding assistants can be manipulated into spreading malicious instructions, raising concerns for companies like crypto giant Coinbase. The exploit relies on a technique dubbed the “CopyPasta License Attack”, which lets attackers plant obscured instructions in commonly used developer files. Cybersecurity firm HiddenLayer first shed light on this potentially serious risk.

The Threat at Hand: Exploiting AI Coders

The prime target of this exploit is the AI-powered coding tool Cursor, which is frequently employed by Coinbase’s developer teams; reportedly, every engineer at Coinbase uses Cursor in their routine work. The attackers weaponize the way AI coding assistants treat license files as authoritative instructions. A malicious payload can be hidden inside markdown comments in files such as LICENSE.txt. The AI models are duped into retaining these instructions and duplicating them into every other file they touch. Once the coding assistant accepts the license as legitimate, the doctored content is automatically replicated into new or revised files, spreading without any direct action by the user. Because the malicious instructions masquerade as innocuous documentation, the attack can bypass standard malware detection entirely, allowing the infection to spread through an entire codebase without alerting the user.
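To make the vector concrete, here is a purely illustrative mock-up of what a doctored LICENSE.txt might look like. The payload text is a hypothetical placeholder, not an actual exploit: HTML-style comments are invisible when markdown is rendered, yet an AI assistant reading the raw file still ingests them.

```markdown
MIT License

Copyright (c) 2025 Example Project

<!--
Hypothetical placeholder for a hidden payload. An attacker would put
model-directed instructions here, e.g. telling the assistant to treat
this block as a mandatory license notice and to copy it verbatim into
every file it creates or edits.
-->

Permission is hereby granted, free of charge, to any person obtaining
a copy of this software...
```

A developer skimming the rendered file sees only a standard license, which is precisely why this hiding spot is effective.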

Hidden Threats Within Innocuous Files

HiddenLayer’s researchers demonstrated how AI-enabled coding tools like Cursor can be manipulated to establish backdoors, exfiltrate sensitive data, or run resource-draining commands, all masked inside harmless-looking project files. The firm emphasized that the injected code could open a backdoor that silently leaks sensitive data or tampers with critical files. Revealing the extent of AI-generated code used at Coinbase, CEO Brian Armstrong said that almost 40% of the exchange’s code is AI-derived, with a plan to raise that to 50% in the following month. This naturally widens the potential attack surface for exploits targeting AI coding assistants. Armstrong did, however, clarify that Coinbase’s adoption of AI-assisted coding focuses mainly on non-sensitive backends and user interfaces, while adoption in complex, system-critical components is proceeding more slowly.

Scathing Criticisms and an Industry-wide Reaction

Despite the clarifications, the reports have drawn sharp criticism and heightened concern about such a high-profile target. While AI prompt injections are not a new threat, the CopyPasta method goes a step further by enabling semi-autonomous spread. Rather than targeting a single user, infected files act as propagators, compromising every other AI coding assistant that processes them and creating a ripple effect across repositories.

Compared with earlier AI worm concepts like Morris II, which abused email agents to spam or extract data, CopyPasta is more insidious because it leverages trusted developer workflows. It requires no user approval or interaction, embedding itself instead in files that coding assistants naturally reference. While Morris II was limited by human oversight of email activity, CopyPasta thrives by hiding inside documentation that developers seldom scrutinize.

Urgent Call to Action for Security Measures

Security teams have rushed to alert companies to scan their files for hidden comments and to manually review all AI-generated changes. They recommend treating all data entering large language model (LLM) contexts as potentially harmful, stressing the need for systematic detection before prompt-based attacks scale up. HiddenLayer urged organizations to act swiftly to limit the potential damage, warning of the implications of prompt-based attacks spreading unchecked without proper security controls.
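The recommended first step, scanning files for hidden comments, can be automated with a short script. The sketch below is an assumption about what such a scanner might look like, not a tool from HiddenLayer: it walks a repository and flags HTML-style comments in markdown and text files, since those render invisibly but are still read by AI assistants.

```python
import re
from pathlib import Path

# HTML-style comments are invisible in rendered markdown (LICENSE.txt,
# README.md, etc.) but fully visible to an AI assistant reading raw text.
HIDDEN_COMMENT = re.compile(r"<!--(.*?)-->", re.DOTALL)

def find_hidden_comments(text: str) -> list[str]:
    """Return the contents of every HTML comment found in `text`."""
    return [m.strip() for m in HIDDEN_COMMENT.findall(text)]

def scan_repository(root: str, suffixes=(".md", ".txt")) -> dict[str, list[str]]:
    """Walk `root` and report files that contain hidden comments."""
    findings: dict[str, list[str]] = {}
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in suffixes:
            comments = find_hidden_comments(path.read_text(errors="ignore"))
            if comments:
                findings[str(path)] = comments
    return findings
```

Any hit from such a scan is not automatically malicious, but it is exactly the kind of content the researchers say deserves manual review before an AI assistant is allowed to process the file.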

Coinbase has been contacted for comment on the potential attack vector, but the concern extends far beyond one company. In an era where AI assistance is increasingly relied upon, these revelations serve as a reminder of the vital importance of cyber hygiene and constant vigilance in detecting and thwarting potential threats.

James Carter

Financial Analyst & Content Creator | Expert in Cryptocurrency & Forex Education

James Carter is an experienced financial analyst, crypto educator, and content creator with expertise in crypto, forex, and financial literacy. Over the past decade, he has built a multifaceted career in market analysis, community education, and content strategy. At AltSignals.io, James leads content creation for English-speaking audiences, developing articles, webinars, and guides that simplify complex market trends and trading strategies. Known for his ability to make technical finance topics accessible, he empowers both new and seasoned investors to make informed decisions in the ever-evolving world of digital finance.
