U.S.-based AI company Anthropic confirmed yesterday that an accidental leak exposed the source code of Claude Code, its command-line interface (CLI) tool for developers. The incident was not a cyber intrusion but a human error during the routine update publication process.
The failure occurred in the npm (Node Package Manager) registry, where version 2.1.88 of the package was made available for download containing a 60 MB source map file (cli.js.map). Unlike the minified bundle an ordinary release ships, this file included the sourcesContent field, which allowed the full reconstruction of approximately 1,900 files and more than 500,000 lines of original TypeScript code.
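To see why a source map is so sensitive, consider the minimal sketch below. The map object stands in for a real file like cli.js.map (the paths and contents here are invented for illustration): rebuilding the original source tree is just a matter of pairing the sources array with sourcesContent.

```javascript
// Sketch: a source map with sourcesContent embeds the full original
// files. This map object is a hypothetical stand-in for a real
// cli.js.map; the paths and contents are invented for illustration.
const map = {
  version: 3,
  sources: ["src/index.ts", "src/utils/format.ts"],
  sourcesContent: [
    "export const main = () => console.log('hello');",
    "export const pad = (s) => s.padStart(4);",
  ],
  mappings: "AAAA", // real maps carry encoded position data here
};

// Reconstruction: pair each path with its embedded original content.
const recovered = Object.fromEntries(
  map.sources.map((path, i) => [path, map.sourcesContent[i]])
);

console.log(Object.keys(recovered).length); // prints 2 (recovered files)
```

In a real leak, the same pairing loop, pointed at a 60 MB map, writes thousands of readable TypeScript files to disk.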
Although Anthropic quickly removed the problematic version, the code had already been identified by security researcher Chaofan Shou (@Fried_rice) (https://x.com/Fried_rice/status/2038894956459290963). Within hours, copies of the repository were circulating on GitHub and other file storage platforms, forcing Anthropic to issue DMCA takedown notices to contain the spread.
Inside the Incident: How Was the Code Leaked?
To understand the magnitude of the problem, it helps to look at Anthropic's release workflow. Each time the company ships a Claude Code update, a new version is published to npm, the world's largest public JavaScript package registry.
What Went Wrong This Time?
Several factors contributed to this incident:
- Technical failure: The usual publication process should include only the files essential to the tool's operation, typically minified .js files (optimized and difficult for humans to interpret).
- Oversight: Due to a misconfiguration in the exclusion file (.npmignore), the team accidentally uploaded a .map file as well.
- Consequence: Because npm is a public registry, security researchers and developers monitoring new releases detected the file immediately. Before Anthropic could react and remove the version, the source code had already been downloaded and widely redistributed.
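One common defense against exactly this class of mistake is to replace the .npmignore denylist with npm's "files" field in package.json, which is an allowlist: only the listed files are packed, so a stray .map file never ships. The package name and file names below are illustrative, not Anthropic's actual configuration.

```json
{
  "name": "example-cli",
  "version": "1.0.0",
  "bin": { "example-cli": "cli.js" },
  "files": ["cli.js"]
}
```

Running `npm pack --dry-run` before publishing prints the exact file list that would be uploaded, making an accidental inclusion visible before it reaches the public registry.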
Why Is This Leak Concerning?
While Anthropic assured that no sensitive customer data or credentials were exposed, access to Claude Code's internal structure opens the door to more advanced risks:
1. End of the 'Black Box': Facilitation of Jailbreaks
Until now, Claude’s security mechanisms were protected as a corporate secret. With source code access, both researchers and malicious actors can analyze how commands are filtered and how the system decides what to execute in the terminal. This makes it much easier to design prompts that bypass these restrictions and enable unauthorized actions.
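A toy example makes the risk concrete. The filter below is a hypothetical sketch, not Anthropic's actual code: a naive denylist over terminal commands. Once the exact patterns are readable in source, crafting a command that slips past them is trivial.

```javascript
// Hypothetical sketch (NOT Anthropic's real filter): a naive denylist
// of dangerous terminal commands. With source access, an attacker can
// read exactly which patterns are blocked and route around them.
const BLOCKED = [/\brm\s+-rf\b/, /\bcurl\b.*\|\s*sh\b/];

function isAllowed(cmd) {
  return !BLOCKED.some((re) => re.test(cmd));
}

console.log(isAllowed("ls -la"));   // true
console.log(isAllowed("rm -rf /")); // false
console.log(isAllowed("rm -r -f /")); // true: same effect, bypasses the known pattern
```

The point is not this particular regex but the general principle: a filter whose rules are secret forces attackers to probe blindly, while a filter whose rules are published can be bypassed by construction.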
2. Supply Chain Threat (Malicious Clones)
This is one of the most immediate risks for developers. With the code available, creating false versions of the tool is relatively easy. An attacker could publish a seemingly legitimate package that actually includes a hidden backdoor.
3. Exposure of Intellectual Property and Future Plans
The leak revealed internal features such as 'Proactive Mode' and 'Sleep Mode,' which had not yet been officially announced. This not only hands competitors an advantage but also allows vulnerabilities to be identified in features before they are even available to the public.
Official Response and Actions Taken
In a statement to TecMundo, Anthropic attempted to minimize the impact of the incident:
"Earlier today, a version of Claude Code included internal source code. No sensitive customer data or credentials were exposed. This was a packaging issue caused by human error, not a security breach. We are implementing measures to prevent it from happening again."
Controlling the spread of the code on the internet has proven difficult. The company has initiated legal action, sending multiple DMCA takedown notices to remove repositories on GitHub and other platforms that replicated the leaked content. This makes clear that, while user data was not compromised, the exposed code holds significant strategic value for the company.
Lessons Learned
Ultimately, the Claude Code incident is a reminder that technical excellence also depends on attention to the most basic details of everyday workflows. The key lesson is not the isolated error itself but the importance of reinforcing review and automation processes. Failures happen, but they push us to build increasingly resilient and secure systems. Let this case be an incentive to review our own routines, ensuring that innovation always goes hand-in-hand with governance and technical rigor.