Anthropic said on Tuesday that portions of the internal source code for its AI-powered coding assistant, Claude Code, were accidentally released due to “human error.” An internal-use file mistakenly included in a software update exposed an archive containing nearly 2,000 files and 500,000 lines of code. The files were swiftly copied to the developer platform GitHub. By early Wednesday, a post on X linking to the leaked code had garnered over 29 million views, and a re-engineered version of the source code rapidly became GitHub's most downloaded repository ever. Anthropic has since issued copyright takedown requests to limit the code's dissemination. Analysts examining the code discovered blueprints for a Tamagotchi-style coding assistant and an always-on AI agent, The Verge reported.
An Anthropic spokesperson confirmed, “Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed.” The spokesperson further clarified, “This was a release packaging issue caused by human error, not a security breach.” The exposed code pertained to the tool's internal architecture but did not compromise confidential data from Claude, Anthropic's foundational AI model.
Notably, elements of Claude Code's source code were already partially known, having been reverse-engineered by independent developers. An earlier iteration of the assistant also had its source code exposed in February 2025.
Claude Code has become a pivotal product for Anthropic, evidenced by its expanding paid subscriber base. TechCrunch reported last week that, according to an Anthropic spokesperson, paid subscriptions have more than doubled this year. Meanwhile, Anthropic’s Claude chatbot surged in popularity following CEO Dario Amodei’s dispute with the Pentagon, topping Apple’s chart of free apps in the US just over a month ago. Amodei had maintained a firm stance against the use of the company’s technology for mass surveillance and fully autonomous weapons.
This incident marks Anthropic's second data leak in recent weeks. Fortune previously reported on a separate breach, noting that the company had stored thousands of internal files on publicly accessible systems. This earlier exposure included a draft blog post mentioning upcoming models codenamed “Mythos” and “Capybara.”
Experts worry that these leaks may point to internal security vulnerabilities at Anthropic, a particularly troubling prospect for a company dedicated to AI safety. The exposed data could also offer competitors such as OpenAI and Google valuable insight into how Claude Code's AI system operates. The Wall Street Journal noted that the latest leak included commercially sensitive details, among them tools and instructions for configuring Anthropic's AI models to function as coding agents.
The breach comes weeks after the US government designated Anthropic a supply chain risk, a designation the company is contesting in court. Last week, a US district judge issued a temporary injunction blocking it.