News

Anthropic Claude Leak: User Vulgar Language Tracked, Logged as "Negative"

Anthropic's Claude Code AI assistant suffered a significant source code leak earlier this week, prompting the company to issue thousands of copyright takedown requests in an attempt to scrub the leaked code from the web.

The leak allowed developers to reverse engineer aspects of the popular chatbot, raising concerns that it could hand a substantial advantage to Anthropic's competitors. The leaked code also offered insights into upcoming or experimental initiatives, including unreleased AI models and an experimental "buddy" feature: a "Tamagotchi"-like assistant that reportedly "sit[s] beside your input box and reacts to your coding."

Perhaps most notably, code snippets revealed that Anthropic is actively tracking users' use of vulgar language. Developer Rahat Chowdhury tweeted, "Claude Code has a regex that detects 'wtf,' 'ffs,' 'piece of s***,' 'f*** you,' 'this sucks' etc." He added, "It doesn’t change behavior... it just silently logs is_negative: true to analytics." Chowdhury concluded, "Anthropic is tracking how often you rage at your AI."
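The actual leaked regex and analytics call are not public; based solely on Chowdhury's description, the mechanism would amount to something like this minimal Python sketch (the phrase list is abbreviated and the function name is hypothetical):

```python
import re

# Hypothetical reconstruction of the detection described in the tweet.
# The real pattern, phrase list, and analytics pipeline are not public.
NEGATIVE_PATTERN = re.compile(
    r"\b(wtf|ffs|this sucks)\b",  # abbreviated; the leak reportedly lists more phrases
    re.IGNORECASE,
)

def classify_sentiment(prompt: str) -> dict:
    """Silently tag a prompt for analytics; the assistant's behavior is unchanged."""
    return {"is_negative": bool(NEGATIVE_PATTERN.search(prompt))}

print(classify_sentiment("wtf is this error"))   # {'is_negative': True}
print(classify_sentiment("looks good, thanks"))  # {'is_negative': False}
```

As described, the flag is write-only telemetry: it feeds a dashboard rather than altering the model's responses.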

Boris Cherny, creator of Claude Code, confirmed the practice, stating, "This is one of the signals we use to figure out if people are having a good experience." He further revealed, "We put it on a dashboard and call it the 'f***s' chart." Chowdhury also discovered an internal "full mood classification for their insights but its employee only" [sic]. He described a system where "When an Anthropic employee gets frustrated, it pops up a prompt asking them to share their transcript, basically 'hey you seem upset, wanna file a bug report?'"

Beyond the leak offering a fascinating glimpse into Anthropic's development process, Cherny has been active on social media addressing the aftermath of his employer's blunder. He attributed the leak to "human error" in a Wednesday tweet, explaining, "Our deploy process has a few manual steps, and we didn’t do one of the steps correctly. We have landed a few improvements and are digging in to add more sanity checks."

Cherny also argued that more AI is the solution to prevent future incidents. He wrote, "Like with any other incident, the counter-intuitive answer is to solve the problem by finding ways to go faster, rather than introducing more process. In this case more automation and [C]laude checking the results." The developer clarified that "no one was fired" following the leak, categorizing it as "an honest mistake."

With the information now public, developers continue to scrutinize the leaked data. Student developer Sigrid Jin's recreated source code repository on GitHub, named "Claw Code" in homage to the open-source AI agent OpenClaw, has been widely forked.
