Anthropic has announced Claude Security, a new defensive cybersecurity product. It is currently available in public beta for Enterprise-tier Claude users, with availability “coming soon” for Claude Team and Max-tier users.
Claude Security is a significant addition to Anthropic's cyberdefense toolkit: it gives security teams a way to “scan codebases for vulnerabilities and generate targeted patches,” powered by the Claude Opus 4.7 model.
Earlier this month, Anthropic debuted Project Glasswing, a Manhattan Project-style AI effort focused on discovering vulnerabilities in the world's open-source software infrastructure. Glasswing relies on an Anthropic model named Mythos, which is deemed too sensitive for public release but is shared with Glasswing participants. Those participants include major industry players such as Amazon Web Services, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, Nvidia, and Palo Alto Networks, uniting even direct competitors behind a common security effort.
At the core of both Project Glasswing and Claude Security lies vulnerability scanning. The vast majority of cyberattacks begin with threat actors exploiting existing vulnerabilities, so defenders who find and patch those flaws proactively shrink the attack surface available to attackers.
To illustrate the concept of a critical vulnerability, consider the plot of Star Wars: A New Hope, which revolves around Princess Leia storing the Death Star plans in R2-D2. Once the Rebels acquire those plans, they spot a fatal flaw: a single torpedo fired down an exhaust port destroys the entire station. That is one critical vulnerability; real-world codebases often harbor many, and Claude Security aims to identify them before attackers can exploit them.
In the real world, nearly everything runs on software, and software is inherently susceptible to vulnerabilities. These weaknesses not only create entry points for adversaries; they can also cause outages and user-facing bugs all on their own.
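To make the idea concrete, here is a minimal, self-contained sketch of the kind of flaw a vulnerability scanner looks for and the kind of targeted patch one might generate. This example is purely illustrative and is not from Anthropic's product; the table, function names, and payload are all invented for the demonstration.

```python
import sqlite3

# Toy database for the demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

def lookup_vulnerable(name):
    # VULNERABLE: user input is interpolated directly into the SQL string,
    # so crafted input can rewrite the query (classic SQL injection).
    return conn.execute(
        f"SELECT secret FROM users WHERE name = '{name}'"
    ).fetchall()

def lookup_patched(name):
    # PATCHED: a parameterized query treats the input strictly as data.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
print(lookup_vulnerable(payload))  # leaks every secret in the table
print(lookup_patched(payload))     # returns nothing: no user has that name
```

The patch is a one-line change, which is exactly why scanning at scale matters: the hard part is finding the flaw in a large codebase, not fixing it.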
AI-assisted vulnerability scanning has been explored for some time. Early experiments, such as pairing OpenAI's Codex with ChatGPT's Deep Research, showed that these tools could identify critical vulnerabilities even in security software, though they initially struggled to handle project-wide context. Since then, models like Codex and Claude Code have become far better at fitting large volumes of code into a single context, steadily improving automated vulnerability detection.