On March 31, 2026, security researcher Chaofan Shou uncovered an 'epic' build oversight in Anthropic's Claude Code (v2.1.88) npm package. The package size ballooned from 17MB to 31MB; the culprit was a 60MB source map file (cli.js.map) that should have been stripped before publishing. By failing to remove this debug artifact, Anthropic exposed Claude Code's complete TypeScript source code to the public—a staggering 512,000 lines across 1,903 source files (some counts say 1,906).
While this leak doesn't involve model weights or pose a direct security risk to users, it offers global AI developers an invaluable 'internal technical architecture map.' A deep dive into these 512,000 lines, encompassing API design, telemetry systems, encryption tools, and IPC protocols, leads to a startling conclusion: this is not merely an AI programming assistant, but an operating system powered by a Large Language Model (LLM).
I. System Architecture: A Runtime Platform, Not Just a CLI Tool
Most AI programming tools are understood as: user input → LLM API call → code display. However, Claude Code's design goes far beyond this, establishing itself as a full-fledged runtime platform.
Consider an analogy of hiring a remote programmer for your computer:
- Cursor: Like having the programmer next to you, requiring your explicit 'permission' before each command, which is mentally taxing.
- GitHub Copilot Agent: Provides the programmer with a new virtual machine to work freely, submitting code upon completion. Secure, but detached from your local environment.
- Claude Code: Allows the programmer direct access to your machine, but with an extremely sophisticated security system. Even sensitive commands like `rm -rf` are subject to nine layers of review.
The top-level structure of the src/ directory reveals astonishing complexity, including entry points, constants & prompts, tool definitions, runtime services, a command system, UI components, a coordinator, memory system, plugins, a Hook system, and a task system. Its four independent entry points (CLI, initialization flow, MCP mode, SDK) allow a single Agent runtime to serve multiple interaction interfaces, a hallmark of platform-oriented design. The command system not only integrates over a dozen system-level commands like /mcp, /memory, /tasks, but also dynamically loads skills, forming a vast ecosystem entry point.
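This platform shape can be sketched as a single Agent runtime sitting behind several thin entry-point adapters. The names below are illustrative stand-ins, not the actual leaked identifiers:

```typescript
// Hypothetical sketch: one runtime, multiple entry points.
// None of these symbols are the real leaked names.

interface AgentRuntime {
  run(prompt: string): string;
}

class CoreRuntime implements AgentRuntime {
  run(prompt: string): string {
    // In the real system this would drive the LLM/tool loop;
    // here we just echo, to keep the sketch self-contained.
    return `handled: ${prompt}`;
  }
}

// Each entry point is a thin adapter over the same core runtime.
const runtime = new CoreRuntime();

const entryPoints = {
  cli: (argv: string[]) => runtime.run(argv.join(" ")),
  mcp: (request: { method: string }) => runtime.run(request.method),
  sdk: (prompt: string) => runtime.run(prompt),
};

console.log(entryPoints.cli(["fix", "the", "bug"]));
```

The design payoff is that a fix to the core loop immediately benefits every interface, which is exactly what distinguishes a platform from a one-off CLI tool.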
II. Prompt Engineering: A Precision 'Dynamic Assembly Machine'
Contrary to the common belief that a System Prompt is a large, static block of text, Claude Code's prompt is dynamically assembled by the getSystemPrompt() function, resembling compiler output.
Its assembly logic is divided into two distinct parts:
1. Static Portion (System's 'Constitution')
This part remains consistent across all sessions, covering identity, system norms, task execution philosophy, guidelines for risky actions, tool usage protocols, and tone.
2. Dynamic Portion (System's 'Current Policy')
This section is dynamically injected for each conversation, including conversation guidance, memory snippets, environmental information, your CLAUDE.md project configuration, MCP plugin descriptions, and even token budget and output style.
The core secret lies in 'cache boundaries' and token economics: The source code contains a marker named SYSTEM_PROMPT_DYNAMIC_BOUNDARY. Content above this boundary is static and can be perfectly cached by the API, significantly saving token costs and boosting response speed. Content below the boundary is dynamic, ensuring each conversation is context-aware. This 'context economics'—managing prompts as a budget—is crucial for products handling massive daily requests, directly impacting their operational efficiency and cost.
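A minimal sketch of this assembly pattern follows. The function and marker names are modeled on the article's description, but the section contents are invented placeholders:

```typescript
// Sketch of cache-boundary prompt assembly. Only the idea is from the
// leak coverage; the section contents below are placeholders.

const SYSTEM_PROMPT_DYNAMIC_BOUNDARY = "<!-- dynamic-boundary -->";

const STATIC_SECTIONS = [
  "You are a coding agent.",        // identity
  "Never fabricate test results.",  // task-execution philosophy
];

interface SessionContext {
  cwd: string;
  projectConfig?: string; // e.g. the user's CLAUDE.md contents
}

function getSystemPrompt(ctx: SessionContext): string {
  // Everything above the boundary is byte-identical across sessions,
  // so the API's prompt cache can reuse it.
  const staticPart = STATIC_SECTIONS.join("\n");

  // Everything below changes per conversation and defeats caching,
  // so it is kept as small as possible.
  const dynamicPart = [
    `Working directory: ${ctx.cwd}`,
    ctx.projectConfig ?? "",
  ].filter(Boolean).join("\n");

  return [staticPart, SYSTEM_PROMPT_DYNAMIC_BOUNDARY, dynamicPart].join("\n");
}

const p1 = getSystemPrompt({ cwd: "/repo/a" });
const p2 = getSystemPrompt({ cwd: "/repo/b" });
// The cacheable prefix is identical across sessions:
console.log(p1.split(SYSTEM_PROMPT_DYNAMIC_BOUNDARY)[0] ===
            p2.split(SYSTEM_PROMPT_DYNAMIC_BOUNDARY)[0]);
```

The key invariant is that nothing session-specific may leak above the boundary, or the cached prefix silently stops matching and every request pays full price.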
III. Ironclad Behavioral Constraints: How AI is Disciplined
To prevent AI from overhauling half a file or adding unnecessary features when asked for a small bug fix, Claude Code enforces strict behavioral constraints by codifying rules as 'iron laws.'
1. Meticulous 'Task Philosophy'
In the getSimpleDoingTasksSection() module, Anthropic has established rigorous rules for the model:
- Do not add features not requested by the user.
- Avoid over-abstraction and unnecessary refactoring.
- Do not add extraneous comments or docstrings.
- Do not provide time estimates.
- When a method fails, diagnose the cause before changing strategy; delete code confirmed to be useless, and never leave behind compatibility cruft.
- Report results truthfully, without pretending tests were conducted.
2. 'Syntax Rules' for Tool Usage
Tool specifications are rigidly defined:
- File reading must use `FileRead`, not `cat`/`head`/`tail`.
- File modification must use `FileEdit`, not error-prone `sed`/`awk`.
- Tool calls without dependencies between them must be processed in parallel.
3. Individual 'User Manuals' for Each Tool
The system has 42 tools, which are lazily loaded and injected via ToolSearchTool only when needed, to save tokens. Each tool directory contains a dedicated prompt.ts written for the AI. For instance, BashTool's manual explicitly outlines Git safety protocols:
- Absolutely never execute `git push --force` or `reset --hard` without explicit instruction.
- Mandatory: always create new commits rather than amending existing ones.
Furthermore, the system employs a fail-closed design. In tool factory functions, isConcurrencySafe and isReadOnly default to false. This means that if a developer forgets to declare safety attributes, the system errs on the side of caution, treating the tool as risky and write-capable to prevent any potential breach. Likewise, FileEditTool strictly requires prior use of FileReadTool; attempting to modify a file without first reading it is intercepted with an error.
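These two safeguards can be sketched with a hypothetical defineTool factory; only the flag names isConcurrencySafe and isReadOnly come from the leak coverage, everything else is illustrative:

```typescript
// Sketch of fail-closed tool definitions: safety flags default to the
// most restrictive value unless a developer opts in explicitly.

interface ToolSpec {
  name: string;
  isConcurrencySafe?: boolean;
  isReadOnly?: boolean;
}

interface Tool {
  name: string;
  isConcurrencySafe: boolean;
  isReadOnly: boolean;
}

function defineTool(spec: ToolSpec): Tool {
  return {
    name: spec.name,
    // Fail closed: a forgotten declaration means "unsafe".
    isConcurrencySafe: spec.isConcurrencySafe ?? false,
    isReadOnly: spec.isReadOnly ?? false,
  };
}

// A developer who forgets the flags gets a risky, write-capable tool:
const bash = defineTool({ name: "BashTool" });

// Read-only tools must say so explicitly:
const read = defineTool({ name: "FileRead", isReadOnly: true, isConcurrencySafe: true });

// The read-before-edit rule, as a simple guard (also a sketch):
const readFiles = new Set<string>();
function fileEditGuard(path: string): void {
  if (!readFiles.has(path)) {
    throw new Error(`FileEdit on ${path} requires a prior FileRead`);
  }
}

console.log(bash.isReadOnly, read.isReadOnly);
```

The design choice is the same in both cases: safety is the default state, and permissiveness must be an explicit, reviewable declaration.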
IV. Multi-Agent Dispatch and 14-Step Execution Pipeline
For complex tasks, Claude Code doesn't work alone but generates a swarm of sub-agents. The source confirms at least six built-in agents, including General, Explore, Plan, and Verify roles.
1. Role Isolation Principle
- Explore Agent & Plan Agent: Designed for read-only mode. They cannot create, modify, or move files, and even Bash execution is limited to commands like `ls` or `git status`. This strict separation of planning and implementation prevents the AI from accidentally corrupting code during the exploration phase.
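In practice, this role restriction amounts to filtering the Bash commands an agent may issue. A minimal sketch, assuming an allowlist approach (the article names only `ls` and `git status`; the rest is invented):

```typescript
// Sketch of read-only role isolation for Explore/Plan agents.

type Role = "explore" | "plan" | "general";

const READ_ONLY_ROLES: Role[] = ["explore", "plan"];

// Hypothetical allowlist; only `ls` and `git status` come from the article.
const READ_ONLY_BASH_ALLOWLIST = [/^ls\b/, /^git status\b/];

function isBashCommandAllowed(role: Role, command: string): boolean {
  // Unrestricted roles may run anything (subject to the usual review layers).
  if (!READ_ONLY_ROLES.includes(role)) return true;
  // Read-only roles may only run explicitly allowlisted commands.
  return READ_ONLY_BASH_ALLOWLIST.some((re) => re.test(command.trim()));
}

console.log(isBashCommandAllowed("explore", "git status"));
console.log(isBashCommandAllowed("explore", "rm -rf /tmp/x"));
console.log(isBashCommandAllowed("general", "rm -rf /tmp/x"));
```

An allowlist, rather than a blocklist, is the natural fail-closed choice here: any command nobody thought to vet is rejected by default.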
2. Preventing Laziness and Self-Awareness Injection
When the main Agent dispatches tasks, it is strictly prohibited from issuing vague instructions like 'fix bugs based on your findings.' Specific directives must be provided to prevent the AI from shirking tasks or exercising excessive autonomy.