Anthropic stands as one of the few major AI labs that publicly shares the system prompts for its user-facing chat systems. Their system prompt archive, dating back to Claude 3 in July 2024, consistently offers insights into how these prompts evolve with the release of new models.
On April 16, 2026, Anthropic released Opus 4.7 along with an updated Claude.ai system prompt, replacing the prompt that shipped with Opus 4.6 (released February 5, 2026). A detailed diff of the two system prompts reveals several significant changes, highlighting Anthropic's ongoing refinement of its AI assistant. Here are the key highlights from the update:
- The "developer platform" has been officially rebranded as the "Claude Platform," signaling a potential consolidation or strategic emphasis on the core Claude brand.
- The list of Claude tools mentioned in the system prompt has been expanded. It now includes "Claude in Chrome" (a browsing agent capable of autonomous website interaction), "Claude in Excel" (a spreadsheet agent), and "Claude in PowerPoint" (a slides agent). Notably, "Claude in PowerPoint" was absent from the 4.6 prompt, indicating a new integration. The prompt also states that "Claude Cowork can use all of these as tools," underscoring enhanced interoperability.
- The child safety section has been significantly expanded and is now wrapped in a new `<critical_child_safety_instructions>` tag. A critical addition states: "Once Claude refuses a request for reasons of child safety, all subsequent requests in the same conversation must be approached with extreme caution." This reflects Anthropic's heightened focus on child protection protocols.
- Anthropic also appears to be making Claude less "pushy." The prompt now instructs: "If a user indicates they are ready to end the conversation, Claude does not request that the user stay in the interaction or try to elicit another turn and instead respects the user's request to stop." This change aims to improve user experience.
- A new `<acting_vs_clarifying>` section has been introduced, outlining Claude's behavior when faced with ambiguous requests:
  - When minor details are unspecified, Claude is expected to make a reasonable attempt to act rather than initiating an interview. Claude should only ask upfront if the request is genuinely unanswerable without the missing information (e.g., it references a non-existent attachment).
  - If a tool is available to resolve the ambiguity or supply the missing information (such as searching, looking up the user's location, checking a calendar, or discovering capabilities), Claude is instructed to call the tool first. Acting with tools is preferred over asking the user to perform the lookup themselves.
  - Once Claude begins a task, it is expected to see it through to a complete answer rather than stopping midway.
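The acting-vs-clarifying policy reads like a small decision procedure. The sketch below is a hypothetical illustration of that logic, not anything from Anthropic's prompt or API; the `Request` type and `decide_next_step` function are invented names:

```python
from dataclasses import dataclass, field

@dataclass
class Request:
    """Toy model of an incoming user request (hypothetical, for illustration)."""
    text: str
    missing_details: list = field(default_factory=list)   # unspecified minor details
    blocking_gaps: list = field(default_factory=list)     # info without which the task is unanswerable
    resolving_tools: list = field(default_factory=list)   # tools that could fill the gaps

def decide_next_step(req: Request) -> str:
    """Mirror the acting-vs-clarifying policy: prefer acting, then tools, then asking."""
    if req.blocking_gaps and not req.resolving_tools:
        # Genuinely unanswerable without the user (e.g., a referenced attachment doesn't exist).
        return "ask_user"
    if (req.blocking_gaps or req.missing_details) and req.resolving_tools:
        # A tool (search, location lookup, calendar, ...) can resolve the ambiguity: call it first.
        return "call_tool"
    # Only minor details are unspecified, or nothing is missing: act and see the task through.
    return "act_to_completion"
```

For example, a request with only a minor detail missing (`Request("book a table", missing_details=["time"])`) falls through to `"act_to_completion"`, matching the prompt's preference for acting over interviewing.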
- The update also implies that Claude chat now incorporates a tool search mechanism. As noted in API documentation and a November 2025 post: "Before concluding Claude lacks a capability — access to the person’s location, memory, calendar, files, past conversations, or any external data — Claude calls tool_search to check whether a relevant tool is available."
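The quoted behavior amounts to a check-before-refusing pattern. The following is a minimal sketch of that pattern using an invented in-memory registry; `tool_search` here is a stand-in for whatever lookup mechanism the platform actually exposes, and the tool names and descriptions are made up:

```python
# Hypothetical registry of tools the assistant could query (names and
# descriptions are invented for illustration).
AVAILABLE_TOOLS = {
    "calendar.read": "Read upcoming events from the user's calendar",
    "location.get": "Look up the user's current location",
}

def tool_search(query: str) -> list[str]:
    """Return names of registered tools whose description mentions the query."""
    q = query.lower()
    return [name for name, desc in AVAILABLE_TOOLS.items() if q in desc.lower()]

def answer_capability_question(capability: str) -> str:
    """Before concluding a capability is missing, check the tool registry first."""
    matches = tool_search(capability)
    if matches:
        return f"use_tool:{matches[0]}"
    return "explain_capability_is_unavailable"
```

With this sketch, asking about "calendar" routes to `use_tool:calendar.read`, while an unmatched capability such as "fax" yields the honest fallback instead of a premature "I can't do that."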
These collective changes illustrate Anthropic's commitment to building a more intelligent, safer, and user-centric Claude with stronger autonomous agent capabilities.