Anthropic has officially launched Claude Opus 4.7, a direct upgrade to Opus 4.6. While it is expected to outperform its predecessor on complex, long-running tasks, Anthropic notes that it is "less broadly capable" than the much-discussed Claude Mythos Preview released last week.
Opus 4.7 is now available across all Claude products and Anthropic’s API, as well as on Amazon Bedrock, Google Cloud’s Vertex AI, and Microsoft Foundry, maintaining the same pricing as Opus 4.6. Anthropic highlights significant improvements in instruction following, vision, creativity, memory, and financial analysis.
Key Enhancements in Opus 4.7
Early access testers for Opus 4.7, including Intuit, GitHub, and Notion, provided strong feedback. A primary improvement is enhanced instruction following: where previous Claude models sometimes reinterpreted or overlooked instructions, the new iteration reportedly executes commands more precisely.
Anthropic also points out that this improved literal instruction following means prompts written for earlier models might now produce unexpected results. Consequently, users may need to adjust their prompt-writing style to align with Opus 4.7’s more direct approach.
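As a rough illustration of that adjustment, the sketch below contrasts a loosely worded prompt with one rewritten for a more literal model. The model id "claude-opus-4-7" and the specific prompt wording are assumptions for illustration, not confirmed by Anthropic; the request is built as a plain dict in the shape of a Messages API body so it can be inspected before being sent through an SDK client.

```python
# A prompt written for earlier models: it leaves length, format, and
# edge-case handling for the model to infer.
legacy_prompt = "Summarize this report."

# A prompt adjusted for more literal instruction following: it states
# the length, the format, and what to do when information is missing.
explicit_prompt = (
    "Summarize this report in exactly three bullet points. "
    "Each bullet must be one sentence. "
    "If a section is missing data, say 'no data' rather than guessing."
)

def build_request(prompt: str, model: str = "claude-opus-4-7") -> dict:
    """Assemble a Messages-API-style request body (hypothetical model id)."""
    return {
        "model": model,
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": prompt}],
    }

request = build_request(explicit_prompt)
```

The point is not the plumbing but the prompt itself: instructions that earlier models filled in implicitly now need to be spelled out.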
The model offers better vision for high-resolution images, accepting inputs with more than three times as many pixels as previous versions. This opens the door to multimodal applications that depend on finer visual detail, such as accurately reading dense screenshots.
Regarding creativity, Opus 4.7 is described as “more tasteful and creative when completing professional tasks.” Testers reported “higher-quality” interfaces, slides, and documents. AJ Orbach, co-founder and CEO of Triple Whale, commented, “The design taste is genuinely surprising — it makes choices I’d actually ship.”
Another significant upgrade is enhanced memory. Anthropic states the new model is "better at using file system-based memory," allowing it to remember and reference notes across tasks. This capability frees users from repeatedly providing upfront context.
Finally, Anthropic highlights Opus 4.7’s state-of-the-art performance on GDPval-AA, a third-party evaluation that assesses large language models (LLMs) on real-world, economically valuable tasks in domains like finance and law. Anthropic’s internal tests likewise rate Opus 4.7 as “a more effective finance analyst than Opus 4.6,” crediting its more rigorous analysis and more professional output.