Key Takeaway
Anthropic has released Claude Opus 4.7, its most capable publicly available model, with improvements in coding, visual intelligence, document analysis, and factual reliability. It is available now across Claude, the API, Amazon Bedrock, Google Cloud Vertex AI, Microsoft Foundry, and other Claude products at the same base price as Opus 4.6.
Claude Opus 4.7 – Key Points
The Story
Anthropic launched Claude Opus 4.7 on Thursday as the newest version of its Opus model family and described it as a notable upgrade over Opus 4.6, especially in advanced software engineering and difficult real-world tasks. The company says it also improves high-resolution image understanding, instruction following, professional content generation, and long-running task performance, while adding automatic safeguards for high-risk cybersecurity requests. Anthropic also published benchmark and safety data showing Opus 4.7 remains below the unreleased Claude Mythos Preview but performs better than Opus 4.6 across a range of reported evaluations.
The Facts
- Claude Opus 4.7 is available now through Claude AI, the Claude API, Amazon Bedrock, Google Cloud Vertex AI, Microsoft Foundry, and other Claude products.
- Anthropic describes Opus 4.7 as its most intelligent model available to the general public, while stating that Claude Mythos Preview remains more broadly capable and is not planned for general availability at this stage.
- Opus 4.7 belongs to the Claude Opus family of hybrid reasoning models built for multi-step reasoning and advanced coding. Anthropic launched Opus 4.6 in February, making Opus 4.7 its next public upgrade in the series.
- Anthropic says Opus 4.7 improves over Opus 4.6 in advanced software engineering, especially on harder coding tasks, and that users can hand off more complex long-running work with less supervision because the model follows instructions more precisely, checks its own work, and verifies outputs before responding.
- Anthropic says the model has substantially better vision and can accept images up to 2,576 pixels on the long edge, about 3.75 megapixels, which it says is more than three times the pixel count accepted by prior Claude models. The company positions this as useful for dense screenshots, complex diagrams, data extraction, and pixel-level visual work.
- Anthropic says Opus 4.7 is more creative and more effective on professional tasks such as building interfaces, slides, and documents, and that internal testing found stronger performance in financial analysis, modeling, presentations, and tighter task integration.
- Anthropic says Opus 4.7 is better at file system-based memory, allowing it to retain notes across longer, multi-session work and use that context in later tasks.
- Claude Opus 4.7 is priced the same as Claude Opus 4.6 at $5 per million input tokens and $25 per million output tokens.
- Anthropic says Opus 4.7 can use more tokens than its predecessor for two reasons: an updated tokenizer that can raise token counts by roughly 1.0–1.35× depending on content type, and higher-effort reasoning behavior that can produce more output tokens, especially in later turns of agentic tasks.
- In Anthropic-reported results on Humanity's Last Exam without tools, Claude Mythos scored 56.8%, Claude Opus 4.7 scored 46.9%, Gemini 3.1 Pro scored 44.4%, GPT-5-4 Pro scored 42.7%, and Claude Opus 4.6 scored 40.0%.
- In the same benchmark with tools, Anthropic reports Claude Mythos at 64.7%, GPT-5-4 Pro at 58.7%, and Claude Opus 4.7 at 54.7%. Anthropic also says Opus 4.7 beats Opus 4.6 across many reported use cases, including agentic coding, multidisciplinary reasoning, scaled tool use, and agentic computer use.
- Anthropic says Opus 4.7 is state of the art on the Finance Agent evaluation and on GDPval-AA, which it identifies as a third-party evaluation of economically valuable knowledge work across finance, legal, and other domains.
- Anthropic reports that Opus 4.7 has a safety profile similar to Opus 4.6, with low rates of deception, sycophancy, and cooperation with misuse. It says the model improves on some measures, such as honesty and resistance to malicious prompt injection, but is modestly weaker in some areas, such as giving overly detailed harm-reduction advice on controlled substances.
- Anthropic says Opus 4.7 is the first model below the Mythos capability tier to ship with new safeguards that automatically detect and block requests indicating prohibited or high-risk cybersecurity uses. The company says it also experimented during training with reducing the model's cyber capabilities, and it invites security professionals with legitimate use cases, such as vulnerability research, penetration testing, and red-teaming, to apply for Anthropic's Cyber Verification Program.
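The image limits reported above can be checked with a few lines of arithmetic. This is a minimal sketch using only the figures stated in the announcement (2,576 pixels on the long edge, roughly 3.75 megapixels); the function name and constants are illustrative, not part of any official Anthropic API.

```python
# Sketch: does an image fit the reported Opus 4.7 limits?
# Limits taken from Anthropic's launch materials; names are illustrative.

MAX_LONG_EDGE = 2576      # pixels on the longer side, per the announcement
MAX_MEGAPIXELS = 3.75     # approximate total-pixel budget

def fits_reported_limits(width: int, height: int) -> bool:
    """Return True if (width, height) stays within both reported limits."""
    long_edge_ok = max(width, height) <= MAX_LONG_EDGE
    megapixels_ok = (width * height) / 1_000_000 <= MAX_MEGAPIXELS
    return long_edge_ok and megapixels_ok

# A 2560x1440 screenshot: long edge 2560 <= 2576, and 3.69 MP <= 3.75
print(fits_reported_limits(2560, 1440))  # True
# A 4000x3000 photo (12 MP) exceeds both limits
print(fits_reported_limits(4000, 3000))  # False
```

In practice, an oversized image would be downscaled so its long edge fits the cap before upload.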
How to Access / Pricing
Claude Opus 4.7 is live now in Claude AI, the Claude API, Amazon Bedrock, Google Cloud Vertex AI, Microsoft Foundry, and Anthropic’s other Claude products. Anthropic says pricing matches Opus 4.6 at $5 per million input tokens and $25 per million output tokens, although total usage costs may shift because of the updated tokenizer and higher-effort reasoning behavior.
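The interaction between unchanged per-token prices and the updated tokenizer can be made concrete with a rough cost estimate. This is a sketch using the rates stated above ($5 per million input tokens, $25 per million output tokens) and the reported 1.0–1.35× tokenizer multiplier; the function and parameter names are illustrative, not an Anthropic API.

```python
# Sketch: per-request cost at the stated Opus 4.7 rates, with an
# optional multiplier for the updated tokenizer (reported as roughly
# 1.0-1.35x depending on content type). Names are illustrative.

INPUT_PRICE_PER_M = 5.00    # dollars per 1M input tokens
OUTPUT_PRICE_PER_M = 25.00  # dollars per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int,
                  tokenizer_multiplier: float = 1.0) -> float:
    """Dollar cost for one request, scaling token counts by the multiplier."""
    inp = input_tokens * tokenizer_multiplier
    out = output_tokens * tokenizer_multiplier
    return (inp / 1e6) * INPUT_PRICE_PER_M + (out / 1e6) * OUTPUT_PRICE_PER_M

# Same workload at the old-equivalent vs. worst-case new tokenization:
print(round(estimate_cost(100_000, 20_000), 2))        # 1.0
print(round(estimate_cost(100_000, 20_000, 1.35), 2))  # 1.35
```

So even at identical base prices, a content type that tokenizes at the high end of the reported range could cost up to about 35% more per request, before accounting for any extra output from higher-effort reasoning.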
Benchmarks / Evidence Check
The benchmark figures come from Anthropic’s published materials, including its launch materials and system card, not from independent third-party verification. Anthropic says Opus 4.7 is below Claude Mythos Preview on every relevant axis it measured and does not represent a jump beyond existing capability trend lines, while still outperforming Opus 4.6 across a range of reported benchmarks and use cases.
Why This Matters
Claude Opus 4.7 strengthens Anthropic’s public product at a time when frontier AI competition is increasingly defined by coding performance, reasoning control, multimodal accuracy, and deployment safeguards. The release also gives Anthropic a public model for testing cybersecurity guardrails as it works toward broader deployment of Mythos-class systems, and it answers demand for a stronger Opus model after some power users criticized Opus 4.6 as less reliable on complex engineering work.
This article was drafted with the assistance of generative AI. All facts and details were reviewed and confirmed by an editor prior to publication.