Key Takeaway
Anthropic is investigating unauthorized access to Claude Mythos, a restricted cybersecurity-focused AI model the company says is too powerful to release publicly because of weaponization concerns.
The Story
Unauthorized users reportedly accessed Anthropic's Claude Mythos Preview through a third-party vendor environment in early April. The model is designed for advanced cybersecurity tasks, with coding abilities Anthropic says can "surpass all but the most skilled humans" at finding and exploiting software vulnerabilities. Anthropic says it has found no evidence that the reported access affected its own systems, and there is no indication so far that malicious actors obtained the model.
The Facts
Claude Mythos Preview is not a public model.
Anthropic has no current plans to release it publicly because of concerns that it could be weaponized.
The model is built for high-risk cybersecurity work.
Anthropic says Mythos can identify and exploit vulnerabilities across major operating systems and web browsers when instructed to do so, with coding ability that can exceed all but the most skilled humans in this area.
Official access is limited to selected organizations.
Anthropic is making the model available through Project Glasswing to a limited group that includes Apple, Amazon, Cisco, CrowdStrike, Google, JPMorgan Chase, Microsoft, Nvidia, and other cleared organizations. The company has also released Mythos to select technology and financial firms so they can harden their systems against the kind of vulnerability exploitation the model is reportedly capable of.
The alleged unauthorized access to Claude Mythos began in early April.
The access reportedly occurred on the same day Anthropic announced limited Mythos testing for selected users.
The users were reportedly part of a Discord channel.
The channel is described as focused on finding information about unreleased AI models. The group reportedly used Mythos regularly but refrained from using it for hacking, in part to avoid detection.
The group reportedly used knowledge from another breach.
Members used information from the Mercor data breach to make guesses about where the model could be found.
The same group may have accessed other unreleased models.
An individual connected to the reported access told Bloomberg that the group also has access to other unreleased models.
Anthropic says the investigation is ongoing.
The company is investigating a report of unauthorized access through one of its third-party vendor environments and says there is no indication the activity affected Anthropic systems.
Risks / Limitations
The main risk is not ordinary chatbot misuse. Mythos is described as a model capable of helping users find and exploit software vulnerabilities, which makes unauthorized access to Claude Mythos materially different from a leak involving a general consumer chatbot.
At the same time, Anthropic says it has no evidence that the reported access affected its own systems. There is also no indication that malicious actors obtained the model, and the reported users allegedly avoided using it for hacking.
Background
Claude Mythos Preview was released for limited testing through Project Glasswing, a controlled access initiative involving major technology companies and other cleared organizations. The model is attracting government interest because of its cybersecurity capabilities: Inc. reports that the Commerce Department’s Center for AI Standards and Innovation and the National Security Agency already have access, while the Department of Treasury has sought access. The U.S. Cybersecurity and Infrastructure Security Agency reportedly does not have access.
What to Watch Next
The key questions are whether Anthropic confirms the scope of the unauthorized access to Claude Mythos, how the third-party vendor environment was exposed, whether the same group accessed other unreleased models, and whether vendor controls for restricted AI systems change as a result.
Why This Matters
This case highlights a growing AI security problem: the most sensitive models are not only risky because of what they can do, but also because of who can reach them through contractors, vendors, and hidden infrastructure paths. As frontier AI systems become more capable in cybersecurity, access control may become as important as model safety testing itself.
This article was drafted with the assistance of generative AI. All facts and details were reviewed and confirmed by an editor prior to publication.