On April 7, 2026, Anthropic did something unprecedented in the history of commercial AI development. They confirmed the existence of Claude Mythos—a 10 trillion parameter model described as "by far the most powerful AI model we have ever developed"—and simultaneously announced that they would not be releasing it to the public. Instead, access would be restricted to approximately 50 organizations through a gated program called Project Glasswing.
The Confirmation That Changed Everything
For weeks, rumors had circulated about a model codenamed "Capybara" within Anthropic. Leaked internal documents suggested something far beyond Claude Opus 4.6—a step-change in capabilities that had engineers both excited and concerned. On April 7, Anthropic stopped the speculation with a direct statement.
"Claude Mythos represents a step change in capabilities above Claude Opus 4.6," the company wrote. "It is currently far ahead of any other AI model in cyber capabilities." The statement continued with language rarely seen from AI labs: "It presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders."
This wasn't marketing hyperbole. Anthropic was warning the world about their own product.
Key Points
- Claude Mythos is Anthropic's first intentionally restricted model release
- Access limited to ~50 organizations for defensive cybersecurity
- 10 trillion parameters make it an order of magnitude larger than anything publicly available
- Pricing set at $25/$125 per million tokens—5x Opus rates
Project Glasswing: The 50-Company Firewall
Rather than a general release, Anthropic created Project Glasswing—a gated early-access program limited to approximately 50 partner organizations. The list reads like a who's who of global technology and finance: AWS, Apple, Microsoft, Google, NVIDIA, Cisco, CrowdStrike, JPMorgan, the Linux Foundation, and others.
These aren't random selections. Each organization has been granted access for a specific purpose: defensive cybersecurity. The mandate is explicit—use Mythos to scan your own infrastructure for vulnerabilities before attackers can weaponize similar capabilities. It's a preemptive strike against a future where AI-powered attacks are commonplace.
The pricing reflects the exclusive positioning: $25 per million input tokens, $125 per million output tokens. That's 5x more expensive than Claude Opus 4.6 for inputs and the same premium for outputs. Anthropic has signaled that even if Mythos becomes more broadly available, it will remain a premium enterprise product, not a general-purpose API.
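To make the premium concrete, here is a minimal sketch of per-request cost at the stated Mythos rates. The rates come from the announcement; the token counts are hypothetical, chosen only for illustration.

```python
# Published Mythos rates (USD per million tokens), per the announcement.
MYTHOS_INPUT_PER_M = 25.0
MYTHOS_OUTPUT_PER_M = 125.0

def request_cost(input_tokens: int, output_tokens: int,
                 in_rate: float = MYTHOS_INPUT_PER_M,
                 out_rate: float = MYTHOS_OUTPUT_PER_M) -> float:
    """Cost in USD for a single request billed at per-million-token rates."""
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Hypothetical security-scan request: 50k tokens of code in, 5k tokens of findings out.
cost = request_cost(50_000, 5_000)
print(f"${cost:.3f} per request")  # 1.25 (input) + 0.625 (output) = $1.875
```

At those rates, even a single moderately sized code-analysis request costs on the order of dollars, which is consistent with Anthropic positioning Mythos as an enterprise product rather than a general-purpose API.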
The 10 Trillion Parameter Question
Ten trillion parameters is a staggering number. For context, Claude Opus 4.6 is believed to be in the hundreds of billions of parameters. OpenAI doesn't disclose specifics, but GPT-5.4 is estimated at roughly 1-2 trillion. Mythos is an order of magnitude larger than anything publicly available.
But parameter count alone doesn't explain Anthropic's caution. The specific capabilities that concern them are:
- Autonomous vulnerability discovery: Mythos can identify security flaws in codebases without human guidance, finding exploit paths that would take human researchers weeks or months
- Multi-hop reasoning about systems: The model can trace complex causal chains across distributed systems, understanding how a small configuration error in one service cascades into a critical vulnerability
- Social engineering at scale: Advanced understanding of human psychology combined with personalization capabilities that could automate sophisticated phishing campaigns
- Exploit generation: The ability to not just identify vulnerabilities but generate working exploit code
"We built something that could significantly accelerate both defensive and offensive cybersecurity capabilities. In a world where cyberattacks already cause hundreds of billions in damage annually, releasing that capability broadly without proper safeguards would be irresponsible."
— Anthropic Spokesperson
The Safety-First Strategy
Anthropic's decision to gate Mythos aligns with their long-standing emphasis on AI safety, but it also reflects a specific philosophical position: capability and availability should not be automatically linked. Just because you can build something doesn't mean you should make it universally accessible.
This stance comes after Anthropic's high-profile refusal to allow military uses of their models in March 2026, which led to the company being labeled a "supply-chain risk" by some defense contractors. The irony is notable: a lab that refused to let its models be used offensively is now warning that its latest model is inherently an offensive capability.
Priority access was granted based on four key factors:
- Operational role in critical infrastructure (power grids, telecommunications, financial systems)
- Maturity of existing security practices and responsible disclosure programs
- Proven track record of defensive security research
- Technical capability to integrate Mythos safely into existing workflows
The GLM-5.1 Contrast
The same day Anthropic announced Mythos's gated status, Chinese AI lab Zhipu AI released GLM-5.1—a 744 billion parameter model that reportedly beats GPT-5.4 on coding benchmarks—under the MIT license, one of the most permissive open-source licenses available: free for anyone to use, modify, and redistribute, including commercially.
| Feature | GLM-5.1 | Claude Mythos |
|---|---|---|
| Parameters | 744B (40B active) | 10 trillion |
| License | MIT (fully open) | Proprietary (gated) |
| Access | Anyone | ~50 organizations |
| Pricing | ~$1/$3.2 per M tokens | $25/$125 per M tokens |
| Self-hosting | Yes, free | No |
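The cost gap in the table can be computed directly. A brief sketch using the listed per-million-token rates; the monthly workload figures are hypothetical, chosen only to illustrate the ratio.

```python
# Per-million-token prices (USD) as listed in the comparison table.
PRICES = {
    "GLM-5.1": {"input": 1.0, "output": 3.2},
    "Claude Mythos": {"input": 25.0, "output": 125.0},
}

def workload_cost(model: str, m_in: float, m_out: float) -> float:
    """USD cost for m_in million input tokens and m_out million output tokens."""
    p = PRICES[model]
    return m_in * p["input"] + m_out * p["output"]

# Hypothetical monthly workload: 100M input tokens, 10M output tokens.
glm = workload_cost("GLM-5.1", 100, 10)        # 100*1.0 + 10*3.2  = $132
mythos = workload_cost("Claude Mythos", 100, 10)  # 100*25 + 10*125 = $3,750
print(f"GLM-5.1: ${glm:,.0f}  Mythos: ${mythos:,.0f}  ratio: {mythos / glm:.0f}x")
```

For this workload mix the gap works out to roughly 28x, before accounting for GLM-5.1's free self-hosting option, which removes API costs entirely for organizations with their own hardware.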
This contrast defines the current moment in AI development. On one side: American labs increasingly cautious about capability release, implementing gating, safety evaluations, and responsible deployment frameworks. On the other: open-source releases that democratize access to powerful AI without restrictions.
What This Means for the Industry
Claude Mythos establishes several precedents that will shape AI development:
Capability-based gating is now a valid release strategy. Anthropic has proven that major labs can publicly acknowledge building something too capable for general release without destroying their credibility. Other labs facing similar decisions now have cover to be cautious.
The "most capable" model is no longer the most accessible. For the first time, the absolute frontier of AI capability is not available through a public API. This bifurcation—between what exists and what's accessible—may become permanent.
Defensive applications get priority access. Anthropic's selection criteria suggest that as AI capabilities advance, priority access may increasingly flow to defensive security applications rather than consumer products.
Mythos arrives amid the most competitive period in AI history. Q1 2026 saw $267.2 billion in AI venture funding—more than double the previous quarterly record. In this environment, Anthropic's choice to gate their most powerful model is a statement about values. They're betting that responsible deployment matters more than raw capability demonstrations.
Looking Forward
Anthropic has not provided a timeline for broader Mythos availability. The company says it will "continue to evaluate responsible deployment options" and may expand access "as we develop greater confidence in our safety measures and as the defensive security ecosystem matures."
For now, Claude Mythos remains a glimpse of what's possible—and a warning about what's coming. A 10 trillion parameter model that its creators deemed too powerful to release broadly. A capability demonstration that doubles as a capability warning.
In the mythology from which the model takes its name, mythos represents the underlying beliefs and assumptions that shape a culture. Anthropic's mythos is clear: capability without responsibility is not progress. Whether the rest of the industry agrees will determine the shape of AI development for years to come.
Claude Mythos forces us to confront a question that will only become more urgent: as AI capabilities advance, who decides what gets built, who gets access, and who bears responsibility when things go wrong? Anthropic has offered one answer. The industry now must decide if it's the right one.