Anthropic’s projected annual revenue more than tripled to over $30 billion in 2026. Also Anthropic: we’re too afraid to release our newest model. That tension tells you everything you need to know about where we are right now with AI development — and why Mythos has the security community genuinely rattled.
I build bots for a living. I spend my days thinking about what AI can do, what it should do, and where the guardrails need to go. So when a company with Anthropic’s reputation ships something so capable that they pump the brakes on their own release, I pay attention. That’s not a PR move. That’s a red flag wrapped in a press release.
So What Is Mythos?
Mythos is Anthropic’s latest AI model, and its headline capability is cybersecurity — specifically, finding and exploiting software vulnerabilities. On paper, that sounds useful. Security teams are understaffed, codebases are enormous, and bugs hide in places humans don’t think to look. An AI that can surface those weaknesses before bad actors do? Sign me up.
Except Mythos doesn’t just surface weaknesses. According to Anthropic’s own red-team documentation, Mythos Preview autonomously identified and then exploited a 17-year-old remote code execution vulnerability in FreeBSD — one that allows anyone with access to run arbitrary code on affected systems. Autonomously. Without a human in the loop guiding it step by step.
That’s not a security tool. That’s a security threat wearing a security tool’s clothes.
Why This Hits Different for Bot Builders
Most coverage of Mythos frames this as a national security story, and fair enough. But from where I sit, the implications are closer to home than people realize.
The bots we build run on infrastructure. They call APIs, they sit behind web servers, they interact with databases. A lot of that infrastructure is old. A lot of it has never been properly audited. FreeBSD, the system Mythos cracked, powers a significant chunk of the internet’s backend — including systems that many bots and automation pipelines quietly depend on.
If a model like Mythos were widely available, every bot deployment would face a sharply expanded threat. Not because the bots themselves are vulnerable in new ways, but because the systems underneath them could be probed at scale, automatically, by anyone with access to the model.
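The first defensive step is unglamorous: know what your bots are actually running on. A minimal sketch (the function name is my own, not from any tool mentioned here) that inventories the host so its OS and kernel release can be checked against published security advisories:

```python
import platform


def host_fingerprint():
    """Collect basic facts about the host a bot runs on, so the OS and
    kernel release can be compared against security advisories."""
    return {
        "system": platform.system(),          # e.g. "FreeBSD", "Linux"
        "release": platform.release(),        # kernel / OS release string
        "machine": platform.machine(),        # hardware architecture
        "python": platform.python_version(),  # runtime version
    }


if __name__ == "__main__":
    for key, value in host_fingerprint().items():
        print(f"{key}: {value}")
```

Run this across your fleet and you have the raw material for the question that matters: which of these boxes is running something old enough that nobody has looked at it in years?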
The Capability-Safety Gap Is Real
Anthropic paused the release of Mythos specifically because of its ability to find and exploit vulnerabilities in major software systems. That pause deserves credit — it’s not nothing. But it also reveals something uncomfortable about how frontier AI development works right now.
The capability got built first. The concern came after. That ordering matters.
Experts quoted across outlets from The Guardian to The Economist have flagged Mythos’s apparent superhuman hacking abilities as genuinely alarming. The word “superhuman” is doing a lot of work there, and I don’t think it’s hyperbole. When a model can autonomously find and exploit a vulnerability that sat undetected for 17 years, we’re not talking about a slightly better fuzzing tool. We’re talking about a qualitative shift in what automated systems can do to other systems.
What Responsible Looks Like From Here
I’m not in the camp that says this research shouldn’t exist. Understanding how AI can be used offensively is exactly how you build defenses against it. Red teaming, controlled testing, staged disclosure — these are solid practices, and Anthropic appears to be using them.
But “we tested it carefully before releasing it” is a floor, not a ceiling. A few things I’d want to see from any lab shipping models with these capabilities:
- Thorough public documentation of what the model can and cannot do, beyond marketing summaries
- Clear access tiers — not every developer needs the full capability set
- Coordinated disclosure pipelines so that vulnerabilities the model finds get patched, not just catalogued
- Independent audits, not just internal red teams
The revenue numbers tell you Anthropic is growing fast. Fast growth and genuinely dangerous capabilities in the same product line are a combination that needs more than internal caution to manage well.
Where This Leaves Us
Mythos is real, its capabilities are real, and the concern from the security community is earned. For those of us building on top of AI infrastructure every day, this isn’t abstract. The systems our bots run on are the same systems a model like Mythos could be pointed at.
Anthropic got scared of what they built. That’s actually the most honest thing a lab can do. Now the question is whether the rest of the industry — and the people regulating it — are paying close enough attention to follow their lead.