
Anthropic's Project Glasswing uses Claude Mythos to find zero-day vulnerabilities in major OS and browsers. Here's what this means for AI and cybersecurity.

On April 12, 2026, Anthropic announced Project Glasswing — a coordinated, industry-wide initiative to use AI's most advanced security capabilities for defense rather than offense. The project brings together twelve of the world's most important technology companies, all united around a single goal: find the vulnerabilities in critical software before the bad actors do.
The name comes from the glasswing butterfly, Greta oto. Its wings are transparent — it hides in plain sight. It's a fitting metaphor. The vulnerabilities that Project Glasswing is hunting have been hiding in plain sight for decades.

The Core Commitment
Anthropic is putting $100M in model usage credits into this project — plus $4M in direct donations to open-source security organizations, including $2.5M to Alpha-Omega and OpenSSF through the Linux Foundation, and $1.5M to the Apache Software Foundation.
Project Glasswing is powered by a new, unreleased frontier model called Claude Mythos — from the Ancient Greek word for "utterance" or "narrative", the system of stories through which civilizations made sense of the world.
Claude Mythos Preview, the version in use for the project, is not a product you can sign up for today. It's a model Anthropic has been stress-testing in controlled conditions, and the results are striking enough that they felt the need to act before a general release.
What Mythos Preview Found — Autonomously
Without any human steering, the model identified thousands of previously unknown (zero-day) vulnerabilities across every major operating system and every major web browser. Three examples stand out:
→ A 27-year-old flaw in OpenBSD — one of the most security-hardened operating systems on the planet, used to run firewalls and critical infrastructure. The vulnerability allowed a remote attacker to crash any machine running it simply by connecting to it.
→ A 16-year-old vulnerability in FFmpeg, the video codec library used by almost every piece of modern software that touches video. Automated tools had run into this exact line of code five million times and never flagged it.
→ A chain of vulnerabilities in the Linux kernel — the software running most of the world's servers — that, when combined, escalated ordinary user access to complete root control of the machine.
All of these have now been reported and patched. For vulnerabilities that are still being fixed, Anthropic has published cryptographic hashes of the details — essentially a timestamped receipt — and will reveal specifics once patches are in place.
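Anthropic hasn't published the exact scheme behind these "timestamped receipts", but the general pattern is commit-and-reveal: publish a hash of the details now, disclose the details after the patch lands, and let anyone verify the two match. A minimal sketch using SHA-256 (the details and nonce below are hypothetical placeholders, not real vulnerability data):

```python
import hashlib

def commit(details: str, nonce: str) -> str:
    """Publish only this hash now; it commits to the details without revealing them."""
    return hashlib.sha256(f"{nonce}:{details}".encode()).hexdigest()

def verify(details: str, nonce: str, published_hash: str) -> bool:
    """After disclosure, anyone can recompute the hash and check the receipt."""
    return commit(details, nonce) == published_hash

# Hypothetical example: publish the receipt today, reveal details post-patch.
receipt = commit("details of a hypothetical kernel flaw", "random-nonce-1234")
assert verify("details of a hypothetical kernel flaw", "random-nonce-1234", receipt)
assert not verify("tampered details", "random-nonce-1234", receipt)
```

The random nonce matters: without it, an attacker could guess likely details and test them against the published hash.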
Evaluation benchmarks make the gap between Mythos Preview and the next-best model, Claude Opus 4.6, very hard to ignore.

On CyberGym — the benchmark specifically measuring ability to reproduce known security vulnerabilities — Mythos Preview scores 83.1% against Opus 4.6's 66.6%. On SWE-bench Verified, the coding benchmark, it hits 93.9%. Perhaps most impressive: on BrowseComp, it beats Opus 4.6 while using 4.9× fewer tokens.
Here's the uncomfortable truth that Project Glasswing is built on: the same capabilities that make Mythos Preview valuable for defenders are exactly what make it dangerous in the wrong hands.
The time between a vulnerability being discovered and being exploited by attackers has collapsed. What once took months now happens in minutes. State-sponsored actors from China, Russia, North Korea, and Iran are already probing critical infrastructure. Ransomware groups are already causing hospital systems to go offline and putting lives at risk. The global financial cost of cybercrime is estimated at around $500 billion per year.
Up until recently, finding and exploiting software vulnerabilities required deep, rare expertise. That barrier is gone. AI has cleared it.
But — and this is the key insight behind Project Glasswing — the defenders get to use these capabilities too. And if defenders get access first, and move fast, they can fix vulnerabilities before attackers even know they exist.
That's the race. Project Glasswing is an attempt to make sure defense wins it.
This isn't a marketing exercise. The twelve launch partners had been running Mythos Preview in real security operations for weeks before this announcement. Here's what they had to say:
"AI capabilities have crossed a threshold that fundamentally changes the urgency required to protect critical infrastructure from cyber threats, and there is no going back."
— Anthony Grieco, SVP & Chief Security & Trust Officer, Cisco
"The window between a vulnerability being discovered and being exploited by an adversary has collapsed — what once took months now happens in minutes with AI."
— Elia Zaitsev, Chief Technology Officer, CrowdStrike
"Open source maintainers — whose software underpins much of the world's critical infrastructure — have historically been left to figure out security on their own. Project Glasswing offers a credible path to changing that equation."
— Jim Zemlin, CEO, The Linux Foundation
"Claude Mythos Preview showed substantial improvements compared to previous models. We look forward to partnering with Anthropic and the broader industry to improve security outcomes for all."
— Igor Tsyganskiy, EVP of Cybersecurity and Microsoft Research, Microsoft
Project Glasswing is explicitly described as a starting point, not a finish line. Here's what the roadmap looks like:
→ Partners get access to Mythos Preview to run vulnerability scanning, black-box binary testing, endpoint security, and penetration testing on their most critical systems.
→ The $100M credit commitment covers the research preview period. After that, access is priced at $25 per million input tokens and $125 per million output tokens, available via Claude API, Amazon Bedrock, Google Cloud's Vertex AI, and Microsoft Foundry.
→ Within 90 days, Anthropic will publish a public report on what's been learned, which vulnerabilities have been fixed, and practical recommendations for how the industry should evolve its security practices.
→ Open-source maintainers can apply for access through the Claude for Open Source program — critical, since open-source code underlies the vast majority of the world's software.
→ Anthropic is in ongoing discussions with US government officials about both the offensive and defensive implications of these capabilities.
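At the published rates above, budgeting a run is straightforward arithmetic. A quick sketch — the token counts are made-up illustrations, not figures from the announcement:

```python
# Published rates for Mythos Preview after the research period.
INPUT_PER_MILLION = 25.0    # USD per million input tokens
OUTPUT_PER_MILLION = 125.0  # USD per million output tokens

def run_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost in USD of a single scanning run."""
    return (input_tokens / 1_000_000) * INPUT_PER_MILLION \
         + (output_tokens / 1_000_000) * OUTPUT_PER_MILLION

# Hypothetical run: 40M input tokens (codebase + binaries), 2M output tokens.
print(run_cost(40_000_000, 2_000_000))  # → 1250.0
```

Input-heavy workloads like code scanning sit at the cheap end of the rate card, since output tokens cost five times as much as input tokens here.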
On Wider Availability
Anthropic has no plans to make Claude Mythos Preview generally available right now. The reason is straightforward: the model's capabilities are powerful enough that releasing it without adequate safeguards would be irresponsible. They plan to test new cybersecurity safeguards on an upcoming Claude Opus model first — a less risky testbed — before bringing Mythos-class capabilities to the broader public.
It's easy to read Project Glasswing as just a cybersecurity story. But it's really a story about where AI capability has arrived.
A model that can autonomously find a 27-year-old vulnerability in OpenBSD isn't just good at cybersecurity. It understands software at a level that surpasses almost every human who has ever looked at that codebase. That level of reasoning — applied to medicine, science, engineering, infrastructure — is what the next few years of AI development will look like.
The same logic that drives Project Glasswing applies everywhere: when AI systems become capable enough to cause serious harm if misused, the responsible move is to channel those capabilities into defense and benefit first, before broader release. Anthropic is betting that doing this publicly, with industry partners, builds more trust than doing it quietly.
Whether you're a developer, a security professional, an executive, or just someone who uses software every day — which is everyone — this matters. The software we all rely on has been quietly riddled with vulnerabilities for decades. For the first time, we have tools capable of finding them faster than attackers can exploit them.
We just have to actually use them.
Written by Manas AI · manas-ai.com · @manasai.tech
Source: anthropic.com/glasswing