The cybersecurity landscape just shifted dramatically. Anthropic announced Project Glasswing, a coalition bringing together AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks—all united around a single uncomfortable truth: AI models can now find and exploit software vulnerabilities better than almost any human.
This isn’t incremental progress. This is a fundamental change in how we need to think about software security. Let’s break down what Project Glasswing is, why it matters, and what it means for everyone building or maintaining software.
The Catalyst: Claude Mythos Preview
At the heart of Project Glasswing is Claude Mythos Preview—an unreleased frontier model from Anthropic that demonstrates capabilities that would have seemed like science fiction just a few years ago. This isn’t just “AI that can help with code review.” Mythos Preview has autonomously discovered thousands of zero-day vulnerabilities across every major operating system and web browser.
The examples Anthropic shared are sobering:
- A 27-year-old vulnerability in OpenBSD—one of the most security-hardened operating systems in existence, often used for firewalls and critical infrastructure. The flaw allowed remote attackers to crash any machine just by connecting to it.
- A 16-year-old vulnerability in FFmpeg—the ubiquitous video encoding library that powers countless applications. Automated testing tools had hit this specific line of code five million times without ever catching the problem.
- A Linux kernel exploit chain—where Mythos Preview autonomously found and combined multiple vulnerabilities to escalate from ordinary user access to complete system control.
These aren’t theoretical concerns. These are real vulnerabilities that existed in production software used by billions of people, discovered by an AI model operating largely without human guidance.
Why This Matters Now
The defensive implications are significant, but there’s an urgent offensive reality driving this initiative: these capabilities will proliferate. If Anthropic has built a model that can do this, others will too. And unlike Anthropic, not everyone building frontier AI models is committed to responsible disclosure.
The window between vulnerability discovery and exploitation has collapsed. CrowdStrike’s CEO put it bluntly: “What once took months now happens in minutes with AI.”
Consider the current state of software security:
- $500 billion in estimated annual global cybercrime costs
- Critical infrastructure—banking systems, medical records, power grids, logistics networks—all running on software riddled with undiscovered bugs
- State-sponsored actors from China, Iran, North Korea, and Russia already leveraging every advantage they can find
- Open-source software maintainers working with minimal resources while their code underpins most of the world’s systems
The asymmetry has always favored attackers. Defenders need to protect everything; attackers only need to find one flaw. AI threatens to make that asymmetry catastrophic—unless we get defensive AI deployed first.
The Coalition and Its Mission
Project Glasswing’s launch partners read like a who’s who of critical infrastructure:
- Cloud providers: AWS, Google, Microsoft
- Security specialists: CrowdStrike, Palo Alto Networks
- Hardware: Broadcom, NVIDIA
- Networking: Cisco
- Enterprise: Apple, JPMorganChase
- Open source: Linux Foundation
Anthropic is committing up to $100 million in usage credits for Mythos Preview across these efforts, plus $4 million in direct donations to open-source security organizations. That includes $2.5 million to Alpha-Omega and OpenSSF through the Linux Foundation, and $1.5 million to the Apache Software Foundation.
The focus areas are practical:
- Local vulnerability detection
- Black box testing of binaries
- Securing endpoints
- Penetration testing of systems
- Supply chain security assessment
Within 90 days, Anthropic commits to publicly reporting what they’ve learned, the vulnerabilities they’ve fixed, and the improvements they’ve made. This isn’t just about finding bugs—it’s about developing best practices for a new era of AI-augmented security.
The Open Source Angle
One of the most important aspects of Glasswing is its explicit focus on open-source software. As the Linux Foundation’s Jim Zemlin noted, open-source maintainers have historically been “left to figure out security on their own” while their code “constitutes the vast majority of code in modern systems.”
This has always been a structural problem. The most critical software infrastructure in the world is often maintained by small teams or individual volunteers. They can’t afford expensive security audits. They don’t have dedicated red teams. Yet their code runs inside everything from nuclear power plants to your grandmother’s smart TV.
Project Glasswing offers something unprecedented: giving those maintainers access to security capabilities previously reserved for well-funded enterprises. Maintainers can apply through the “Claude for Open Source” program for access to Mythos Preview.
The Capabilities Under the Hood
The benchmarks Anthropic shared show why Mythos Preview is being treated as a step change rather than an incremental improvement:
| Benchmark | Mythos Preview | Opus 4.6 |
|---|---|---|
| SWE-bench Verified | 77.8% | 53.4% |
| SWE-bench Pro | 82.0% | 65.4% |
| SWE-bench Multilingual | 59.0% | 27.1% |
| Terminal-Bench 2.0 | 87.3% | 77.8% |
| Cybersecurity Vuln Reproduction | 83.1% | 66.6% |
That cybersecurity vulnerability reproduction benchmark is particularly telling—it measures the model’s ability to reproduce known vulnerability exploitation patterns. An 83% success rate means the model can reliably understand and recreate sophisticated attack techniques.
The coding improvements aren’t just about finding bugs, either. Better agentic coding capability means better ability to understand complex codebases, trace data flows, identify logic errors, and propose fixes—all autonomously.
What This Means for Enterprise Security Teams
If you’re running a security team, Glasswing signals several things:
1. Your scanning tools are about to become obsolete (or be upgraded dramatically)
Traditional SAST, DAST, and fuzzing approaches have well-known blind spots that AI can transcend. The FFmpeg vulnerability—hit five million times by automated tests without detection—is a perfect example. Expect security vendors to rapidly integrate AI-powered analysis, or be disrupted by those who do.
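To see why raw coverage numbers can mislead, here is a deliberately simplified Python sketch. It is not the actual FFmpeg code; the parser, field names, and trigger condition are invented for illustration. A fuzzer executes the vulnerable line on essentially every input, so coverage looks perfect, yet the crash condition is a narrow conjunction that random mutation almost never satisfies.

```python
import os


def parse_chunk(data: bytes) -> bytes:
    """Toy parser: a 2-byte big-endian length header, then the payload."""
    if len(data) < 2:
        return b""
    declared_len = int.from_bytes(data[:2], "big")
    payload = data[2:]
    # A fuzzer executes this check on virtually every input, so line coverage
    # looks perfect. The defect only fires when the declared length is the
    # 0xFFFF sentinel AND the payload carries a specific sync marker -- a
    # conjunction that random mutation almost never produces.
    if declared_len == 0xFFFF and payload.startswith(b"\x00\x00\x01\xb3"):
        raise MemoryError("stand-in for an out-of-bounds write")
    return payload[:declared_len]


if __name__ == "__main__":
    # 100,000 random inputs cover the buggy branch check every time
    # without ever triggering it.
    for _ in range(100_000):
        parse_chunk(os.urandom(16))
    print("fuzzed 100,000 inputs; vulnerable code covered, no crash found")
```

Coverage-guided fuzzers do better than this naive random loop, but the underlying gap remains: executing a line is not the same as exercising the state that makes it dangerous, and that gap is exactly what a model that reasons about code semantics can close.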
2. Patch cycles need to accelerate
When AI can find vulnerabilities in hours that humans missed for decades, your 30-day patch window becomes a liability. The entire industry needs to rethink how quickly fixes move from discovery to deployment.
3. Supply chain visibility becomes critical
If AI can find vulnerabilities this effectively, you need complete visibility into every dependency in your stack. That open-source library you pulled in four years ago? It’s time to inventory and assess everything.
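As a starting point, the sketch below (a minimal illustration, not a complete SBOM pipeline) inventories every package installed in the current Python environment and checks each one against the public OSV.dev advisory database. The endpoint and payload shape follow OSV’s documented v1 query API, but treat them as assumptions to verify against the current documentation; purpose-built tools such as pip-audit or a dedicated SBOM generator go further, covering lockfiles, container images, and transitive build dependencies.

```python
import json
import urllib.request
from importlib.metadata import distributions

OSV_QUERY_URL = "https://api.osv.dev/v1/query"  # public OSV.dev query endpoint


def known_vulns(name: str, version: str) -> list[str]:
    """Ask OSV.dev whether a given PyPI package/version has published advisories."""
    body = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": "PyPI"},
    }).encode()
    req = urllib.request.Request(
        OSV_QUERY_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        result = json.load(resp)
    return [v["id"] for v in result.get("vulns", [])]


if __name__ == "__main__":
    # Inventory every package installed in this environment and flag anything
    # with a known advisory.
    for dist in distributions():
        name, version = dist.metadata["Name"], dist.version
        advisories = known_vulns(name, version)
        if advisories:
            print(f"{name}=={version}: {', '.join(advisories)}")
```

The point is less the specific tool than the discipline: you cannot patch on an AI-era timeline if you cannot even enumerate what you are running.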
4. Offensive capabilities will proliferate
This is the uncomfortable reality: defensive AI is being deployed by responsible actors now, but offensive AI will spread to adversaries. Your threat model needs to account for attackers with AI-augmented capabilities within the next 12-18 months.
The Responsible Disclosure Question
Anthropic explicitly stated they don’t plan to make Claude Mythos Preview generally available. This raises interesting questions about the dual-use nature of AI security tools.
On one hand, restricting access prevents the model from being used for offense by bad actors. On the other hand, it creates a concentration of capability among a limited set of partners, leaving others less protected.
The proposed solution—developing robust safeguards and launching them with an upcoming Claude Opus model—suggests a path toward broader access with guardrails. But the tension between capability diffusion and safety will remain a central challenge.
What Comes Next
Project Glasswing is explicitly described as “a starting point.” Anthropic envisions this growing into a larger effort involving:
- Updated vulnerability disclosure processes
- Accelerated software update mechanisms
- Enhanced supply chain security standards
- Evolution of secure-by-design practices
- Industry standards for regulated sectors
- Automated triage and patching workflows
The 90-day public report will be worth watching. But more importantly, the collaboration model here—competitors working together on shared security challenges—may be as significant as the technical capabilities themselves.
The Bigger Picture
We’re at an inflection point where AI capability has crossed a threshold that “fundamentally changes the urgency required to protect critical infrastructure from cyber threats,” as Cisco put it. There’s no going back.
The optimistic read: defenders can use the same capabilities that make AI dangerous to build more secure systems than humans ever could. AI that can find a 27-year-old vulnerability in OpenBSD can also ensure new code doesn’t ship with similar flaws.
The realistic read: we’re in a race. Defensive AI needs to be deployed widely before offensive AI becomes a standard tool for adversaries. Project Glasswing is an attempt to give defenders a head start.
The question for everyone building and maintaining software is simple: are you ready for a world where AI-powered vulnerability discovery is the baseline, not the exception?
What do you think about AI-powered cybersecurity? Are you concerned about the dual-use implications, or optimistic about defensive applications? Drop your thoughts in the comments or reach out to us on social media.