
The story arguably begins not with Mozilla, but with the quiet, tightly controlled debut of Anthropic's Mythos Preview, an AI system released under deliberate restriction because of what it could do. Unlike earlier coding models, Mythos wasn't just good at reading code; it could reason about entire systems, chain vulnerabilities together, and even produce working exploits. Internally, it demonstrated the ability to uncover thousands of zero-day issues across major software stacks.
Recognizing these capabilities, Anthropic chose not to release the model publicly, instead limiting access through a coordinated security initiative known as Project Glasswing.
Project Glasswing itself reflects the tension at the center of this moment.
On one hand, it’s framed as a defensive effort: giving select organizations and governments controlled access to a tool that can identify and help patch critical vulnerabilities before attackers exploit them. On the other, it acknowledges a more uncomfortable reality: any capability powerful enough to systematically break software at scale will eventually exist outside controlled environments.
Early evaluations showed Mythos could not only detect bugs but combine multiple weaknesses into full exploit chains, crossing boundaries that previously required elite human researchers.
Mozilla’s involvement came as part of this broader shift.

In a blog post, Mozilla described earlier experiments with Anthropic's Opus 4.6 model, in which roughly two dozen vulnerabilities were found in Firefox 148. Building on that work, the organization escalated its approach, applying Mythos Preview to a much larger and more complex surface.
The result was immediate and disorienting: during testing on Firefox 150, the system identified 271 vulnerabilities, all of which Mozilla moved to patch in the release cycle.
What stands out isn't just the number, but the nature of the discovery process.
Many of these issues were not fundamentally unknowable; Mozilla itself noted that a sufficiently skilled human researcher could have found them. The difference was scale and speed. Tasks that would normally require extensive manual auditing or specialized fuzzing pipelines were compressed into a workflow driven by reasoning over large codebases, allowing the model to surface patterns and edge cases far more aggressively than traditional methods.
That shift has forced a reframing inside organizations like Mozilla.
Engineers described the experience less as a breakthrough moment and more as a kind of "vertigo": a sudden realization of how much latent risk might still exist in mature, widely used software.
The implication is straightforward but uncomfortable: if defenders can now uncover hundreds of vulnerabilities in a single pass, attackers equipped with similar tools may eventually do the same.
At the same time, Mozilla’s response highlights the operational cost of this new reality.
Processing a flood of high-quality vulnerability reports requires time, coordination, and resources, things that large organizations can marshal but that much of the open-source ecosystem cannot.
The imbalance raises a broader structural concern: the most critical software infrastructure in the world is often maintained by small teams or individuals, and the arrival of AI-driven vulnerability discovery could widen that gap unless support systems evolve alongside the technology.

In that sense, Mythos is less a singular breakthrough and more a signal of acceleration. The combination of automated reasoning, exploit generation, and large-scale code analysis suggests a future where vulnerability discovery is no longer the bottleneck—remediation is. Mozilla’s Firefox case becomes an early example of what that future looks like in practice: not a dramatic collapse of security, but a sudden increase in visibility, where long-hidden flaws surface all at once, demanding an equally rapid response.
The broader implication is that cybersecurity may be entering a phase where advantage depends less on who can find bugs, and more on who can fix them fastest.