You don’t hire bouncers in the hope that they’ll smash up your bar, and you don’t hire bank guards in the hope that they’ll break into your vault. But in tech, keeping systems safe starts with attacking them, relentlessly and with all the ingenuity you can muster. Serious software vulnerabilities that make it into products used by the public—browsers, operating systems, login portals, apps—are hard to find, almost by definition: If a grave weakness had been discovered during testing, any responsible developer would have patched the code before the product was released. After release, the best way to find weaknesses is to act like a bad guy. You hack the system, probe for soft spots, find mistakes you can exploit. Then you fix them, before the actual bad guys get there.
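To make that concrete, here’s the kind of textbook mistake a pentester probes for. The sketch below is deliberately simplified C, not code from any real product: a function copies user input into a fixed-size buffer without checking the length, the classic stack buffer overflow.

```c
#include <stdio.h>
#include <string.h>

/* The classic soft spot: copying attacker-controlled input into a
   fixed-size buffer with no bounds check. Any input longer than 63
   characters overruns `name` and corrupts adjacent stack memory,
   which a skilled attacker can sometimes turn into code execution. */
void greet_unsafe(const char *input) {
    char name[64];
    strcpy(name, input);  /* no length check: the bug */
    printf("Hello, %s\n", name);
}

/* The fix is one line: never copy more than the buffer can hold. */
void greet_safe(const char *input) {
    char name[64];
    snprintf(name, sizeof(name), "%s", input);  /* truncates safely */
    printf("Hello, %s\n", name);
}

int main(void) {
    greet_unsafe("world");  /* harmless only because the input is short */
    greet_safe("world");
    return 0;
}
```

Spotting the flaw in ten lines of toy code is trivial; the hard part, and the expensive part, is finding its equivalents buried in millions of lines of production software.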
For most of the tech industry’s history, discovery has been the biggest challenge in cybersecurity. Patching bugs may be what actually improves system safety, in the same way that catching fish is what actually gets you dinner. But you have to find the fish before you can catch them, and a code base is a vast and highly opaque ocean. (This is why good hackers make such desirable candidates for IT security jobs.) Security firms that run pentests—penetration tests, in which security professionals try to break through a piece of software’s defenses—typically earn $20,000 to $120,000 per test, according to the corporate research firm Forrester. What determines the price a firm can charge within that range is how good it’s perceived to be at finding unknown vulnerabilities.
If we can believe the AI industry—admittedly not the most trustworthy source, as we’ll discuss in a minute—all of that may be about to change. Last month, Anthropic announced that it had developed a new version of its Claude model, code-named Claude Mythos, that is capable of finding zero-day vulnerabilities in code, which is the industry’s term for those serious flaws that an application’s makers haven’t spotted. The term “zero-day” dates back to the early days of computing, when hackers would break into developers’ computers to steal unreleased software that had been out for zero days; today, it usually refers to either the number of days developers have known about a flaw or the number of days they’ve had to fix it, which in both cases is zero.
Zero-day vulnerabilities are the biggest quarry in cybersecurity because if bad actors find them before developers do, they can be exploited indefinitely without anyone being the wiser. If you know that someone stole your bank password, you can change it and minimize the damage; if you don’t know, whoever has your credentials can watch your transactions for as long as they want and transfer all your money out whenever they feel like it. National governments search for zero-day vulnerabilities and stockpile them to use against their enemies. An AI agent that gave the same power to anyone who had access to it could upend the entire paradigm of cybersecurity. Discovery would no longer be the bottleneck because the agent could analyze seas of code in the blink of an eye. All the zero-day vulnerabilities would, in theory, be exposed almost at once, to be fixed or exploited, depending on who was in control.
Anthropic’s announcement stressed two key points about its new AI agent:
- Mythos is too dangerous to be released to the public because if it falls into the wrong hands, it could give malicious actors—cybercriminals, terrorists, hostile governments—access to the back doors and weak points in all the world’s software systems simultaneously. Anthropic released a 244-page report detailing the results of its testing with Mythos but withheld access to the software from regular users.
- Anthropic was launching a new initiative to minimize the danger of Mythos, called Project Glasswing, named for a staggeringly lovely species of transparent-winged butterfly. Under Project Glasswing, Anthropic would share an early version of Mythos, named Mythos Preview, with around four dozen organizations, mostly big tech companies and major banks. This, Anthropic said, would allow some of the most critical vulnerabilities to be fixed before anyone else could use Mythos to find them.
How big a deal is Mythos? It depends on whom you ask. Some analysts think it will revolutionize the way software is vetted. “Anthropic is now the most important partner for every cybersecurity company,” the industry analysts at Forrester say. Other analysts think it’s all marketing hype. “Half-assed sub-War of the Worlds … horseshit,” the AI critic Ed Zitron wrote of Anthropic’s announcement. Still other analysts—and these are the ones who’ve gotten the most media attention—think it will bring about “bugmageddon,” potentially destroying the tech landscape as we know it and taking key pillars of modern society with it. “We could be on the brink of total internet collapse,” the BBC says. The White House is reportedly concerned enough about Mythos that it’s considering giving the government a more active role in AI oversight, potentially even requiring tech companies to put new AI models through a federal review process before they can be released. This would represent a dramatic reversal of the Trump administration’s previously hands-off approach to AI regulation.
So how powerful is this thing, really? Is there actually a chance that it will destroy the internet? Just how hopeful and/or terrified should we be?
Let’s try to separate the facts from the speculation. And because we deserve to look at something nice while we contemplate our powerlessness in the face of the latest threat to civilization, let’s have the glasswing butterfly guide us along our path.

Butterfly no. 1: Why are some people so freaked out about an AI agent that helps identify zero-day vulnerabilities? Isn’t this a giant gift to cybersecurity professionals?
People are freaked out because Anthropic, in its Mythos announcement, went out of its way to freak everyone out. The company didn’t just say that Mythos was dangerous in some abstract way. According to Anthropic, Mythos had discovered thousands of zero-day vulnerabilities in just a few weeks of testing. It had found weaknesses in “every major operating system and every web browser.” Some of the flaws it found were decades old; some were in open-source applications, like the video encoder FFmpeg, that are deployed across hundreds of other software products. The announcement conjured an image of an enormously fragile software ecosystem, cobbled together with old, flawed code and ready to collapse if the wrong people found the flaws.
Mythos, in this vision of the tech universe, is a piece of security software so good that it doubles as the ultimate security threat. Used responsibly, it could help developers fix problems with unprecedented speed and efficiency; used irresponsibly, it could imperil the world’s entire digital infrastructure. The fallout “for economies, public safety, and national security,” Anthropic said, “could be severe.”
Large language models—the complex neural networks, designed to work with natural language, that form the basis for AI chatbots like Claude—are rife with problems and often don’t perform their intended tasks reliably; they struggle, for instance, to answer questions accurately. But LLMs, and especially Claude, are better at coding than at just about anything else they do. Now, Anthropic says, AI models “have reached a level of coding capability where they can surpass all but the most skilled humans” at discovering and exploiting security flaws in software. You’ve probably heard about “vibe coding,” the practice of using AI to write software from natural-language prompts. Anthropic now seems to be acknowledging the possibility of a vibe apocalypse.

Butterfly no. 2: What do we know about Project Glasswing?
Leaving aside the question of why it was necessary to develop a product that imperils economies, public safety, and national security in the first place, Anthropic’s response to the danger it created was to try to make sure that only the good guys could initially use Mythos. Under Project Glasswing, Anthropic granted access to Mythos Preview to around four dozen organizations, including Apple, Google, Microsoft, the Linux Foundation, Amazon Web Services, NVIDIA, and JPMorganChase. These companies are allowed to use Mythos Preview in their “defensive security work,” bolstering the safeguards of the web while keeping Mythos itself secure for as long as possible. The idea is that if Google, Amazon, et al. have time to tighten up their code before Mythos-enabled attackers go after it, some of the worst consequences can be avoided.
Will it work? Who knows. Anthropic is investigating claims that Mythos has already been accessed by unauthorized users, which isn’t a great sign. And is it excruciatingly ironic for Anthropic to say that only the good guys get to use its superweapon and then turn that superweapon over to a bunch of tech behemoths and investment banks? You can answer that one yourself.

Butterfly no. 3: So wait. Anthropic is being really open about the dangers Mythos poses. Isn’t that a clear sign that Mythos is legit? Why would a company exaggerate the negative potential of its products?
Because the AI industry has been driven from the beginning by wildly overwrought claims, many of them pertaining to the destructive potential of its products. “Too dangerous to release to the public” is a move the industry has pulled before. In fact, it’s a move Anthropic CEO Dario Amodei has pulled before; when he was an executive at OpenAI, Amodei and other OpenAI leaders released a statement declaring that the language model GPT-2, a precursor of ChatGPT, was simply too powerful for the general public to be allowed to use. That was in 2019; the statement, which generated a wave of media attention and excitement around new breakthroughs in AI, didn’t prevent GPT-2 from being widely released a few months later. The world did not end; seven years later, ChatGPT’s current model, GPT-5, still can’t set a timer.
It may seem like a strange tactic for companies to scaremonger about their own products. When Ford rolls out a new pickup truck, the CEO generally doesn’t go around giving keynote addresses about how much more lethal it will make American highways. But the AI industry is selling a narrative—a mythos, if you will—as much as it’s selling a product, and that narrative is one of revolutionary, transformational power. “Our product can make your life a bit easier, although there are still a lot of kinks to iron out” is not a trillion-dollar sales pitch; “we’ve invented something so powerful that it has the potential to destroy humanity” is. The company that can end the world controls the future, and investors will spend big on that upside bet. After all, if the world ends, an investor’s losses won’t matter anyway.
Think about the way the industry talks about itself. AI isn’t just another tech gizmo. It’s bigger than the internet. It’s bigger than the smartphone. It’s going to reshape human society. It’s going to put millions out of work. It’s going to eliminate money. It’s going to surpass human intelligence. It’s going to replace humans altogether. It’s going to kill all humans. It’s going to be profitable beyond your wildest dreams, at least at some point, although definitely not today.
“It’s just part of this pattern of unsubstantiated claims of power,” according to Emily M. Bender, a technology scholar and the coauthor of The AI Con. Bender points out that companies like OpenAI and Anthropic use the narrative about some far-off, notional apocalypse to distract from the social and environmental damage they’re doing in the present.
I think they also use it to distract from the fact that their products thus far have been largely underwhelming. If the integration of AI into Google Search had been rolled out quietly and evaluated on its merits, it would have gone down as one of the most disastrous tech launches of all time. Your only job is to give me accurate information; you did a decent job of it yesterday, and today you’re telling me to put glue on pizza? But when the same rollout comes slathered in hype—when I’ve been conditioned to experience it as part of a narrative about civilizationally transformative technological innovation—I’m less likely to judge it on its merits, because even its shortcomings can be reframed as marks of the disruptive nature of progress. I can’t see what’s in front of me; I can see only the story I’ve been told.
The tendency of AI companies to talk about the dangers of their products may make people hate the industry (and people really hate the industry). But it also keeps people from saying, “This is kind of neat, I guess? But it’s super buggy and not all that useful.” The Mythos announcement can be understood in that light: It might make people leery of Anthropic, but it makes Mythos seem like a huge deal, which is ultimately what Anthropic wants.

Butterfly no. 4: So Mythos is probably just a big PR campaign? Are there any reasons to think there’s more to it?
There are many reasons to think that Anthropic’s claims about Mythos are exaggerated—for one thing, many of the “thousands” of zero-day vulnerabilities it found are minor, can’t really be exploited, or otherwise aren’t as dangerous as they sound—but there are also reasons to think they’re legitimate, or at least legitimate-ish. For instance, the Mozilla Foundation, the nonprofit organization that develops Firefox, announced that it used Mythos Preview to find 271 bugs in its open-source web browser. That seems like a real thing! While most of the 271 bugs seem to have been relatively low priority, at least three seem to have been given a “high” impact rating. (I’m saying “seem to have been” because it’s not entirely clear from the Firefox changelog which of the most recently fixed vulnerabilities were detected by Mythos.)
Mozilla has been investing heavily in AI startups in an effort to tilt the balance of power in the industry away from the established players, forming a “rebel alliance of sorts,” in the words of Mozilla’s president, Mark Surman, who sees the nexus of established AI powers as “this thing that threatens us.” That is to say: Mozilla doesn’t necessarily love Anthropic, so if it’s acknowledging that Mythos has been useful in debugging Firefox, there’s little reason to doubt the endorsement.

Butterfly no. 5: So what’s the truth about this thing? Is Mythos terrifying or not?
The truth is that we can’t say definitively because Mythos, at this point, is barely more than a rumor. So much is still unknown. Hardly anyone in a position to evaluate it disinterestedly has used it. It’s conceivable that the Anthropic announcement is 100 percent accurate, that AI really has surpassed all but the most skilled human coders, and that Mythos really does represent both an existential threat to the internet and a revolutionary hope for cybersecurity. It’s also possible that none of this is true, that Anthropic isn’t releasing Mythos because Mythos is a dud, and that Project Glasswing is a bit of clever PR designed to conceal the slowing pace of LLM development, which would crush Anthropic’s dreams of a $900 billion valuation if it were generally known.
That $900 billion is an important number. It puts Anthropic’s announcement in a context that was weirdly missing from most of the media reports surrounding Mythos. Anthropic has a major funding round coming up in a couple of weeks. It’s hoping to use that round to surpass OpenAI’s valuation and become the biggest AI company in the world. It’s hoping that doing so will set it up for a colossal IPO later this year. Anything Anthropic says now has to be interpreted in light of those plans. And with that in mind—combining the little we know about Mythos with the observable patterns of the AI industry and Anthropic’s current market ambitions—I think the likeliest guess is that Mythos is not nearly as terrifying as we’ve been led to believe.
Remember, Mythos is a version of Claude, a full-scale chatbot, and wasn’t designed solely or even primarily for cybersecurity applications. The company’s decision to focus its announcement so heavily on Mythos’s ability to find zero-day vulnerabilities suggests an attempt to find a plausibly revolutionary new feature to dazzle investors with—to answer the question “What can we sell as a new capability that proves our model is advancing by leaps and bounds and disrupting the industry?” And the decision to portray the new model as too dangerous to release into the wild suggests an attempt to answer a follow-up question: “And how can we maximize the attention we get for it?”
That’s not to say that Mythos can’t do some of what’s been claimed; Project Glasswing may have real utility for organizations like Mozilla. But for Amodei, the overriding point is seemingly to convince investors that Anthropic is central to the future of humanity. He’s used the “too dangerous for the public” technique before, and now, I suspect, Anthropic is replaying the hits to wring every cent out of its hoped-for IPO.
Again, this is just a guess, but I think it’s a pretty safe one. I suspect the AI analyst Gary Marcus, an industry skeptic, has it right: “The model itself is incrementally better than previous recent models, but certainly not an off-the-chart breakthrough. … To a certain degree, I feel that we were played.”

Butterfly no. 6: What would it look like if Mythos destroyed the internet?
No one knows because no one knows where the vulnerabilities Mythos might uncover are, how numerous they are, or how severe they are. If a Joker-level chaos agent suddenly acquired total power over all digital infrastructure and opted to obliterate it, it would mean that no websites would load, the world’s industries and militaries would be plunged into disarray, banking and insurance would collapse (AI really would bring about the end of money!), hospitals would stop functioning, the power grid would likely fail, and the social order would be pushed to the brink of disintegration. Fascinating premise for a TV show; so unlikely to happen as a result of Mythos that imagining it is mostly a way to give Anthropic free ad space in your brain.
What’s much more likely to happen is some degree of limited, targeted damage. Cybercrime, for instance. Bad actors who gain access to a buffet of tasty zero-day vulnerabilities are not terribly likely to want to annihilate the internet. They’re more likely to want to make some illicit cash by theft, fraud, or some combination of the two. There’s also a chance that nations in conflict could use Mythos to attack each other’s online systems—think Putin trying to bring down Ukrainian banking—although, again, that’s already happening to some degree, using currently available tools.
It’s impossible to predict how much of any of this we’ll see. No one can assess the scale of a threat based on vulnerabilities that haven’t yet been discovered, and no one knows who might gain access to Mythos or when. If I were a crypto exchange, I might be sweating a bit; if I lived in a nation at war—and I do, I think, kind of—the situation might give me some pause. But I’ll put it this way: I’ll be surprised if the harm bad actors cause with Mythos is greater than the harm the AI industry is already causing in the world.
I could be wrong! You’ll never know if I admit it or not, however, because if the worst-case scenario happens, Google Docs will stop functioning and The Ringer will no longer exist.

Butterfly no. 7: What do we think of Anthropic as a whole? It’s supposed to be the good AI company, right?
Let me answer that by repeating the phrases “$900 billion valuation” and “trillion-dollar IPO.” I’m not sure it’s possible for “good” to coexist with those numbers. Anthropic cultivates a reputation as the thoughtful, careful, responsible AI company, the one that’s committed to using AI as a tool to assist human flourishing. (The name Anthropic comes from the Greek word for human, as seen, for instance, in misanthrope, someone who dislikes humanity.) I think this is mostly nonsense. Anthropic does strike me as the best of the new breed of tech giants, but this is less because the company itself is so moral and more because some of its peers are so plainly sociopathic. It doesn’t take a saintly corporation to be better than Palantir. Any company that’s not actively trying to create a techno-fascist police state can do it.
Anthropic does some things that look moderately heroic, such as refusing to let the Pentagon use its AI products to kill people without human involvement. This, it has to be said, is a pretty low moral bar to clear, but it’s more than OpenAI could manage, and it landed Anthropic in some real hot water with the government. On the other hand, every time the company does something laudable, more information follows that seems to make it look worse—its partnership with Palantir to develop military applications, for instance. It’s been reported that Claude was used for “target identification” as part of U.S. Central Command’s recent actions in the Middle East.
My view is that Anthropic is chiefly motivated by money, like most large corporations, and has simply incorporated an aesthetic of conspicuous virtue as a branding element to distinguish it from its competitors. In an earlier era of the internet, Google did the same with its old slogan “Don’t be evil.” We saw how that worked out.

Butterfly no. 8: Most important question. Are there really butterflies with transparent wings???
Yes! Aren’t these the most stunning little guys? I had no idea they existed until I looked up Project Glasswing’s name. I’m so excited about them. They’re a species of brush-footed butterfly. Their wings let them blend into their rainforest habitats, and although they look delicate, they’re extremely strong. They can carry 40 times their body weight! (That’s probably the weight of a Kleenex, but still.) They can fly up to 8 miles an hour and sometimes cover 12 miles in a single day. What a splendid thing to have in the world.
I really want to see one in person. They live in South and Central America, but they’ve been spotted as far north as Texas, so maybe I’ll get lucky someday. I’m not saying I’m glad Mythos exists because it led me to the glasswing butterfly, but we’ve got to take joy where we can find it these days. Mythos may or may not end the world, but at least it gave us this much.