
Commentary

Commentary: Anthropic's Mythos cyber scare signals the economics of AI scarcity

As the capabilities of frontier AI models advance, gaining access to the technology could become critically important, says Richard Waters for the Financial Times.


This photograph shows a figurine in front of the logo of the AI assistant "Claude" built by the US artificial intelligence safety and research company Anthropic during a photo session in Paris on Feb 13, 2026. (Joel Saget/AFP)

18 Apr 2026 05:59AM

SAN FRANCISCO: The idea that an AI model might be able to pick holes in much of today’s most widely used software has sent a shockwave through the cybersecurity world and left banks and others scrambling to assess the threat to their core technology.

To limit the fallout, Anthropic initially released the model, Claude Mythos, to a small number of tech customers to help them find and fix problems in commonly used software.

There has been less attention to the potential economic implications of this episode for the AI business. As the capabilities of the so-called frontier models advance, access to the technology could become critically important in particular industries or domains. 

That makes the limited distribution of Mythos an interesting test case for the availability and pricing of the most advanced models, with implications for the profit profile of the companies that produce them.


CYBERSECURITY CONCERNS

Worries about AI have been reverberating in the cybersecurity world for a while: Anthropic’s researchers had already claimed to have found 500 “high-severity vulnerabilities” in widely used software using Opus 4.6, a model released publicly early this year.

The company did not fully disclose the results of the tests that led it to warn of the heightened threat from Mythos, making it difficult for researchers to validate its findings. But the warning that has reverberated around the world over the past week could equally well have been sounded six months ago or six months from now, says Bruce Schneier, a US security expert.

None of this lessens the seriousness of the looming cyber threat posed by AI. 

But with OpenAI this week releasing a similar model to a limited number of customers, the lack of full details and the heightened alarms have also raised speculation about the motives of the AI companies.
 

Anthropic is already straining to meet soaring demand for its AI coding agent and simply would not have had the capacity to serve Mythos widely if it hadn’t restricted access, says Schneier. Demand for AI model usage far outstrips available supply, forcing companies to choose how to allocate strained computing resources.

Software companies that can’t get their hands on the latest AI models suddenly find themselves at a disadvantage. If they can’t reassure customers that their products have been “Mythos-vetted”, it hands a big advantage to rivals who can.

QUESTIONS ABOUT AVAILABILITY AND AFFORDABILITY

This raises important questions about the wider availability and affordability of advanced AI models as their capabilities increase. 

AI companies no doubt could - and one day will - find plenty of reasons to limit access, whether because of security or privacy concerns, or maybe for reasons of national security (an issue that has already brought a confrontation between Anthropic and the Pentagon).

It is impossible, from the outside, to tell how much this is driven by economic self-interest and how much by a sense of caution. But with limited computing resources, AI companies are already making choices about the most profitable services and customers to focus on. Anthropic is giving out US$100 million worth of credits for customers to test the model on their software - a move that might counter any criticism of profiteering. But the wider point remains.

The Mythos episode also provides fresh ammunition for critics to claim that scare stories like this help to stoke interest. 

The mystique the episode has stirred up is certainly a useful counterweight to the commoditisation narrative that has hung over the AI model builders. This holds that, with few technological or other moats around their businesses, it will be hard for them to gain any lasting differentiation. That is a particular concern as Anthropic and OpenAI race towards initial public offerings.

If limited access and AI shortages driven by scarce computing capacity become more common, it would signal that we are moving into a new economic era for the technology. Until now, the AI race has been accompanied by a deflationary spiral in model pricing as a group of companies vie for leadership.

The economics of scarcity would look very different. They would mean a slower take-off for AI, where many marginal uses are priced out and where the sky is no longer the limit. But this might at least come with higher profit margins.

Source: Financial Times/sk