In this week’s bulletin, Charlie gives insight into Claude’s Mythos and its potential negative impacts on organisations.
Mythos has been prominent in the news over the last couple of weeks. As I am keen on using AI in my daily work and have a wider interest in how it could change the business continuity profession, I keep an eye on anything new that comes out. What caught my attention about this new version of Claude was that it seemed to be a wake-up call to governments on the power of AI: for the first time, they appear to have recognised that its widespread use could lead to severe impacts on global IT. So far, most of the talk about AI has been around which jobs it may replace and how organisations may use it to replace humans, while its potential major impact on society was often relegated to science fiction. The word ‘dangerous’ is mine rather than Anthropic’s, but there was a general consensus that Mythos was so powerful it should not be allowed into general use. So what do you need to know about it?
What?
It was only on the 7th of April that Anthropic announced Project Glasswing and their new version of Claude, Mythos. New versions of AI models don’t normally attract much attention, but this one did, as it was not put on general release; instead, it was made available only to a group of handpicked technology companies.
The AI model was not released for a particular purpose but was a more general model. However, in testing, it was found to be particularly good at autonomously finding software flaws in major operating systems and web browsers and then writing code to exploit them. It can find flaws that humans cannot. Mozilla tested it on Firefox and found 10 times more flaws than they had previously identified, while a 27-year-old flaw was found in OpenBSD, an operating system renowned for its security. The Guardian, in its article on Mythos, likened the software to a burglar who has a key to every building or safe and can access them at will. [1]
If flaws can be found in software and the right tools are written or used to exploit them, access to systems may be gained or the software can be crashed, preventing it from operating. Cyber attackers can extort organisations if they can crash their systems at will, or use the vulnerability to gain access. It is not just organisations’ systems that are vulnerable; if criminals or nation states can gain access to, manipulate, or crash financial systems, this could have a massive impact worldwide. Much of commerce is based on the premise that the financial system is secure.
Due to the power of the system, Anthropic limited access to launch partners including Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks. Reuters also reported that Anthropic extended access to more than 40 additional organisations involved in critical software infrastructure. The idea was to place the software with trusted partners who could use it for good purposes, finding flaws in systems and patching them before malicious actors gained access to Mythos and exploited the vulnerabilities. Reuters reported on the 21st of April that unauthorised users had gained access to the system via a third party and that Anthropic was investigating. How long it stays in the hands of the ‘good guys’ may be debatable.
So What?
One of the most important impacts of Mythos, which follows an already established trend, is the shrinking time between a flaw in software being found and it being exploited. This window has been narrowing for years, and in an article from The Economist I listened to, it was suggested that the time from a flaw being discovered to it being exploited could now be measured in seconds. Mythos accelerates this further. Patching is critical for closing vulnerabilities, but it takes time for flaws to be reported, for patches to be written, and for them to be distributed to those affected. In the longer term, AI could help reduce this gap, but that capability is not fully realised yet.
AI can also be used to test software before release and to assess existing systems for issues, allowing them to be patched before they can be exploited. If Mythos gets into the hands of malicious users, this could become a race between those using Mythos to find and patch vulnerabilities and those using it to identify and exploit them.
Now What?
In the Economist article from which the vulnerability data was drawn [2], it was suggested that Mythos and the AIs to come would allow software vendors, providers, and SaaS companies to better review their software for flaws and patch them before release, thereby making it more secure. However, in the short term, there is likely to be significant turbulence if Mythos and similar systems fall into the wrong hands and are used maliciously. I don’t think organisations can do much differently at the moment, apart from patching as soon as updates are available and remaining alert to the fact that flaws in the software they use are now more easily found.
As business continuity professionals, I believe it is our responsibility to understand the threats facing our organisations and to ensure those threats are recognised and addressed. It may not be our responsibility to manage the risk directly, but it should at least be our role to take an overview and, if we feel the risk is being ignored or our organisation is not aware of it, to escalate it to senior management.
References
[1] The Guardian (2026) The Guardian view on Anthropic’s Claude Mythos: When AI finds every flaw, who controls the internet? 23rd April 2026. Available at: https://www.theguardian.com/commentisfree/2026/apr/23/the-guardian-view-on-anthropics-claude-mythos-when-ai-finds-every-flaw-who-controls-the-internet
[2] The Economist (2026) Common cyber-security vulnerabilities and exposures [chart], using data from Zero Day Clock. Source data available at: https://zerodayclock.com (Accessed: 22 April 2026)