For years, the debate around AI in cybersecurity oscillated between two extremes: the utopian promise of automated defense and the alarmist fear of a 'cyber apocalypse' that never seemed to materialize. Then, in April 2026, the AISI report on the Claude Mythos Preview model put an end to this uncertainty.
We are no longer talking about theory. We are discussing a machine capable of autonomously executing a 32-step attack to take full control of a corporate network.
From ‘Script Kiddie’ to Autonomous Agent

What sets Mythos apart from predecessors such as GPT-4 or Claude 3 is not just that it can write malicious code; it is its capacity for reasoning and persistence. In AISI tests, the model did more than suggest vulnerabilities: it hunted for them in the Linux kernel and Firefox, built the exploits, and executed them.
The figure that should keep CISOs awake at night is not the 73% success rate on expert-level tests but the 'Network Takeover' test: in three out of ten attempts, the AI progressed from initial reconnaissance to full control of the system. This makes AI more than a tool; it makes it an offensive agent that never rests, never slips through fatigue, and scales its knowledge at the speed of its servers.
Hype or Real Threat?

An Ars Technica article poses the necessary question: are we overhyping this? The answer is a nuanced 'no'.
It is true that AISI acknowledges these tests were run in controlled environments, without resistance from a rapid-response Blue Team. However, Anthropic's decision to limit the public release of Mythos to critical partners under the 'Glasswing Project' is the smoking gun: when creators fear their own creation, hype becomes reality.
A New 'Proof of Work'

We are entering an era in which cybersecurity will resemble a 'Proof of Work'. Defenders can no longer rely solely on patches and firewalls; they must invest substantial budgets in 'hardening tokens', that is, in using models as powerful as Mythos to probe their own systems for flaws before an attacker does.
Defense is no longer a matter of human ingenuity against human ingenuity. It is a matter of computing capacity against computing capacity.
Conclusion: A Call to Action

The UK report is a victory for transparency but a warning for the industry. If an AI can autonomously navigate a complex network, the trust-and-verify model of security is dead.
The arrival of Claude Mythos compels us to accept an uncomfortable reality: the tactical advantage is tilting towards the offensive. The only question left is whether organizations are willing to automate their defense at the same speed that attackers will automate their destruction. The era of warnings has ended; the age of offensive AI is here.