The World Just Changed and Almost Nobody Noticed
Mythos found and exploited security flaws no human caught for 27 years. The same reasoning engine is coming for legal research, financial modeling, and every other expertise bottleneck.
An AI Just Taught Itself to Hack. Here’s What That Means for Your Firm.
TL;DR: Anthropic’s Mythos Preview is not “the singularity arrived.” It’s a general-purpose AI that taught itself to break into software without anyone training it to. Other major labs (particularly OpenAI) appear poised to launch similarly powerful models. We’re entering a period of radical, uneven transformation, and ten years from now, the scarce thing won’t be intelligence. It’ll be trust, governance, and the willingness to actually change how we do things.
I’ve been training attorneys on AI for a couple of years now. Two thousand plus across fifty-some firms. And I had a moment this week where I sat at my desk reading the Mythos announcement and thought: I have no idea how to prepare people for this. Not because the technology is confusing. Because the implications are so broad that every conversation I’ve been having with law firm leaders, every carefully scoped “here’s how to use AI for document review” session, suddenly felt small.
That’s not a comfortable thing to admit when advising people on AI is literally what I do. But I think the honest reaction matters more than the polished one right now. So let me walk through what actually happened, in plain terms, because the technical details are important and they’re getting buried in hype.
What actually happened
Anthropic released a preview of a new model called Claude Mythos. They didn’t release it to the public. They gave it to a small group of partner organizations, including major tech companies and government agencies, through an initiative called Project Glasswing. The reason they’re holding it back is simple: this model can find hidden security flaws in software and then build working attacks to exploit them. Automatically. At a scale no AI system has ever demonstrated.
Think of it this way. Every piece of software you use (your browser, your operating system, your hospital’s billing system) has bugs in it. Some of those bugs have been sitting there for decades, invisible to human engineers. A small number of those bugs are dangerous: they’re the kind of flaw that a skilled hacker could use to break in, steal data, or take control of a system. Finding those bugs and turning them into working attacks used to require elite human talent. Months of painstaking work. Mythos does it overnight.
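To make “the kind of flaw” concrete, here’s a minimal, purely illustrative sketch in C of one classic category: the buffer overflow. This is a textbook example written for this post, not code from Firefox or anything Mythos actually found.

```c
#include <stdio.h>
#include <string.h>

/* A textbook buffer overflow, shown for illustration only. */
void greet(const char *name) {
    char buffer[16];       /* room for 15 characters plus a terminator */
    strcpy(buffer, name);  /* BUG: no length check. A name longer than 15
                              characters writes past the end of the buffer,
                              corrupting adjacent memory. */
    printf("Hello, %s\n", buffer);
}

int main(void) {
    greet("Alice");  /* harmless input stays inside the buffer */
    /* Attacker-controlled input longer than the buffer can overwrite
       adjacent memory and, in the worst case, hijack the program. */
    return 0;
}
```

A flaw like this is easy to spot in fifteen lines. Buried somewhere in the tens of millions of lines behind a browser or a hospital billing system, it can sit undetected for decades. That haystack is what Mythos searches overnight.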
The numbers are striking. Anthropic’s previous best model, Opus 4.6, turned a discovered software flaw into a working attack essentially zero percent of the time. Mythos Preview does it 72.4% of the time. On Firefox alone (the web browser), Opus 4.6 managed two successful attacks out of hundreds of attempts. Mythos succeeded 181 times. Anthropic engineers with no formal security training asked the model to hunt for vulnerabilities while they slept. They woke up to a fully functional attack, ready to use.
Here’s what makes this different from every AI headline you’ve read before. Nobody trained Mythos to do this. They didn’t build a hacking tool. They built a generally smarter AI, better at reasoning, better at writing code, better at working through complex problems. And the ability to find and exploit security flaws just showed up. It emerged on its own, the same way a person who gets dramatically better at math might suddenly find they’re also better at music theory without ever studying it.
That emergence is the story. Not the hacking. The hacking is a symptom.
Why this isn’t just a cybersecurity problem
I keep seeing people frame Mythos as a cybersecurity event. It is that, yes. But if you stop there, you’re missing the bigger picture.
If a model got this good at finding hidden flaws in software simply by becoming a better general reasoner, then nothing fundamental stands between it and excellence at drug discovery, legal analysis, financial modeling, and scientific research. The gap is a matter of months. Maybe a year. The underlying capability is general. The applications are everywhere.
There’s a theory floating around that whoever crosses this threshold first is on an exponential curve nobody can catch. I don’t fully buy it. Here’s why. These capabilities emerged from general intelligence gains, not from some proprietary cybersecurity secret sauce, so every lab approaching this threshold will cross it roughly together. Logan Graham, Anthropic’s red-team lead, told Axios that comparable capability at other labs could arrive within 6 to 18 months (I personally think much sooner). OpenAI already says GPT-5.4 Thinking has mitigations for “High capability in Cybersecurity,” and it is now teasing its next-generation model, “Spud.” Google says Gemini 3.1 Pro is its most advanced model for complex tasks, but I wouldn’t be surprised to see a big jump to a 3.5 release come Google I/O in May.
Where the compounding advantage is real is the feedback loop. Anthropic is now using Mythos to find bugs in its own systems, to secure its own infrastructure, to make the development of its next model safer. So the model helps build the next model, which helps build the one after that. The advantage isn’t “we have the best model.” It’s “our model makes our next model better and safer to build.” That’s genuinely hard to replicate from behind. But it’s not a permanent monopoly. The likely outcome is a small group of frontier labs, maybe three or four, running close together. Leads that compound but eventually diffuse.
And I should be honest here: I’m closer to Anthropic than to the other labs. I use all of the major tools daily, but Claude is my primary; I recommend it to clients, and I think Anthropic’s approach to safety is the most serious in the industry. So take my read on the competitive dynamics with appropriate skepticism.
What Mythos didn’t do (and why that matters too)
The most important thing Anthropic said this week is what Mythos has not done. It hasn’t crossed into fully autonomous self-improvement. It doesn’t meet Anthropic’s own threshold for automated AI R&D, which basically means: the model can’t yet sit down and design its own replacement without human oversight.
But. It did rediscover four of five key insights in an unpublished machine learning task. And Anthropic says the capability trajectory “bent upward” at this generation, meaning the rate of improvement is accelerating. So we’re not in a full recursive loop where models are designing their successors. We’re in the waiting room next door. Close enough to hear the conversation through the wall.
That distinction matters because it tells you the timescale. We are probably entering an era of recursive assistance rather than full recursive autonomy. Fast enough to reshape every institution you care about. Slow enough that politics, law, energy constraints, and plain old organizational inertia still matter. A lot.
What changes, and where it hits hardest
Healthcare is probably where the stakes are highest in both directions. If a general-purpose model spontaneously learned to find security flaws that human experts missed for 27 years, that same reasoning architecture applied to protein structures, drug interactions, and patient data will produce similar surprises. WHO estimates an 11 million health-worker shortfall by 2030, and the first big AI wave in healthcare isn’t “AI replaces your doctor.” It’s AI collapsing the paperwork, speeding up evidence review, improving triage, and giving every clinician something like an extra team member who never sleeps. Anthropic is already building HIPAA-ready tools for insurance approvals, claims appeals, and care coordination.
Now flip that around. Healthcare data systems are among the most vulnerable in the world. Remember the Change Healthcare attack? Now imagine that kind of breach, but the attacker has Mythos-class capability. I don’t love thinking about that. But you have to.
Education is going to break socially before it stabilizes institutionally. I don’t say that to be dramatic. UNESCO already says AI progress has outpaced education policy. Roughly two-thirds of universities have AI guidance in place or are developing it, but confidence is uneven, and one in four respondents reported ethical issues. The near-term shock isn’t a shortage of tutoring. It’s that homework, take-home exams, and the whole system we use to infer what a student actually knows just stop working. Every student is about to have access to a tutor that reasons at an expert level. That’s incredible. But nobody has answered the harder question: what does “learning” even mean when the AI can do the work?
The good version of this is personalized education at scale, for every kid regardless of income or geography. I genuinely believe that’s possible. But I also watch my clients’ firms struggle to adopt a new document management system over 18 months, and I wonder how we expect a public school district to reinvent pedagogy in the same window. The more likely near-term reality is a lot of institutions pretending the old model still works while quietly watching it fall apart.
Climate science could get one of the largest boosts. DeepMind’s GraphCast already outperformed traditional weather forecasting systems on nearly 90% of evaluated targets, producing 10-day forecasts in under 60 seconds. If Mythos-class reasoning generalizes into climate modeling, materials discovery, and grid management, it could compress years of scientific iteration into months. But the IEA projects electricity consumption for data centers more than doubling by 2030. So AI can be a climate accelerator or a climate liability, depending on whether the energy infrastructure can keep up. There’s a useful way to think about this: an AI can design a breakthrough reactor in 30 seconds, but humans will still spend a decade arguing about where to build it.
Economics and jobs are where this gets personal for most people reading this. Anthropic’s Economic Index says 49% of jobs have seen AI usage for at least a quarter of their tasks. The International Labour Organization says one in four workers globally are in occupations with some degree of AI exposure, but that most jobs are more likely to be transformed than eliminated. The International Monetary Fund reports that employment in AI-vulnerable occupations is already 3.6% lower after five years in regions with high demand for AI skills.
Here’s the logic that should worry people. When a model can autonomously find every hidden flaw in a codebase overnight, the cost of a huge swath of knowledge work starts falling toward zero. Not just coding. Legal research, financial analysis, consulting, medical diagnosis. Every profession built on “I know things you don’t, and I can apply that knowledge faster than you can” is facing compression.
I’m not predicting instant mass unemployment. I am predicting pressure on junior white-collar career ladders, fewer training roles, smaller teams producing dramatically more output, and growing returns to whoever controls compute, distribution, trusted data, and customer relationships. If you’re a managing partner wondering whether this affects your associate pipeline, the answer is yes. Sooner than you think.
Where this goes from here
Okay, I want to shift gears. Everything above is about what’s happened and what it means right now. The harder question is what happens next, and I’ll be honest, I kept changing my mind on this while writing.
I think there are really three paths, and they’re not equally likely. The whole thing hinges on one question: can institutions move fast enough to keep up with the technology? My gut says no. But let me lay out all three.
The best case: managed acceleration. Glasswing-style defensive coalitions spread across healthcare, finance, infrastructure, and open-source software. AI becomes standard for security, research assistance, and workflow automation within a year. Within five, personalized tutors, clinical copilots, and scientific research agents are ordinary. Most knowledge workers become “AI managers plus domain owners.” Within ten, the cost of expertise drops dramatically and society gets something approaching an abundance of cognitive capacity. I’d love to tell you this is the most likely path. But it requires institutions to move faster than they ever have, and I haven’t seen much evidence that they will.
The scary version: chaotic diffusion. The security benefits are real but unevenly distributed. Large enterprises and governments harden their systems. Small businesses, hospitals, and municipal systems don’t, and become primary targets. A major AI-enabled cyberattack takes down a regional power grid or healthcare system, and people die. Public backlash intensifies. Regulatory responses are panicked and contradictory across jurisdictions. Job displacement in legal, financial, and coding sectors is visible, but companies call it “restructuring.” Within five years, a two-tier economy: people who know how to work with AI are thriving, people who don’t are falling behind fast. And the gap isn’t just individual. It’s geographic. Within ten years, society is dramatically more capable and dramatically more unequal. I don’t think this is the most likely path either, but I think pieces of it are almost certain to happen.
The messy middle: fast models, slow institutions. This is the one I keep coming back to. Software, security, legal ops, and finance move fast. Healthcare, education, and government move more slowly because trust, liability, and integration are genuinely hard. Not because the people in those fields are resistant, but because the stakes of getting it wrong are higher. AI is embedded almost everywhere within five years, mostly as a copilot or first-pass agent rather than a sovereign decision-maker. Within ten, the defining scarcity is not intelligence. It’s trust, legitimacy, governance, and access. Some organizations redesign around these tools and pull ahead. Others bolt AI onto broken processes and wonder why nothing improved.
The most likely path
The messy middle, with real damage from the chaotic version mixed in.
Here’s my reasoning, and I’ll admit it’s partly instinct built on watching 50+ law firms try to adopt AI over the last two years. The technology trajectory in the best case is plausible. The institutional adaptation is not. Humans don’t reorganize at the speed technology moves. We never have. The printing press took a century to reshape education. The internet took 25 years to reshape commerce. AI is moving faster than both, and I still don’t think we’ll keep up.
But atoms still move slower than bits. An AI can design a cure for cancer, but human clinical trials still take years. An AI can design optimal energy grids, but zoning laws and supply chains and community opposition don’t care how smart the model is. The real constraint going forward isn’t intelligence. It’s everything else.
One year from now, I expect Mythos-class capability across multiple labs, with cybersecurity and software development seeing the first real shakeup. Five years out, AI is the default operating layer for knowledge work, tutoring, research, and a big chunk of clinical and educational administration, but not a full substitute for doctors, teachers, or scientists. Ten years out, the world looks dramatically different, but unevenly. Some organizations and countries will have rebuilt around these tools. Others will still be debating whether to allow them.
So what do you actually do
If you run a firm, a company, or a team: stop treating AI as a productivity tool and start treating it as a strategic shift in how your business works. Every law firm, consultancy, and knowledge-work business that isn’t rethinking its delivery model right now is going to look very different in five years, one way or another. And I don’t mean “add a chatbot.” I mean rethink who does what, how you price it, and what your clients are actually paying for.
If you’re an individual professional: become AI-literate now. Not “learn to code.” Learn to work with AI. Learn to direct it, evaluate its output, and know when to trust it and when to push back. This is the most valuable skill for the next decade. It’s not optional anymore.
If you’re in government or policy: invest in the transition infrastructure before the crisis hits. Retraining programs, updated safety nets, regulatory approaches that can adapt instead of ossifying on first contact with reality. And treat frontier AI as critical infrastructure. Hospitals, utilities, schools, and local governments need the same kind of defensive support Glasswing is providing to tech companies. Not after the breach. Before it.
And across all of it, for everyone: we need to start talking seriously about distribution. If the cost of cognitive work drops toward zero and productivity spikes, then shorter workweeks, new tax structures, and updated social insurance models stop sounding like fringe ideas. They start sounding like the obvious next step.
The bottom line
The biggest mistake right now would be to spend the next year arguing about whether this is “AGI” while leaving every institution exactly as it is. The smartest entity in the room is not always going to be a person anymore. That’s just true. And most of the systems we’ve built, from how we train lawyers to how we credential doctors to how we fund schools, assume that it is.
What comes next is going to be messy and uneven and, in a lot of places, genuinely painful. It’s also going to produce breakthroughs that save lives and open doors that were never open before. Both of those things will be true at the same time, in the same decade, sometimes in the same city.
The window to get ready is smaller than most people think. I know that because I thought I was ready, and this week showed me I wasn’t.
You’ve probably read a dozen “AI just changed everything” posts this month. I get it. Most of them are written by people selling picks and shovels during a gold rush. I’m not going to pretend I’m above that dynamic. I advise firms on this stuff for a living. But I also sat at my desk this week and felt genuinely uncertain for the first time in a while, and I think that honesty is worth more to you than another confident prediction.
If you’re running a firm or leading a practice group and you want to talk through what this actually means for your people and your pricing, I’m easy to find. steve@intelligencebyintent.com. No pitch deck, no sales call. Just a conversation about what’s coming and what to do about it.