Mythos found and exploited security flaws that had gone undetected by humans for 27 years. The same reasoning engine is coming for legal research, financial modeling, and every other expertise bottleneck.
Really great content, Stephen.
I've been in law enforcement for a long time, and I know that AI is going to be integrated into our workflows. If you're advising lawyers on using this technology, I'm sure I don't have to tell you how difficult it will be to get my agency—and others—to adopt it.
Law enforcement is notoriously averse to change. The old axiom, "If it's not broken, don't fix it," is almost an institutional religion. We're still working with applications that have been around for decades, and it's long past time to rethink our stack and make informed, strategic decisions about where we're headed.
Command staff know this needs to be addressed, and I believe we can find a way forward—we just need decision-makers to recognize the urgency. Funding is the biggest hurdle, with legal concerns close behind. Staff are ready and willing to commit to training, but it has to be approached with clarity and intent.
I've been developing an implementation concept, but I want to make sure I get the pitch right the first time. A poor presentation could derail any chance of moving this forward.
Your article really captured many of the concerns I believe organizations are facing, and it motivated me to get back to work on this presentation. I appreciate the time and effort you put into it—it was exactly the push I needed.
Really great assessment, based on what you think Anthropic has created coupled with your actual use of AI to date in the legal environment. I'm curious why discovering vulnerabilities results in AI exploitation rather than AI repair, a fix that makes a system whole or impenetrable. Why can't the instruction be "find and fix" instead of "find and exploit"? I am a business process improvement expert: you know, moving from the current state to a new and improved desired future state that everyone agrees to. The process identifies the actions, the obstacles, and how to get to that desired future state. It seems that AI could assess the future-state plan to correct integration conflicts for a smoother implementation. If AI is allowed to just run amok, then yes, that is guaranteed chaos.
I certainly think that with Project Glasswing they are doing exactly that: find and fix. That's why they engaged the security and OS companies.
Thank you for your fine piece. Regarding present data centers, with their huge power and water requirements, I think you might be skating to where the puck is now, rather than where Mythos might enable it to be. Tell Mythos to design new data centers with greater compute capability that use a fraction of the electricity, water, and space of present designs. It might have to design new efficient chips and the rest of the enabling infrastructure, but it seems as if that could happen in weeks or months. In other words, much faster than just pulling the permits for a new data center built on the present plan.
Great piece. Every future you predicted is one where AI, and organizing around it via first principles, becomes the “best practice” for scalable knowledge work. However, as you acknowledge, best practices will be unevenly distributed (access, energy, bureaucracy, etc.). Where do you believe contrarian value can live amid the futures you’ve detailed, where a conscious and intelligent organization actively eschews AI in favor of pure human work product: the organic offering vs. GMO produce, film developed in a darkroom vs. digital/AI video production, and so on? All contrarian strategies have a capacity constraint. You can even start with legal, given that’s your domain expertise. Thanks!
Great question, and I think you’re onto something real. In legal, there’s absolutely a market for the firm that says “every word here came from a human brain.” Family law, high-stakes criminal defense, bet-the-company litigation. Clients in those situations aren’t buying efficiency. They’re buying trust and the feeling that someone actually cared enough to sit with their problem. The capacity constraint you mentioned? That’s the whole pricing model. Scarcity is the point.
But here’s where I’d push back a little. The contrarian premium only holds if the human-only work product is actually better, or if the buyer genuinely believes it is. And in legal research, discovery, contract review, we’re getting to a point where the AI-assisted output is more thorough and faster. So the “organic” play works best in high-trust, high-emotion, relationship-driven work. The second it becomes a commodity task dressed up as artisanal, clients figure that out. The version of this I’d actually want to build isn’t “no AI anywhere.” It’s “AI everywhere invisible, human judgment front and center.” That’s the firm that wins.