Your Competitors Are Running AI Agents. You're Still Typing Prompts.
Clients stopped asking if you use AI. They're asking why your bills don't reflect it.
TL;DR: AI agents aren’t chatbots. They run multi-step workflows on their own, and the market moved fast while most firms were still figuring out prompts. Claude Cowork, OpenClaw, Perplexity Computer, ChatGPT’s agent mode. All live. All shipping. For law firms, this changes how clients evaluate your fees. For PE firms, it’s the operational lever sitting inside every portfolio company that nobody’s pulling yet. Here’s what I know, what I’m still figuring out, and what you should do about it.
You’ve used ChatGPT. Maybe Claude or Gemini. You opened a tab, asked a question, got an answer, closed the tab.
That was the old world. And I think it ended sometime in the past year, though honestly it’s hard to pin an exact date because everything moved at once and then kept accelerating.
We’re in the agent era now. The gap between “I’ve heard of AI” and “AI runs parts of my operation” is getting wider every week. If you’re a managing partner at a law firm or you’re running a PE portfolio, this isn’t theoretical anymore.
So What’s an Agent?
Simplest way I can put it: a chatbot answers questions. A copilot helps while you work. An agent does the work.
“Summarize this PDF” is a prompt. Any chatbot handles that. But “Research 10 companies, score them on five criteria, build a comparison table, and email me the results” is an agent task. That second one involves planning, using tools, looping back when something breaks, and delivering a finished product. No human in the loop until the output lands.
The practical gap is enormous. A chatbot saves you minutes. An agent saves you hours. Sometimes days. And I’m not being hyperbolic. I’ve watched my own agents do in 15 minutes what used to take me an entire day.
Over the past year, and accelerating hard in recent months: Claude Cowork became generally available with enterprise controls. OpenClaw, an open-source agent, rocketed past 300,000 GitHub stars and became a genuine phenomenon. Perplexity launched Computer, which orchestrates 19 different AI models from the cloud at $200 a month. OpenAI folded its Operator product into ChatGPT’s new agent mode. Google is pushing agentic capabilities into Gemini everywhere it can.
That’s not a trend. That’s a land rush.
Three Ways In
Not every firm needs the same thing, and I want to be careful here because the temptation is to jump straight to the most impressive-sounding option. Don’t.
No-code desktop agents. Claude Cowork is the one I demo most often. It runs on your desktop and interacts with your files, browser, and apps. You tell it what to do in plain English. “Every Monday at 8am, pull my calendar, summarize open items from our project tracker, and draft my weekly status update.” That’s the whole instruction. It just runs. Cowork now ships with enterprise features: role-based access, spend limits, usage analytics. If you’re a practice group leader or you run an ops-heavy team without engineering resources, this is where you start. Not where you end up. Where you start.
Custom builds. This is what I did with Project Ollie. Fourteen specialized agents running on a $600 Mac Mini. I have agents for AI research, legal research, CRM management, email scanning, calendar intelligence, content pipeline, security monitoring. All autonomous. All delivering through Telegram and email. Monthly API cost is basically nothing beyond my existing subscriptions. Here’s the thing, though. The barrier isn’t money. It’s thinking clearly about what each agent should and shouldn’t be allowed to touch. I’ll come back to this in the security section because I made mistakes and I’d rather you learn from mine.
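To make the “what each agent should and shouldn’t be allowed to touch” point concrete, here’s a minimal sketch of the permissioning idea. The names and structure are hypothetical, not Project Ollie’s actual code; the point is that every agent declares its minimum scope up front and everything else is denied by default.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentScope:
    """What one agent is allowed to touch. Immutable by design."""
    name: str
    readable: frozenset
    writable: frozenset = frozenset()  # write access is earned, not assumed

    def allows(self, action: str, resource: str) -> bool:
        # Deny by default: unknown actions and unlisted resources never pass.
        if action == "read":
            return resource in self.readable
        if action == "write":
            return resource in self.writable
        return False

# One narrow agent per job, each declaring the minimum it needs.
email_scanner = AgentScope("email-scanner", readable=frozenset({"inbox"}))
crm_agent = AgentScope("crm", readable=frozenset({"crm"}),
                       writable=frozenset({"crm-drafts"}))
```

The useful property is the default: an agent you forgot to grant something to fails closed, not open.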
Managed cloud agents. Anthropic launched Managed Agents on April 8th. Perplexity Computer runs entirely in the cloud. ChatGPT agent mode, same idea. Someone else handles the sandboxing, sessions, permissions, observability. Your team focuses on the task, not the infrastructure. For engineering teams or companies that want production agents fast, this path makes a lot of sense. The tradeoff is control. You’re renting, not owning.
What Your Clients Already Expect
I want to be direct here because I think some firms are still treating AI as a “nice to have” conversation.
It’s not.
A Harvard Law School Forum piece from last month said it plainly: clients aren’t asking whether firms use AI anymore. They expect to see the benefits passed to them. More insight, more speed, more value per dollar. Not cheaper rates, necessarily. Better outcomes.
Harvey just raised $200 million at an $11 billion valuation. More than 25,000 custom agents on their platform. Over 1,300 customers across 60 countries, most of the Am Law 100. That tells you where institutional money thinks legal work is going.
And it’s not just the big legal tech players. Y Combinator’s 2025 Request for Startups told founders, in effect, to build AI-native service firms that compete with incumbents, using a law firm as the prime example. Not sell software to firms. Compete with them. AI-first firms like Crosby and Avantia are already operating on fixed-price models with no billable hours. They’re small. They won’t stay small.
The one that should really get your attention, though, is a prediction from the Debevoise Data Blog. They said corporate legal departments will increasingly bring routine legal work in-house, generate drafts with AI, and send those drafts to law firms to review. Think about what that does to the relationship. Who’s responsible for accuracy? Who bears the malpractice risk on an AI-drafted document that a client generated and a firm reviewed? Nobody has clean answers yet. But the question is coming, and it’s coming this year.
Meanwhile, Gartner says 40% of enterprise apps will feature task-specific AI agents by end of 2026. Colorado’s AI Act takes effect in June. EU AI Act compliance deadlines are currently set for August, though proposed amendments could push some obligations into 2027. If you don’t have an AI governance policy ready for your next client RFP, you’re already behind regardless of which deadline lands first.
The PE Angle
If you’re sitting on a portfolio of 15 companies, agents are the operational multiplier hiding in plain sight.
Where they create value: ops efficiency across portfolio companies, data consolidation after acquisitions, faster due diligence, automated reporting. Every one of those is a defined, repeatable workflow. That’s exactly what agents eat for breakfast.
The readiness question is simpler than people expect. Clean data. Defined workflows. Access controls. Someone accountable for AI at each company. Budget.
And the hardware barrier? Six hundred dollars. A Mac Mini running 24/7 with Claude Code proves the concept. You don’t need fourteen agents like I built. You need one. Pick the task someone does every single day that’s boring and well-defined. Automate that. See what happens. Scale from there.
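What does “one boring, well-defined task” look like in practice? Here’s a sketch of a daily briefing assembler. Everything in it is illustrative, not my actual setup: in a real build the inputs would come from your calendar and tracker APIs, and a model call would turn raw entries into the one-line summaries fed in here. The shape is the point: structured in, structured out, easy to check.

```python
from datetime import date

def build_briefing(day: date, events: list[str], open_items: list[str]) -> str:
    # Deterministic scaffolding around the task; the AI does the summarizing,
    # this does the assembling, and the output is trivially auditable.
    lines = [f"Briefing for {day.isoformat()}"]
    lines.append(f"Meetings ({len(events)}):")
    lines.extend(f"  - {e}" for e in events)
    lines.append(f"Open items ({len(open_items)}):")
    lines.extend(f"  - {i}" for i in open_items)
    return "\n".join(lines)
```

Wire that to a scheduler and a delivery channel and you have your first agent. Then you scale.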
Security. The Part Nobody Wants to Talk About.
The gap between demo and production is security. Full stop.
I learned this the hard way. One of my agents executed a command it shouldn’t have. Nobody got hurt. But it proved that guardrails aren’t something you add later. They’re the architecture.
OpenClaw is the cautionary tale here. It went viral in late January, 100K GitHub stars in days, the whole developer community losing its mind over it. And then a critical remote code execution vulnerability dropped (CVE-2026-25253, CVSS 8.8) that put over 15,000 publicly exposed instances at risk of one-click compromise. Cisco’s security team tested a third-party OpenClaw skill and found it was exfiltrating data without user awareness. One of OpenClaw’s own maintainers warned publicly: if you can’t run a command line, this is too dangerous for you.
For law firms handling privileged communications? For PE firms touching confidential deal data? That’s not a footnote. That’s the conversation.
My rules, earned through screwing up: read-only access first, always. Write access is earned, not assumed. Agents never write directly to primary accounts. API keys go in encrypted, permission-locked files. Log everything. Treat every failure as a system improvement, not a bug.
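The “API keys go in permission-locked files” rule can be enforced in a few lines. This is a sketch, with illustrative names: the loader refuses any secret file that is readable by anyone other than its owner, a cheap guardrail that catches a common mistake before an agent ever starts.

```python
import os
import stat

def mode_is_private(mode: int) -> bool:
    # True only when neither group nor others have any access bits set.
    return not (mode & (stat.S_IRWXG | stat.S_IRWXO))

def load_secret(path: str) -> str:
    # Fail closed: a key in a group- or world-readable file never loads.
    if not mode_is_private(os.stat(path).st_mode):
        raise PermissionError(f"{path} is group/world accessible; refusing to load")
    with open(path) as f:
        return f.read().strip()
```

It won’t stop a determined attacker. It will stop the far more common failure, which is you, at midnight, chmod-ing something carelessly.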
On Hallucinations
Agents don’t fix the hallucination problem. They make it worse. An agent running for hours can compound a wrong assumption across dozens of steps before anyone notices. In legal work, that’s malpractice. In diligence, that’s deal risk.
You don’t avoid agents because of this. You design them with checkpoints and human review gates. The best legal AI tools in 2026 aren’t maximizing autonomy. They’re constraining it. Structured workflows. Clear handoffs. Scoped authority.
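A human review gate is a structural idea, not a product feature, and it fits in a few lines. This is a hypothetical sketch: the agent drafts as much as it likes, but any step marked irreversible is queued for a person instead of executed.

```python
def run_with_gates(steps):
    """steps: (name, fn, irreversible) tuples.
    Reversible steps run; irreversible ones wait for human sign-off."""
    executed, pending_review = [], []
    for name, fn, irreversible in steps:
        if irreversible:
            pending_review.append(name)  # a human approves before this runs
        else:
            executed.append((name, fn()))
    return executed, pending_review
```

The design choice is where you draw the irreversible line: sending an email, filing a document, touching a client record. Draw it conservatively at first and loosen it as the agent earns trust.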
One legal tech leader I heard speak recently put it this way: “Show me your guardrails” now means “show me your workflow.” I think that’s exactly right.
What to Do Monday Morning
Pick one task and automate it. Not your most complex workflow. The boring one. The one someone does every day that makes them mutter under their breath. Weekly status reports. Client news monitoring. Board prep. Calendar briefings. Start there.
Audit your data. Agents need clean, accessible data. If your documents live across five systems with no naming convention, fix that first. Nothing else works until the data does.
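The audit itself can be brutally simple. Here’s a sketch against a hypothetical naming convention (YYYY-MM-DD_client_description.ext, mine, not a standard): it just lists the files an agent could not reliably date or attribute.

```python
import re

# Hypothetical convention: 2026-01-15_acme_engagement-letter.pdf
NAME_RE = re.compile(r"^\d{4}-\d{2}-\d{2}_[a-z0-9-]+_[a-z0-9-]+\.\w+$")

def audit_names(filenames: list[str]) -> list[str]:
    # Return the files that break the convention and will confuse an agent.
    return [f for f in filenames if not NAME_RE.match(f)]
```

Run it over a document share and the output is your cleanup backlog. Every “FINAL final v3.docx” on that list is a future agent failure.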
Get your governance ready. Write an AI policy. Or update the one you wrote 18 months ago that’s already stale. Colorado hits in June. EU deadlines are in flux but coming. Your clients will ask. Have an answer.
Where This Lands
I’ll be honest. I don’t know exactly how fast this moves. Some of the predictions floating around feel breathless. “90% of legal documents AI-generated by end of year.” Maybe. I’m skeptical of the timeline, less skeptical of the direction.
What I do know: agents work. They’re shipping. The security risks are real but manageable if you’re disciplined. And the firms and portfolio companies that treat them like junior team members, with supervision, clear boundaries, and regular check-ins, will pull ahead of the ones still debating whether to experiment.
The window’s open. I don’t think it stays open as long as people assume.
If you read this far, you’re not wondering whether AI agents are real. You’re trying to figure out which ones are safe to deploy and how fast you need to move. That’s the right question, and it’s the one most of the breathless coverage skips entirely.
That’s the conversation I have every day with managing partners, GCs, and PE operating teams who are past the hype and into the hard decisions. If you’re working through where agents fit in your operation, or you’re trying to separate what’s ready from what’s still a demo, send me a note at steve@intelligencebyintent.com. Tell me what you’re sitting with. I’ll be direct about what I’ve seen work and what I’d wait on.


