60% of Federal Judges Use AI. Does Your Firm Have a Policy?
The bench moved. The regulators moved. Your competitors moved. Here's what your Monday morning needs to look like.
5 AI Stories This Week That Should Change How You Run Your Firm
TL;DR: The biggest AI stories this week aren’t about new models. They’re about control: who has it, who’s losing it, and what your firm should do about it before Monday.
AI Flattery Is Changing How Your People Think
I’ve spent a lot of time training attorneys on AI prompting, and one thing I keep coming back to is this: the biggest risk isn’t that AI gets something wrong. It’s that AI tells you you’re right when you’re not, and you believe it.
A Stanford study published in Science this week gave that instinct some hard data. They put 2,400 people in front of AI chatbots and measured what happened to their judgment afterward. One validating response. That’s all it took. People who got the flattering answer became less willing to apologize, less likely to admit fault, less inclined to repair a damaged relationship. And 13% more likely to come back to the flattering AI for future advice. They preferred the machine that lied to them.
The researchers tested 11 commercial AI models, a mix of current flagships like GPT-5 and older systems like Gemini 1.5 Flash and Claude 3.7 Sonnet. Across the board, AI affirmed user positions 49% more often than another human would. Even when users described harmful or illegal behavior, the AI endorsed it 47% of the time. Newer models have improved on this front; Anthropic has specifically worked to reduce sycophancy in Claude. But the incentive structure hasn’t changed. Users rate agreeable answers higher during training, so models learn to agree. It’s baked in.
I keep thinking about a specific version of this problem. A second-year associate at a mid-size firm gets asked to pressure-test a contract position before the partner meeting. She runs it through AI. The AI tells her the argument is strong. She walks into the meeting confident. Nobody pushes back because she’s got “AI-validated” analysis. But the AI wasn’t testing anything. It was agreeing. That pattern is one reason researcher Damien Charlotin has now tracked more than 1,200 court-related AI error incidents globally, about 800 from U.S. courts. Not because lawyers are careless. Because these tools are built to validate, not to challenge.
We’ve all been worried about hallucinations, and we should be. Made-up case citations are dangerous. But sycophancy is sneakier. It looks like the tool working perfectly. You asked, you got a confident answer that matched your thinking. Nothing felt off. Everything was off.
If your people are using AI for anything that requires judgment, they need to be prompting it to fight back. Tell it to argue the other side. Tell it to find the three weakest points. The Stanford researchers found that starting a prompt with “wait a minute” made models more critical. Three words. That’s the intervention.
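If your team wants to bake that habit into a tool instead of relying on people to remember it, the same idea ports directly to a script. Here’s a minimal sketch using the OpenAI Python client; the model name, the adversarial system prompt, and the sample contract position are my placeholders for illustration, not anything prescribed by the Stanford study, and any chat-capable model your firm has approved would work the same way.

```python
# Minimal sketch: force the model to argue against a position instead of validating it.
# Assumes the OpenAI Python client (pip install openai) and an OPENAI_API_KEY in the
# environment. Model name, prompts, and the sample position are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

position = "Our client can terminate for convenience without notice under Section 9."

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; substitute whatever model your firm has approved
    messages=[
        {
            "role": "system",
            "content": (
                "You are opposing counsel. Do not validate the user's position. "
                "Argue the other side, then list the three weakest points in the "
                "user's argument and what a judge would likely press on for each."
            ),
        },
        # "Wait a minute" up front, per the Stanford finding that it makes models more critical.
        {"role": "user", "content": f"Wait a minute. Pressure-test this position: {position}"},
    ],
)

print(response.choices[0].message.content)
```

The point isn’t the code. It’s that the adversarial framing lives in the system prompt, so nobody has to remember to type it.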
If your team isn’t doing that, the tool is flattering them. And nobody ever made a good decision because their advisor told them exactly what they wanted to hear.
AI Regulation Isn’t Coming Through Legislation. It’s Coming Through Procurement.
I’ve had three different clients ask me in the past month when Congress is going to pass “the AI law” so they know what to comply with. I keep giving the same answer: that’s not how this is happening.
The White House released its “National Policy Framework for Artificial Intelligence” this week. The big play is preemption: block the patchwork of state-level AI laws and replace them with a single, light-touch federal standard. Senator Blackburn’s 291-page draft of the TRUMP AMERICA AI Act pushes in the same direction, with no new regulatory agency and minimal enforcement teeth. Washington wants to own this space by keeping the rules loose.
Governor Newsom apparently didn’t get the memo. Same week, he signed an executive order directing California to independently review any federal AI supply-chain risk designations. The order requires AI watermarking for state-generated content and new contract standards around AI harm for any company seeking state business. California is building its own rules, and it’s using procurement as the enforcement mechanism. Not legislation. Contracts.
And then there’s the Anthropic-Pentagon fight, which keeps getting more complicated. The Department of War designated Anthropic a supply-chain risk, a label normally reserved for foreign adversaries like Huawei, after the company refused to remove contract terms prohibiting domestic mass surveillance and fully autonomous weaponry. A federal judge blocked the designation, calling it an apparent attempt to “cripple” an American company. The Department of War appealed on April 2. This case is going to set precedent for how every AI vendor interacts with government contracts for years.
None of this looks like a tidy statute your compliance team can print out and follow. It’s contract clauses, procurement requirements, preemption battles, vendor-risk designations. If your firm advises clients who touch government work at any level, this is where the exposure sits right now.
Two questions for Monday morning. Have you reviewed your AI vendor agreements in the last 90 days? Do those contracts address the specific risks that federal and state agencies are now regulating through procurement? If not, that’s the gap.
And if you’re on the advisory side: helping clients navigate a two-track regulatory environment where Sacramento and Washington are actively contradicting each other is exactly the kind of problem that needs a lawyer, not a chatbot.
60% of Federal Judges Are Using AI
Sit with this for a second. A Northwestern University study, the first random-sample survey of its kind, found that 60% of responding federal judges are already using at least one AI tool in their judicial work. Twenty-two percent use AI daily or weekly.
They surveyed 502 judges across bankruptcy, magistrate, district court, and courts of appeals, in partnership with the New York City Bar Association. Legal research was the top use case. Judges prefer legal-specific tools over general platforms like ChatGPT. One in three judges either permit or actively encourage AI use in their chambers.
But more than 45% of those same judges said their court administration has never offered any AI training. Zero. They’re learning on the job, same as most of your associates.
The sanctions picture keeps getting worse. NPR reported this week that Charlotin’s global tally of court-related AI error incidents has passed 1,200, with about 800 from U.S. courts. A Sixth Circuit panel recently hit two lawyers with $30,000 in punitive sanctions over AI-tainted appellate briefs containing more than two dozen fake citations. The court said the conduct went well beyond sloppy drafting. Total sanctions in that case exceeded $100,000. States are moving too. Illinois has HB 4348, which would require attorneys to disclose AI use in legal proceedings.
The “should lawyers use AI?” debate is over. Judges settled it by using AI themselves. What’s left is a gap between firms that have real policies, training, and verification workflows and firms that are hoping nobody notices they don’t.
If your attorneys are using AI without a clear policy and a verification step before filing, they’re operating below the standard the bench is setting. And if your firm still has a “no AI” rule, you’re behind the people deciding your cases.
Update your AI usage policy this quarter. Which tools are approved. What verification looks like before filing. How AI-assisted work product gets documented. What disclosure your jurisdictions require. The judges-are-already-using-it argument is the one that moves leadership. Use it.
AI Is No Longer a Pilot Program
OpenAI closed a $122 billion funding round this week. Let that number breathe for a second. $122 billion. Amazon put in $50 billion. Nvidia and SoftBank each put in $30 billion. The company says it’s generating $2 billion per month in revenue with more than 900 million weekly active users. Enterprise revenue is 40% of the business and growing. Retail investors can now access OpenAI through bank channels and ARK Invest ETFs ahead of a reportedly upcoming IPO.
Meanwhile, the Federal Reserve published an analysis that puts hard numbers behind what a lot of us have been saying in conference rooms for the past year. About 18% of U.S. firms had adopted AI by year-end 2025. Forty-one percent of workers reported using generative AI for work. And when you weight by employment, 78% of the labor force works at firms that have adopted AI. The sectors leading? Professional services and finance.
The “should we experiment?” phase is done. The question for managing partners now is where to standardize first. Written policies, approved tool lists, training programs, budget line items. Not another innovation committee. Not another pilot. Actual operational decisions about how AI gets used across the organization, starting with something specific.
I’m not going to pretend there’s no risk in moving fast. Bad AI outputs create liability, full stop. But there’s also risk in being the firm that’s still debating whether to try this while your competitors and your clients have already decided. The Fed data says nearly 4 in 5 workers are at firms that have moved. That’s the competitive reality.
Pick one workflow this week. Something high-volume, low-risk. Document review. Client intake. Research memos. Put AI into it with proper governance and start learning from what actually happens. You’ll learn more from one real deployment than from six more months of discussion.
The Copilot You Just Bought Is Already Aging
There’s an uncomfortable pattern forming in legal tech procurement, and I think more managing partners need to see it before they sign their next renewal.
Harvey raised $200 million at an $11 billion valuation this month. They’re not building a better search bar. They’re building autonomous legal agents that execute multi-step workflows from start to finish. Legora, a Swedish startup, went from $1 million to $100 million in annual recurring revenue in 18 months. Eighteen months. Firms are running entire document review and drafting workflows on these platforms, not asking them one-off research questions. More than 100,000 lawyers work on Harvey’s platform. Legora serves over 1,000 firms across 50 markets.
The money is moving toward systems that do the work, not systems that suggest what you might do next.
Gartner estimated this week that by 2028, more than half of enterprises will stop paying for assistive AI and shift to platforms that deliver workflow outcomes. Gartner’s timelines are often wrong, but this one matches where the capital is actually flowing. Nobody writes $200 million checks for better autocomplete.
So if you’re evaluating legal tech this quarter, ask the uncomfortable question. Is the vendor you’re considering selling you the same practice management software with a chat window added on? Because the market is already moving past that. The question to put to any vendor: “Does this tool execute work and deliver measurable outcomes, with controls I can audit?” If they fumble that answer, you know what you’re looking at.
The firms that go straight to “AI as workflow engine” instead of lingering in the “AI as feature” phase will have a real advantage. Not because the technology is magic. Because they’ll be building processes and governance around something that has staying power, instead of buying a product that’s already halfway to obsolescence.
You read a 2,000-word roundup on a Sunday. That says something about where your head is. You’re not wondering whether AI matters to your firm. You’re trying to figure out which of the five things you just read should be on the agenda first.
That’s the conversation I have every day with managing partners and COOs who are past the hype and into the hard part. If you’re working through any of this, tell me what you’re stuck on. steve@intelligencebyintent.com. I’ll be direct about what’s ready, what’s not, and what I’d actually do in your position.


