The Week AI Stopped Being a "Tech Thing"
Your associates are already using it. Your clients are already measuring you by it. The only question left is whether you'll lead or react.
Image created with Gemini Nano Banana 2
10 AI Stories From This Week That Every Attorney and Business Leader Needs to Know
TL;DR: The Pentagon blacklisted Anthropic for refusing to remove safety guardrails. Block laid off 40% of its workforce and blamed AI. Nearly 70% of lawyers are using generative AI but most firms have no policy. Hallucination sanctions are accelerating in federal courts. OpenAI raised $110 billion. And 78 chatbot bills are alive in 27 states. If you lead a firm or run a business, this week handed you a reading list you can’t afford to skip.
I sat down Sunday morning planning to write about one or two things that happened this week in AI. I ended up with ten. That almost never happens. Usually I scrape together three or four decent stories and the rest is just noise. This week? Everything felt connected. Like someone yanked a bunch of threads at the exact same time and the whole sweater started unraveling.
So here are the ten stories that actually matter if you’re a managing partner, a General Counsel, a firm leader, or honestly any executive trying to figure out what the hell AI means for your people and your clients. I ranked them by real impact, not by how loud the headlines were. Some you probably saw. A few I guarantee you didn’t.
Let’s get into it.
1. The Pentagon Labeled Anthropic a National Security Risk. Anthropic Is Suing.
This one is the biggest AI story of the year so far, and I went back and forth for a while on whether I should even lead with it because it’s so politically loaded. But every time I tried to bury it, the legal implications kept dragging it back to the top.
Here’s what actually went down. Anthropic (full disclosure: it’s the model I use for about 60% of my own work, so I’m obviously not neutral here) was the only frontier lab with models running on the Pentagon’s classified networks. $200 million contract, things humming along nicely.
Then the Pentagon wanted to renegotiate. They demanded unrestricted access to Claude for “all lawful purposes.” Anthropic said yes to almost everything but drew two hard lines: no mass surveillance of American citizens and no fully autonomous weapons without a human in the loop.
Pentagon said nope. On February 28th, Defense Secretary Pete Hegseth slapped them with a “supply chain risk to national security” label, the kind of designation we normally reserve for Chinese vendors. Hours later, OpenAI quietly announced its own Pentagon deal - with the exact same two restrictions carved out. Same red lines. Different outcome. You can draw your own conclusions.
Dario Amodei already said they’re suing in federal court. He also reportedly told his team the real issue isn’t the guardrails. It’s that Anthropic hasn’t donated to or publicly praised the President. If you advise tech companies on government contracts, vendor risk, or AI ethics, this case is going to be cited for years. Every GC with defense exposure is running fresh vendor audits right now.
On the bright side (or weird side), Anthropic picked up over a million new consumer signups per day this week. People are voting with their wallets.
2. Block Cut 40% of Its Workforce and Said AI Made It Possible
Jack Dorsey didn’t sugarcoat it. Block (Square, Cash App) just laid off roughly 4,000 of its 10,000 employees and pointed straight at “intelligence tools” in the shareholder letter.
He told Wired that something flipped in December. Opus 4.6 and Codex 5.3 suddenly got scary good with huge codebases, and it opened the door to completely rethinking how companies run.
But here’s where it gets messy. His former head of comms wrote in the New York Times that this smells like classic cost-cutting wearing an AI costume. Internal folks told Business Insider nobody could actually explain how AI was replacing 4,000 specific jobs. Bloomberg called it “AI washing.”
Look, I think the truth is somewhere in the messy middle. AI probably did kill some roles. But 40% in one swing? That still feels like a bet, not a spreadsheet. And I say that as someone who’s all-in on this technology.
For your firm or your clients, the employment law questions are real no matter what the motivation was. Courts haven’t decided how they feel about “AI replaced you” as a layoff defense. WARN Act notice, wrongful termination, age discrimination (the most exposed workers are often the older, more experienced ones): employment lawyers, take notes. And if you run a 200-person firm, just imagine your biggest client’s CEO reading this and wondering if your headcount is next.
3. 70% of Lawyers Are Using AI. Most Firms Have No Policy.
The 2026 Legal Industry Report from 8am landed this week and honestly, I expected high numbers. I did not expect them this high. Nearly seven in ten legal professionals are now using generative AI on actual work. More than double from last year. I train attorneys on this stuff for a living and I still had to sit with that stat for a minute.
Immigration lawyers are at 40% daily use. Overall, 28% use it every single day and another 31% use it several times a week.
But the number that actually matters: most firms still have zero formal AI policy or training.
Let me say it plainly: your associates are already feeding client work into ChatGPT, Claude, and Gemini right now. Today. And most firms have no rules, no training, no documentation requirements, nothing. This is malpractice waiting to happen. The gap between what individuals are doing and what the firm is governing is the single dumbest unforced error in the profession right now. Fixable in 30 days. But somebody has to start.
4. Federal Courts Are Running Out of Patience With AI Hallucinations
A New Orleans federal judge fined a veteran lawyer $1,000 this week for a ChatGPT-drafted brief full of fake cases. Three co-counsel got publicly scolded but not fined because they admitted they never even read it.
Separately, the Fifth Circuit basically said “we’re tired of this” and hit a Texas attorney with a $2,500 fine for the same stunt.
We’re now over 700 documented hallucination cases nationwide. February 2026 alone had more than 30 judicial opinions calling it out. That’s more than one per day.
The fines are still laughably small. I keep saying it: $1,000 for what is effectively fraud on the court is less than two hours of associate time. But don’t get complacent. The real pain is the malpractice suit, the bar complaint, the client who will never trust you again, and your name living forever on PACER next to “sanctioned for AI nonsense.”
We’re past the “I didn’t know” defense. Courts are heading toward “failing to verify AI output is per se unreasonable under Rule 11.” Verification step or don’t use it for filings. Full stop.
5. OpenAI Raised $110 Billion. I’m Still Processing That Number.
February 27th. Largest private funding round in history. Amazon $50B, Nvidia $30B, SoftBank $30B. Pre-money valuation: $730 billion.
I rewrote that paragraph three times because the numbers still sound fake. For context, all U.S. startups combined raised $170 billion in all of 2023. OpenAI just grabbed almost two-thirds of that in one check.
Microsoft sat this one out (they still own ~27%). Amazon stepping up big signals a real shift in who controls enterprise AI distribution. If your firm or clients live on Azure, you might want to think about that.
Also worth noting: OpenAI is projecting $14 billion in losses for 2026 and doesn’t expect profit until 2029-2030. They’re burning cash like it’s 1999. That only makes sense if AI becomes infrastructure. Maybe it will. But the gap between hype and revenue is something every advisor needs to stay honest about.
M&A, antitrust, and securities lawyers: the concentration question just got louder.
6. 78 State AI Chatbot Bills Are Moving Across 27 States
Regulation isn’t coming. It’s already here. Oregon just passed a chatbot safety bill. Similar stuff is moving fast in Washington, Arizona, Iowa, Georgia, Illinois, New York. 78 bills total across 27 states.
Watch New York’s S7263. It goes after AI chatbots that impersonate licensed professionals (lawyers and doctors included). Bad advice = private right of action for damages.
Think about what that does to legal tech products. If your tool’s interface makes users feel like they’re talking to a lawyer, you just created liability. Oregon’s version even has statutory damages. The plaintiff’s bar is about to have a field day.
For tech-company counsel, this state-by-state patchwork is going to make CCPA look like child’s play. For firms just evaluating tools, the liability picture just got messier.
7. Anthropic’s Research Shows AI Job Displacement Isn’t Matching the Hype. Yet.
In the middle of all the layoff panic, Anthropic dropped some actual data (yeah, it feels a little awkward citing them right after the Pentagon story, but the research is peer-reviewed so I’m not ignoring it).
They built an “observed exposure” metric, comparing what AI could do versus what people are actually using it for. The gap is still huge. Real-world usage is a fraction of the theoretical capability.
The part that hit me: the most exposed workers are older, female, more educated, and higher-paid. Sound familiar, senior lawyers?
Counterpoint: no big unemployment spike yet for those workers since late 2022. What they did see was slower hiring of younger people in exposed roles. So not mass firings, just a quiet squeeze on the entry-level pipeline.
Managing partners, the honest takeaway: displacement isn’t tomorrow, but the associate market is already shifting. I wrote a whole article about this earlier this week. Firms that lean into AI training and new service lines will grow. The ones waiting it out will slowly shrink.
8. The March Model Launch Wave Changes the Practical Calculus
GPT-5.4. Gemini 3.1 Pro. Opus 4.6. I know the version numbers are ridiculous at this point.
But collectively these releases moved the conversation from “can AI do legal work?” to “how fast should we roll this out?”
Harvey’s BigLaw Bench score jumped to 91% on document-heavy tasks with GPT-5.4. Anthropic says Sonnet 4.6 now matches Opus 4.6 on enterprise docs, charts, PDFs. Google dropped Gemini 3.1 Pro everywhere.
And almost nobody is talking about how fast the Chinese models (GLM-5, MiniMax M2.5) are closing the gap, from months to weeks. That changes vendor risk conversations fast.
For your practice: routine document review, custody files, discovery, 300-page agreement markups. The quality gap between AI and a junior associate just shrank dramatically. These workflows are viable now.
9. In-House Legal Teams Are Outpacing Their Law Firms on AI
This one should actually worry managing partners more than the Block layoffs.
I talk to in-house teams constantly and the shift in the last six months has been noticeable. The ACC/Everlaw survey shows in-house AI adoption went from 23% to 52% in a year. But the killer stat: 64% of them now expect to rely less on outside counsel because of the capabilities they’re building themselves.
Sixty percent don’t even know if their outside firms are using AI on their matters. That transparency gap is closing. It’s about to show up in RFPs.
Translation: your clients are learning to do more themselves. They’re going to start asking what you’re doing with AI, and “we’re monitoring” won’t cut it anymore. The firms that can show real competence, governance, and transparency will keep the work. Everyone else will watch it quietly move in-house.
10. Vermont Signs AI Election Law as Federal Preemption Fight Heats Up
Vermont just signed a bill regulating synthetic media in elections (March 5th). Meanwhile the Trump administration’s executive order tells the AG to start challenging state AI laws and has Commerce identifying “burdensome” ones by March 11th.
So states are sprinting to regulate while the feds are preparing to sue them for it. Perfect.
For anyone building compliance programs, you’re basically planning for two opposite futures at once. Chaos for clients. Opportunity for lawyers who can navigate both the state rules and the preemption fights.
What to Do With All of This
Alright, that was a lot. If you made it this far, you’re already ahead of 90% of your peers. But reading doesn’t move the needle.
Here’s what I’d actually do this week if I were in your shoes:
- Audit your AI usage. Ask people what they’re really using, not what they think you want to hear. You’ll be surprised.
- Get a policy in place. It doesn’t have to be perfect. It just has to exist. A mediocre policy beats no policy when the bar complaint lands.
- Build a real verification workflow for anything that touches a court filing. A human checks every citation. No exceptions.
- Start talking to your clients about AI transparency before they ask. Because they will.
- Block one hour and test one of the new models yourself on real client work, not a demo.
I don’t have a crystal ball. Nobody does. But the firms and companies that treat AI as a leadership issue instead of a “tech thing we’ll get to later” are going to be in a completely different position five years from now.
This week made one thing clear: the “monitoring developments” phase is over.
We’re in the doing phase now. Whether you’re ready or not.
If you read all ten of these stories instead of skimming the first three, you’re the person in your firm who already knows the “wait and see” window closed this week. The question sitting in front of you isn’t whether AI matters. It’s how many more weeks your firm operates without a real plan while your associates improvise and your clients start measuring you by it.
That’s the conversation I have every day with managing partners and GCs who’ve hit exactly this point. If you want to talk through what a first 30 days actually looks like, or where your current setup has gaps you haven’t spotted yet, reach out at steve@intelligencebyintent.com. Tell me what you’re working through. I’ll be direct about what’s ready now and what isn’t.
One article, every morning, at smithstephen.com. Written for the people who run firms and companies, not the people selling them software. Subscribe if you’d rather think about AI clearly than anxiously.