Last Week Had Ten AI Stories. Only Five Were Worth Your Time.
Bigger context windows, stronger benchmarks, and a federal-state regulatory collision. Here's what each one means for your firm.
The Week AI Got More Real
TL;DR: Claude shipped a 1M token context window at flat pricing (via API and Claude Code, not the regular chat) and inline visualizations. GPT-5.4’s benchmarks put it above human experts on desktop task completion. State AI legislation is creating a practice opportunity most firms are ignoring. Morgan Stanley told institutional clients to expect a capability jump that will surprise them. And the case for banning AI at your firm got a lot harder to defend.
Last week I gave you ten stories. That was too many. This week I’m giving you five. Not because less happened, but because these five are the ones actually worth your time.
Claude Had a Big Week. With a Caveat.
I’ve trained over 2,500 attorneys on AI. I’ve been in the room at 50+ law firms running live demos, showing people what Claude can do. What happened this week matters, but I want to be straight about what it means in practice and what it doesn’t.
On March 12, Anthropic launched inline visualizations. Claude now builds charts, diagrams, timelines, and interactive tools right inside a conversation. No separate software. No designer. That's available to everyone, including free users, and it's genuinely useful for case timelines, org charts, and damages scenarios. The kind of visual work product that used to require a paralegal and two rounds of revisions.
The next day, the 1M token context window became generally available for Opus 4.6 and Sonnet 4.6 at standard pricing. No long-context premium. A 900,000-token request costs the same per-token rate as a 9,000-token one.
Now here's the caveat, and it's a big one. The 1M window is available through the API and Claude Code. The regular Claude chat interface that most attorneys actually use day to day? Still 200,000 tokens. That's still a lot of text, roughly 150,000 words, but it's not "load your entire litigation file into one session" territory. Not even close. To get the full million tokens you'd need to work through the API or Claude Code, and most attorneys aren't set up for that today. Frankly, most don't know what those tools are.
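To make those window sizes concrete, here's a rough fit check. The ~0.75 words-per-token ratio is an assumption extrapolated from the figures above (200,000 tokens is roughly 150,000 words), not an official tokenizer count; actual token counts vary with the text.

```python
# Back-of-envelope check: will a document fit in a given context window?
# Assumes ~0.75 words per token, a ballpark ratio only.

WORDS_PER_TOKEN = 0.75

def estimated_tokens(word_count: int) -> int:
    """Estimate the token count for a document of `word_count` words."""
    return round(word_count / WORDS_PER_TOKEN)

def fits_in_window(word_count: int, window_tokens: int) -> bool:
    """True if the estimated token count fits within the window."""
    return estimated_tokens(word_count) <= window_tokens

CHAT_WINDOW = 200_000    # regular Claude chat interface
API_WINDOW = 1_000_000   # API and Claude Code

# A 150,000-word file sits right at the chat limit...
print(fits_in_window(150_000, CHAT_WINDOW))   # True
# ...but a 300,000-word litigation file needs the API-side window.
print(fits_in_window(300_000, CHAT_WINDOW))   # False
print(fits_in_window(300_000, API_WINDOW))    # True
```

Since there's no long-context premium, the only cost variable on the API side is total tokens processed, which this same estimate approximates.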
So why does it still matter? Two reasons. First, legal tech platforms that are built on Claude’s API can now pass that full context window through to their products. The tools your firm evaluates over the next six months will be more capable because of this change, even if individual attorneys never touch the API themselves. Second, it signals where the ceiling is heading. The chat interface will catch up. It always does.
The first version of Claude launched in March 2023. Three years later the gap between what’s possible and what’s practical keeps narrowing. But it hasn’t closed yet. The inline visualizations are ready for daily use right now. The 1M context window is ready for the firms and vendors building on the API. For most attorneys working in the regular chat interface, the day-to-day experience hasn’t changed yet.
GPT-5.4 Scored Above Human Experts. Here’s What That Actually Means.
GPT-5.4 launched the week before last. Now that the dust has settled, there's one benchmark worth understanding.
On OSWorld-Verified, which tests whether AI can navigate a computer, open applications, and complete multi-step tasks on its own, GPT-5.4 scored 75.0%. The human expert baseline is 72.4%. First frontier model to score above humans on desktop task completion.
Across 44 professional occupations, GPT-5.4 maintained an 83% match-or-beat rate against human experts. In investment banking simulations it performed at junior analyst level in 87.3% of cases.
Before you panic, a few things to keep in perspective. Benchmarks measure controlled conditions, not messy real-world legal work with ambiguous facts and a client who keeps changing their mind. An 83% match rate also means a 17% miss rate. And “junior analyst level” is exactly that. Junior.
But here's why it matters anyway. A fully loaded first-year associate runs roughly $225,000 a year. These tools cost pennies per task. The question isn't whether AI replaces associates. It's whether firms that pair associates with these tools get more output at the same cost. The early evidence says yes. Not dramatically more. But enough to shift the math on how you staff matters and how you price them.
State AI Laws Are Building a Practice Area. Most Firms Are Just Watching.
I covered the state regulatory wave last week so I won’t rehash it. But this week it got more specific and honestly more interesting.
Washington and Oregon passed AI companion chatbot safety bills. Virginia passed three AI bills. Utah pushed through nine in a single short session. And a Trump executive order now directs the Attorney General to create an AI litigation task force to challenge state AI laws that the administration considers inconsistent with federal policy.
That last part is the real story. We now have an official federal-state collision course on AI regulation, and it’s only going to get messier. States are passing laws as fast as they can draft them. The federal government is gearing up to challenge them. That tension will produce litigation, compliance work, and advisory engagements for years.
Every company deploying AI needs counsel who understands this space. And at this point that’s most companies. The firms that start building an AI regulatory practice now are going to have a real head start. I’m not going to pretend it’s easy to build from scratch. But the demand is forming and most firms aren’t paying attention to it.
Morgan Stanley Says Prepare for Acceleration
When a bulge-bracket investment bank tells its institutional clients to expect a shift, it’s worth reading carefully. Not as gospel. But as a signal of where serious money is placing its bets.
Morgan Stanley published a report last week saying a significant AI capability jump is likely in the first half of 2026. Their analysts expect a non-linear leap in model capabilities between April and June. Lab executives are telling investors to prepare for progress that will surprise them. And Morgan Stanley’s own survey of roughly 1,000 executives across five countries found an average net workforce reduction of 4% in the past twelve months, directly attributed to AI.
That 4% connects to stories I covered last week. Block cutting nearly half its workforce. Atlassian cutting 1,600 jobs. Over 30,000 workers impacted by AI-cited layoffs so far in 2026. The Morgan Stanley report suggests this is the early phase, not the peak.
Look, I’d be cautious about treating any bank’s prediction as certain. Analysts are wrong all the time. But the reason this matters for your firm isn’t whether Morgan Stanley nailed the exact timeline. It’s that the people managing your clients’ money are reading this report and making decisions based on it. Your clients are thinking about what AI means for their business. You should be too.
After This Week, Banning AI Got Harder to Defend
I’ve made the governance argument before. I’ll make it differently this week because the ground shifted under it.
With inline visualizations, Claude builds work product directly in conversation. With GPT-5.4 scoring above human experts on professional task benchmarks, the tools are getting harder to wave off. And with the 1M context window now available to legal tech vendors building on the API, the next generation of tools your firm evaluates will be meaningfully more capable than what you looked at six months ago.
The gap between firms with AI governance and firms banning AI isn’t philosophical anymore. It’s a gap in what those firms can actually do for clients.
And here’s the practical problem with prohibition. When a firm bans AI, attorneys use it anyway. Personal accounts, no oversight, no security controls. Confidential client data ends up in tools the firm doesn’t manage or even know about. The risk you were trying to avoid becomes invisible and unmanageable. I’ve seen this at firm after firm. The ban doesn’t stop usage. It just stops visibility.
Multiple bar associations, including North Carolina, Virginia, and Texas, along with the ABA, have published model AI policies. The templates exist. Adapting one takes weeks, not months. The only thing missing is the decision to do it.
What To Do This Week
Try the inline visualizations in Claude. Ask it to build a timeline or a flowchart on a real matter. It’s free, it takes five minutes, and it’ll show you where the tools actually are right now.
Identify one attorney or practice group to start developing an AI regulatory offering. The client demand is forming and most firms haven’t noticed yet.
If you don’t have an AI governance policy, download a model policy from the ABA or your state bar and start adapting it this month. Not next quarter. This month.
Last week I said the “monitoring developments” phase was over. This week reinforced it. The tools got better. The benchmarks got more serious. The regulation got bigger. And the bankers started telling their biggest clients to get ready.
None of that means you need to panic. It means you need a plan. And the firms that started building one a year ago are in better shape than the ones starting now. But starting now still beats starting next quarter.
If you’re reading a weekly AI roundup on a Sunday, you’re not the person who needs to be told this matters. You’re the person trying to figure out which parts of it matter to your firm specifically, and what to do about them before the week starts.
That’s the conversation I have every day with managing partners, COOs, and practice leaders who are done with the noise and want someone to be straight with them about what’s ready and what isn’t. If you’re sorting through this for your firm, tell me what you’re working through. steve@intelligencebyintent.com. I’ll tell you what I’d do and where I’d wait.


