Your AI Vendor Premium Is the Cheapest Insurance Your Firm Will Buy This Year
TL;DR: The US-versus-China open-source AI debate is real. It’s also not your problem this year. Here’s why the premium you’re paying the frontier labs is the cheapest line item in your AI budget.
A managing partner forwarded me a YouTube video this week with two words: “Should we be worried?”
The video was good. It was made by a friend of mine, Matthew Berman, and it walks through a real argument that’s been picking up steam: US open-source AI is losing to China, and if it keeps losing, the country has a problem. Subsidized Chinese labs. No working business model on our side. A developer base that may end up building on Qwen and DeepSeek instead of anything American. He’s not wrong about the macro story.
The next day, by coincidence, I was sitting with a mid-sized private equity firm having exactly this conversation. They’d seen the same headlines. The Chinese open-source models are real. They’re cheap. They’re good. Should they be looking at them?
I told them what I told the managing partner, and what I’m telling you now. None of it changes what your firm should do this year.
That’s the whole article. If you want to stop reading here, the headline is: take the debate seriously at the country level, ignore it in your buying decision, keep going with your closed-source vendor of choice. The premium you’re paying the frontier labs is buying you something Chinese open weights can’t sell you, and for firms that touch sensitive data, that something is reputation insurance.
If you’re still here, here’s the longer version.
What the China AI Argument Actually Says
The short version. Open-source AI in the US doesn’t have a business model that works. When you spend hundreds of millions training a model and then give it away, your competitors host it on cheaper margins than you can. China sidesteps this because the state subsidizes its labs; we don’t. So our open-source labs are pulling back (Meta has, mostly), and the open-source models with real momentum right now are coming from Chinese labs.
The longer-run concern Matt’s pointing at is that developer tools, academic research, and a lot of long-tail applications end up built on Chinese open weights. That’s not nothing. Standards get set by whoever ships, and five years from now the AI underneath your tech stack might look very different.
I do take that seriously. It’s just not the question your firm is actually trying to answer this year.
The Three Gaps That Close the Door
Your firm can’t run these models. I don’t mean shouldn’t. I mean can’t, in the literal sense. Almost every law firm and PE firm I work with outsources their IT, which means there’s no internal ML team to even hand the project to. There’s no GPU procurement budget. There aren’t engineers on staff who could fine-tune a model and stand up an inference stack if you asked. Running an open-weights model in production is not downloading a file from Hugging Face and pointing it at your data. It’s serving infrastructure, eval pipelines, monitoring, security review, and a couple of people on payroll who actually know what any of those words mean. The PE firm I sat with that morning has four people in IT total, and most of them are running help desk tickets. The mid-market law firm I trained last month still has partners who can’t get Outlook to sync reliably. The notion that either of them is going to stand up Qwen on a private cloud is a joke. A small handful of firms have the technical bench for it. Most don’t. And the ones that do mostly aren’t bothering, because the next two gaps would still close the door.
Then there’s the trust problem, which doesn’t have a clean answer either. Even if you could run a Chinese-origin model on your own metal with the safety layer stripped off, your GC has to be ready to answer one question: where was this thing trained, and on what. Nobody really knows. I’ve sat in maybe a dozen rooms now where someone walks the partners through “but it’s open source, we control everything,” and I watch the GC’s face. The GC is doing math the rest of the room isn’t. When a client calls and asks where their privileged material got processed, “we pulled the model out of a Hangzhou lab and ran it locally” is not an answer you want to be the first firm to test in a malpractice claim.
And the model itself is maybe 5% of what you’re actually buying when you sign with one of the frontier labs. This is the part of Matt’s argument that I think misses the point hardest. When your firm pays Anthropic or OpenAI or Google, you’re not paying for a model. You’re paying for an enterprise contract, a BAA, IP indemnification, SOC 2 reports, an admin console, audit logs, support, a roadmap, and a vendor relationship someone in procurement can actually manage. You’re also paying for the harness around the model, which by now is most of the value: Claude Code, the agentic stacks, Word and Excel integrations, deep research, document connectors, the workflow tooling your associates already use every day without thinking about it. None of that exists for DeepSeek or Qwen in a form a normal firm can buy, manage, and defend. Even if every other gap got solved tomorrow, your firm would still be writing the check to a US lab, because that’s where the actual product lives. The model alone isn’t a product.
The Premium Is Buying You Something Specific
Here’s the thing nobody actually says out loud when the price comparison comes up. For a firm whose business is privileged client communications, deal documents, sealed pleadings, and fund LP data, the savings from cutting 60% off inference aren’t really 60%. They’re 60% minus whatever a single bad headline costs you, and bad headlines in this business are not cheap.
Picture how it lands. “Law firm processed client M&A docs through Chinese AI model.” It doesn’t really matter at that point whether the model was running on your own private servers, or whether you stripped out every line of safety code. The headline writes itself. The client call comes the next morning. You’d burn through more money on the crisis communications retainer in a week than the inference savings would’ve covered in a year.
So the premium your firm is paying the frontier labs isn’t really an AI premium at all. It’s more like reputation insurance. You’re buying a US contract, US data residency, US legal recourse if something goes wrong, and a vendor name your clients have actually heard of. For firms whose entire product is trust, calling that overhead misses what you’re actually paying for.
I’ve made some version of this argument to maybe two dozen firms in the last six months. Every one of them nods. Then they go back to renewing with the frontier lab they were already using, because once you say it out loud the math is obvious.
What Matt Gets Right About the Long Game
I want to be honest about the parts of this argument that do hold up, because they do.
If a Western company eventually offers an enterprise-grade managed service running on Chinese open weights, the math gets a lot more interesting. Pricing drops. The trust gap narrows because your contract is with a US vendor instead of a Beijing lab. That’s the first signal I’d actually be watching.
The second is the developer base, and this is the piece of Matt’s argument I think holds up best. If the next wave of AI tools (the ones that quietly show up in your tech stack three years from now without anyone deciding to put them there) ends up built on Qwen or DeepSeek under the hood, you’re using Chinese AI whether you chose to or not. That’s not a panic, but it is a planning question, and it’s a real one.
There’s also the chip story, which I’m less sure about. If Chinese labs keep tuning their models to run efficiently on Chinese silicon, and that silicon keeps closing the gap, the cost economics of running anything cheaply could eventually shift in a direction the US doesn’t control. But that’s a 24-to-36-month conversation, not a Q2 2026 one.
So I’m not telling you none of this matters. I’m telling you it doesn’t matter to the buying decision in front of you right now.
What to Do Monday Morning
Honestly, none of what I’m about to suggest is exotic. If you’ve been doing the work, most of this is already on your roadmap.
Pick your platform and go deeper. Most firms are still using AI like a slightly smarter search bar. The work this quarter (whether you’ve committed to Claude, ChatGPT, or Gemini) is integration: connecting it to your document management, pushing the agentic tools into associate workflows, training partners on the harness rather than just the chat box.
Stop changing your AI strategy every time a “China is winning” article hits the partner mailing list. The next one will be along in about a week. You can have an answer ready, which is roughly: infrastructure, trust, and tooling gaps mean Chinese open-source models aren’t in our buying conversation; we’re paying attention; moving on.
Watch a few signals that would actually shift the picture. A Western cloud offering managed Qwen with enterprise terms. A US frontier lab releasing open weights that beat the Chinese alternatives. The first credible IP or security incident involving a firm running Chinese open-source in production. None of those have happened yet. When one does, the conversation might be worth reopening.
That’s the list. The harder thing isn’t knowing what to do, it’s getting your firm to actually do any of it.
The Bottom Line
The open-source AI debate is a real debate. It just doesn’t reach into your firm’s buying decisions this year.
The places worth spending your worry are closer to home, and you already know what they are. Governance that hasn’t kept up with how people are actually using the tools. Partners who still haven’t touched any of this. Associates running personal ChatGPT accounts because IT moved too slowly to give them something approved. That’s where the next 12 months are won or lost in firms like yours, and Beijing has nothing to do with any of it.
If you want to talk through how that work is actually getting done at firms like yours, I'm at steve@intelligencebyintent.com. Bookmark the signals worth watching, ignore the rest, and keep going.


