Zero Data Retention Is the Wrong Default for Most Law Firms
Why the "safest" AI policy may be creating the risks you were trying to avoid.
TL;DR: Zero data retention (ZDR) is the default answer most law firms give when asked about AI and confidentiality. It solves one problem and creates several others. Before you mandate ZDR across your firm, understand the tradeoffs in cost, usability, auditability, and actual security. The better answer for most professional services firms is simpler than you think: get a Teams or Enterprise subscription, set your policies, and train your people.
I get asked about privacy and confidentiality in AI more than almost any other topic. It comes up in every training session, every consulting engagement, every conversation with a managing partner who’s trying to figure out whether these tools are safe for their firm. And the concept of Zero Data Retention comes up a lot. It sounds like the perfect answer. I wanted to share here what I share with my clients, because this is an important topic and I think a lot of firms are making decisions based on incomplete information.
Every Conversation Starts From Scratch
Here’s the thing nobody tells you in the vendor pitch. Within any project where ZDR is turned on, your AI has amnesia. Every single time.
That associate who spent 45 minutes last Tuesday refining a prompt for a motion to compel template? If she did it in a ZDR-enabled project, it’s gone. The careful back-and-forth your litigation team did to get the AI to format discovery responses the way your partner likes? Vanished. There’s no memory. No learning. No building on what came before. (As I’ll explain later, there are ways to structure your environment so this doesn’t hit everything. But most firms haven’t set it up that way.)
I’ve trained over 2,500 attorneys on AI tools at this point, and the number one frustration I hear from firms running org-wide ZDR is exactly this. It’s like hiring a brilliant associate, working with them all day, then wiping their memory at 5pm. Every morning you start over. Unless your firm builds its own system to capture and share effective prompts externally, or segments ZDR to only the projects that truly need it, the work product of prompt engineering itself becomes disposable.
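To make that concrete, here's a minimal sketch of what a firm-side prompt library could look like, so the refinement work survives even when the tool itself retains nothing. The file location and metadata fields are my own illustrative assumptions, not a feature of any vendor's product; in practice this would live on a shared drive or in your document management system.

```python
# Minimal sketch of a firm-side prompt library: refined prompts are saved to a
# location the firm controls, so the prompt-engineering work isn't disposable.
# The path and metadata fields are illustrative assumptions.
import json
from datetime import datetime, timezone
from pathlib import Path

LIBRARY_PATH = Path("prompt_library.json")  # in practice: a shared drive or DMS location


def save_prompt(name: str, prompt_text: str, author: str, practice_group: str) -> None:
    """Store a refined prompt with enough metadata for colleagues to find and reuse it."""
    library = json.loads(LIBRARY_PATH.read_text()) if LIBRARY_PATH.exists() else {}
    library[name] = {
        "prompt": prompt_text,
        "author": author,
        "practice_group": practice_group,
        "saved_at": datetime.now(timezone.utc).isoformat(),
    }
    LIBRARY_PATH.write_text(json.dumps(library, indent=2))


def get_prompt(name: str) -> str:
    """Retrieve a saved prompt by name so the next attorney starts from the refined version."""
    return json.loads(LIBRARY_PATH.read_text())[name]["prompt"]


# Example: the litigation team saves the discovery-response prompt it spent an afternoon refining.
# save_prompt("discovery_response_format", "Format each response as ...", "A. Associate", "Litigation")
```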
Your Audit Trail Has a Hole In It
This one should worry managing partners. When something goes wrong with an AI-generated work product, and eventually it will, your first question is going to be: “What exactly did we ask it, and what exactly did it produce?”
With ZDR enabled, the vendor can’t answer that for those interactions. They have nothing. Which means the burden of logging inputs and outputs for ZDR-covered work falls entirely on your firm. Most firms adopting ZDR don’t build that logging infrastructure until after they need it. And here’s the kicker: whatever logging system you do build is itself discoverable, subject to your own retention policies, and a data governance problem you now own entirely. You’ve traded one risk for another.
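Here's a rough sketch of the kind of firm-side logging that has to fill that gap, using the standard OpenAI Python client. The model name, log location, and record fields are illustrative assumptions on my part; the point is simply that the record of what was asked and what came back has to live somewhere the firm controls.

```python
# Minimal sketch of firm-side audit logging under ZDR: since the vendor keeps nothing,
# every input and output is appended to a log the firm controls. The model name, log
# location, and record fields are illustrative assumptions, not a prescribed schema.
import json
from datetime import datetime, timezone
from pathlib import Path

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment
AUDIT_LOG = Path("ai_audit_log.jsonl")  # in practice: a locked-down store with its own retention policy


def logged_completion(matter_id: str, user: str, prompt: str, model: str = "gpt-4o") -> str:
    """Send a prompt and record exactly what was asked and exactly what was produced."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    output = response.choices[0].message.content
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "matter_id": matter_id,
        "user": user,
        "model": model,
        "prompt": prompt,
        "output": output,
    }
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(record) + "\n")
    return output
```

And remember: this log is exactly the discoverable, retention-policy-bound artifact I described above. Building it is part of the cost of ZDR, not a bonus.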
The Ethics Rules Are Pulling in the Opposite Direction
This is the tension that almost nobody in the ZDR conversation is talking about, and it might be the most important one.
There’s no single rule that says “you must keep a log of every AI interaction.” But there are multiple ethical obligations that, taken together, point strongly in that direction.
ABA Formal Opinion 512 says lawyers using AI must provide competent representation, which includes understanding the benefits and risks of the technology. That’s Model Rule 1.1. Under Rules 5.1 and 5.3, supervising lawyers have to make reasonable efforts to ensure that attorneys and non-lawyers under their authority are following the Rules of Professional Conduct when using AI on client matters. Think about that for a second. You can’t supervise what you can’t see.
Then there’s billing. The ABA opinion notes that if a lawyer uses AI to draft a pleading, they can charge for the time spent inputting information and reviewing the output, but fees must remain reasonable. When a client challenges a bill and says “you charged me four hours for something AI did in ten minutes,” the firm needs records to justify that time. Without an audit trail, you’re defending your billing with nothing but your word.
The DC Bar’s Ethics Opinion 388 goes even further. It says lawyers should consider whether specific interactions with AI in connection with a client matter should be retained as part of the client file. It stops short of saying every AI interaction is automatically part of the record, but the direction is clear: if AI was used on a matter, you should be thinking about what to keep.
And from a liability standpoint, if an AI’s contract drafting or risk analysis leads to a major oversight, a documented audit trail can clarify where things went wrong. That record may prove essential in limiting professional liability exposure.
So here’s the conflict: you’ve got one set of obligations telling firms to protect client data by ensuring the vendor retains nothing, and another set of obligations that essentially require firms to document how AI was used on every matter. Those two things are pulling in opposite directions. ZDR doesn’t just delete the vendor’s copy. It deletes the easiest, most automatic version of the audit trail. And most firms haven’t built anything to replace it.
Troubleshooting Goes Blind
Think about the last time you called tech support and they asked you to reproduce the problem. With ZDR, your vendor can’t even see what happened. You’re filing a support ticket that basically says “trust us, it broke” with zero evidence on their end.
When an attorney gets a hallucinated case citation, or the AI misapplies a legal standard in a contract review, the vendor has no record to investigate. They can’t reproduce it. They can’t diagnose whether it was a model issue, a prompt issue, or something else entirely. You’re on your own.
Some Performance Tradeoffs to Be Aware Of
ZDR does limit some of the performance features that providers offer. For example, OpenAI’s extended prompt caching, which stores data in GPU-local storage for longer cache retention, is not compatible with ZDR. Their basic in-memory caching still works because it doesn’t persist data to disk, but the more advanced caching features are off the table.
The practical impact depends on how your firm uses the tools. For a single attorney running a few research queries, the difference is negligible. But for firms doing high-volume work through the API, losing access to extended caching means forgoing some of the cost and latency savings the providers offer. It’s a tradeoff worth understanding, even if it’s not the dealbreaker some people make it out to be.
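If you want to see what caching actually buys you, here's a rough sketch using the standard OpenAI Python client: put the long, stable instructions at the front of every request so repeated calls can hit the cache, then check how many prompt tokens were served from it. The usage field name reflects OpenAI's documented response format as I understand it; treat it as an assumption to verify against the current API reference.

```python
# Rough sketch of how prompt caching shows up in practice: stable content goes first
# so repeated requests share a cached prefix, and the usage object reports how many
# prompt tokens came from cache. Field names are assumptions to verify against the
# current OpenAI API reference.
from openai import OpenAI

client = OpenAI()

FIRM_INSTRUCTIONS = "You are assisting a litigation team. Always cite ..."  # long, stable prefix (1,000+ tokens in real use)


def ask(question: str) -> None:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": FIRM_INSTRUCTIONS},  # stable content first maximizes cache hits
            {"role": "user", "content": question},             # variable content last
        ],
    )
    details = getattr(response.usage, "prompt_tokens_details", None)
    cached = getattr(details, "cached_tokens", 0) if details else 0
    print(f"{cached} of {response.usage.prompt_tokens} prompt tokens came from cache")
```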
It Solves Less Than You Think
I emphasize this point more than any other when I’m speaking to bar associations and firm leadership. ZDR addresses the vendor-side retention risk. Period. That’s it.
It doesn’t protect data in transit. It doesn’t control what happens at the endpoint. And it absolutely does not stop an attorney from copying sensitive AI output into an email, a shared drive, or a brief filed with the court. I’ve watched attorneys treat ZDR like a magic shield that covers all confidentiality concerns, and it just isn’t.
The Heppner ruling and the steady stream of evolving bar ethics opinions are making something clear: the duty of competence around AI use goes far beyond your vendor’s retention policy. ZDR is one layer of protection. It is not the whole answer.
Collaboration Hits a Wall
Many of the most valuable ways firms could use enterprise AI require some form of data persistence. Shared prompt libraries across practice groups. Assistants tuned to specific areas of law. Team knowledge bases that get smarter over time. Persistent research threads that a second attorney can pick up where the first left off.
For any project running ZDR, those features are off the table. If your firm has ZDR turned on across the entire organization, you’re locked into a single-user, single-session model unless you build middleware to handle state management externally. Firms that configure ZDR at the project level can avoid this by keeping collaborative and training work in non-ZDR projects. But most firms haven’t gotten that granular with their setup.
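What does that middleware actually look like? Here's a minimal sketch, assuming the firm stores conversation history in its own system and replays it on every call, so a second attorney can resume a research thread even though the provider retains nothing. The storage location and message structure are illustrative; a real implementation would sit inside your DMS with proper access controls.

```python
# Minimal sketch of external state management for a ZDR project: the conversation
# history lives in firm-controlled storage and is replayed on every request, so a
# thread can be resumed by anyone with access. Storage location and structure are
# illustrative assumptions.
import json
from pathlib import Path

from openai import OpenAI

client = OpenAI()
THREADS_DIR = Path("research_threads")  # in practice: a permissioned firm document store
THREADS_DIR.mkdir(exist_ok=True)


def continue_thread(thread_id: str, new_message: str) -> str:
    """Load the saved history, append the new question, call the model, and save the updated thread."""
    thread_file = THREADS_DIR / f"{thread_id}.json"
    history = json.loads(thread_file.read_text()) if thread_file.exists() else []
    history.append({"role": "user", "content": new_message})

    response = client.chat.completions.create(model="gpt-4o", messages=history)
    reply = response.choices[0].message.content

    history.append({"role": "assistant", "content": reply})
    thread_file.write_text(json.dumps(history, indent=2))
    return reply


# Attorney A starts the thread; Attorney B picks it up later with the same thread_id.
# continue_thread("matter-1042-choice-of-law", "Summarize the choice-of-law issue in ...")
```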
It’s More Flexible Than You Think, But Most Firms Don’t Know That
Here’s something that changes the conversation: ZDR doesn’t have to be all or nothing. OpenAI, for example, allows ZDR to be enabled at the organization level or at the project level. That means a firm could, in theory, set up one project with ZDR for privileged client-matter work and another project without it for general research and internal productivity, all within the same account.
That’s a meaningful level of flexibility that most firms don’t realize they have. But taking advantage of it requires someone at the firm who understands the platform well enough to configure it properly and set clear policies about which work goes where. Most small and midsize firms don’t have that person. So in practice, most firms still pick one setting for the whole organization and live with the tradeoffs. Which usually means they turn ZDR on across the board because the risk calculus for a law firm almost always defaults to maximum caution. And then they quietly absorb all the limitations I’ve described without fully understanding what they gave up.
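For firms that do have that person, here's a rough sketch of what the client-side routing can look like, assuming two OpenAI projects, one where the vendor has enabled ZDR for privileged matter work and one standard project for everything else, each with its own API key. The environment variable names and the sensitivity flag are my own illustrative choices, not anything the platform dictates.

```python
# Rough sketch of client-side routing between two OpenAI projects: one with ZDR
# enabled for privileged client-matter work, one standard project for general
# research and internal productivity. Environment variable names and the
# `privileged` flag are illustrative assumptions about the firm's setup.
import os

from openai import OpenAI

zdr_client = OpenAI(api_key=os.environ["OPENAI_KEY_ZDR_PROJECT"])          # privileged matters
standard_client = OpenAI(api_key=os.environ["OPENAI_KEY_GENERAL_PROJECT"])  # internal productivity


def ask(prompt: str, privileged: bool) -> str:
    """Send privileged work to the ZDR project and everything else to the standard one."""
    client = zdr_client if privileged else standard_client
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

The policy question of which work counts as privileged still has to be answered by people, not code. The routing only works if the firm has set clear rules about which matters go where.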
What’s Actually Working
I want to cut through the noise here because this is the practical answer I give firms every week.
In the US, firms are adopting Teams and Enterprise subscriptions from OpenAI, Anthropic, Google, and Microsoft. And those subscriptions give them the data security, audit capabilities, and administrative controls they need without the tradeoffs that come with blanket ZDR.
I work with law firms, PE firms, valuation firms, accounting firms. They all go through their own diligence after we talk about these issues. They read the commercial terms. And they come to the same conclusion: the enterprise tiers work for them. The protections are there. The terms are clear. Your data isn’t used for training. You get admin controls, audit logs, and proper data handling agreements.
The critical line is this: stop using consumer tools. The free tier of ChatGPT, a personal Claude account, a basic Gemini login. Those don’t give you the protections you need. The Teams and Enterprise tools do. That’s the distinction that matters most for the vast majority of professional services firms.
Is anything ever 100% foolproof or hack-proof? No. Of course not. No technology is. But that’s been true of every tool firms have ever adopted, from email to cloud storage to document management systems. The standard isn’t perfection. The standard is reasonable measures, proper diligence, and defensible decisions.
One caveat: I’m speaking specifically about US-based firms here. I haven’t dug into the specifics of GDPR requirements and the need for local data processing in UK and European use cases. Those firms may have additional constraints around data residency that change the calculus.
Beyond Law Firms: What About Healthcare?
I get asked a lot whether this same logic applies beyond legal. Healthcare is the most common one, and the answer is yes.
HIPAA has the Business Associate Agreement. If your enterprise LLM provider signs a BAA, that’s a critical step toward being able to process protected health information through the tool. But a signed BAA alone isn’t the whole picture. The organization also needs to confirm that the specific services and features being used are actually covered under the BAA, and the organization still carries its own HIPAA compliance responsibilities around access controls, training, and data handling. Azure OpenAI, Google Cloud, AWS Bedrock, and both Anthropic and OpenAI at their enterprise levels all offer BAAs. But not every tier includes one. The consumer plans and even some mid-tier business plans don’t. So the line isn’t just “enterprise vs. consumer.” It’s “does your specific agreement include a signed BAA, and are the services you’re using in scope.”
The same principle applies here as it does for law firms: the enterprise subscriptions provide the foundation, but the organization has to do the work to confirm the right agreement is in place.
What to Do Monday Morning
If you’re a firm leader trying to get this right, here’s where I’d start.
Get your firm on an enterprise subscription. Not consumer, not free tier. Teams or Enterprise. Read the terms, confirm the data handling, and move forward. This one decision addresses a large share of the avoidable privacy and governance risk.
Train your people on what these tools actually protect, and what they don’t. Nobody should treat an enterprise subscription or a ZDR toggle as a substitute for good data hygiene, proper supervision, or client communication about AI use.
The Bottom Line
Zero data retention is a reasonable default instinct for any law firm thinking about AI and privilege. But instinct isn’t strategy. The firms that will get the most value from AI, without taking on unnecessary risk, are the ones who understand exactly what ZDR costs them and make informed choices instead of reflexive ones.
Don’t let the comfort of “we retain nothing” keep your firm from asking the harder question: “Are we actually protecting what needs protecting?” And the even harder one: “Can we prove how we used AI on this matter if someone asks?”
If you read this far, you’re not the person who set the ZDR policy and stopped thinking about it. You’re the one who suspects the tradeoffs are real and wants to know exactly what they are before your next committee meeting.
That’s the conversation I have every day with managing partners and COOs who are trying to get this right without slowing everything down. If you’re working through how to balance data protection with usability, audit requirements, and cost, send me a note at steve@intelligencebyintent.com. Tell me where you are and what’s not working. I’ll be direct about what’s ready, what isn’t, and where the real risk sits.