Your Firm's AI Context Is an Asset. Start Treating It Like One
The practical steps to inventory, structure, and protect your firm's AI context while vendors figure out portability
This is a follow-up to my last post. I keep three AI chat tabs open and still paste the same background into each one, again and again. What I am building, who my clients are, what “done” looks like. Somewhere between the third paste and the fourth correction, I feel the hit to my attention. The assistant knows me, sort of, but only inside its own walls. If I move, I start over. That is not a product gap. That is a quality-of-work problem we will keep paying for until we fix it at the market level.
Here is the next-wave idea in plain English: AI memory portability. If a platform remembers me, I should be able to take that memory with me to another assistant, the way I once moved my phone number from AT&T to Verizon. Not a fresh start, a move. To be clear, I am not asking vendors to deliver full memory portability today. Most of them don’t. I am saying we should name the concept now, design toward it inside our firms, and be ready to negotiate for it when the market catches up.
Why does this matter to a business leader who wants better work, not another acronym? Three reasons. First, switching costs get personal when an assistant holds your institutional knowledge; leaving can feel like losing part of your firm’s working memory. Second, we have spent two years talking about privacy (who can see the data) and not nearly enough time on autonomy (whether you can move your data, context, and learned signals wherever you want). Third, healthy markets need a real off-ramp. Portability reduces lock-in, pushes prices toward reality, and keeps everyone honest about safety and quality. Even if it is not available yet, planning for it changes how you choose tools today.
What exactly is the “memory” we might want to move later? Some of it is static: profiles, preferences, glossaries, style guides, and prompt libraries. Some of it is relational: links between you, your projects or matters, your files, your clients. Some of it is derived: vectors, embeddings, summaries, classifier tags, and even small fine-tunes the system produced while serving you. And a lot of it is event trails: conversation histories, action logs, tool permissions, decisions with timestamps. If you have ever told an assistant, “use our house style and keep Dr. Nguyen’s title precise,” that is memory.
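To make those four categories concrete, here is a small sketch of what one item from each bucket might look like if you wrote it down yourself. Every name and value is illustrative; no vendor actually stores memory in this exact shape.

```python
# Illustrative only: one example record per kind of "memory" described above.
# The field names and values are placeholders, not any vendor's data model.
memory_examples = {
    "static": {
        "type": "style_rule",
        "value": "Use our house style and keep Dr. Nguyen's title precise.",
    },
    "relational": {
        "type": "link",
        "person": "you",
        "matter": "2024-007",              # hypothetical matter number
        "files": ["engagement-letter.docx"],
    },
    "derived": {
        "type": "embedding",
        "source_file": "engagement-letter.docx",
        "model": "example-embedding-model-v1",   # assumed model name
        "dimensions": 1536,
    },
    "event_trail": {
        "type": "agent_action",
        "timestamp": "2025-01-15T09:30:00Z",
        "action": "drafted client update email",
        "approved_by": "you",
    },
}
```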
Since vendors are not ready to hand all of this back, what can you do now? Three internal moves that cost little and save you later.
First, name your memory. Create a simple inventory of what your assistant needs to do good work for you: the specific glossaries, style guides, prompts that actually produce value, and the top client or matter links the assistant should respect. Put this in a living document. When you know what matters, you know what to ask for later.
Second, structure what you can control. Keep your prompts, glossaries, and style guides as files in your own drive with clear versioning. If you build a prompt library, store it in JSON or YAML in your repo, not only inside a chat product. Treat your “house style” like a brand asset, because it is.
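For example, a prompt library can live as a versioned file in your own repo. The layout below is one illustrative way to structure it, written as a short Python script that produces the JSON file; the field names are assumptions, not a standard schema.

```python
# One illustrative layout for a firm prompt library, kept as a versioned JSON
# file in your own repo rather than only inside a chat product. The structure
# is an assumption, not a standard.
import json
from pathlib import Path

prompt_library = {
    "version": "2025-01-15",
    "house_style": "Plain English, short sentences, professional titles kept exact.",
    "prompts": [
        {
            "name": "client-update-email",
            "owner": "marketing",
            "template": "Draft a client update about {topic} in our house style, under 200 words.",
        },
        {
            "name": "matter-summary",
            "owner": "litigation",
            "template": "Summarize the attached matter notes for partner review, citing documents by name.",
        },
    ],
}

Path("prompts").mkdir(parents=True, exist_ok=True)
with open("prompts/prompt_library.json", "w", encoding="utf-8") as f:
    json.dump(prompt_library, f, indent=2)
```

Because it is a plain file in your repo, every change to your house prompts gets the same version history as the rest of your work.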
Third, keep receipts. Where your assistant writes summaries, labels, or other derived data, save copies in your system of record when possible. I like a simple rule: anything you would be upset to lose should have a home you control. Even a weekly export to a private bucket helps.
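One way to automate those receipts, assuming you already save assistant outputs to a local folder and have a private S3 bucket you control: a small weekly job like the sketch below. It relies on the boto3 library; the bucket name and folder paths are placeholders.

```python
# Minimal sketch of a weekly "keep receipts" job: copy locally saved assistant
# outputs (summaries, labels, exports) into a private bucket you control.
# Assumes the boto3 library, AWS credentials, and an existing bucket; the
# names below are placeholders.
from datetime import date
from pathlib import Path

import boto3

BUCKET = "firm-ai-receipts"          # hypothetical private bucket
SOURCE_DIR = Path("ai-outputs")      # wherever you save summaries and labels
PREFIX = f"weekly/{date.today().isoformat()}"

s3 = boto3.client("s3")

for path in SOURCE_DIR.rglob("*"):
    if path.is_file():
        key = f"{PREFIX}/{path.relative_to(SOURCE_DIR)}"
        s3.upload_file(str(path), BUCKET, key)   # one dated copy per file
        print(f"backed up {path} -> s3://{BUCKET}/{key}")
```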
If you run a law firm or a professional services practice, there is one more step. Write the future clause now, then hold it until the market can honor it. You are not asking for it today. You are getting the language ready.
Data Ownership: “All user prompts, outputs, summaries, labels, and embeddings derived from firm data are firm-owned IP.”
Portability on Request: “Vendor will publish a plan to support export of user and firm memories as the feature becomes available and will give the firm priority access to pilot programs.”
Derivatives Included: “When available, export will include derived features, for example embeddings, classifier tags, and RAG chunks, in open formats.”
Interoperability: “Vendor will maintain public schemas and import docs when export is released. No additional license fee for export or import tooling.”
Deletion and Retention: “Upon termination, vendor will certify deletion of firm memories within a defined window after verified export, once export is supported.”
Again, this is not a demand for today. It is a placeholder so you do not start from zero when the feature appears. If you have ever bought enterprise software, you know how this works. The clause you write now becomes leverage later.
On the buy side, keep a short list of future-state questions you will fold into RFPs when portability is real:
Can we export everything we put in, plus what the system learned?
Are the embeddings portable, with the model name, dimension size, and distance metric documented?
Is there a schema map for memory objects (person, matter, file, tag, summary)?
Can a competitor import the export with minimal loss?
Is there a keys-outside-the-vendor option, for example your own KMS for the vector database?
Do we get an audit log of agent actions with timestamps and citations?
What is the off-ramp process, including timeline, fees, and level of support?
Pin this list to your procurement playbook. It costs nothing today and saves months later. The sketch below shows the kind of export detail these questions are probing for.
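To make the embeddings and audit questions concrete, here is a hedged sketch of the metadata a portability-ready export might carry. Every field name and value is a hypothetical example, not any vendor’s actual format.

```python
# Illustrative sketch of an export manifest a portability-ready vendor might
# provide. Every field and value is a hypothetical example, not a real schema.
import json

export_manifest = {
    "exported_at": "2025-01-15T09:00:00Z",
    "memory_objects": ["person", "matter", "file", "tag", "summary"],
    "embeddings": {
        "model_name": "example-embedding-model-v1",  # which model produced the vectors
        "dimensions": 1536,                          # vector length
        "distance_metric": "cosine",                 # how similarity was computed
        "format": "parquet",                         # an open, importable format
    },
    "event_trail": {
        "agent_actions": "actions.jsonl",            # timestamped log with citations
    },
}

# Knowing the model, dimensions, and metric is what would let a competitor
# re-index your content with minimal loss.
print(json.dumps(export_manifest, indent=2))
```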
So, how should you act this quarter if you are not asking vendors to show memory portability yet?
Choose tools that already expose more of your own work back to you. Products that let you export chats or pull data through an API signal a future path, even if it is not full memory portability (a small sketch of what that can look like follows these three steps).
Store your critical context in places you control. That means glossaries in a repo, style guides in shared folders, prompts in files you can track. Think “move in a weekend,” not “recover from a flood.”
Ask vendors one non-threatening question: “When you ship memory export, what will be in it?” You are not pushing for a date. You are asking for intent and shape.
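On the first of those steps, choosing tools that expose your work back to you: here is a hedged sketch of what pulling your own conversations from a vendor export API could look like. The endpoint, token variable, and file paths are placeholders, not any real product’s API.

```python
# Hypothetical example of pulling your own chat history from a vendor export
# API and landing it somewhere you control. The URL, header, and response
# shape are assumptions for illustration; consult your vendor's real docs.
import json
import os
import urllib.request
from datetime import date

API_TOKEN = os.environ["VENDOR_API_TOKEN"]   # assumed environment variable
EXPORT_URL = "https://api.example-vendor.com/v1/exports/conversations"  # placeholder

request = urllib.request.Request(
    EXPORT_URL,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
)

with urllib.request.urlopen(request) as response:
    conversations = json.load(response)

os.makedirs("exports", exist_ok=True)
outfile = f"exports/conversations-{date.today().isoformat()}.json"
with open(outfile, "w", encoding="utf-8") as f:
    json.dump(conversations, f, indent=2)
```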
I want the people I work with to think once, then reuse that thinking. Memory portability is the boring plumbing that will make that possible. Today, it is a concept we should prioritize, a design target for our own operations, and a clause we keep on deck for the next renewal. When the market is ready, you will be too. And when you move, you won’t start from scratch. You will take your number with you. One day, you will take your memory too.
Moving Forward with Confidence
The path to responsible AI adoption doesn’t have to be complicated. After presenting to nearly 1,000 firms on AI, I’ve seen that success comes down to having the right framework, choosing the right tools, and ensuring your team knows how to use them effectively.
The landscape is changing quickly: new capabilities emerge monthly, and the gap between firms that have mastered AI and those still hesitating continues to widen. But with proper policies, the right technology stack, and effective training, firms are discovering that AI can be both safe and transformative for their practice.
Resources to help you get started:
In addition to publishing AI thought leadership on a regular basis, I work directly with firms to identify the best AI tools for their specific needs, develop customized implementation strategies, and, critically, train their teams to extract maximum value from these technologies. It’s not enough to have the tools; your people need to know how to leverage them effectively.
For ongoing insights on AI best practices, real-world use cases, and emerging capabilities across industries, consider subscribing to my newsletter. While I often focus on legal applications, the broader AI landscape offers lessons that benefit everyone. And if you’d like to discuss your firm’s specific situation, I’m always happy to connect.
Contact: steve@intelligencebyintent.com
Share this article with colleagues who are navigating these same questions.