Your AI Tool Isn't Forgetful. Your Setup Is.
Claude walls off your workstreams. ChatGPT lets you choose. Here's how to pick.
Projects in ChatGPT and Claude: the quiet feature that changes how teams work
If you’ve used ChatGPT or Claude at work, you’ve probably had the same experience I have.
You open a fresh chat. You paste the background. You explain the goal. You upload the same “source of truth” doc again. You remind the model how you like things written. It does a decent job. Then the next day you do it all over again.
That’s not a workflow. That’s Groundhog Day.
Projects are the first feature that really breaks that cycle. Not because they're flashy, but because they make AI feel less like a clever demo and more like a persistent workspace where real work can live for weeks.
And here’s the part most people miss: the biggest difference between Projects in ChatGPT and Projects in Claude isn’t the layout. It’s memory. How it behaves. Where it’s allowed to reach. And how easy it is to accidentally pollute.
Let’s unpack it in plain English.
What a “Project” actually is
A Project is a container for a workstream.
Instead of one giant chat thread that turns into spaghetti, you get a workspace where you can keep:
Your reference files. Your running conversations. And your rules.
That’s it. But it’s a big deal, because most work isn’t a single ask. It’s a sequence.
Draft, revise, approve. Then a new version. Then a stakeholder twist. Then someone asks, “Can we make this shorter?” Then Legal wants a disclaimer. Then Sales wants a talk track.
Projects keep all of that in one place, with the same instructions and the same source material close at hand. So the AI stops resetting to zero.
Memory is the real feature, and the real risk
Once you move beyond novelty, AI wins on one thing: continuity.
The best assistant is the one that doesn’t make you repeat yourself. But continuity cuts both ways.
If the system forgets your context, you waste time. If it “remembers” the wrong context, you get strange output. In a business setting, strange output becomes rework. In legal, HR, and finance, it can become something worse.
So when you compare Projects, you’re really comparing boundaries.
How strongly does the tool keep one workstream separate from another?
And how much control do you have when you want it to be personal across everything you do?
Claude Projects: clean separation by design
Claude’s approach is the one most leaders intuitively expect.
Your general chat memory and your project memory are separate worlds.
Projects have their own memory and their own running summary. Importantly, work you do inside projects does not get blended into Claude’s general memory of you.
That sounds small. In practice, it’s a relief.
If you’re juggling multiple client matters, multiple launches, or multiple internal initiatives, you want compartments. You want the assistant to stay in the lane you put it in. Claude makes that the default state.
There’s another practical advantage here. On paid Claude plans, you can search past chats. But you can keep that search scoped. So you can go find something when you need it, without turning every past conversation into a soup that bleeds into the next project.
For teams, this design choice pays off in two places: consistency and safety. Less cross-talk. Fewer “where did that come from?” moments.
ChatGPT Projects: more options, more responsibility
ChatGPT gives you more knobs to turn, especially around memory.
ChatGPT has two memory behaviors that matter in real life.
One is “saved memories,” which are the things you explicitly tell it to remember, like your preferences, your role, or how you like content structured.
The other is “chat history referencing,” where it can draw on past conversations to be more helpful without you re-stating everything.
Projects add another layer: you can decide whether the project should act like a sealed room or a room with doors.
In practice, ChatGPT Projects tend to run in two modes.
One is project-only memory. The project draws context only from the chats and files inside that project. That’s the clean room approach.
The other is a default mode that can allow the project to benefit from your broader memory behavior, depending on your plan and settings. That’s the “make it feel like a real assistant” approach.
This can be fantastic when you want one consistent voice across your work. You don’t want to re-teach tone and formatting every time. You want the assistant to feel like it knows you.
But the tradeoff is obvious. The more you allow cross-chat memory, the more careful you have to be about what you feed it, and how you scope sensitive work.
One detail I like in ChatGPT’s world: when projects are shared for collaboration, the system pushes toward project-only boundaries. That’s what you want when multiple people are using the same workspace. Shared spaces need predictable behavior.
Consumer paid vs enterprise: what changes when governance matters
Here’s the reality: the “best” setup changes the moment you bring real organizational risk into the picture.
On consumer paid plans, both tools are often more personal. You’re optimizing for speed and convenience.
In business and enterprise environments, the story shifts. It becomes about control, admin settings, data handling defaults, and keeping teams from accidentally mixing workstreams.
ChatGPT’s business and enterprise tiers tend to emphasize centralized controls, and there are differences in how far “chat history referencing” goes across those tiers. The effect is that enterprise can feel safer by default, but sometimes a little less magically personalized across time.
Claude’s enterprise story is more straightforward: memory exists, but owners can usually manage it centrally. And Claude’s project boundaries stay strong even when general memory is enabled. So you get the safety advantage without doing as much manual scoping work.
If you’re leading a team, this is the question I’d keep in your head: do we want personalization everywhere, or do we want compartments that keep workstreams clean?
There isn’t one right answer. But there is a wrong answer, which is leaving it to chance.
File uploads: the stuff that decides whether Projects stick
Teams don’t abandon Projects because they don’t like the concept. They abandon Projects because they can’t get the right material in, or they can’t keep it organized.
This is where ChatGPT and Claude feel meaningfully different.
ChatGPT is generous with individual file size. You can upload very large files, and the hard per-file cap is high. For text-heavy documents, there’s also a practical processing cap measured in tokens. And then there’s the part that surprises teams: Projects have a per-project file count limit that depends on your plan.
As of early 2026, the common shape looks like this: free users get a small number of project files, Plus tiers are larger, and Pro, Business, and Enterprise tiers go higher. You can also hit storage caps across your account and organization if you treat Projects like a dumping ground. There are also rate limits on how many files you can upload in a short window, and batch limits on how many you can upload at once.
In plain terms: ChatGPT is great when you want to work deeply with a curated set of big, important files. It rewards being disciplined about what belongs in the project.
Claude is different. The per-file size limit is smaller, and each chat has a cap on how many files you can attach. But a Project can hold a large knowledge base, and it's designed to retrieve the most relevant parts when you ask questions. Paid Claude plans also support very large context windows. And the project knowledge base can keep scaling as your library grows, because it isn't relying solely on stuffing everything into the prompt at once.
In plain terms: Claude is great when you want to build a bigger project library and let the system pull what matters as needed.
Both work. They just push you toward different habits.
Four examples that actually map to real teams
A legal team managing an active case
Picture a litigation team in the middle of a matter that keeps moving.
The work is repetitive in a frustrating way. Status updates, fact timelines, exhibit summaries, deposition comparisons, draft motions, client emails. The content changes, but the shape of the work repeats.
A case Project becomes the “case room.”
You load the core pleadings, key exhibits, a chronology spreadsheet, deposition transcripts, and the protocols everyone must follow. Then you set strict rules: don’t invent facts, cite sources, and when uncertain, ask for the missing document.
Claude’s strength here is the clean separation. Case work stays in the case project. It doesn’t bleed into other matters. That matters more than people admit, because the worst error isn’t a typo. It’s mixing details across cases.
ChatGPT can be just as safe if you choose project-only memory for the case and treat the project like a sealed room. The payoff is that ChatGPT can be excellent at structured drafting and consistent formatting, especially when you give it strong instructions.
The weekly rhythm becomes very practical. Draft the client update, produce the timeline, surface inconsistencies, list what discovery still needs to prove. Less time reassembling context, more time thinking.
A marketing team running a product launch
Launches don’t fail because nobody worked hard. They fail because messaging drifts.
One doc says the product is for mid-market. Another doc says enterprise. The landing page makes a claim that Legal didn’t approve. Sales gets a deck that doesn’t match the website. By the time you notice, you’re in the comments thread from hell.
A launch Project fixes that by keeping the “approved truth” close. Positioning, ICP notes, brand voice, claim rules, product FAQ, pricing notes, and past examples of what performed well.
ChatGPT shines when the marketing leader wants one consistent voice across everything, because it can carry preferences across work when you allow it. Your outputs start sounding like your team.
Claude shines when you want strict separation between launches, so each campaign gets its own clean workspace without old experiments creeping in.
The most valuable part is speed without drift. You can generate landing copy, ad variants, nurture emails, and sales talk tracks that all pull from the same source material. And that is how you avoid the “five versions of the truth” mess.
A sales team onboarding new reps
Onboarding is where time disappears.
New reps ask the same questions. Enablement lives in ten places. Great calls aren’t turned into teachable patterns. Managers end up repeating themselves, and then everyone pretends the ramp was “fine.”
A sales onboarding Project becomes the ramp hub.
You load the pitch deck, qualification questions, objection handling, pricing rules, competitive notes, and a small set of great call transcripts. Then you use the Project as a practice space.
Roleplay a discovery call with a CFO. Practice negotiating against a competitor. Turn transcripts into a one-page “what good looks like.” Generate a weekly ramp plan that matches the rep’s territory.
Claude often works well when you want a bigger library of enablement material and you want the tool to retrieve the right chunk at the right time.
ChatGPT often works well when you want very consistent outputs, structured artifacts, and tight formatting for talk tracks and sequences.
Either way, it turns onboarding from “read these docs” into “practice the job.”
An HR team managing policy and employee relations
HR is where Projects stop being cute and start being serious.
Policy answers must be consistent. Templates must be current. Tone must be neutral. And sensitive situations must stay contained.
A People Ops Project can hold the handbook, local addenda, standard templates, escalation paths, and manager scripts. Then you set rules like: cite the handbook section, don’t guess, and when policy is unclear, ask what jurisdiction or employee type applies.
Claude’s project separation helps keep HR work out of general memory. ChatGPT’s project-only option gives you the same safety if you choose it. The key is being intentional.
The highest-value prompts are boring, which is exactly the point. Draft a manager response, build a neutral performance plan, summarize policy changes into a clear employee FAQ. Less inconsistency. Less improvisation. Less risk.
The tradeoffs nobody likes to talk about
Projects don’t eliminate risk. They move it.
If your files are outdated, the AI will be confidently wrong. If your instructions are vague, the AI will improvise. If your team treats Projects like a junk drawer, the outputs will get messy.
And the human risk is real too. Projects can make AI feel “trusted” because it sounds consistent. That’s not the same as being correct.
So the right approach is simple: treat Projects like a controlled workspace. Curate inputs. Keep rules explicit. Make “show your source” the default, especially in legal, HR, and finance.
What I’d do Monday morning
Pick one repeating workflow that wastes time every week, like case updates, launch assets, onboarding, or policy Q&A.
Create a Project and write a short instruction block: role, tone, and what to do when it’s unsure.
Upload only the documents that should be treated as truth, then name them clearly so humans can maintain them.
Decide your boundary: project-only for sensitive work, broader memory only when cross-workstream personalization is the goal.
Run the same task three times over two weeks and judge it on consistency and rework, not wow factor.
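To make step two concrete, here's a starter instruction block. This is purely illustrative: the role, tone, and rules are placeholders you'd swap for your own team's specifics.

```
Role: You are a drafting assistant for our product marketing team.
Tone: Plain, direct, no hype. Short sentences. Active voice.
Source of truth: Only use the files uploaded to this project.
When unsure: Do not guess. Ask which document or detail is missing.
Always: Cite which project file each claim comes from.
Never: Invent statistics, quotes, pricing, or legal language.
```

Five or six lines like these do more for output consistency than any amount of per-chat prompting, because every conversation in the project inherits them automatically.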
Why I write these articles:
I write these pieces because senior leaders don’t need another AI tool ranking. They need someone who can look at how work actually moves through their organization and say: here’s where AI belongs, here’s where your team and current tools should still lead, and here’s how to keep all of it safe and compliant.
In this article, we looked at why most teams treat AI like a single-use tool instead of a persistent workspace, and how the memory and file boundaries in Projects determine whether your outputs stay consistent or quietly drift. The market is noisy, but the path forward is usually simpler than the hype suggests.
If you want help sorting this out:
Reply to this or email me at steve@intelligencebyintent.com. Tell me what’s slowing your team down and where work is getting stuck. I’ll tell you what I’d test first, which part of the ChatGPT or Claude Project setup fits your workflow, and whether it makes sense for us to go further than that first conversation.
Not ready to talk yet?
Subscribe to my daily newsletter at smithstephen.com. I publish short, practical takes on AI for business leaders who need signal, not noise.