You Turned Off Training. The Feedback Button Didn't Get the Memo.
Consumer AI privacy is a setting. Your feedback is an action. Guess which one wins.
Training Off Isn’t “Private” in Consumer AI
Every week, someone asks me the same question in a new form. In fact, it came up twice this week in different conversations. h/t to Joseph for asking an additional clarifying question on Tuesday: are the privacy settings different for free versus paid users? I'm going to focus this article on the consumer versions of the tools, since the business/enterprise/education editions have different policies.
This is the question: “Are these LLMs training on my data?”
It’s the right question. But most people stop one step too early. They find the “training” toggle, flip it off, and assume the problem is solved.
Here’s the more accurate mental model.
Turning off training is a meaningful reduction in exposure. It is worth doing. But it is not the same thing as privacy, confidentiality, or “nobody will ever see this.” These tools can still process your prompts to serve you, retain data for safety and abuse prevention, and respond to legal requests. In some cases, they may still use data in de-identified ways to improve systems.
And there’s one UI detail that matters way more than people realize.
The thumbs-up and thumbs-down trap
On several major consumer LLMs, the thumbs-up or thumbs-down button is not just a “vibe check.”
It is a feedback submission mechanism.
If you have training turned off, and then you click thumbs up or thumbs down, you may be explicitly sending that conversation (or a chunk of surrounding context) into a review and improvement pipeline anyway.
If you remember only one thing from this piece, make it this:
Training off is a setting. Thumbs feedback is an action. Actions often override settings.
What this means, tool by tool
With ChatGPT, I treat thumbs as “submit this thread.” If I’m working with anything I would not want reused outside my account, I don’t click it, even if the model nailed the answer. That feedback pathway can still route the conversation into model improvement.
With Claude, I treat thumbs as a higher-retention artifact. Thumbs feedback can cause the service to store the entire related conversation. That’s not an accident. It exists to support evaluation, safety work, and improvement workflows. Even if the content is de-linked from your identity, it’s still content you volunteered.
With Gemini, I treat feedback as a potentially wide capture of context. If you've turned off activity and training-related controls, that helps. But if you submit feedback, you can still be sending additional context along with it. If you're the kind of person who has Gemini open all day, that nuance matters.
With Grok (especially inside X), I treat thumbs as an explicit contribution. If your goal is “don’t use my data for training,” thumbs feedback is the opposite move.
With Perplexity, the big idea is the same: you can opt out of training-style reuse going forward. What’s often less obvious is that “opt out of training” doesn’t mean “no product improvement,” and feedback mechanisms can still create review artifacts. So I treat feedback as a deliberate share unless I see a clear guarantee otherwise.
I want to do a special callout here for NotebookLM (one of my absolute favorite tools). By default, Google does not train on your data. BUT: if you click the thumbs-up or thumbs-down button, it will send your entire notebook (your uploads, your chats, everything) to Google for analysis. DO NOT PRESS THUMBS UP OR DOWN IN NOTEBOOKLM. Unless you really want them to have everything in the notebook, then knock yourself out.
The practical playbook I use
When I’m using consumer AI, I keep it simple.
First, I turn off training everywhere the product allows it. It takes 60 seconds and it is a free privacy win.
Second, I use Temporary, Incognito, or Private chat modes for anything sensitive. Those modes are built to reduce retention and training use, rather than relying on a preference you flipped once.
Third, I don’t click thumbs on sensitive threads. If I want to help the model teams, I do it on harmless prompts. I don’t do it on client work, deal memos, financial models, privileged strategy, or anything regulated.
Fourth, I assume consumer AI is not a confidentiality boundary. If I truly need a confidentiality boundary, I change the tier or the architecture. That usually means an enterprise offering with stronger terms, or a controlled internal system where retention, access, and audit are enforced.
This isn’t paranoia. It’s just knowing what the buttons do.
The models are getting better fast. The privacy controls are improving too. But the default incentives still point in one direction: learn from feedback, improve the product, reduce abuse, and keep the system safe.
So yes, turn off training.
Just don’t forget that a tiny thumbs up can quietly turn it back on, at least for that conversation.
Wishing all of you a safe, happy, and healthy Holiday Season!