Claude flips the privacy default, opt out before September 28
Treat this like the “do not swim with the hair dryer” label for your data, funny until it isn’t
Here’s the blunt version: if you use Claude as a consumer, your chats and code are now set to train Anthropic’s models unless you say otherwise. The new policy takes effect on September 28, and the default answer is “yes”: a large Accept button paired with a small training toggle that starts in the On position. I don’t like it. A company that built its reputation on privacy just moved the goalposts, and that matters for anyone who discusses work, clients, or personal details in AI tools.
What changed, in plain English
Anthropic is rolling out updated Consumer Terms and a new Privacy Policy. If you opt in, Anthropic will use your new or resumed chats and coding sessions to train future models, and it may keep that data for up to five years. If you opt out, Claude keeps the prior 30-day retention policy. To keep using Claude after September 28, you must make a choice. This applies to Claude Free, Pro, Max, and Claude Code. It does not apply to Claude for Work, Claude Gov, Education, or API usage via Bedrock or Vertex.
The UI nudges you toward opting in. Existing users see a pop-up titled “Updates to Consumer Terms and Policies.” The large Accept button opts you in unless you flip the smaller “help improve” toggle off first. You can change your mind later, but data already used for training cannot be pulled back. Older chats are excluded unless you reopen them, and deleted chats are not used for training going forward.
Why this feels like a sea change: Anthropic spent years signaling a privacy-first posture, with short retention and clear lines around training. Now the default shifts from privacy to participation unless you intervene. For individuals, the risk is obvious: sensitive work threads and personal context can become training data. For leaders, the signal is bigger: even the privacy-minded players are pushing toward real-world data to compete on capability, safety classifiers, and coding performance. I understand the business pressure. I still think the default is wrong.
Who is affected
Affected: consumer accounts on Claude Free, Pro, and Max, including Claude Code sessions tied to those accounts. You must pick a setting by September 28 to keep using the product.
Not affected: Claude for Work and other commercial plans, Claude Gov, Claude for Education, and API use, including on AWS Bedrock and Google Vertex AI. Those remain out of training by default.
Do this now: opt-out steps
Option A, when the pop-up appears
Read the pop-up carefully.
Find the line that says something like “Allow the use of your chats and coding sessions to train and improve Anthropic AI models.”
Toggle it Off.
Then click Accept to proceed with the updated terms while keeping training disabled. You can revisit this later in settings if needed.
Option B, from Settings (desktop or mobile)
Open Claude and sign in.
Go to Settings.
Open Privacy, then Privacy Settings.
Turn “Help improve Claude” (or the similarly named model-training control) Off. Changes apply to future chats and coding sessions.
Recommended hygiene, especially if you ever opted in
Do not reopen old sensitive chats if you want them excluded. Reopening converts them into “resumed” sessions that can be used per your setting.
Delete sensitive conversations you do not want used in future training. Deleted chats are not used going forward.
Remember irreversibility: if your data was already used while opted in, it cannot be removed from training that has occurred. Future data will respect your current setting.
For teams and clients
If you use Claude for Work or the API directly, or through Bedrock or Vertex, training on your prompts remains off by default. Confirm this posture in your DPA and vendor records, and keep using zero-data-retention keys where required.
If employees also have personal Claude accounts, circulate these steps and set a policy: no client data in consumer accounts, period.
Developers using Claude Code
Your account-level training toggle controls whether coding sessions can be used for training. If you want to limit additional telemetry in Claude Code, you can also set environment variables like DISABLE_TELEMETRY, DISABLE_ERROR_REPORTING, and DISABLE_BUG_COMMAND. These reduce operational logging, which is separate from training, but they tighten up what leaves your machine.
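For reference, here is a minimal sketch of one way to set those variables, assuming a POSIX shell and that a value of 1 is read as “on” for each flag; confirm the exact variable names and accepted values against the Claude Code documentation for your version.

```
# Add to your shell profile (e.g. ~/.zshrc or ~/.bashrc) so Claude Code inherits these at launch.
# These limit operational logging only; the training opt-out still lives in your account's privacy settings.
export DISABLE_TELEMETRY=1          # suppress usage telemetry
export DISABLE_ERROR_REPORTING=1    # suppress automatic error reporting
export DISABLE_BUG_COMMAND=1        # disable the built-in bug-report command
```

Open a fresh terminal (or source the profile) before starting Claude Code so the variables are actually present in its environment.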
Quick answers to questions I’m getting from execs
What happens if I do nothing by September 28? You will be forced to choose a setting before you can continue using Claude.
Does this touch old chats? Not unless you resume them. Deleting a chat keeps it out of future training.
Can I change my mind later? Yes, anytime, for future data only. Past training runs are not undone.
I wish the default respected the privacy stance that drew many of us to Claude in the first place. Until that changes, protect yourself and your org with the steps above, and treat consumer AI accounts like you treat any unsecured channel: no client secrets, no regulated data, no exceptions.
If you enjoyed this article, please subscribe to my newsletter and share it with your network! Looking for help to really drive the adoption of AI in your organization? Want to use AI to transform your team’s productivity? Reach out to me at: steve@intelligencebyintent.com