Anthropic’s growing appetite for user data has prompted a privacy policy change that will give the company’s AI a steady stream of the information you feed it.
Previously, Anthropic did not train its Claude AI models on user chats and committed to auto-deleting the data after 30 days, TechCrunch reports. Now, it’s asking you to “help improve Claude” by allowing your chats and coding sessions to train Anthropic’s AI.
It will also retain data for five years to remain “consistent” throughout the lengthy AI development cycle. “Models released today began development 18 to 24 months ago,” Anthropic says.
The next time you log in, you’ll see a pop-up menu asking you to accept an update to the company’s consumer terms and policies. The “you can help improve Claude” option will be toggled on by default; there’s no toggle for the data retention option. Anthropic asks that you accept the terms by Sept. 28.
Existing users will opt in or out of data retention through this pop-up (Credit: Anthropic)
If you accept the terms and later change your mind, you can revoke Claude’s access via Settings > Privacy > Help improve Claude.
The changes will apply to new or resumed chats and coding sessions on the Claude Free, Pro, and Max plans, including tools like Claude Code. They will not apply to Claude for Work, Claude Gov, Claude for Education, or API use, including via third parties such as Amazon Bedrock and Google Cloud’s Vertex AI.
You can also opt in but keep certain chats private by deleting them, which will prevent them from being used for “future model training,” Anthropic says.
Unsurprisingly, Anthropic is positioning the move as beneficial to users. Having more information will boost the safety of the models, the company says, and make them better at accurately flagging harmful conversations. It will also help create better models in the future, with stronger “coding, analysis, and reasoning” skills.
“All large language models, like Claude, are trained using large amounts of data,” Anthropic says. “Data from real-world interactions provide valuable insights on which responses are most useful and accurate for users.”
Starting today, existing users will see a pop-up asking whether they’re willing to opt in. To Anthropic’s credit, it’s pretty clearly written, stating that the company will use your chats and coding sessions to improve its models. You can also make your choice within the Settings menu.
“To protect users’ privacy, we use a combination of tools and automated processes to filter or obfuscate sensitive data,” Anthropic says. “We do not sell users’ data to third parties.”
Chatbot privacy has come into focus this year. Due to an ongoing lawsuit brought by The New York Times, OpenAI is required to maintain records of all your deleted conversations. Meta, OpenAI, and Grok have also all made private conversations public—by accident and on purpose.
About Emily Forlini
Senior Reporter
