Anthropic, the company behind the Claude AI assistant, has announced updates to its Consumer Terms and Privacy Policy that give users more control over their data. The changes let users decide whether their data may be used to improve Claude and to strengthen protections against harmful activity, such as scams or abusive content.
The updates apply to all users on Claude Free, Pro and Max plans, including when they use Claude Code. However, they do not affect services under Anthropic’s Commercial Terms, such as Claude for Work, Claude Gov, Claude for Education, or API use through third-party platforms like Amazon Bedrock and Google Cloud’s Vertex AI.
By opting in, users can help Anthropic make Claude safer and more capable. The company says the shared data will help improve systems that detect harmful content, reducing the chances of mistakenly flagging harmless conversations. It will also help future Claude models get better at tasks like coding, analysing information, and reasoning.
Users have full control over this setting and can update their preferences at any time. New users will be asked about their choice during the signup process. Existing users will receive a notification prompting them to review the updated terms and make a decision.
Existing users have until September 28, 2025 to accept the new Consumer Terms and decide whether to allow their data to be used. For users who accept, the new terms take effect immediately. After the September 28 deadline, users will need to make a choice in the model training setting to continue using Claude.
This move comes as more AI companies look to balance safety, usability, and privacy while continuing to enhance the capabilities of their models.