
Anthropic on using Claude user data for training AI: Privacy policy explained

By Advanced AI Editor | August 29, 2025

Anthropic, the AI safety startup behind the Claude chatbot, has quietly redrawn its privacy boundaries. Starting this month, all consumer users of Claude, from the free tier to premium subscriptions such as Claude Pro and Max, are being asked to make a choice: either allow their conversations to be used to train future AI models, or opt out and continue under stricter limits.

The change represents one of the most significant updates to Anthropic’s data policies since Claude’s launch. It also highlights a growing tension in the AI industry: companies need enormous volumes of conversational data to improve their models, but users are increasingly cautious about handing over their words, ideas, and code to corporate servers.

Also read: How xAI’s new coding model works and what it means for developers

What’s different this time

Until now, Claude’s consumer chats were automatically deleted after 30 days unless flagged for abuse, in which case they could be stored for up to two years. With the new policy, opting in means your conversations, including coding sessions, may be retained for up to five years.

When users log into Claude today, they see a large popup titled “Updates to Consumer Terms and Policies.” At first glance, the layout looks straightforward: a bold “Accept” button dominates the screen. But the crucial toggle that determines whether your data feeds into Claude’s training pipeline is smaller, placed beneath, and switched on by default.

New users face the same screen during signup. Anthropic insists that data use is optional and reversible in settings, but once conversations are included in training, they cannot be withdrawn retroactively.

Notably, this policy shift applies only to consumer-facing products like Claude Free, Pro, and Max. Enterprise services such as Claude for Work, Claude Gov, Claude Education, and API-based integrations on Amazon Bedrock or Google Cloud remain unaffected.

Why it matters

The company has framed this as a way to give users meaningful choice while advancing Claude’s capabilities. In a blog post, Anthropic argued that user data is essential for improving reasoning, coding accuracy, and safety systems. Sensitive information, the company says, is automatically filtered out, conversations are encrypted, and no data is sold to third parties.

For Anthropic, which has marketed itself as a “safety-first” lab, the update is also about credibility: making Claude better at understanding real-world use cases without compromising trust.

Not everyone is convinced. Privacy advocates and user communities, especially on forums like Reddit, have raised alarms over what they call “dark patterns” in the interface design: a default-on toggle that nudges people into sharing data without truly informed consent.

Also read: Anthropic updates Claude AI, gives users control over data-sharing for the first time

The debate has even spilled into Anthropic’s competitive landscape. Brad Lightcap, Chief Operating Officer at OpenAI, has previously criticized similar data policies as “a sweeping and unnecessary demand” that “fundamentally conflicts with the privacy commitments we have made to our users.”

While Lightcap was responding to a different set of obligations, his comments resonate with concerns now directed at Anthropic: that AI firms, despite public pledges of safety and restraint, are inevitably pulled toward the same hunger for data that drives their rivals.

The stakes for users

For everyday Claude users, the implications are immediate. By September 28, 2025, users must either accept the new terms or risk losing access. Conversations that remain untouched won’t be used for training, but if reopened they fall under the new rules. Data sharing can be disabled at any time in settings, though only future conversations are protected; once training data is absorbed into Claude’s models, it cannot be retrieved.

Anthropic built its brand by promising to tread cautiously in the race toward advanced AI. But this policy shift suggests the company is grappling with the same competitive pressures as its peers. Data is the fuel of large language models, and without enough of it, even a well-funded lab risks falling behind.

The question is whether users will accept this trade-off: more powerful AI assistants at the cost of longer, deeper data retention.

As one analyst put it, the optics may matter as much as the policy itself: “Anthropic sold itself as the company that wouldn’t play by Silicon Valley’s rules. This move makes it look like every other AI giant.”

The Claude privacy update is more than just fine print. It’s a signal of how AI companies are converging on a new normal: opt-out systems, long retention windows, and user data at the core of product evolution.

For users, the choice is clear but consequential. Opt in, and your words help shape the next generation of AI. Opt out, and you keep a measure of privacy, though perhaps at the cost of slower progress for the tools you rely on.

Either way, September 28 marks a new chapter for Anthropic and its users, one where transparency, trust, and technology collide in real time.

Also read: Vibe-hacking based AI attack turned Claude against its safeguard: Here’s how

Vyom Ramani

A journalist with a soft spot for tech, games, and things that go beep. While waiting for a delayed metro or rebooting his brain, you’ll find him solving Rubik’s Cubes, bingeing F1, or hunting for the next great snack.
