Generative AI and privacy are best frenemies – a new study ranks the best and worst offenders

By Advanced AI Editor | June 24, 2025 | 5 min read
Image: TU IS/Getty

Most generative AI companies rely on user data to train their chatbots, drawing on both public and private sources. Some services are less invasive than others in how they scoop up data from their users, and more flexible about letting you limit it. A new report from data removal service Incogni looks at the best and the worst of AI when it comes to respecting your personal data and privacy.

For its report “Gen AI and LLM Data Privacy Ranking 2025,” Incogni examined nine popular generative AI services and applied 11 different criteria to measure their data privacy practices. The criteria covered the following questions:

• What data is used to train the models?
• Can user conversations be used to train the models?
• Can prompts be shared with non-service providers or other reasonable entities?
• Can the personal information from users be removed from the training dataset?
• How clear is it if prompts are used for training?
• How easy is it to find information on how models were trained?
• Is there a clear privacy policy for data collection?
• How readable is the privacy policy?
• Which sources are used to collect user data?
• Is the data shared with third parties?
• What data do the AI apps collect?

The providers and AIs included in the research were Mistral AI’s Le Chat, OpenAI’s ChatGPT, xAI’s Grok, Anthropic’s Claude, Inflection AI’s Pi, DeepSeek, Microsoft Copilot, Google Gemini, and Meta AI. Each AI did well with some questions and not as well with others.
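
Incogni does not publish scoring code, but the approach it describes (grading each service against each criterion and combining those grades into an overall ranking) can be illustrated with a short, hypothetical sketch. The criterion names below are shorthand for the 11 questions above; the services, scores, and equal weighting are invented for illustration and are not Incogni's actual data or methodology.

# Hypothetical sketch of a criteria-based privacy ranking, loosely modeled on the
# approach Incogni describes. All scores below are invented for illustration only;
# lower is better (less privacy-invasive), and every criterion is weighted equally,
# which is an assumption, not Incogni's documented methodology.

CRITERIA = [
    "training_data_sources",            # What data is used to train the models?
    "conversations_used_for_training",  # Can user conversations be used to train the models?
    "prompt_sharing",                   # Can prompts be shared beyond the service provider?
    "personal_data_removal",            # Can personal info be removed from the training dataset?
    "training_use_clarity",             # How clear is it if prompts are used for training?
    "training_info_findability",        # How easy is it to find how models were trained?
    "privacy_policy_exists",            # Is there a clear privacy policy for data collection?
    "privacy_policy_readability",       # How readable is the privacy policy?
    "data_collection_sources",          # Which sources are used to collect user data?
    "third_party_sharing",              # Is the data shared with third parties?
    "app_data_collection",              # What data do the AI apps collect?
]

# Placeholder scores on a 0.0 (best) to 1.0 (worst) scale.
scores = {
    "Le Chat": {c: 0.2 for c in CRITERIA},
    "ChatGPT": {c: 0.3 for c in CRITERIA},
    "Meta AI": {c: 0.8 for c in CRITERIA},
}

def overall_score(per_criterion: dict) -> float:
    """Average the per-criterion grades into one privacy-invasiveness score."""
    return sum(per_criterion.values()) / len(per_criterion)

# Rank services from least to most privacy-invasive.
ranking = sorted(scores, key=lambda s: overall_score(scores[s]))
for rank, service in enumerate(ranking, start=1):
    print(f"{rank}. {service} ({overall_score(scores[service]):.2f})")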

As one example, Grok earned a good grade for how clearly it conveys that prompts are used for training, but didn’t do so well on the readability of its privacy policy. As another example, the grades given to ChatGPT and Gemini for their mobile app data collection differed quite a bit between the iOS and Android versions.

Across the group, however, Le Chat took top prize as the most privacy-friendly AI service. Though it lost a few points for transparency, it still fared well in that area. Plus, its data collection is limited, and it scored high points on other AI-specific privacy issues.

ChatGPT ranked second. Incogni researchers were slightly concerned with how OpenAI’s models are trained and how user data interacts with the service. But ChatGPT clearly presents the company’s privacy policies, lets you understand what happens with your data, and provides clear ways to limit the use of your data.

(Disclosure: Ziff Davis, ZDNET’s parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

Grok came in third place, followed by Claude and Pi. Each had trouble spots in certain areas, but overall did fairly well at respecting user privacy.

“Le Chat by Mistral AI is the least privacy-invasive platform, with ChatGPT and Grok following closely behind,” Incogni said in its report. “These platforms ranked highest when it comes to how transparent they are on how they use and collect data, and how easy it is to opt out of having personal data used to train underlying models. ChatGPT turned out to be the most transparent about whether prompts will be used for model training and had a clear privacy policy.”

As for the bottom half of the list, DeepSeek took the sixth spot, followed by Copilot, and then Gemini. That left Meta AI in last place, rated the least privacy-friendly AI service of the bunch.

Copilot scored the worst of the nine services based on AI-specific criteria, such as what data is used to train the models and whether user conversations can be used in the training. Meta AI took home the worst grade for its overall data collection and sharing practices.

“Platforms developed by the biggest tech companies turned out to be the most privacy invasive, with Meta AI (Meta) being the worst, followed by Gemini (Google) and Copilot (Microsoft),” Incogni said. “Gemini, DeepSeek, Pi AI, and Meta AI don’t seem to allow users to opt out of having prompts used to train the models.”

[Chart: Incogni's AI chatbot privacy rankings for 2025. Source: Incogni]

In its research, Incogni found that the AI companies share data with different parties, including service providers, law enforcement, member companies of the same corporate group, research partners, affiliates, and third parties.

“Microsoft’s privacy policy implies that user prompts may be shared with ‘third parties that perform online advertising services for Microsoft or that use Microsoft’s advertising technologies,'” Incogni said in the report. “DeepSeek’s and Meta’s privacy policies indicate that prompts can be shared with companies within its corporate group. Meta’s and Anthropic’s privacy policies can reasonably be understood to indicate that prompts are shared with research collaborators.”

With some services, you can prevent your prompts from being used to train the models. This is the case with ChatGPT, Copilot, Mistral AI, and Grok. With other services, however, stopping this type of data collection doesn’t seem to be possible, according to their privacy policies and other resources. These include Gemini, DeepSeek, Pi AI, and Meta AI. On this issue, Anthropic said that it never collects user prompts to train its models.

Finally, a transparent and readable privacy policy goes a long way toward helping you figure out what data is being collected and how to opt out.

“Having an easy-to-use, simply written support section that enables users to search for answers to privacy related questions has shown itself to drastically improve transparency and clarity, as long as it’s kept up to date,” Incogni said. “Many platforms have similar data handling practices, however, companies like Microsoft, Meta, and Google suffer from having a single privacy policy covering all of their products and a long privacy policy doesn’t necessarily mean it’s easy to find answers to users’ questions.”
