Advanced AI News
People are using ChatGPT twice as much as they were last year. They’re still just as skeptical of AI in news.

By Advanced AI Editor · October 9, 2025


The generative AI wave isn’t coming — it’s already here, and it’s reshaping how the public finds information. In a new report, “Generative AI and News Report 2025: How People Think About AI’s Role in Journalism and Society,” my colleagues Richard Fletcher, Rasmus Kleis Nielsen, and I surveyed audiences across six countries, including the United States.

Our results show a public that is increasingly fluent in AI and happy to embrace the technology, yet deeply ambivalent about its role in the news, creating a critical challenge for newsrooms navigating a rapidly changing environment.

An explosion in use, led by information-seeking

It is undeniable that AI use is growing. Across the six countries we looked at (Argentina, Denmark, France, Japan, the United Kingdom, and the United States), the proportion of people who say they have ever used a generative AI tool jumped from 40% in 2024 to 61% in 2025. More significantly, weekly usage surged from 18% to 34%. In the U.S., where adoption was already higher, growth was more modest but still significant, rising from 31% to 36% weekly use. At the same time, it is worth remembering that a majority of people in all countries we looked at are not yet regular users of any AI tool or system.

While ChatGPT remains the dominant standalone product, we find that AI embedded in existing services, like Google’s Gemini or Microsoft’s Copilot, is driving broader exposure and use. Meanwhile, AI systems beloved by some professionals, like Claude and Perplexity, barely seem to cut through with the general population.

Crucially for the news industry, the primary reason people turn to AI has shifted. Last year, creating media (for example, generating an image or a summary) was the top use case. This year, information-seeking has taken the lead, more than doubling from 11% to 24% weekly. People are using AI to research topics, answer factual questions, and ask for advice. They are, in essence, increasingly using it for tasks that were once the primary domain of search engines and, by extension, news publishers.

We also found that using AI for social interaction has increased, with 7% (8% in the U.S.) saying they had done so in the last week, a figure that is higher among younger people.

The search frontline: AI answers are now unavoidable

This shift is most visible in search. We find that seeing AI-generated search answers, like Google’s AI Overviews, has become commonplace across countries, including in the U.S. A majority of Americans (61%) report having seen an AI-generated answer in response to a search query in the last week — a finding that dovetails with recent research by Kirsten Eddy at the Pew Research Center. This passive exposure to AI-generated information in search is far higher than the active use of any single AI tool.

For publishers worried about declining referral traffic, our findings paint a worrying picture, in line with other recent findings in industry and academic research. Among those who say they have seen AI answers for their searches, only a third say they “always or often” click through to the source links, while 28% say they “rarely or never” do. This suggests a significant portion of user journeys may now end on the search results page.

Contrary to some vocal criticisms of these summaries, a good chunk of the population does seem to find them trustworthy. In the U.S., 49% of those who have seen them express trust in them, although it is worth pointing out that this trust is often conditional.

In many of the long-form responses we received when we asked people to explain their trust, people replied that they see AI as a “first pass,” especially for low-stakes queries, and pointed to the fact that in their view the AI “knows more” because it has been trained on large amounts of data. Others said they see these responses as “good enough” answers. But people also explained that they remain cautious on complex topics like health or politics, and say they seek to verify information with traditional sources.

Obviously, what people say they do might differ from what they actually do, but these findings should at least give us some pause before we assume that people encounter such summaries uncritically or will be easily fooled by misleading or false answers that these systems sometimes provide.

The “comfort gap” and the verdict on AI in news

When the conversation turns specifically to journalism, public sentiment cools considerably. Our report identifies a clear and persistent “comfort gap” between human- and AI-led news production. Across countries, only 12% of respondents are comfortable with news made entirely by AI, rising slightly to 21% if there is a “human in the loop.” Comfort, however, jumps to 43% for news led by a human with some AI assistance and 62% for news produced entirely by a human journalist.

Americans are not outliers here. Their comfort levels mirror these global averages, showing a deep-seated preference for human oversight and authorship.

The public also continues to draw a sharp line between back-end and front-facing uses of (generative) AI in the news. People are broadly comfortable with AI being used for tasks like editing spelling and grammar (55% comfortable across countries, 60% in the U.S.) or for translation (53%, 51% in the U.S.). But comfort plummets for uses that directly shape the final product, such as creating a realistic image when no photo exists (26% across countries and in the U.S.) or, most notably, creating an artificial presenter or author (19%, 20% in the U.S.).

And, just as when we did this research last year, the perception is that AI will primarily benefit publishers, not the public. While respondents believe AI will make news cheaper to produce and more up-to-date, they also firmly expect it to be less trustworthy (-19 net score) and less transparent (-8 net score).
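The net scores above can be read as a simple difference of shares: the percentage expecting AI to improve an attribute of the news minus the percentage expecting it to worsen it. A minimal sketch of that arithmetic, under that assumed definition (the illustrative input percentages below are made up for the example, not figures from the report):

```python
def net_score(pct_better: float, pct_worse: float) -> float:
    """Net expectation score in percentage points:
    share expecting improvement minus share expecting deterioration.
    (Respondents expecting no change do not move the score.)"""
    return pct_better - pct_worse

# Hypothetical split: 21% expect AI-assisted news to be more trustworthy,
# 40% expect it to be less trustworthy -> net score of -19 points.
print(net_score(21, 40))  # -19
```

A negative score therefore means pessimists outnumber optimists, regardless of how large the neutral middle is.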

A pessimistic outlook on AI’s societal impact

This skepticism is part of a broader pessimism in the United States about AI’s societal role. While people in four of the six countries we surveyed are optimistic about AI making their personal lives better, the U.S. is one of three where pessimism dominates when it comes to society as a whole.

A striking 42% of Americans believe generative AI will make society worse, compared to just 30% who think it will make it better. This likely reflects a deep distrust in how some powerful institutions — including the news media, but also governments and politicians — will wield the technology. This is reinforced by the fact that only 27% of Americans believe journalists “always or often” check AI outputs before publication, a figure lower than in Japan or Argentina.

For news organizations, our findings are in some ways bitter medicine. The public is already using AI to find information and is increasingly encountering AI-generated content from various other actors, yet people remain wary of newsrooms using the technology. Then again, this does not mean that everything is lost. At least when asked, people seem to place a premium on human judgment and reporting, and welcome a commitment to the responsible use of AI in news.

The path forward for news media, then, may not be to hide AI usage but to lean into transparency and original journalism that stands out not just from the “AI slop” increasingly permeating the internet but also from the old-fashioned “churnalism” that some outlets are pursuing with even greater vigor with the help of AI. None of this will necessarily save news organizations from the greater upheaval that the use of AI, especially by platforms, brings to digital infrastructures around the world. But it might just help news organizations make the case for why they should still matter in this brave new world.

The full report, “Generative AI and News Report 2025: How people think about AI’s role in journalism and society,” can be found on the website of the Reuters Institute for the Study of Journalism.

Felix M. Simon is the research fellow in AI and news at the Reuters Institute for the Study of Journalism and a research associate at the Oxford Internet Institute.



