Readers struggle to understand AI’s role in news writing, study suggests

By Advanced AI Editor | June 29, 2025

A new study published in Communication Reports has found that readers interpret the involvement of artificial intelligence in news writing in varied and often inaccurate ways. When shown news articles with different byline descriptions—some noting that the article was written with or by AI—participants offered a wide range of explanations about what that meant. Most did not see AI as acting independently. Instead, they constructed stories in their minds to explain how AI and human writers might have worked together.

As AI technologies become more integrated into journalism, understanding how people interpret AI’s role becomes increasingly important. Generative artificial intelligence refers to tools that can produce human-like text, images, or audio based on prompts or data. In journalism, this often means AI is used to summarize information, generate headlines, or even write full articles based on structured data.

Since 2014, some newsrooms have used AI to automate financial and sports stories. But the release of more advanced tools, such as ChatGPT in late 2022, has expanded the possibilities and made AI much more visible in everyday news production. For example, in 2023, a large media company in the United Kingdom hired reporters whose work includes AI assistance and noted this in their bylines. However, readers are not always told exactly how AI contributed, which can create confusion or suspicion.

The researchers behind the new study wanted to know how people understand bylines that mention AI and whether their interpretations are influenced by their familiarity with media and attitudes toward artificial intelligence. They were especially interested in whether people could accurately infer what AI did during the creation of a news article based solely on the wording of the byline. This is important because trust in journalism depends on transparency, and previous controversies—such as Sports Illustrated being accused of using AI-generated content without disclosure—have shown that unclear authorship can damage credibility.

To explore these questions, the research team designed an online study involving 269 adult participants. The sample closely reflected the U.S. population in terms of age, gender, and ethnicity. Participants were recruited through Prolific, an online platform often used for social science research, and were paid for their time. After giving consent, participants completed a short questionnaire measuring their media literacy and general attitudes toward artificial intelligence. Then, each person was randomly assigned to read a slightly edited Associated Press article about a health story. The article was the same for everyone, except for one line at the top—the byline.

This byline varied in five different ways: some said the story was written by a “staff writer,” while others said it was written “by staff writer with AI tool,” “with AI assistance,” “with AI collaboration,” or simply “by AI.” After reading the article, participants were asked to explain what they thought the byline meant and how they interpreted the role of AI in writing the article.

The responses showed that readers tried to make sense of the byline even when it wasn’t entirely clear. This act of constructing meaning from limited information is known as “sensemaking”—a process where people use what they already know or believe to understand something new or ambiguous. In this case, people relied on their personal experiences, assumptions about journalism, and existing knowledge of AI.

Many participants assumed that AI helped in some way, even if they couldn’t say exactly how. Some thought the AI wrote most of the article, with a human editor stepping in to clean things up. Others believed that a human wrote the bulk of the article, but used AI for smaller tasks, such as checking facts or suggesting better wording.

One person imagined the journalist typed in a few keywords, and AI pulled together text from the internet to generate the article. Another described a collaborative effort where AI gathered background information, and the human writer then evaluated its accuracy. These mental models—often called “folk theories”—illustrate how readers try to fill in the gaps when information is missing or vague.

Interestingly, even when the byline said the article was written “by AI,” many participants still assumed a human had been involved in some way. This suggests that most people do not see AI as a fully independent writer. Instead, they believe human oversight is necessary, whether for guidance, supervision, or final editing.

Some participants expressed skepticism or even frustration with the byline. When the article said it was written by a “staff writer,” but didn’t include a name, some assumed that this was an attempt to hide the fact that AI had actually written it. Others said the writing quality was poor, and attributed that to AI involvement—even when the article had been described as written by a human. In both cases, the absence of a named author led to negative judgments. This finding supports earlier research showing that readers expect transparency in authorship, and when those expectations are not met, they may distrust the content.

To further understand what influenced these interpretations, the researchers grouped participants based on their media literacy and their general attitudes toward AI. Media literacy refers to how well people understand the media they consume, including how news is produced.

The researchers found that participants with higher media literacy were more likely to believe that AI had done most of the writing. Those with lower media literacy were more likely to assume that a human wrote the article, or that the work was a human-AI collaboration. Surprisingly, prior attitudes toward AI did not significantly affect how participants interpreted the byline.

This suggests that how much people know about the media may matter more than how they feel about artificial intelligence when trying to figure out who wrote a story. It also shows that simply including a phrase like “with AI assistance” is not enough to give readers a clear understanding of AI’s role. The study found that people often misinterpret or overthink these statements, and the lack of standard language around AI involvement only adds to the confusion.

The study has some limitations. Because the researchers did not include a named author in any of the byline conditions, it’s possible that participants reacted negatively because they missed seeing a real person’s name—something they expect from journalism. It’s also worth noting that the article used in the study was a piece of science reporting, which tends to be more objective and less interpretive than other genres. Reactions to AI involvement might be stronger for topics like politics or opinion writing. Future studies could explore how these findings apply to other types of journalism and examine how people respond when articles include a full disclosure or transparency statement about AI use.

Despite these limitations, the study raises important questions for news organizations. As AI becomes more common in the newsroom, it is not enough to say that a story was produced “with AI.” Readers want to know what exactly the AI did—did it write the first draft, summarize data, suggest edits, or merely spellcheck the final copy? Without this clarity, readers are left to guess, and those guesses often lean toward suspicion or confusion.

The researchers argue that greater transparency is needed, not only as a matter of ethics but as a way to maintain trust in journalism. According to guidelines from the Society of Professional Journalists, journalists are expected to explain their processes and decisions to the public. This expectation should extend to AI use. As with human sources, AI contributions need to be clearly cited and described.

The study, “Who Wrote It? News Readers’ Sensemaking of AI/Human Bylines,” was authored by Steve Bien-Aimé, Mu Wu, Alyssa Appelman, and Haiyan Jia.


