A new study published in Communication Reports has found that readers interpret the involvement of artificial intelligence in news writing in varied and often inaccurate ways. When shown news articles with different byline descriptions—some noting that the article was written with or by AI—participants offered a wide range of explanations about what that meant. Most did not see AI as acting independently. Instead, they constructed stories in their minds to explain how AI and human writers might have worked together.
As AI technologies become more integrated into journalism, understanding how people interpret AI’s role becomes increasingly important. Generative artificial intelligence refers to tools that can produce human-like text, images, or audio based on prompts or data. In journalism, this often means AI is used to summarize information, generate headlines, or even write full articles based on structured data.
Since 2014, some newsrooms have used AI to automate financial and sports stories. But the release of more advanced tools, such as ChatGPT in late 2022, has expanded the possibilities and made AI much more visible in everyday news production. For example, in 2023, a large media company in the United Kingdom hired reporters whose work includes AI assistance and noted this in their bylines. However, readers are not always told exactly how AI contributed, which can create confusion or suspicion.
The researchers behind the new study wanted to know how people understand bylines that mention AI and whether their interpretations are influenced by their familiarity with media and attitudes toward artificial intelligence. They were especially interested in whether people could accurately infer what AI did during the creation of a news article based solely on the wording of the byline. This is important because trust in journalism depends on transparency, and previous controversies—such as Sports Illustrated being accused of using AI-generated content without disclosure—have shown that unclear authorship can damage credibility.
To explore these questions, the research team designed an online study involving 269 adult participants. The sample closely reflected the U.S. population in terms of age, gender, and ethnicity. Participants were recruited through Prolific, an online platform often used for social science research, and were paid for their time. After giving consent, participants completed a short questionnaire measuring their media literacy and general attitudes toward artificial intelligence. Then, each person was randomly assigned to read a slightly edited Associated Press article on a health topic. The article was the same for everyone, except for one line at the top—the byline.
The byline varied across five conditions: some versions attributed the story to a “staff writer,” others to a staff writer working “with AI tool,” “with AI assistance,” or “with AI collaboration,” and one said simply that the article was “by AI.” After reading it, participants were asked to explain what they thought the byline meant and what role, if any, they believed AI had played in writing the story.
The responses showed that readers tried to make sense of the byline even when it wasn’t entirely clear. This act of constructing meaning from limited information is known as “sensemaking”—a process where people use what they already know or believe to understand something new or ambiguous. In this case, people relied on their personal experiences, assumptions about journalism, and existing knowledge of AI.
Many participants assumed that AI helped in some way, even if they couldn’t say exactly how. Some thought the AI wrote most of the article, with a human editor stepping in to clean things up. Others believed that a human wrote the bulk of the article, but used AI for smaller tasks, such as checking facts or suggesting better wording.
One person imagined the journalist typed in a few keywords, and AI pulled together text from the internet to generate the article. Another described a collaborative effort where AI gathered background information, and the human writer then evaluated its accuracy. These mental models—often called “folk theories”—illustrate how readers try to fill in the gaps when information is missing or vague.
Interestingly, even when the byline said the article was written “by AI,” many participants still assumed a human had been involved in some way. This suggests that most people do not see AI as a fully independent writer. Instead, they believe human oversight is necessary, whether for guidance, supervision, or final editing.
Some participants expressed skepticism or even frustration with the byline. When the byline credited a “staff writer” but didn’t include a name, some assumed this was an attempt to hide the fact that AI had actually written the article. Others said the writing quality was poor and attributed that to AI involvement, even when the article had been described as written by a human. In both cases, the absence of a named author led to negative judgments. This finding supports earlier research showing that readers expect transparency in authorship and may distrust content when those expectations are not met.
To further understand what influenced these interpretations, the researchers grouped participants based on their media literacy and their general attitudes toward AI. Media literacy refers to how well people understand the media they consume, including how news is produced.
The researchers found that participants with higher media literacy were more likely to believe that AI had done most of the writing. Those with lower media literacy were more likely to assume that a human wrote the article, or that the work was a human-AI collaboration. Surprisingly, prior attitudes toward AI did not significantly affect how participants interpreted the byline.
This suggests that how much people know about the media may matter more than how they feel about artificial intelligence when trying to figure out who wrote a story. It also shows that simply including a phrase like “with AI assistance” is not enough to give readers a clear understanding of AI’s role. The study found that people often misinterpret or overthink these statements, and the lack of standard language around AI involvement only adds to the confusion.
The study has some limitations. Because the researchers did not include a named author in any of the byline conditions, it’s possible that participants reacted negatively because they missed seeing a real person’s name—something they expect from journalism. It’s also worth noting that the article used in the study was based on science reporting, which tends to be more objective and less interpretive. Reactions to AI involvement might be stronger for topics like politics or opinion writing. Future studies could explore how these findings apply to other types of journalism and examine how people respond when articles include a full disclosure or transparency statement about AI use.
Despite these limitations, the study raises important questions for news organizations. As AI becomes more common in the newsroom, it is not enough to say that a story was produced “with AI.” Readers want to know what exactly the AI did—did it write the first draft, summarize data, suggest edits, or merely spellcheck the final copy? Without this clarity, readers are left to guess, and those guesses often lean toward suspicion or confusion.
The researchers argue that greater transparency is needed, not only as a matter of ethics but as a way to maintain trust in journalism. According to guidelines from the Society of Professional Journalists, journalists are expected to explain their processes and decisions to the public. This expectation should extend to AI use. As with human sources, AI contributions need to be clearly cited and described.
The study, “Who Wrote It? News Readers’ Sensemaking of AI/Human Bylines,” was authored by Steve Bien-Aimé, Mu Wu, Alyssa Appelman, and Haiyan Jia.