AI-generated writing lacks personal touches that enhance an argument. (Image by © BPawesome – stock.adobe.com)
In a nutshell
Essays written by students use over three times more engagement techniques than those generated by ChatGPT, including personal asides, rhetorical questions, and direct reader mentions that help build a connection with the audience.
ChatGPT-generated essays were found to be grammatically correct but impersonal and less persuasive, lacking features like logical appeals and interactive commentary that are common in student writing.
The study suggests that while AI tools like ChatGPT can assist with writing, they fall short in mimicking the intuitive audience awareness and rhetorical flair that make human writing effective.
NORWICH, England — Your college professor can probably tell when you’ve used ChatGPT to write your essay, and now science explains why. While AI can mimic grammar and structure, it’s missing a fundamental human quality: the ability to genuinely connect with readers. A new international study shows that when ChatGPT writes, it creates a one-sided conversation that lacks the natural engagement humans instinctively build into their arguments.
The study, published in the journal Written Communication, discovered that while AI can produce grammatically correct and coherent academic texts, it falls significantly short in creating the kind of personal, engaging writing that human students produce. This difference might explain why some professors can still spot AI-written assignments despite technological advancements.
“The fear is that ChatGPT and other AI writing tools potentially facilitate cheating and may weaken core literacy and critical thinking skills. This is especially the case as we don’t yet have tools to reliably detect AI-created texts,” says study author Ken Hyland from the University of East Anglia, in a statement.
How Students Engage Readers
The researchers found that essays written by students contain a greater number and variety of engagement techniques, resulting in writing that is more interactive and persuasive. The study examined 290 essays in total to find clear patterns in how humans naturally craft arguments compared to AI.
The study specifically focused on “engagement markers,” the rhetorical devices writers use to connect with readers, bring them into the conversation, and guide them toward certain conclusions. Think of these as the conversational elements in writing that make you feel like the author is speaking directly to you or including you in their thought process.
When comparing 145 essays written by British university students against 145 similar essays generated by ChatGPT on the same topics, the researchers found that students used over three times more engagement features than the AI. Students frequently employed questions, personal asides, and reader mentions to create a sense of shared exploration with their audience.
For example, student writers often inserted questions about whether scientists should bear global burdens, or made personal observations about British identity and geographical separation from continental Europe. These elements create a conversational relationship with readers that the AI-generated texts consistently lacked.
Where AI Writing Falls Short
While ChatGPT can produce technically competent writing, it struggles with the human elements of persuasion. The AI model relies heavily on factual statements and appeals to shared knowledge but rarely employs the personal touches that make academic arguments compelling.
Research co-author Hyland, a professor with over 300 published articles and 97,000 citations, explains that human writers consciously build a mental model of their readers and adjust their writing accordingly. ChatGPT, despite its impressive capabilities, cannot truly understand its audience or anticipate reader objections without specific prompting.
“The AI essays mimicked academic writing conventions, but they were unable to inject text with a personal touch or to demonstrate a clear stance,” says Hyland.
The AI completely avoided using personal asides, those brief digressions where a writer shares a personal thought or comment. This absence creates what the researchers describe as a more “dialogically closed” text that reads as impersonal or “empty.” The research team believes this limitation stems from ChatGPT’s training, which emphasizes coherence and conciseness over conversational authenticity.
The study also reveals that ChatGPT failed to use appeals to logical reasoning in its essays, suggesting it may be better at reproducing factual information than developing complex ideas or concepts. This aligns with previous research indicating that AI models struggle with higher-order thinking skills.
Teaching With AI
For students worried about detection, the study offers clear evidence that current AI writing lacks the natural human elements that professors unconsciously expect. For educators, it provides potential markers to identify AI-generated content without relying solely on detection software, which has proven inconsistent.
Rather than viewing AI as a threat, the researchers suggest that tools like ChatGPT could become valuable teaching aids. By comparing AI-generated drafts with human writing, students could learn to identify and incorporate effective engagement strategies, developing their unique voice while leveraging AI assistance.
“When students come to school, college, or university, we’re not just teaching them how to write, we’re teaching them how to think – and that’s something no algorithm can replicate,” added Prof Hyland.
For now, at least, truly engaging academic writing remains a human art. While ChatGPT can arrange facts and follow structures, it lacks the intuitive understanding that writing is ultimately a conversation. Until AI can genuinely anticipate and address the human on the other side of the page, the most persuasive arguments will still come from people, not machines.
Paper Summary
Methodology
The researchers analyzed two corpora of argumentative essays: 145 written by second-year British university students (from the Louvain Corpus of Native English Essays) and 145 essays generated by ChatGPT 4.0 on the same topics. They used corpus linguistics tools to tag and search for specific engagement features, examining approximately 100 different items of reader engagement. They manually checked each instance to confirm it performed an engagement function. The data was normalized to occurrences per 1,000 words to allow fair comparison between the corpora, and statistical significance was determined using log-likelihood tests.
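As a rough illustration of the two quantitative steps described above, the sketch below shows how a raw feature count is normalized to occurrences per 1,000 words and how a log-likelihood (G²) statistic, the standard corpus-linguistics significance test, compares a feature's frequency across two corpora. The corpus sizes and raw counts here are hypothetical, chosen only so the normalized rates match the figures reported in the paper; the study itself reports only the normalized rates.

```python
import math

def per_thousand(count: int, corpus_words: int) -> float:
    """Normalize a raw feature count to occurrences per 1,000 words."""
    return count / corpus_words * 1000

def log_likelihood(a: int, n1: int, b: int, n2: int) -> float:
    """Log-likelihood (G2) statistic for a feature observed a times in a
    corpus of n1 words and b times in a corpus of n2 words.
    Values above ~3.84 correspond to p < 0.05, above ~6.63 to p < 0.01."""
    e1 = n1 * (a + b) / (n1 + n2)   # expected count in corpus 1
    e2 = n2 * (a + b) / (n1 + n2)   # expected count in corpus 2
    ll = 0.0
    if a > 0:
        ll += a * math.log(a / e1)
    if b > 0:
        ll += b * math.log(b / e2)
    return 2 * ll

# Hypothetical corpus sizes and counts, purely for illustration.
student_words, gpt_words = 120_000, 120_000
student_hits, gpt_hits = 2_039, 648          # engagement markers found

print(per_thousand(student_hits, student_words))  # ~17.0 per 1,000 words
print(per_thousand(gpt_hits, gpt_words))          # ~5.4 per 1,000 words
print(log_likelihood(student_hits, student_words, gpt_hits, gpt_words))
```

With counts of this magnitude the G² value is far above the p < 0.01 threshold, which is consistent with the significant difference the authors report between the two corpora.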
Results
The study found that student essays contained significantly more engagement markers (16.99 per 1,000 words) than ChatGPT essays (5.40 per 1,000 words). While both sets of essays used reader mentions and directives in similar proportions, students employed far more questions and personal asides. ChatGPT relied more heavily on appeals to shared knowledge, particularly tradition and typicality, but completely avoided appeals to logical reasoning. The AI produced no personal asides and very few questions, showing limitations in building interactive arguments. The standard deviations and dispersion proportions were also much smaller for the AI texts, indicating less variation in engagement style.
Limitations
The researchers acknowledge several limitations to their study. They focused only on interactional elements of academic writing, an area where AI might be expected to struggle, and they note that undergraduate students are not expert writers and may overuse engagement markers. Additionally, the quality of the data used to train ChatGPT constrains its responses: although the model is trained on a sizable amount of text, those data may be skewed toward certain registers, demographics, or subject areas, giving an incomplete picture of authentic academic writing.
Funding/Disclosures
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. They also received no financial support for the research, authorship, and/or publication of this article.
Publication Information
The study “Does ChatGPT Write Like a Student? Engagement Markers in Argumentative Essays” was published in Written Communication, SAGE Publications, in 2025. The research was conducted by Feng (Kevin) Jiang from the School of Foreign Languages at Beihang University (China) and Ken Hyland from the School of Education and Lifelong Learning at the University of East Anglia (UK).