Dennis Anthony, a 400-level Mass Communication student at Kaduna State University, was punished when his lecturer discovered he had used Artificial Intelligence for his assignment. Anthony admitted that he relied on AI to beat a deadline, but the lecturer detected it easily because, in his words, “AI speaks a different English” from the way students normally write.
“Our lecturers just assume no one can write a perfect piece, without using AI, they use AI too, and that’s a big threat to our writing skills,” Anthony said.
The scenario is familiar to Alkasim Isa, a journalist in Kano State, whose editor rejected one of his pieces on suspicion that it was AI-generated, citing its overly polished and uniform language.
As AI-generated content becomes increasingly common, from essays to news stories, humans are deploying other AI models to detect AI-generated content. Tools like GPTZero, Turnitin, and Copyleaks are used to detect whether a human or a machine wrote something. Ironically, these detectors themselves are AI-powered.
Journalists, students and writers report that they now deliberately avoid using vivid or structured phrases that were once normal in their writing, because such patterns are increasingly marked as AI-written by detectors.
In newsrooms and on publishing platforms, editors and other professionals use AI-detection tools to verify that articles are original. The tools, however, often operate with little transparency, and their accuracy varies widely with language style and input length.
Among the most widely used are Copyleaks, Originality.AI, GPTZero, Turnitin, Winston AI, and newer entrants like Sapling and AI Detector Pro.
Copyleaks, for instance, advertises itself as a highly accurate detector with over 99% precision and a very low false positive rate. It claims to recognise human writing patterns using a mix of AI Source Match and AI Phrases, trained on trillions of pages of text. The platform also supports more than 30 languages and says it can detect content from major AI models like ChatGPT, Gemini, and Claude. GPTZero, for its part, generates probability scores that flag sections of text as AI, human, or mixed.
Despite the tools’ claims of objectivity, Jibril Aruna, AI engineering lead at Seismic Consulting Group, warned that AI-detection tools are opaque, biased, and fundamentally flawed. He explained that these detectors work as classifiers, trained on datasets of both human-written and AI-generated text to spot patterns in word choice and linguistic variation.
Aruna criticised the lack of transparency in these tools, which often present a percentage score without disclosing their methodologies, datasets, or verified accuracy rates. He added that misclassification falls especially hard on non-native English speakers, whose writing styles may deviate from the patterns in the training data.
“The result punishes the most vulnerable writers and students,” he said. “Detectors cannot tell the difference between full AI-generated essays and AI-assisted work, such as grammar checks or brainstorming support.”
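To make the classifier idea Aruna describes concrete, here is a minimal, purely illustrative sketch built with scikit-learn on a toy dataset. The sample texts, labels, and model choice are this article’s assumptions; commercial detectors such as Copyleaks or GPTZero are proprietary and far more sophisticated than anything shown here.

```python
# Minimal sketch of an AI-text classifier, for illustration only.
# It mirrors the basic idea Aruna describes: a model trained on
# labelled human/AI samples that outputs a probability for new text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data; a real detector would use millions of samples.
texts = [
    "i was late cos the bus broke down, wrote this in a rush sorry",
    "lol the assignment was hard but we managed somehow",
    "It is important to note that effective communication fosters success.",
    "In today's rapidly evolving landscape, one must delve into key factors.",
]
labels = [0, 0, 1, 1]  # 0 = human-written, 1 = AI-generated

detector = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # word and word-pair features
    LogisticRegression(),
)
detector.fit(texts, labels)

sample = "It is important to note that students must delve into the material."
prob_ai = detector.predict_proba([sample])[0][1]
print(f"Estimated probability of AI authorship: {prob_ai:.0%}")
```

Because such a model only knows the patterns in its training set, a human writer whose style happens to resemble the “AI” samples will be flagged too, which is exactly the bias against unfamiliar writing styles that Aruna warns about.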
Journalists struggle to compete with AI writers
Isa, who works with an online news platform, decided to be transparent and admitted to his editor that he had used AI to help structure his article. But honesty created a new challenge. “The editor began to suspect that any piece I submitted might be AI-written, regardless of the actual content,” he said.
Sani Modibbo, a freelance journalist in Nigeria, said he uses AI to generate headlines but always ensures that the main body of his articles is his own work. Yet he too faced scepticism from an editor who assumed AI involvement based on writing patterns.
“This has made me wary of using such tools,” said Modibbo.
Sunday Michael Ugwu, the Editor of Pinnacle Daily, a digital news platform, said editors can easily detect AI-generated content by recognising a sudden shift in a reporter’s writing style and quality. He warned that publishing AI-fabricated stories could carry grave consequences for a reporter’s career.
He said, “Editors must rely on experience and look for signs of overly mechanical writing and verify facts independently.”
Ugwu stressed that AI is not inherently bad, but that it must never replace creativity or a journalist’s storytelling style. He also observed that some programmes designed to catch machine-generated text are already being outsmarted by tools that “humanise” AI content.
Lecturers are using AI too
According to an article published by The New York Times, a student at Northeastern University in the U.S. demanded her tuition back after discovering that a professor had used ChatGPT to assemble his course materials.
“He’s telling us not to use it, and then he’s using it himself,” one student said. But professors say AI tools make them better at their jobs.
In Nigeria, students face similar pushback from their lecturers. Bashira Shu’aibu, a final-year Mass Communication student who has experienced this pushback, said some lecturers wrongly assume that any well-written work is AI-generated. Even after spending hours on research, she complained, a lecturer dismissed her submissions as “too perfect” to be human. “But it is what they taught us,” she said.
One 300-level student, who asked to remain anonymous, admitted he and his group used AI in their assignments but said they tried to mix it with their own creativity. Yet, their lecturer still caught them, leading to reduced marks and embarrassment.
Farida Ahmed Bala, a student at the same university, said she was once flagged after her assignment turned out to be similar to a coursemate’s; both, unknowingly, had generated theirs with AI. She warned students against using AI, especially for their final projects, noting that it risks plagiarism. “If we have AI doing everything for us, why then are we in school?” she asked.

However, another student said she has learned to use AI without getting flagged for plagiarism by combining checks in tools like Turnitin and other plagiarism checkers with manual editing. “I paraphrase, cite properly, and rework AI language so it reflects my own style,” she said.
Lecturers, however, see these issues from a different perspective. Dr. Ismail Muhammad Anchau, Chief Lecturer and Director of the Policy and Transparency Division at Kaduna Polytechnic, acknowledged that the use of AI among students is rising quickly, especially for assignments, projects, and theses.
He argued that many students now rely on it as a shortcut instead of reading or visiting libraries, a situation he described as both a development and a threat. “It is a development in the sense that it’s technological advancement, but it is also a threat in the sense that it might continue to undermine the ability of students to quest for knowledge,” he said.
Dr. Anchau believes lecturers can spot AI use even without detection tools. “It actually depends on the scholarship of the lecturer. Verbal tests and close reading of a student’s ability can reveal whether their work was truly their own,” he concluded.
Dr. Babayo Sule of the Department of Political and Administrative Studies at the National University of Lesotho worried that AI is eroding originality in academia and letting people’s talents fade.
He explained that his institution has seen rising use of AI among students, which prompted the university to introduce detection tools for lecturers. According to him, if a student’s work shows a small percentage of AI use, it can sometimes be reworked, but when the percentage is high, the work risks outright rejection.
Dr. Sule, who considers the tools broadly fair, explained that one of the simplest ways to detect AI-generated work is to look for perfection. “When you see a mistake, you determine the work is original. When work is too clean, that’s the work of AI,” he noted.
Does humanising AI-generated content solve the issue?
In the quest to make AI-generated content read as human, writers often turn to AI “humanisers”, tools that promise to make text almost indistinguishable from human writing. Some writers believe this makes them safe. However, several tests by this reporter showed that these tools are not as foolproof as they seem.
In one experiment, this reporter had ChatGPT generate a journalist’s career summary. GPTZero flagged the content as largely AI-generated, with a detection rate of around 92.25%; its feedback read: “Highlighted text is suspected to be most likely generated by AI.”
GPTZero’s result suggested the text could be humanised to bypass detection, so the reporter ran it through another tool, Undetectable AI. Even after that, the text was still flagged as likely 79% AI and 21% human-generated, based on checks against GPTZero, Writer, QuillBot, Copyleaks, Sapling and Grammarly.

Surprisingly, when the same text was humanised by one of the trending tools, Humanize.AI, and pasted into another AI detection tool, Copyleaks, the verdict was “All Clear — Nothing Flagged”. After the reporter manually rewrote the text, however, the detection score stood at 60%.
Ibrahim Zubairu, a technical product manager and founder of Malamiromba, a virtual tech community in Northern Nigeria, explained that AI content detectors fail because they only look for patterns learned from their training data.
“The tools are trained on data that reflects human biases. They learn patterns, but the patterns aren’t perfect,” he said. According to him, the detectors assume a fixed idea of what human-like writing is, one that never changes. “But writing is not the same; writing changes,” he added.
Zubairu said, “AI content detection tools operate on principles similar to the large language models (LLMs) they aim to detect.”
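Zubairu’s comparison can be illustrated with the perplexity heuristic that some detectors are reported to build on: score a passage with a language model, and treat text the model finds very predictable (low perplexity) as more likely machine-generated. The sketch below uses the small open GPT-2 model from Hugging Face’s transformers library purely as an illustration; it is not the internals of GPTZero or Copyleaks, and the fixed threshold is exactly the kind of brittle assumption Zubairu criticises.

```python
# Illustration of the perplexity heuristic: a language model scores how
# "surprising" a text is to it. Low perplexity (very predictable text) is
# often taken as a weak signal of machine generation. This is NOT any
# commercial detector's actual method.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return the model's perplexity on `text` (lower = more predictable)."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(enc.input_ids, labels=enc.input_ids)
    return torch.exp(out.loss).item()

# A hypothetical cut-off; any fixed value will misfire on some human
# writers, which is one reason detector verdicts are unreliable.
THRESHOLD = 40.0
score = perplexity("The internet is a global network of interconnected computers.")
verdict = "possibly AI-generated" if score < THRESHOLD else "likely human"
print(f"perplexity={score:.1f} -> {verdict}")
```

Because the detector and the generator are built on the same statistical foundations, editing AI output until it becomes less predictable, as humanisers do, pushes it past any such threshold.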

To further illustrate the point, Zubairu independently ran a brief test using two prominent AI content detection tools, GPTZero and Copyleaks. For the test, he used a piece of text that was, in fact, AI-generated and described the history of the internet.
To make it unique, he styled the script with his own acronym: “I Can Work On Everything” (ICWOE), which stood for Introduction, Core Functionality, Ways of Working, Output, and Exception Handling.
Both tools failed to catch what was obviously AI-generated: Copyleaks marked the AI-written piece as 100% human, while GPTZero rated it 99% human.

Zubairu concluded that these systems struggle when the AI output is heavily edited or made to look natural. “These tools can be fooled,” he added.
Scholars can spot AI text
Grema Alhaji Yahaya, an AI educator and researcher, believes that scholars can often detect AI-generated content without needing AI-detection tools. In a recent article published on his platforms, Yahaya outlined several linguistic and stylistic clues that give away machine-written text.
“You don’t always need software to identify AI writing. If you pay close attention, the patterns will speak for themselves,” he explained. He added that AI outputs may feature perfect punctuation alongside a strange overuse of em dashes or semicolons.
One of the most telling signs, according to Yahaya, is the overuse of formal and repetitive language. Words like “delve,” “intricate,” and “realm” may be used too frequently, making the text feel more like an academic thesaurus than natural writing. “Human writing, even when polished, has its quirks—those quirks are often missing in AI text,” he said.
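Yahaya’s clues lend themselves to a quick mechanical check. The short script below simply counts the words and punctuation marks he singles out; the word list and the idea that any particular count is suspicious are illustrative assumptions, since, as he notes, these are hints rather than proof.

```python
# Count the stylistic "tells" Yahaya describes: overused formal words,
# em dashes, and semicolons. These counts are hints, never proof of AI use.
import re
from collections import Counter

TELLTALE_WORDS = {"delve", "intricate", "realm"}  # illustrative list only

def stylistic_clues(text: str) -> dict:
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    return {
        "telltale_hits": {w: counts[w] for w in TELLTALE_WORDS if counts[w]},
        "em_dashes": text.count("\u2014"),
        "semicolons": text.count(";"),
        "total_words": len(words),
    }

sample = (
    "We must delve into this intricate realm\u2014an intricate realm "
    "of interconnected ideas; indeed, we must delve deeper."
)
print(stylistic_clues(sample))
```

A human editor weighing such counts against a writer’s known style is doing informally what this script does crudely, which is why Yahaya argues close reading can outperform the automated detectors.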
Going forward
Dr. Najeeb G. Abdulhamid, an AI researcher and OpenSchool Initiative volunteer, cautioned that AI detection tools are far from foolproof. He noted that OpenAI had shut down its own AI-text classifier over “low accuracy”, while Turnitin warns that low-percentage scores cannot be fully trusted.
“Detector outputs should be treated as weak signals, not proof,” he said, stressing that human review and corroborating evidence are essential before taking disciplinary action.
He warned that false positives remain a serious risk, with universities and journalists documenting cases where students were wrongly penalised. To address this, Abdulhamid recommended strict policies banning sole reliance on detector scores, mandatory human review, and a clear appeals process.
On accountability, he proposed policies aligned with UNESCO standards, including impact assessments, explainability, audits, and governance boards with student representation.
This report was produced with support from the Centre for Journalism Innovation and Development (CJID) and Luminate.