Advanced AI News
Writing Tools

Can AI tools detect machine-generated content? – Daily Trust

By Advanced AI Editor | August 24, 2025 | 11 Mins Read


Dennis Anthony, a 400-level Mass Communication student at Kaduna State University, was punished when his lecturer discovered he had used Artificial Intelligence for his assignment. Although Anthony admitted that he relied on the AI to beat the deadline, the lecturer easily detected it because “AI speaks a different English” from the way students would normally write. 

“Our lecturers just assume no one can write a perfect piece, without using AI, they use AI too, and that’s a big threat to our writing skills,” Anthony said.

This is also a common scenario for Alkasim Isa, a journalist in Kano State. He encountered a hurdle when his editor rejected his piece, suspecting it was AI-generated due to its overly polished and uniform language.

As AI-generated content becomes increasingly common, from essays to news stories, humans are deploying other AI models to detect AI-generated content. Tools like GPTZero, Turnitin, and Copyleaks are used to detect whether a human or a machine wrote something. Ironically, these detectors themselves are AI-powered. 

Journalists, students and writers report that they now deliberately avoid using vivid or structured phrases that were once normal in their writing, because such patterns are increasingly marked as AI-written by detectors.

Editors and professionals use AI-detection tools to verify article originality in newsrooms or publishing platforms. However, they often operate with low transparency, as their accuracy rates vary widely depending on the language style and input length.

Among the most widely used are Copyleaks, Originality.AI, GPTZero, Turnitin, Winston AI, and newer entrants like Sapling and AI Detector Pro.

Copyleaks, for instance, advertises itself as a highly accurate detector with over 99% precision and a very low false positive rate. It claims to recognise human writing patterns using a mix of AI Source Match and AI Phrases, trained on trillions of pages of text. The platform also supports more than 30 languages and says it can detect content from major AI models like ChatGPT, Gemini, and Claude. GPTZero, for its part, generates probability scores that flag sections as AI-written, human-written, or mixed.

Despite their claims of objectivity, Jibril Aruna, AI engineering lead at Seismic Consulting Group, warned that AI-detection tools are opaque, biased, and fundamentally flawed. He explained that these detectors work as classifiers, trained on datasets of both human-written and AI-generated text to spot patterns in word choice and linguistic variation.

Aruna criticised the lack of transparency in these tools, which often present a percentage score without disclosing their methodologies, datasets, or verified accuracy rates. He added that misclassification falls especially hard on non-native English speakers, whose writing styles may deviate from the patterns in the training data.

“The result punishes the most vulnerable writers and students,” he said. “Detectors cannot tell the difference between full AI-generated essays and AI-assisted work, such as grammar checks or brainstorming support.”
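Aruna's description of detectors as pattern classifiers can be illustrated with a toy sketch. Everything below is a hypothetical illustration, not any real detector's method: the signal names, the burstiness threshold, and the verdict labels are assumptions chosen only to show why uniform, polished writing tends to get flagged while a single crude threshold produces false positives.

```python
import re
import statistics

def stylometric_signals(text):
    """Compute two toy signals detectors are said to rely on:
    'burstiness' (variation in sentence length) and lexical variety."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
    lengths = [len(s.split()) for s in sentences]
    words = re.findall(r"[a-zA-Z']+", text.lower())
    burstiness = statistics.pstdev(lengths) if len(lengths) > 1 else 0.0
    variety = len(set(words)) / len(words) if words else 0.0
    return {"burstiness": burstiness, "lexical_variety": variety}

def naive_verdict(text, burstiness_floor=4.0):
    """Very uniform sentence lengths get flagged as 'possibly AI'.
    The floor is arbitrary, which is exactly why such classifiers
    punish writers whose natural style happens to be even and polished."""
    sig = stylometric_signals(text)
    return "possibly AI" if sig["burstiness"] < burstiness_floor else "likely human"
```

Run against a passage of identical-length sentences, this sketch flags it; run against text mixing short and long sentences, it passes it, regardless of who actually wrote either one.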

Journalists struggle to compete with AI writers

Isa, who works with an online news platform, decided to be transparent with his editor, admitting that he had used AI to help structure his article. That honesty, however, created a new challenge. “The editor began to suspect that any piece I submitted might be AI-written, regardless of the actual content,” he said.

Sani Modibbo, a freelance journalist in Nigeria, shared that he uses AI to generate headlines, but he always ensures that the main body of his articles is his work. Yet, he too faced scepticism from an editor who assumed AI involvement based on writing patterns.

“This has made me wary of using such tools,” said Modibbo.

Sunday Michael Ugwu, the Editor of Pinnacle Daily, a digital news platform, said editors can easily detect AI-generated content by recognising a sudden shift in a reporter’s writing style and quality. He warned that publishing AI-fabricated stories could carry grave consequences for a reporter’s career.

He said, “Editors must rely on experience and look for signs of overly mechanical writing and verify facts independently.”

However, Ugwu stressed that AI is not inherently bad, but that it must never replace creativity or a journalist’s storytelling style. Acknowledging the limits of AI detection tools, he noted that some programmes designed to catch machine-generated text are already being outsmarted by tools that humanise AI content.

Lecturers are using AI too

According to an article published by The New York Times, a student at Northeastern University in the U.S. demanded her tuition back after discovering that a professor had used ChatGPT to assemble course materials.

“He’s telling us not to use it, and then he’s using it himself,” one student said. But professors say AI tools make them better at their jobs.

In Nigeria, students face similar pushback from their lecturers. Bashira Shu’aibu, a final-year Mass Communication student who has faced this pushback herself, said some lecturers wrongly assume that any well-written work is AI-generated. She complained that even after spending hours on research, a lecturer still dismisses her submissions as “too perfect” to be human. “But it is what they taught us,” she said.

One 300-level student, who asked to remain anonymous, admitted he and his group used AI in their assignments but said they tried to mix it with their own creativity. Yet, their lecturer still caught them, leading to reduced marks and embarrassment. 

Farida Ahmed Bala, a student at the same university, said she was once flagged after her assignment was found to be similar to a coursemate’s; both, it turned out, had unknowingly generated their work with AI. She warned students against using AI, especially for final projects, noting that it risks plagiarism. “If we have AI doing everything for us, why then are we in school?” she asked.

Humanize AI claims to be the No. 1 AI tool for human-like content. Photo: Muslim Yusuf

However, another student said she has learned to use AI without getting flagged for plagiarism by combining software checks through tools like Turnitin and other plagiarism checkers with manual editing. “I paraphrase, cite properly, and rework AI language so it reflects my own style,” she revealed.

Lecturers, however, view these issues from a different perspective. Dr. Ismail Muhammad Anchau, Chief Lecturer and Director of the Policy and Transparency Division at Kaduna Polytechnic, acknowledged that the use of AI among students is rising quickly, especially for assignments, projects, and theses.

He argued that many students now rely on it as a shortcut instead of reading or visiting libraries, a situation he described as both a development and a threat. “It is a development in the sense that it’s technological advancement, but it is also a threat in the sense that it might continue to undermine the ability of students to quest for knowledge,” he said.

Dr. Anchau believes lecturers can easily spot AI use without detection tools. “It actually depends on the scholarship of the lecturer. Verbal tests and close reading of a student’s ability can reveal whether their work was truly their own,” he concluded.

Dr. Babayo Sule of the Department of Political and Administrative Studies at the National University of Lesotho worried that AI is eroding originality in academia and causing people’s talent to fade.

He explained that his institution has discovered rising use of AI among students, which prompted the university to introduce detection tools for lecturers. According to him, if a student’s work shows a small percentage of AI use, it can sometimes be reworked, but when the percentage is high, the work risks outright dismissal.

Dr. Sule, who maintained that these tools can be fair, explained that one of the simplest ways to detect AI-generated work is to look for perfection. “When you see a mistake, you determine the work is original. When work is too clean, that’s the work of AI,” he noted.

Does humanising AI-generated content solve the issue?

In the quest to make AI-generated content read as human, writers often turn to AI humanisers, tools that promise to make text almost indistinguishable from human writing. Many writers believe this keeps them safe. However, several tests conducted by this reporter revealed that these tools are not as foolproof as they seem.

When this reporter ran a journalist’s career summary generated by ChatGPT through the AI detection tool GPTZero, it flagged the content as largely AI-generated, with a detection rate of around 92.25%. Its feedback read: “Highlighted text is suspected to be most likely generated by AI.”

However, GPTZero suggested the flag could be bypassed by humanising the text with another tool, Undetectable AI. Even after that step, the text was still flagged as likely 79% AI and 21% human, based on checks run against GPTZero, Writer, QuillBot, Copyleaks, Sapling and Grammarly.

Result from Undetectable AI flagging the humanised text as likely 79% AI and 21% human. Photo: Muslim Yusuf

Surprisingly, when the same text was humanised by one of the trending tools, Humanize.AI, and pasted into another detection tool, Copyleaks, the verdict was “All Clear — Nothing Flagged”. After the text was manually rewritten, however, the detection score stood at 60%.

Ibrahim Zubairu, a technical product manager and founder of Malamiromba, a virtual tech community in Northern Nigeria, explained that AI content detectors fail because they only look for patterns drawn from their training data sets.

“The tools are trained on data that reflects human biases. They learn patterns, but the patterns aren’t perfect,” he said. According to him, detectors assume there is a fixed idea of what human-like writing is and that it stays the same. “But writing is not the same; writing changes,” he added.

Zubairu said, “AI content detection tools operate on principles similar to the large language models (LLMs) they aim to detect.”
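Zubairu’s point that detectors share principles with the large language models they hunt can be sketched with a toy predictability score: text that a language model finds unsurprising is the kind a perplexity-based detector tends to call machine-like. The word-bigram model, smoothing, and function names below are illustrative assumptions for this sketch, not any real tool’s method.

```python
import math
from collections import Counter

def avg_log_prob(text, reference):
    """Score how predictable `text` is under a word-bigram model built
    from `reference`, with add-one smoothing. A higher (less negative)
    average log-probability means less surprise, the kind of signal a
    perplexity-based detector reads as 'machine-like'."""
    def bigrams(s):
        w = s.lower().split()
        return list(zip(w, w[1:]))

    ref_bigrams = Counter(bigrams(reference))
    ref_unigrams = Counter(reference.lower().split())
    vocab = len(ref_unigrams) or 1

    pairs = bigrams(text)
    if not pairs:
        return 0.0
    total = sum(
        math.log((ref_bigrams[(a, b)] + 1) / (ref_unigrams[a] + vocab))
        for a, b in pairs
    )
    return total / len(pairs)
```

Text that repeats the reference’s phrasing scores higher (less surprising) than unrelated text, which is also why heavy manual editing pushes AI output back toward the “human” side of such a score.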

L-R: Result from Copyleaks judging the AI-generated text as all clear, nothing flagged; ZeroGPT flagged the same humanised content as 98.25% AI-generated. Photo: Muslim Yusuf

To further illustrate these points, Zubairu independently ran a brief test using two prominent AI content detection tools, GPTZero and Copyleaks. For the test, he used a piece of text that was, in fact, generated by an AI model, describing the history of the internet.

To make it unique, he styled the script with his own acronym: “I Can Work On Everything” (ICWOE), which stood for Introduction, Core Functionality, Ways of Working, Output, and Exception Handling. 

However, both tools failed to catch what was obviously AI-generated. Copyleaks marked the result, an AI-written piece, as 100% human, while GPTZero rated it 99% human.

AI detection tool GPTZero judged an AI-generated text as 99% human. Photo: Ibrahim Zubairu

Zubairu concluded that these systems struggle when the AI output is heavily edited or made to look natural. “These tools can be fooled,” he added.

Scholars can figure out AI text

Grema Alhaji Yahaya, an AI educator and researcher, believes that scholars can often detect AI-generated content without needing AI-detection tools. In a recent article published on his platforms, Yahaya outlined several linguistic and stylistic clues that give away machine-written text. 

“You don’t always need software to identify AI writing. If you pay close attention, the patterns will speak for themselves,” he explained. He added that AI outputs may feature perfect punctuation but with a strange overuse of em dashes or semicolons.

One of the most telling signs, according to Yahaya, is the overuse of formal and repetitive language. Words like “delve,” “intricate,” and “realm” may be used too frequently, making the text feel more like an academic thesaurus than natural writing. “Human writing, even when polished, has its quirks—those quirks are often missing in AI text,” he said.
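Yahaya’s word-frequency clue can even be checked mechanically. The short sketch below counts the density of the give-away words he names (“delve,” “intricate,” “realm”); the inflected forms, function names, and the 2% threshold are this sketch’s own assumptions, and a high density is a hint, not proof.

```python
import re

# Give-away words named by Yahaya; the inflected forms are added here
# for illustration, and any longer list would be guesswork.
TELLTALE = {"delve", "delves", "delving", "intricate", "realm", "realms"}

def telltale_density(text):
    """Fraction of words appearing on the tell-tale list."""
    words = re.findall(r"[a-zA-Z]+", text.lower())
    if not words:
        return 0.0
    return sum(w in TELLTALE for w in words) / len(words)

def reads_like_ai(text, threshold=0.02):
    """Arbitrary cut-off: flag text where more than 2% of words
    are tell-tales. A human fond of 'delve' would be flagged too,
    which is exactly the false-positive problem the article describes."""
    return telltale_density(text) > threshold
```

On a flowery sentence stuffed with these words the flag trips; on plain reportage it stays quiet, illustrating why the signal is suggestive rather than conclusive.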

Going forward

Dr. Najeeb G. Abdulhamid, an AI researcher and OpenSchool Initiative volunteer, cautioned that AI detection tools are far from foolproof. He noted that OpenAI had shut down its own AI-text classifier for “low accuracy”, while Turnitin warns that low-percentage scores cannot be fully trusted.

“Detector outputs should be treated as weak signals, not proof,” he said, stressing that human review and corroborating evidence are essential before taking disciplinary action.

He warned that false positives remain a serious risk, with universities and journalists documenting cases where students were wrongly penalised. To address this, Abdulhamid recommended strict policies banning sole reliance on detector scores, mandatory human review, and a clear appeals process. 

On accountability, he proposed policies aligned with UNESCO standards, including impact assessments, explainability, audits, and governance boards with student representation.

This report was produced with support from the Centre for Journalism Innovation and Development (CJID) and Luminate.


