Revealed: Thousands of UK university students caught cheating using AI | Higher education

By Advanced AI Bot | June 15, 2025

Thousands of university students in the UK have been caught misusing ChatGPT and other artificial intelligence tools in recent years, while traditional forms of plagiarism show a marked decline, a Guardian investigation can reveal.

A survey of academic integrity violations found almost 7,000 proven cases of cheating using AI tools in 2023-24, equivalent to 5.1 for every 1,000 students. That was up from 1.6 cases per 1,000 in 2022-23.

Figures up to May suggest that number will increase again this year to about 7.5 proven cases per 1,000 students – but recorded cases represent only the tip of the iceberg, according to experts.

The data highlights a rapidly evolving challenge for universities: trying to adapt assessment methods to the advent of technologies such as ChatGPT and other AI-powered writing tools.

In 2019-20, before the widespread availability of generative AI, plagiarism accounted for nearly two-thirds of all academic misconduct. During the pandemic, plagiarism intensified as many assessments moved online. But as AI tools have become more sophisticated and accessible, the nature of cheating has changed.

The survey found that confirmed cases of traditional plagiarism fell from 19 per 1,000 students to 15.2 in 2023-24 and are expected to fall again to about 8.5 per 1,000, according to early figures from this academic year.

[Chart: proven misconduct cases per 1,000 students. Plagiarism rises from 2019-20 to 2022-23 then drops back, while AI-related misconduct rises from 2022-23 to almost the same level as plagiarism; "other misconduct" remains fairly stable.]

The Guardian contacted 155 universities under the Freedom of Information Act requesting figures for proven cases of academic misconduct, plagiarism and AI misconduct in the last five years. Of these, 131 provided some data – though not every university had records for each year or category of misconduct.

More than 27% of responding universities did not yet record AI misuse as a separate category of misconduct in 2023-24, suggesting the sector is still getting to grips with the issue.

Many more cases of AI cheating may be going undetected. A survey by the Higher Education Policy Institute in February found 88% of students used AI for assessments. Last year, researchers at the University of Reading tested their own assessment systems and were able to submit AI-generated work without being detected 94% of the time.

Dr Peter Scarfe, an associate professor of psychology at the University of Reading and co-author of that study, said there had always been ways to cheat but that the education sector would have to adapt to AI, which posed a fundamentally different problem.

He said: “I would imagine those caught represent the tip of the iceberg. AI detection is very unlike plagiarism, where you can confirm the copied text. As a result, in a situation where you suspect the use of AI, it is near impossible to prove, regardless of the percentage AI that your AI detector says (if you use one). This is coupled with not wanting to falsely accuse students.

“It is unfeasible to simply move every single assessment a student takes to in-person. Yet at the same time the sector has to acknowledge that students will be using AI even if asked not to and go undetected.”

Students who wish to cheat undetected using generative AI have plenty of online material to draw from: the Guardian found dozens of videos on TikTok advertising AI paraphrasing and essay writing tools to students. These tools help students bypass common university AI detectors by “humanising” text generated by ChatGPT.

Dr Thomas Lancaster, an academic integrity researcher at Imperial College London, said: “When used well and by a student who knows how to edit the output, AI misuse is very hard to prove. My hope is that students are still learning through this process.”

Harvey* has just finished his final year of a business management degree at a northern English university. He told the Guardian he had used AI to generate ideas and structure for assignments and to suggest references, and that most people he knows use AI to some extent.

“ChatGPT kind of came along when I first joined uni, and so it’s always been present for me,” he said. “I don’t think many people use AI and then would then copy it word for word, I think it’s more just generally to help brainstorm and create ideas. Anything that I would take from it, I would then rework completely in my own ways.

“I do know one person that has used it and then used other methods of AI where you can change it and humanise it so that it writes AI content in a way that sounds like it’s come from a human.”

Amelia* has just finished her first year of a music business degree at a university in the south-west. She said she had also used AI for summarising and brainstorming, but that the tools had been most useful for people with learning difficulties. “One of my friends uses it, not to write any of her essays for her or research anything, but to put in her own points and structure them. She has dyslexia – she said she really benefits from it.”

The science and technology secretary, Peter Kyle, told the Guardian recently that AI should be deployed to “level up” opportunities for dyslexic children.

Technology companies appear to be targeting students as a key demographic for AI tools. Google offers university students a free upgrade of its Gemini tool for 15 months, and OpenAI offers discounts to college students in the US and Canada.

Lancaster said: “University-level assessment can sometimes seem pointless to students, even if we as educators have good reason for setting this. This all comes down to helping students to understand why they are required to complete certain tasks and engaging them more actively in the assessment design process.

“There’s often a suggestion that we should use more exams in place of written assessments, but the value of rote learning and retained knowledge continues to decrease every year. I think it’s important that we focus on skills that can’t easily be replaced by AI, such as communication skills, people skills, and giving students the confidence to engage with emerging technology and to succeed in the workplace.”

A government spokesperson said it was investing more than £187m in national skills programmes and had published guidance on the use of AI in schools.

They said: “Generative AI has great potential to transform education and provides exciting opportunities for growth through our plan for change. However, integrating AI into teaching, learning and assessment will require careful consideration and universities must determine how to harness the benefits and mitigate the risks to prepare students for the jobs of the future.”

*Names have been changed.


