Advanced AI News
Education AI

Should Instructors Ask Students to Show Document Histories to Guard Against AI Cheating?

By Advanced AI Bot · December 20, 2024 · 9 Mins Read


‘Show your work’ has taken on a new meaning — and importance — in the age of ChatGPT.

As teachers and professors look for ways to guard against the use of AI to cheat on homework, many have started asking students to share the history of their online documents to check for signs that a bot did the writing. In some cases that means asking students to grant access to the version history of a document in a system like Google Docs, and in others it involves turning to new web browser extensions that have been created for just this purpose.

Many educators who use the approach, which is often called “process tracking,” do so as an alternative to running student work through AI detectors, which are prone to falsely accusing students, especially those who don’t speak English as their first language. Even companies that sell AI detection software admit that the tools can misidentify student-written material as AI around 4 percent of the time. Since teachers grade so many papers and assignments, many educators see that as an unacceptable level of error. And some students have pushed back in viral social media posts or even sued schools over what they say are false accusations of AI cheating.
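To put that error rate in perspective, a back-of-the-envelope calculation (assuming the roughly 4 percent false-positive rate cited above and treating each paper as an independent check) shows how quickly false flags accumulate over a term of grading:

```python
# Back-of-the-envelope: false accusations from an AI detector.
# Assumes the ~4% false-positive rate cited above and independent papers;
# the 150-essay workload is an illustrative assumption.

def expected_false_flags(num_papers: int, false_positive_rate: float = 0.04) -> float:
    """Expected number of honestly written papers wrongly flagged as AI."""
    return num_papers * false_positive_rate

def prob_at_least_one_false_flag(num_papers: int, false_positive_rate: float = 0.04) -> float:
    """Probability that at least one honest paper is wrongly flagged."""
    return 1 - (1 - false_positive_rate) ** num_papers

# A teacher grading 150 honestly written essays in a term:
print(expected_false_flags(150))               # 6.0 expected false accusations
print(prob_at_least_one_false_flag(150))       # near-certainty of at least one
```

Even at a seemingly small per-paper error rate, a full grading load makes at least one false accusation almost inevitable, which is the "unacceptable level of error" educators describe.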

The idea is that a quick look at a version history can reveal whether a huge chunk of writing was suddenly pasted in from ChatGPT or another chatbot, and that the method can be more reliable than using an AI detector.
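As an illustration of that heuristic (not any vendor's actual algorithm; the revision-size input and the 1,000-character threshold are assumptions for the sketch), flagging a revision where an unusually large chunk of text appears in a single step might look like:

```python
# Illustrative sketch of the "sudden large paste" heuristic described above.
# Input is the document's character count after each saved revision; the
# threshold is an arbitrary assumption, not a value from any real tool.

def flag_sudden_insertions(revision_sizes: list[int], threshold: int = 1000) -> list[int]:
    """Return indices of revisions where the document grew by more than
    `threshold` characters in one step -- a possible sign of pasted-in text."""
    flags = []
    for i in range(1, len(revision_sizes)):
        if revision_sizes[i] - revision_sizes[i - 1] > threshold:
            flags.append(i)
    return flags

# A document that grew gradually, then jumped by ~2,400 characters at once:
sizes = [0, 180, 420, 650, 3050, 3200]
print(flag_sudden_insertions(sizes))  # [4]
```

A real tool would also have to account for legitimate pastes, which is exactly the false-positive concern raised later in the article.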

But as process tracking has gained adoption, a growing number of writing teachers are raising objections, arguing that the practice amounts to surveillance and violates student privacy.

“It inserts suspicion into everything,” argues Leonardo Flores, a professor and chair of the English department at Appalachian State University, in North Carolina. He was one of several professors who outlined their objections to the practice in a blog post last month from a joint task force on AI and writing organized by two prominent academic groups: the Modern Language Association and the Conference on College Composition and Communication.

Can process tracking turn out to be the answer to checking student work for authenticity?

Time-Lapse History

Anna Mills, an English instructor at the College of Marin in Oakland, California, has used process tracking in her writing classes.

For some assignments, she has asked students to install an extension for their web browser called Revision History and then grant her access. With the tool, she can see a ribbon of information on top of documents that students turn in that shows how much time was spent and other details of the writing process. The tool can even generate a time-lapse video of all the typing that went into the document that the teacher can see, giving a rich behind-the-scenes view of how the essay was written.

Mills has also had students make use of a similar browser plug-in feature that Grammarly released in October, called Authorship. Students can use that tool to generate a report about a given document’s creation that includes details about how many times the author pasted material from another website, and whether any pasted material is likely AI-generated. It can create a time-lapse video of the document’s creation as well.

The instructor tells students that they can opt out of the tracking if they have concerns about the approach — and in those cases she would find an alternative way to check the authenticity of their work. No student has yet taken her up on that, however, and she wonders whether they worry that asking to do so would seem suspicious.

Most of her students seem open to the tracking, she says. In fact, some students in the past even called for more robust checking for AI cheating. “Students know there’s a lot of AI cheating going on, and that there’s a risk of the devaluation of their work and their degree as a result,” she says. And while she believes that the vast majority of her students are doing their own work, she says she has caught students turning in AI-generated work as their own. “I think some accountability makes sense,” she says.

Other educators, however, argue that making students show the complete history of their work will make them self-conscious. “If I knew as a student I had to share my process or worse, to see that it was being tracked and that information was somehow in the purview of my professor, I probably would be too self-conscious and worried that my process was judging my writing,” wrote Kofi Adisa, an associate professor of English at Maryland’s Howard Community College, in the blog post by the academic committee on AI in writing.

Of course, students may well be moving into a world where they use these AI tools in their jobs and also have to show employers which part of the work they’ve created. But for Adisa, “as more and more students use AI tools, I believe some faculty may rely too much on the surveillance of writing than the actual teaching of it.”

Another concern raised about process tracking is that some students may do things that look suspicious to a process-tracking tool but are innocent, like drafting a section of a paper in another program and then pasting it into a Google Doc.

To Flores, of Appalachian State, the best way to combat AI plagiarism is to change how instructors design assignments, so that they embrace the fact that AI is now a tool students can use rather than something forbidden. Otherwise, he says, there will just be an “arms race” of new tools to detect AI and new ways students devise to circumvent those detection methods.

Mills doesn’t necessarily disagree with that argument, in theory. She says she sees a big gap between what experts suggest that teachers do — to totally revamp the way they teach — and the more pragmatic approaches that educators are scrambling to adopt to make sure they do something to root out rampant cheating using AI.

“We’re at a moment when there are a lot of possible compromises to be made and a lot of conflicting forces that teachers don’t have much control over,” Mills says. “The biggest factor is that the other things we recommend require a lot of institutional support or professional development, labor and time” that most educators don’t have.

Product Arms Race

Grammarly officials say they are seeing high demand for process tracking.

“It’s one of the fastest-growing features in the history of Grammarly,” says Jenny Maxwell, head of education at the company. She says customers have generated more than 2 million reports using the process-tracking tool since it was released about two months ago.

Maxwell says that the tool was inspired by the story of a university student, Marley, who used Grammarly’s spell-checking features for a paper and says her professor falsely accused her of using an AI bot to write it. The student, who says she lost a scholarship due to the cheating accusation, shared details of her case in a series of TikTok videos that went viral, and eventually she became a paid consultant to the company.

“Marley is sort of the North Star for us,” says Maxwell. The idea behind Authorship is that students can use the tool as they write, and then if they are ever falsely accused of using AI inappropriately — as Marley says she was — they can present the report as a way to make the case to the professor. “It’s really like an insurance policy,” says Maxwell. “If you’re flagged by any AI detection software, you actually have proof of what you’ve done.”

As for student privacy, Maxwell stresses that the tool is designed to give students control over whether they use the feature, and that students can see the report before passing it along to an instructor. That’s in contrast to the model of professors running student papers through AI detectors; students rarely see the reports of which sections of their work were allegedly written by AI.

The company that makes one of the most popular AI detectors, Turnitin, is considering adding process tracking features as well, says Annie Chechitelli, Turnitin’s chief product officer.

“We are looking at what are the elements that it makes sense to show that a student did this themselves,” she says. The best solution might be a combination of AI detection software and process tracking, she adds.

She argues that leaving it up to students whether they activate a process-tracking tool may not do much to protect academic integrity. “Opting in doesn’t make sense in this situation,” she argues. “If I’m a cheater, why would I use this?”

Meanwhile, other companies are already selling tools that claim to help students defeat both AI detectors and process trackers.

Mills, of the College of Marin, says she recently heard of a new tool that lets students paste a paper generated by AI into a system that simulates typing the paper into a process-tracking tool like Authorship, character by character, even adding in false keystrokes to make it look more authentic.

Chechitelli says her company is closely watching a growing number of tools that claim to “humanize” writing that is generated by AI so that students can turn it in as their own work without detection.

She says that she is surprised by the number of students who post TikTok videos bragging that they’ve found a way to subvert AI detectors.

“It helps us, are you kidding me, it’s great,” says Chechitelli, who finds such social media posts the easiest way to learn about circumvention techniques so the company can adjust its products accordingly. “We can see which ones are getting traction.”


