Advanced AI News
Education AI

I Want My Students’ Effort, Not AI’s Shortcut to Perfect Writing

By Advanced AI Editor · September 3, 2025 · 5 min read


This story was published by a Voices of Change fellow. Learn more about the fellowship here.

It was my ninth grade English class, and we were at the end of our unit on “A Raisin in the Sun.” We were writing an essay on the American Dream and the barriers marginalized people experience as they strive to achieve it. We focused on themes such as institutional oppression as it relates to housing in Chicago and interpersonal oppression as it relates to how Walter treats Ruth and Beneatha.

This process of selecting evidence and connecting it to real-world experiences offers students an opportunity to push themselves and dig deeper into the physical text to surface relevant material. But because we were reading a play, which is difficult to navigate due to stage and actor directions, most of my students tried to find a way out of the research process once they had to find textual evidence to support their ideas.

During our essay writing, one of my students excitedly approached me with their essay’s introduction. I read it over, and immediately, I could tell something was off. This student and I had been working all year on their spelling and grammar, and suddenly, both were perfect. The structure of each sentence flowed smoothly, using language I had never known this student to use in their writing in class. I asked him to tell me what the words in his writing meant, and he could not. I asked him to summarize his writing, but he could not.

Then, I took a deep breath and asked the dreaded question: “Did you use AI?” I watched as he shrank in embarrassment in front of me.

I did not feel angry but worried, and honestly, sad. I explained to him that I would rather have his most fragmented, incoherent writing than this. I want his voice and his effort, regardless of what that looks like. I could tell he was frustrated with me because, at the end of the day, I asked him to push past the discomfort of returning to the text. I wanted him to be resilient and to see the challenge of familiarizing himself with the plot as an opportunity to fortify his memory, especially because his IEP explicitly stated that he needs support with this skill.

Academic resilience means seeing challenges as opportunities for growth; emotions such as frustration, impatience and doubt are replaced with self-belief, determination and confidence. A student with academic resilience will see any provided task, with teacher support, as an opportunity to grow in some capacity. As I worked with this student, I recognized that the resilience wasn’t there. As he realized he could not recall the plot, he did not want to turn to the book, ask a classmate or ask me. Each of these actions is a skill in and of itself: turning to the book requires patience and determination while reviewing the material, and asking a classmate or me requires bravery and listening skills. All of these skills are useful in the real world, but artificial intelligence did not allow any of them to be sharpened.

When students are given a myriad of digital ways to avoid opportunities to build their academic resilience, it becomes our responsibility to teach them the power and importance of their full abilities.

Reading and writing offer opportunities for academic resilience through the challenges they present. Turning a page, placing a Post-it note or underlining important evidence allows for motor skill development. Stopping at the end of the page to summarize enables a student to strengthen their short-term memory recall.

On top of these more granular developmental skills, the process of finding evidence itself can be frustrating because it means rereading and, at times, relearning in order to create the connections needed to support one’s argument. Building frustration tolerance is key to all aspects of life: whether it’s completing chores, driving or navigating conflict, much of life can be frustrating. The expediency of AI enables low frustration tolerance, because any task perceived as difficult now has an easy out. This has severe implications, as young people, especially those from disenfranchised backgrounds, are not given the chance to build the critical thinking skills needed to thoughtfully analyze the world around them.

High frustration tolerance is key in areas beyond essay writing: reading a lease or a contract and identifying what raises concerns about one’s rights, or understanding local legislation during voting season, requires the same skills of researching and asking questions about how one’s life will be affected. These skills allow students to go on to be active agents in their lives and their communities. Without high frustration tolerance, we outsource our power, our insight and our capacity for making connections.

When we talk about technology, we tend to talk about it as an exclusively digital experience. Yet serialized books, such as those by Charles Dickens, were first popularized in the 1800s as a result of the Industrial Revolution: the increasingly pervasive printing press allowed texts of all kinds to be printed more rapidly and distributed more widely.

Helping students engage with physical texts to build their motor skills and academic resilience is itself a way of using technology to support learning. Reading physical books, holding a pencil and writing on paper are not aberrations from technology but equally legitimate forms of participation in it. While AI might have its benefits, our task as teachers is not to offer it as an everyday tool, but instead to teach discernment: when will it support my brain development, and when will it not?

Our brains are so useful to us, but they can only continue to be so if we engage with our thoughts by building discipline and discernment. AI is unavoidable, but instead of denying its presence and enforcing consequences when students use it, I believe we should teach students the power of their innate skills as human beings and why those skills are relevant to their lives.


