Advanced AI News
Education AI

How Sci-Fi Taught Me to Embrace AI in My Classroom

By Advanced AI Editor · August 6, 2025 · 8 Mins Read


This story was published by a Voices of Change fellow. Learn more about the fellowship here.

Growing up as a sci-fi geek, I saw the promise of humanity’s future among the stars bolstered by artificial intelligence. In “Star Trek,” the ship’s omnipresent computer was a font of knowledge and advice that could even make a cup of Earl Grey. In today’s society, however, AI is often portrayed as a villain by the media and the public, particularly when it comes to AI in the classroom.

In my history class, one of my favorite projects is to have students learn about something deeply, create a lesson and teach what they learned back to their class. It’s a fun way to start lessons on how to research well and how to communicate learning to others. This year, when rolling out the project, I gave a short lesson on how to find good sources. I gave the usual spiel about Wikipedia, how to use online libraries and databases, and even how, sometimes, YouTube can provide good learning.

What truly shocked some students was when I said, “Use ChatGPT or other AI services to find sources.” One student very loudly said, “What? We can use AI on this?” Now, obviously, my intent was not for them to use AI to do the project for them, but simply as a way to point them in the right direction of sources of information. Thinking back on this interaction gave me pause: Why did my student act so surprised when I said AI is useful? She simply couldn’t believe I had even said those letters, wondering out loud, “Is this a trick?”

I believe a lot of her reaction stems from how we, the adults (educators, parents and the media), have presented AI. Students, just like us, have seen all the articles and news stories about AI in the classroom. Every story sounds a bit the same: AI is the new “big bad” of education, and students are using it to cheat! There is little wiggle room. AI is the Death Star, and its aim is trained on our students’ ability to think for themselves.

If the popular sentiment around AI and education is to be believed, there are few to no redeeming qualities to this emerging technology. While these sentiments may hold true for some, I also believe we are responsible for the way we frame the benefits and utility of AI. If we only present to students that AI is a tool for cheating, then students will only ever see AI as a tool for cheating. So, how can we reframe AI in the classroom for our students?

Still, and somewhat contentiously, I have hope. But if AI is to become something more than just a tool for cheating, it is up to us as educators to educate ourselves and our students on its other uses.

“I do not fear computers. I fear the lack of them.” — Isaac Asimov

The fear that AI will be used only as a tool for cheating and for “dumbing down” students in writing and humanities classes, like social studies, reminded me of growing up in the age of calculators and, later, the internet. When I was a student in the early 1990s, calculators were demonized as a way to skip the hard part of math, one that would supposedly keep students from ever learning the basics. We have since seen how the calculator age changed mathematics education, shifting toward principles like understanding how a calculation works, why it works the way it does, and how math applies, rather than simply “finding the answer.”

I would argue that this new way of thinking in math has been a net positive for our students. They now understand the underlying theories behind all sorts of mathematical problems far better, and they are empowered to use critical thinking to decide how best to solve a problem. Now, ask an older person to do a simple math problem. They may be able to find the answer, but they have no idea how or why it works.

The fact that students don’t actually have to calculate every number by hand (or in their heads) doesn’t stop them from being fantastic mathematicians and strong critical thinkers. Could a similar phenomenon happen with AI?

Earlier, I talked about the ship’s computer in the Star Trek series as an example of the hope for AI. In this example, the AI computer is a vastly intelligent machine that can make any calculation in seconds, offer background information on topics and species, and provide statistics and probabilities for the ship’s crew. However, the ship is still crewed by teams of professionals, because the AI is viewed as a tool for information, not for decision-making. That is up to the captain (“Engage!”) and the crew, who take the computer’s information, double-check it and then make what they think is the best choice. We can use this as a model for how we talk about AI with our students.

I have told my students that AI is a new technology that could be a super powerful tool for them, but it is ultimately a really smart child. AI is easily influenced by bad information, doesn’t always think about whether what it says is right or wrong, and ultimately, is still just learning. “Would you let a small child write your report for you?” I once asked a student.

What AI can do is point our students toward information. If we show them how to dig into AI’s output, look at where it got its information, verify it and draw their own conclusions, then they can use their own powerful computers (their brains) to decide what is good information and what is bad.

“Progress doesn’t come from early risers — progress is made by lazy men looking for easier ways to do things.” — Robert Heinlein

Even if you agree that AI can be a useful information system, that doesn’t solve the problem of simple laziness. Some students will admit that they know the material but are simply too busy or too lazy to go the extra mile and actually do the writing or create the projects to show it. It is simply easier and more efficient to plug what they know into an AI program and let it assemble the result in whatever form the teacher asked for.

We can help guide students away from this behavior by being clear about our intentions for an assignment. Say I want my students to write a short persuasive essay, role-playing as someone trying to get people to buy war bonds during World War II. Beyond simply completing the work, I have specific standards and concepts I am looking for my students to demonstrate. AI could be used to find primary sources to quote, summarize the effectiveness of the war bond programs and even provide an outline for how effective persuasive essays can be formatted.

If a student uses AI for all those things and then does the writing themselves, in their own voice and words, have they met the criteria for solid, original work? How much work can we sacrifice to efficiency before it becomes cheating or unoriginal work? That is up to each teacher, and may even vary by assignment. There are plenty of places where I think weighing how much understanding the student has of the concept over how much “work” they did is perfectly reasonable.

As a teacher, I will freely admit to using a multitude of quality AI programs to help speed up tedious tasks. Things like text leveling, writing checks for understanding on videos, and even drafting emails can all be made easier and more efficient with AI. If I can use those tools and still be an effective, quality teacher, we can teach our students to use similar tools to be efficient, quality learners.

“The future is not set. There is no fate but what we make for ourselves.” — John Connor

I’m not so naive as to think that, no matter how we present it, some students won’t still be tempted by “the dark side” of AI. But I also believe the future of AI in education is not yet decided. It will be decided by how we, as educators, embrace or demonize it in our classrooms.

My argument is that setting guidelines and talking to our students honestly about the pitfalls and amazing benefits that AI offers us as researchers and learners will define it for the coming generations.

Can AI be the next calculator? Something that, yes, changes the way we teach and learn, but not necessarily for the worse? If we want it to be, yes.

How AI is used and, more importantly, how it is perceived by our students can be influenced by educators. We first have to learn how AI can be used as a force for good. If we continue to let the dominant voice be that AI is the Terminator of education and critical thinking, then that will be the fate we have made for ourselves.


