Advanced AI News
MIT News

MIT study raises concerns over AI’s impact. Some experts warn against fear, see creative benefits

By Advanced AI Editor | June 30, 2025 | 7 Mins Read


Though researchers caution that this study and others across the field have not drawn hard conclusions on whether AI is reshaping our brains in pernicious ways, the MIT work and other small studies published this year offer unsettling suggestions.

MIT researcher Nataliya Kosmyna shares a picture of a subject in her recent study. Photo / Sophie Park, The Washington Post

One British study of more than 600 people published in January found “significant negative correlation between the frequent use of AI tools and critical thinking abilities”, as younger users in particular often relied on the programmes as substitutes, not supplements, for routine tasks.

The University of Pennsylvania’s Wharton School published a study last week which showed that high school students in Turkey with access to a ChatGPT-style tutor performed significantly better at solving practice math problems.


When the programme was taken away, they performed worse than students who had used no AI tutor.

And the MIT study that garnered massive attention – and some backlash – involved researchers who measured brain activity of mostly university students as they used ChatGPT to write test-style essays during three sessions.

Their work was compared to that of others who used Google or nothing at all. Researchers outfitted 54 essay writers with caps covered in electrodes that monitor electrical signals in the brain.

Kosmyna’s recent study found lower brain engagement in the people who used ChatGPT than those who used Google or no technology to write their essays. Photo / Sophie Park, The Washington Post

The EEG data revealed that writers who used ChatGPT exhibited the lowest brain engagement and “consistently underperformed at neural, linguistic, and behavioural levels,” according to the study.

Ultimately, they delivered essays that sounded alike and lacked personal flourishes. English teachers who read the papers called them “soulless”.

The “brain-only” group showed the greatest neural activations and connections between regions of the brain that “correlated with stronger memory, greater semantic accuracy, and firmer ownership of written work”.

In a fourth session, members of the ChatGPT group were asked to rewrite one of their previous essays without the tool, but participants remembered little of their previous work.

Sceptics point to myriad limitations.


They argue that neural connectivity measured by EEG doesn’t necessarily indicate poor cognition or brain health.

For the study participants, the stakes were also low – entrance to university, for example, didn’t depend on completing the essays. Also, only 18 participants returned for the fourth and final session.

Lead MIT researcher Nataliya Kosmyna acknowledges that the study was limited in scope and, contrary to viral internet headlines about the paper, was not gauging whether ChatGPT is making us dumb.

The paper has not been peer-reviewed but her team released preliminary findings to spark conversation about the impact of ChatGPT, particularly on developing brains, and the risks of the Silicon Valley ethos of rolling out powerful technology quickly.

“Maybe we should not apply this culture blindly in the spaces where the brain is fragile,” Kosmyna said in an interview.

OpenAI, the California company that released ChatGPT in 2022, did not respond to requests for comment. (The Washington Post has a content partnership with OpenAI.)

Michael Gerlich, who spearheaded the United Kingdom survey study, called the MIT approach “brilliant” and said it showed that AI is supercharging what is known as “cognitive off-loading”, where we use a physical action to reduce demands on our brain.

However, instead of off-loading simple data – like phone numbers we once memorised but now store in our phones – people relying on LLMs off-load the critical thinking process.

His study suggested younger people and those with less education are quicker to off-load critical thinking to LLMs because they are less confident in their skills. (“It’s become a part of how I think,” one student later told researchers.)

“It’s a large language model. You think it’s smarter than you. And you adopt that,” said Gerlich, a professor at SBS Swiss Business School in Zurich.

Kosmyna at the MIT Media Lab in Cambridge, Massachusetts. Photo / Sophie Park, The Washington Post

Still, Kosmyna, Gerlich, and other researchers warn against drawing sweeping conclusions – no long-term studies have been completed on the effects on cognition of the nascent technology.

Researchers also stress that the benefits of AI may ultimately outweigh risks, freeing our minds to tackle bigger and bolder thinking.

Deep-rooted fears and avenues for creativity

Fear of technology rewiring our brains is nothing new.

Socrates warned that writing would make humans forgetful.

In the mid-1970s, teachers fretted that cheap calculators might strip students of their abilities to do simple maths.

More recently, the rise of search engines spurred fears of “digital amnesia”.

“It wasn’t that long ago that we were all panicking that Google is making us stupid and now that Google is more part of our everyday lives, it doesn’t feel so scary,” said Sam J. Gilbert, professor of cognitive neuroscience at University College London.

“ChatGPT is the new target for some of the concerns. We need to be very careful and balanced in the way that we interpret these findings” of the MIT study.

The MIT paper suggests that ChatGPT essay writers illustrate “cognitive debt”, a condition in which relying on such programmes replaces the effortful cognitive processes needed for independent thinking.

Essays become biased and superficial. In the long run, such cognitive debt might make us easier to manipulate and stifle creativity.

Gilbert argues that the MIT study of essay writers could also be viewed as an example of what he calls “cognitive spillover” or discarding some information to clear mental bandwidth for potentially more ambitious thoughts.

“Just because people paid less mental effort to writing the essays that the experimenters asked them to do, that’s not necessarily a bad thing,” he said. “Maybe they had more useful, more valuable things they could do with their minds.”

Experts suggest that perhaps AI, in the long run and deployed right, will prove to augment, not replace, critical thinking.

Equipment, including electrodes, electrode gel, a syringe and an amplifier, which are used to monitor the brain activity of subjects. Photo / Sophie Park, The Washington Post

The Wharton School study of nearly 1000 Turkish high school students also included a group that had access to a ChatGPT-style tutor programme with built-in safeguards, which provided teacher-designed hints instead of giving away answers.

Those students performed extremely well with the tutor and, when later asked to solve problems unassisted, did roughly as well as students who had not used AI, the study showed.

More research is needed into the best ways to shape user behaviours and create LLM programmes to avoid damaging critical thinking skills, said Aniket Kittur, professor at Carnegie Mellon University’s Human-Computer Interaction Institute. He is part of a team creating AI programmes designed to light creative sparks, not churn out finished but bland outputs.

One programme, dubbed BioSpark, aims to help users solve problems through inspiration from the natural world – say, creating a better bike rack to mount on cars. Instead of a bland text interface, the programme might display images and details of different animal species to serve as inspiration, such as the shape of frog legs or the stickiness of snail mucus that could inspire a gel to keep bicycles secure. Users can cycle through relevant scientific research, saving ideas à la Pinterest, then ask more detailed questions of the AI programme.

“We need both new ways of interacting with these tools that unlock this kind of creativity,” Kittur said.

“And then we need rigorous ways of measuring how successful those tools are. That’s something that you can only do with research.”

Research into how AI programmes can augment human creativity is expanding dramatically but doesn’t receive as much attention because of the technology-wary zeitgeist of the public, said Sarah Rose Siskind, a New York-based science and comedy writer who consults with AI companies.

Siskind believes the public needs better education on how to use and think about AI – she created a video on how she uses AI to expand her joke repertoire and reach newer audiences. She said she also has a forthcoming research paper exploring ChatGPT’s usefulness in comedy.

“I can use AI to understand my audience with more empathy and expertise than ever before,” Siskind said.

“So there are all these new frontiers of creativity. That really should be emphasised.”


