Though researchers caution that this study and others across the field have not drawn hard conclusions on whether AI is reshaping our brains in pernicious ways, the MIT work and other small studies published this year offer unsettling suggestions.

One British study of more than 600 people published in January found a “significant negative correlation between the frequent use of AI tools and critical thinking abilities”, as younger users in particular often relied on the programmes as substitutes, not supplements, for routine tasks.
The University of Pennsylvania’s Wharton School published a study last week showing that high school students in Turkey with access to a ChatGPT-style tutor performed significantly better at solving practice maths problems.
When the programme was taken away, they performed worse than students who had used no AI tutor.
And in the MIT study that garnered massive attention – and some backlash – researchers measured the brain activity of mostly university students as they used ChatGPT to write test-style essays over three sessions.
Their work was compared with that of writers who used Google or nothing at all. Researchers outfitted the 54 essay writers with caps covered in electrodes that monitor electrical signals in the brain.

The EEG data revealed that writers who used ChatGPT exhibited the lowest brain engagement and “consistently underperformed at neural, linguistic, and behavioural levels,” according to the study.
Ultimately, they delivered essays that sounded alike and lacked personal flourishes. English teachers who read the papers called them “soulless”.
The “brain-only” group showed the greatest neural activations and connections between regions of the brain that “correlated with stronger memory, greater semantic accuracy, and firmer ownership of written work”.
In a fourth session, members of the ChatGPT group were asked to rewrite one of their previous essays without the tool, but participants remembered little of their earlier work.
Sceptics point to myriad limitations.
They argue that neural connectivity measured by EEG doesn’t necessarily indicate poor cognition or brain health.
For the study participants, the stakes were also low – entrance to university, for example, didn’t depend on completing the essays. Also, only 18 participants returned for the fourth and final session.
Lead MIT researcher Nataliya Kosmyna acknowledges that the study was limited in scope and, contrary to viral internet headlines about the paper, was not gauging whether ChatGPT is making us dumb.
The paper has not been peer-reviewed, but her team released preliminary findings to spark conversation about the impact of ChatGPT, particularly on developing brains, and about the risks of the Silicon Valley ethos of rolling out powerful technology quickly.
“Maybe we should not apply this culture blindly in the spaces where the brain is fragile,” Kosmyna said in an interview.
OpenAI, the California company that released ChatGPT in 2022, did not respond to requests for comment. (The Washington Post has a content partnership with OpenAI.)
Michael Gerlich, who spearheaded the United Kingdom survey study, called the MIT approach “brilliant” and said it showed that AI is supercharging what is known as “cognitive off-loading”, where we use a physical action to reduce demands on our brain.
However, instead of off-loading simple data – like phone numbers we once memorised but now store in our phones – people relying on LLMs off-load the critical thinking process.
His study suggested younger people and those with less education are quicker to off-load critical thinking to LLMs because they are less confident in their skills. (“It’s become a part of how I think,” one student later told researchers.)
“It’s a large language model. You think it’s smarter than you. And you adopt that,” said Gerlich, a professor at SBS Swiss Business School in Zurich.

Still, Kosmyna, Gerlich, and other researchers warn against drawing sweeping conclusions – no long-term studies have been completed on the nascent technology’s effects on cognition.
Researchers also stress that the benefits of AI may ultimately outweigh risks, freeing our minds to tackle bigger and bolder thinking.
Deep-rooted fears and avenues for creativity
Fear of technology rewiring our brains is nothing new.
Socrates warned that writing would make humans forgetful.
In the mid-1970s, teachers fretted that cheap calculators might strip students of their abilities to do simple maths.
More recently, the rise of search engines spurred fears of “digital amnesia”.
“It wasn’t that long ago that we were all panicking that Google is making us stupid and now that Google is more part of our everyday lives, it doesn’t feel so scary,” said Sam J. Gilbert, professor of cognitive neuroscience at University College London.
“ChatGPT is the new target for some of the concerns. We need to be very careful and balanced in the way that we interpret these findings” of the MIT study.
The MIT paper suggests that ChatGPT essay writers illustrate “cognitive debt”, a condition in which relying on such programmes replaces the effortful cognitive processes needed for independent thinking.
Essays become biased and superficial. In the long run, such cognitive debt might make us easier to manipulate and stifle creativity.
Gilbert argues that the MIT study of essay writers could also be viewed as an example of what he calls “cognitive spillover”, or discarding some information to clear mental bandwidth for potentially more ambitious thoughts.
“Just because people paid less mental effort to writing the essays that the experimenters asked them to do, that’s not necessarily a bad thing,” he said. “Maybe they had more useful, more valuable things they could do with their minds.”
Experts suggest that perhaps AI, in the long run and deployed right, will prove to augment, not replace, critical thinking.

The Wharton School study of nearly 1,000 Turkish high school students also included a group that had access to a ChatGPT-style tutor programme with built-in safeguards, which provided teacher-designed hints instead of giving away answers.
Those students performed extremely well and, when asked to solve problems unassisted, did roughly the same as students who had not used AI, the study showed.
More research is needed into the best ways to shape user behaviours and create LLM programmes to avoid damaging critical thinking skills, said Aniket Kittur, professor at Carnegie Mellon University’s Human-Computer Interaction Institute. He is part of a team creating AI programmes designed to light creative sparks, not churn out finished but bland outputs.
One programme, dubbed BioSpark, aims to help users solve problems by drawing inspiration from the natural world – say, for example, designing a better bike rack to mount on cars. Instead of a bland text interface, the programme might display images and details of different animal species, such as the shape of frog legs or the stickiness of snail mucus, which could inspire a gel to keep bicycles secure. Users can cycle through relevant scientific research, save ideas a la Pinterest, and ask more detailed questions of the AI programme.
“We need both new ways of interacting with these tools that unlocks this kind of creativity,” Kittur said.
“And then we need rigorous ways of measuring how successful those tools are. That’s something that you can only do with research.”
Research into how AI programmes can augment human creativity is expanding dramatically but doesn’t receive as much attention because of the public’s technology-wary zeitgeist, said Sarah Rose Siskind, a New York-based science and comedy writer who consults with AI companies.
Siskind believes the public needs better education on how to use and think about AI – she created a video on how she uses AI to expand her joke repertoire and reach new audiences. She said she also has a forthcoming research paper exploring ChatGPT’s usefulness in comedy.
“I can use AI to understand my audience with more empathy and expertise than ever before,” Siskind said.
“So there are all these new frontiers of creativity. That really should be emphasised.”