The federal government has been diving headfirst into artificial intelligence lately, testing it for use in everything from healthcare to human resources. This is happening as tech companies lobby for the increased deployment of advanced AI tools in government, such as agentic and generative AI, while legislators weigh the potential negative consequences of doing so and wrestle with whether to put guardrails around the technology. Into this maelstrom of potential AI pros and cons comes a new study from MIT that might just provide a new reason to pump the brakes on AI use, at least a little bit.
The study by MIT’s Media Lab tested the cognitive functions of different groups of students, divided according to how much they used AI tools like ChatGPT to accomplish key tasks, such as writing essays, over a period of several months. The study set out to learn how people’s brains responded to using AI tools over time. And according to the results, the answer is: not great. Participants who relied exclusively on AI to help write essays showed weaker brain connectivity, lower memory retention and a fading sense of ownership over their work. Basically, their brains got lazy. And even when they stopped using AI tools later on, the effects lingered.
The study involved 54 students from five Boston-area universities, all wired up with electroencephalography (EEG) headsets to monitor brainwave activity. After researchers established a baseline of their thought patterns and brainwaves, the students were assigned the task of researching and writing various essays over a period of four months.
To write the essays, the students were divided into three groups. The first group was told to use a large language model like ChatGPT to write the essays for them. They interacted with the AI to help it craft each essay but let it do all the heavy lifting. The second group wrote their own essays but were allowed to use Google and other search engines to conduct research and find sources for citation. The final group went fully old school, writing their papers on their own and conducting only hands-on research, as one might do at a library. The EEG headsets monitored the brain activity of all three groups over time to see how the various technologies or writing methods affected them compared with their baselines.
The group that used Google for research showed a moderate amount of cognitive activity, while the group that worked alone without any technological help showed the highest. Perhaps not too surprisingly, the group that exclusively used ChatGPT-4 to write their papers demonstrated the least brainwave activity. In fact, cognitive function decreased in key areas of their brains over time. And incidentally, the ChatGPT group also had the weakest connection to their work: 83% of those students couldn’t recall key points from their essays, nor could any of them provide accurate quotes from their papers.
According to the authors of the MIT experiment, “In this study we demonstrate the pressing matter of a likely decrease in learning skills based on the results of our study. The use of LLMs [large language models] had a measurable impact on participants, and while the benefits were initially apparent, as we demonstrated over the course of four months, the LLM group’s participants performed worse than their counterparts in the brain-only group at all levels: neural, linguistic and scoring.”
Perhaps more concerning, the cognitive declines and reduced brainwave activity measured in the AI-only group lingered: even after participants stopped using ChatGPT, their brain activity remained sluggish. It turns out that once you start outsourcing your thinking, your brain doesn’t exactly leap at the chance to take back the wheel.
The report from the MIT experiment doesn’t suggest that people stop using AI; far from it. In fact, the ChatGPT group completed its tasks much more quickly than the other two. AI tools can absolutely help with efficiency, especially for time-consuming tasks like data entry or summarizing long documents. But the report does suggest that how we use those tools matters quite a bit, not just to ensure accuracy but to maintain cognitive health. That advice dovetails with another key study conducted by Stanford University and detailed in its newly released 2025 AI Index, which emphasizes the critical need to keep humans in the loop as a way to improve AI governance.
In government, the solution might be to create guidelines that encourage employees to think through a problem before turning to AI. That could be as simple as jotting down a rough draft before asking for a rewrite, or outlining ideas manually before letting an AI polish them up. And much like the cybersecurity playbooks and data governance frameworks already in place at most agencies, there might be room for a little cognitive hygiene as well, to make sure that humans stay in the loop and on the ball.
Because while AI can help us do more, faster, the MIT study warns that it probably shouldn’t come at the cost of forgetting how to do things altogether.
John Breeden II is an award-winning journalist and reviewer with over 20 years of experience covering technology. He is the CEO of the Tech Writers Bureau, a group that creates technological thought leadership content for organizations of all sizes. Twitter: @LabGuys