Brain Scans of ChatGPT Users Show Alarming Cognitive Impact, MIT Study Finds
New research from MIT has revealed concerning insights into the brains of individuals using ChatGPT, adding significant weight to the growing debate about the potential negative effects of AI on human cognition. The study, which scanned the brain activity of users engaged in writing tasks, suggests that relying on large language models (LLMs) like ChatGPT may diminish critical thinking, memory integration, and overall neural engagement.
The findings, currently awaiting peer review but released early by the researchers because of their urgency, indicate that ChatGPT users consistently underperformed compared to those using traditional methods or even search engines. This underperformance showed up not only in the quality of their written work but also, more strikingly, at the neural level.
The MIT Study: A Deep Dive into AI’s Cognitive Cost
Conducted by researchers at the MIT Media Lab, including lead author Nataliya Kosmyna, the study involved 54 adults aged 18-39 over a four-month period. Participants were divided into three groups: one used ChatGPT to assist with writing essays, another used Google Search, and a control group completed the tasks without any external tools (dubbed the “brain-only” group).
Using electroencephalography (EEG) machines, researchers monitored participants’ brain activity across 32 regions while they wrote essays. They also assessed linguistic and behavioral aspects of their work.
The results for the ChatGPT group were stark. These users consistently showed the lowest brain engagement of the three groups: EEG data revealed weaker neural connectivity and under-engagement of the alpha and beta bands, brain-wave frequencies associated with executive control, attention, and focus.
Beyond neural metrics, the ChatGPT users also underperformed at linguistic and behavioral levels. Over the course of the study, they appeared to become “lazier,” increasingly resorting to simple copy-and-paste methods. English teachers reviewing the essays noted that the work produced by the ChatGPT group was remarkably similar, often described as lacking originality or being “soulless.”
Bypassing Deep Thinking and Memory Integration
A particularly troubling finding emerged when participants were later asked to rewrite essays without their original tools. The ChatGPT group struggled significantly to recall details of their own work. Their brain scans showed weaker alpha and theta waves during this task, suggesting that their initial reliance on the AI had bypassed the deep memory processes required for integrating information into long-term recall. The convenience offered by ChatGPT, researchers argue, came at the “cognitive cost” of skipping the critical evaluation needed for true learning and memory formation.
In contrast, the “brain-only” group exhibited the strongest and most distributed neural networks throughout the study. They showed high neural connectivity, particularly in brain wave bands linked to creativity, memory load, and semantic processing. This group also reported greater satisfaction and a stronger sense of ownership over their work, appearing more engaged and curious. The Google Search group showed moderate but active brain function, performing better than the ChatGPT group in both cognitive metrics and the thoughtfulness of their content.
Interestingly, when the “brain-only” group was later allowed to use ChatGPT for rewriting, they showed a significant increase in brain connectivity, suggesting that AI could potentially enhance learning if used appropriately after foundational understanding and critical thinking skills are developed.
Echoes of Prior Warnings: Addiction, Atrophy, and Mental Health
These new findings align with and amplify previous concerns raised about the impact of heavy AI chatbot use. Earlier MIT research had pointed to ChatGPT “power users” developing dependency, showing “indicators of addiction” and experiencing “withdrawal symptoms” when cut off from the tool.
A joint study by Carnegie Mellon and Microsoft also suggested that extensive chatbot use could lead to an atrophy of critical thinking skills. News analyses and reports, including one from a Wall Street Journal reporter describing the erosion of their own cognitive skills, have highlighted growing worry among researchers that AI technology may be making users less intellectually capable.
The concerns extend beyond cognitive decline to serious mental health impacts. Reports indicate some users have developed obsessions and paranoid delusions, sometimes exacerbated by the chatbot itself. Alarmingly, there have been instances of users reportedly stopping prescribed psychiatric medication based on chatbot advice. OpenAI has stated they are aware of users relying on the technology in sensitive moments and have implemented safeguards to reduce the reinforcement of harmful ideas.
Urgent Concerns for Developing Minds
Lead author Nataliya Kosmyna emphasized the particular danger this technology poses to young people whose brains are still developing. She expressed significant concern about the rapid integration of LLMs into education, arguing that initiatives like a hypothetical “GPT kindergarten” would be “absolutely bad and detrimental.” Psychiatrists working with young people echoed these fears, noting that overreliance on LLMs seems to weaken neural connections vital for accessing information, memory, and resilience.
Despite these alarming indicators and expert warnings, the push by corporations to weave AI into nearly every aspect of society shows little sign of slowing down. While acknowledging that LLMs can serve as powerful assistants, researchers caution against using them as “cognitive crutches” and stress the urgent need for education on how to use these tools responsibly while prioritizing traditional methods crucial for brain development. Preliminary results from follow-up studies on AI use in fields like software engineering are reportedly showing “even worse” effects on foundational skill development.