As Large Language Models (LLMs) powering artificial intelligence (AI) services increasingly take over a significant portion of human cognitive tasks, researchers are sounding the alarm about the profound cost of what they label “mental outsourcing.”
Dr. Nataliya Kosmyna, a researcher studying human-computer interaction at the Massachusetts Institute of Technology (MIT), first grew suspicious while sifting through intern applications. She noticed a striking similarity among many cover letters: they were lengthy, meticulously structured, and followed an identical arc, opening with an introduction before jumping abruptly to the applicant's connection with the target job. Kosmyna concluded that these applicants were crafting their applications with LLM-based AI chatbots such as ChatGPT, Google Gemini, and Claude.
This initial observation soon extended to her students at MIT, where Kosmyna began noticing a concerning trend: students appeared to forget recently taught material more easily than in previous years. This concern resonated with other researchers as well, fueling fears that an over-reliance on AI could impair language abilities and fundamental cognitive functions such as critical thinking, problem-solving, and concentration.
Moreover, a growing body of research indicates that this "cognitive load shifting" to AI takes a toll on mental faculties and can even contribute to a decline in overall cognitive function. This is because the tools we use inherently shape our thought processes. The internet, for instance, transformed how we approach research, reducing tasks that once required deep investigation to simple keyword searches.
With the increasing use of search engines, various studies have revealed a diminished tendency to recall detailed information—a phenomenon dubbed the “Google effect.” While some argue that the internet acts as an external memory system, effectively freeing the human brain for other simultaneous tasks, a different concern emerges. When a dominant portion of mental effort is delegated to LLMs or other forms of AI, there’s a significant risk of a sharp decline in memory retention and problem-solving capabilities.
Indeed, artificial intelligence can convincingly compose poetry, offer financial advice, and even provide companionship. Students, too, are increasingly entrusting their assignments to AI. However, this convenience comes with inherent dangers. Numerous studies indicate that young people are particularly vulnerable to the negative effects of AI use, especially on core cognitive skills like critical thinking. Watching this dependence grow, and seeing its apparent effect on her students' cognition, Kosmyna decided to investigate more deeply.
How do the research findings related to AI use stack up?
To investigate further, Kosmyna and her colleagues at the MIT Media Lab recruited 54 university students to write short essays. They were divided into three groups: one using ChatGPT, another allowed Google search but with AI summaries disabled, and a third group that used no technology at all. Brainwave activity was meticulously measured as each student completed the task. The essay topics for this research were deliberately open-ended, requiring minimal research and focusing instead on general themes like loyalty, happiness, or everyday life decisions.
Although the results have yet to be published in a peer-reviewed journal, Kosmyna says the study produced significant findings. The third group, relying solely on their own thinking without any technological assistance, displayed brains "lighting up" with widespread activity across many regions. The second group, using only a search engine, showed intense activity concentrated in the brain's visual areas. The group using ChatGPT, however, exhibited markedly lower brain activity: a striking 55% reduction.
“Their brains weren’t asleep, but there was a decrease in activity in areas associated with creativity and information processing,” Kosmyna stated. ChatGPT also impacted participants’ memory; after submitting their essays, members of the AI-using group were unable to quote parts of their own writing, with some lacking a sense of ownership over their work. Other studies further corroborate that people lose the ability to store and recall information when employing AI tools like ChatGPT.
While these findings are still undergoing peer review, they align with other emerging research. A study by experts from the University of Pennsylvania, for instance, found that some individuals experience a condition termed “cognitive surrender” when using AI chatbots. This means they tend to accept AI’s output with minimal scrutiny, allowing its interpretations to override their own intuition. Similar detrimental effects extend beyond AI chatbots, even into high-stakes scenarios. A multinational research team recently discovered that medical professionals who used AI for colorectal cancer screening over three months became less capable of detecting tumors without the tool’s assistance.
Kosmyna likewise warned of these risks, including the loss of the creativity that original work demands. This was starkly evident in the essays produced by students using ChatGPT in her study: they were remarkably similar to one another, and the evaluating lecturers described them as "soulless" for their lack of originality and depth. "One lecturer even questioned if the students were sitting next to each other, given the essays' uncanny resemblance," she recalled. While such studies illustrate the short-term impact of LLMs on the brain, the long-term consequences remain less clear, though Kosmyna and her colleagues' research offers an initial glimpse.
Four months after the initial study, the students were asked to write another essay. Crucially, those previously in the ChatGPT group now had to work without LLM support. Their neural connectivity proved lower than that of students who had first relied on their own thinking or on search engines before switching to ChatGPT, suggesting that the ChatGPT-first group had not adequately engaged with the topics during the earlier sessions.
AI can have a positive impact, provided…
LLMs can indeed be a positive tool for stimulating thought, provided people do not rely on them completely for mental tasks, according to computational neuroscientist Vivienne Ming. However, Ming, author of "Robot Proof," observes that most people who interact with the technology end up surrendering their mental and cognitive work to it.
She encountered this in her own research. Ming asked a group of students at the University of California, Berkeley to predict various outcomes, such as oil prices, and found that most participants immediately turned to AI and copied its answers. Meanwhile, she measured gamma wave activity in their brains, an indicator of cognitive effort, and recorded remarkably low levels. Like Kosmyna's findings, Ming's research has yet to be published, but she worries that further study could confirm concerning long-term implications.
In other studies, weak gamma wave activity has been linked to cognitive decline later in life. "That is very worrying. This tool is just a means, not to be used carelessly. It concerns the future of human intelligence," Ming emphasized. "If we don't train it, the long-term implications for cognitive health are immense. Deep thinking is our superpower. Cognitive effort is essential for a healthy brain."
Nonetheless, Ming observed that fewer than 10% of participants used AI only as a tool to gather data, which they then analyzed themselves. These individuals made more accurate predictions than the other participants and also exhibited greater brain activity.
Nearly two decades ago, Ming predicted a statistically significant increase in dementia rates within 20 to 30 years, directly linked to an over-reliance on Google Maps. “My intention was to spark critical analysis. If you don’t need to think about how to navigate, there will be a detectable effect,” she explained. Indeed, the increasing use of GPS has been associated with a decline in spatial memory over time, according to a three-year study involving 13 individuals. Furthermore, poor spatial navigation can be a potential contributing factor to Alzheimer’s disease, another study suggests.
Therefore, the more active the human brain, the greater its protection against cognitive decline. Ming added that the use of LLMs not only erodes creativity but also harms cognition and potentially increases the risk of dementia. As AI tools become more prevalent, humans must understand how to use them beneficially rather than detrimentally in the long run.
Ming suggests the ultimate goal might be a form of "hybrid intelligence," in which humans and machines "tackle difficult tasks" collaboratively. This implies that humans should first engage in independent thought and only then use the tool to test their ideas. Kosmyna concurs, advising students to study new subjects without AI tools at first, building a strong foundation before considering AI's integration.
Ming recommends using "nemesis instructions" to challenge one's own reasoning. This method involves asking the AI to explain in detail why our ideas are flawed and how to improve them, for instance by prompting, "Explain in detail why my argument fails and how to strengthen it." Through this process, we are compelled to defend and refine our arguments rather than simply accept the tool's answers. Another proposed technique is to prioritize "productive friction": asking the AI to provide only context and pose questions instead of directly offering answers. When testing this method, she observed users demonstrating higher levels of engagement and participation.
Ultimately, we all need to remain vigilant against cognitive shortcuts, something Kosmyna notes our brains "love very much." To ensure long-term brain health, we must keep sharpening our minds by continually challenging them to think and create.
Summary
Growing reliance on AI, especially Large Language Models (LLMs), may be weakening memory, creativity, and critical thinking. Early studies find markedly lower brain activity in people who hand their mental work to chatbots, and researchers advise thinking independently first, then using AI to test and sharpen those ideas.