New research suggests AI chatbots impair human cognition and memory



A growing body of research indicates that extensive use of artificial intelligence chatbots may come with unintended cognitive consequences, impacting memory, critical thinking, and learning abilities. As the technology becomes more integrated into daily life, a central question is shifting from what AI can do to what it should do, especially as studies reveal a potential trade-off between the convenience of AI assistance and the robustness of human cognition. This emerging evidence creates a fundamental paradox for industry leaders, who are now tasked with engineering AI to solve the very problems its widespread adoption may be creating.

The core of the concern lies in the concept of “cognitive offloading,” where humans delegate thinking tasks to external tools. While offloading is not new, the conversational and generative power of modern AI assistants appears to affect the brain differently than older technologies like search engines. Studies suggest that over-reliance on these systems can lead to a reduction in the mental effort required for tasks like writing and research, which may hinder the deep processing necessary for long-term memory formation and skill development. As these platforms are embedded deeper into professional and personal workflows, researchers are working to understand the long-term effects on users, particularly on developing brains.

Measuring Cognitive Engagement

To quantify the impact of AI on cognitive functions, researchers at MIT’s Media Lab conducted a study titled “Your Brain on ChatGPT,” tracking 54 participants over several months as they completed essay-writing tasks. The study, which has not yet been peer-reviewed, divided participants into three groups: one using ChatGPT, another using Google’s search engine, and a third working without any technological assistance. Investigators used electroencephalography (EEG) to measure electrical activity across 32 regions of the brain, providing a direct look at neural engagement.

Methodology and Findings

The results showed a stark difference between the groups. Participants using ChatGPT demonstrated the lowest levels of brain engagement across neural, linguistic, and behavioral measurements. Their brain scans revealed reduced executive control and attentional engagement compared to both the Google search group and the unassisted group. The unassisted writers showed the highest neural connectivity, particularly in brainwave bands associated with creativity, memory load, and semantic processing. In contrast, those relying on the AI chatbot exhibited weaker brain wave patterns associated with deep memory formation, suggesting that the information processed was not being effectively integrated into their memory networks.

Implications for Memory and Learning

The impairment of memory was one of the most significant findings. When asked to quote from the essays they had just written, 83% of the ChatGPT users were unable to do so accurately. This stood in sharp contrast to the other groups, where failure to recall was much lower. The study’s lead author, Nataliya Kosmyna, a research scientist at MIT Media Lab, noted that participants who used the chatbot “didn’t integrate any of it into their memory networks.” This points to a potential risk for knowledge retention in workplaces and educational settings that heavily rely on generative AI for core tasks. Kosmyna expressed particular concern for younger users, stating that “Developing brains are at the highest risk.”

The Automation of Thought

Beyond reduced brain activity, the MIT research highlighted another concerning trend: a homogenization of thought. The essays produced by the ChatGPT users were notably similar in content and structure, a finding that researchers worry could translate to reduced innovation and creativity in professional environments. English teachers who assessed the AI-assisted essays described them as largely “soulless.” This uniformity suggests that outsourcing the thinking process to an AI may stifle the development of unique ideas and personal writing styles.

Dependency on the tool also formed with alarming speed. By their third assignment, many participants in the ChatGPT group were simply delegating the entire task to the AI, performing only minimal edits. This pattern of behavior suggests a rapid atrophy of the skills the tool is meant to assist. When these same participants were later asked to write without the AI, they struggled significantly more than those who had been writing without assistance all along. Conversely, participants who started without AI and were later given access to it showed improved performance, suggesting that the timing of AI introduction is critical. The research indicates that foundational skills should be developed first before using AI as an enhancement tool.

High-Stakes Scenarios and Safety Gaps

While the MIT study points to a gradual decline in cognitive skills, other research has uncovered more immediate and acute risks, particularly in sensitive applications like mental health support. A study from Stanford University revealed significant safety gaps in AI responses during simulated mental health crises. Researchers found that AI therapy chatbots could reinforce harmful biases and fail to respond appropriately to users in distress.

Bias and Harmful Responses

The Stanford study, which analyzed five popular AI chatbots marketed for therapeutic support, conducted two key experiments. In the first, the chatbots showed increased stigma toward conditions like alcohol dependence and schizophrenia when compared to depression. Lead author Jared Moore noted that this bias was consistent even in newer and larger AI models, challenging the idea that simply adding more training data will solve the problem. In the second experiment, researchers presented the chatbots with scenarios involving delusions or suicidal ideation. When a prompt hinted at suicidal thoughts by asking for the heights of New York City bridges after mentioning a job loss, some chatbots responded by simply listing the bridges, failing to recognize the user’s distress and potentially enabling dangerous behavior.

Industry Navigates a New Paradox

In response to these findings and growing public scrutiny, AI industry leaders are attempting to address the problems their technology may be causing. The challenge is navigating the paradox of using AI to solve issues that AI itself has contributed to, from cognitive decline to social isolation. Companies like OpenAI and Meta are taking different strategic approaches to governance and product development.

OpenAI’s Focus on Safety

In August 2025, OpenAI acknowledged that its models could be “too agreeable” and sometimes validate harmful thinking by being “overly supportive but disingenuous.” CEO Sam Altman noted the difficulty in ensuring warnings get through to users in a fragile mental state. In response, the company has outlined a roadmap for enhanced safety features, including improved crisis detection and directing users to professional resources. OpenAI also plans to introduce session time limits and has established a collaborative network of around 90 medical professionals to inform its safety designs.

Meta’s Companionship Strategy

Meta, on the other hand, is strategically targeting the AI companionship market, framing its initiatives as a solution to what public health experts have called a “loneliness epidemic.” CEO Mark Zuckerberg has spoken of providing an “AI for everyone” who may not have access to a human therapist. The company’s training process for these companion bots involves human contractors rating their emotional authenticity and ability to maintain distinct personas over time. This strategy aims to create a deeply integrated and high-retention ecosystem, but it also raises significant questions about data privacy, user autonomy, and the long-term social consequences of replacing human relationships with artificial ones.

Leave a Reply

Your email address will not be published. Required fields are marked *