Is AI making us dumber? The cognitive costs of generative tools
In a thought-provoking piece, The Economist delves into the burgeoning debate on how generative artificial intelligence (AI) tools like ChatGPT are reshaping human cognition—and not always for the better. While these AI assistants can dramatically ease the mental burden of complex tasks, emerging research suggests their use may dull creativity, weaken attention, and ultimately undermine critical thinking skills. The article confronts a growing paradox: the short-term gains of AI convenience might come at a steep long-term cognitive cost.
Central to the discussion is a recent study from the Massachusetts Institute of Technology (MIT), which monitored students’ brain activity via EEGs as they wrote essays, both with and without ChatGPT’s help. Strikingly, AI-assisted writing corresponded with markedly reduced neural activation in brain regions linked to creativity and focus. Furthermore, students relying on the chatbot struggled to accurately recall passages from their own AI-influenced essays. This points to a subtle but significant erosion of cognitive engagement.
These findings align with broader research trends. A Microsoft Research survey of 319 knowledge workers revealed that many tasks completed with generative AI required minimal critical thought—over a third were essentially “mindless” operations. Similarly, a UK study led by Michael Gerlich linked frequent AI use with lower critical-thinking scores among 666 participants. Teachers across the globe have echoed these concerns, observing that heavy AI dependence risks cultivating “cognitive miserliness”: a tendency to offload complex thought processes onto technology rather than exercising the brain’s full capacities.
However, the article cautions that the evidence remains preliminary and nuanced. The MIT study’s limited sample size and narrow task focus leave open questions about causality—do weaker critical thinkers lean more heavily on AI, or does AI use itself blunt cognitive abilities? Moreover, history shows that technological aids—from calculators to navigation apps—often free mental bandwidth rather than diminish it outright. Psychology experts like Evan Risko note that “cognitive offloading” is a longstanding human strategy, but AI’s capacity to supplant complex reasoning presents new challenges.
The article highlights potential consequences for creativity and productivity. At the University of Toronto, participants exposed to AI-generated ideas produced less diverse and imaginative solutions than those working unaided. The fear is that overreliance on AI could dull innovation and reduce competitiveness in the workforce. Barbara Larson of Northeastern University warns that "long-term critical-thinking decay" could erode essential skills over time.
Encouragingly, The Economist spotlights emerging strategies to mitigate these risks. Experts suggest using AI as a collaborative “assistant” rather than a full solution provider—breaking problems into incremental steps and prompting users to engage actively with each stage. Microsoft researchers are experimenting with AI that challenges users with “provocations” to stimulate deeper thought, while teams from Emory and Stanford advocate redesigning chatbots as “thinking assistants” that ask probing questions instead of handing over ready-made answers.
Nonetheless, practical challenges remain. Cognitive forcing measures—such as requiring users to formulate their own answers before accessing AI or introducing deliberate delays—may improve engagement but risk user frustration and noncompliance. Surveys indicate many users would circumvent restrictions, valuing convenience over cognitive rigor.
By Vugar Khalilov