AI use raises concerns over learning, critical thinking, studies find
As artificial intelligence tools such as ChatGPT become embedded in everyday study and work, researchers are increasingly questioning their impact on human thinking and learning.
While AI can boost efficiency and improve the quality of outputs, a growing body of research suggests that overreliance on these tools may weaken critical thinking, problem-solving, and long-term learning, Caliber.Az reports via foreign media.
Earlier this year, researchers at the Massachusetts Institute of Technology (MIT) published a study examining how AI affects cognitive engagement. Using electroencephalography (EEG) to monitor brain activity, they found that participants who used ChatGPT to write essays showed reduced activity in brain networks associated with cognitive processing. These users also struggled to recall or quote from their own essays compared with participants who completed the task without AI assistance. The researchers warned of a potential “decrease in learning skills” if such tools are used uncritically.
Similar concerns emerged from a joint study by Carnegie Mellon University and Microsoft, which surveyed 319 white-collar workers who regularly use AI tools. The findings showed that higher confidence in AI’s ability to complete tasks was linked to lower levels of critical thinking effort. While AI improved efficiency, researchers cautioned that it could lead to long-term overreliance and diminished independent problem-solving skills.
The issue is also evident in schools. A study published by Oxford University Press (OUP) found that six in ten UK schoolchildren believed AI had negatively affected their school-related skills. However, the picture is mixed. Dr Alexandra Tomescu, a generative AI specialist at OUP, noted that nine in ten students said AI had helped them develop at least one skill, such as creativity, revision, or problem-solving. At the same time, around a quarter admitted that AI made it “too easy” to complete their work.
Experts say the key challenge lies in how AI is used. Professor Wayne Holmes of University College London argues that there is still no large-scale independent evidence proving that AI tools are effective or safe in education. He warns of “cognitive atrophy”, drawing parallels with studies in medicine where AI assistance improved some clinicians’ performance while harming others. According to Holmes, students may achieve better grades with AI help, but at the cost of deeper understanding: “Their outputs are better, but actually their learning is worse.”
OpenAI, the company behind ChatGPT, acknowledges the debate. Jayna Devani, who leads international education at OpenAI, told the BBC that students should not use ChatGPT to outsource work. Instead, she said it should function as a tutor—breaking down questions, guiding understanding, and supporting learning when human help is unavailable.
Most experts agree on one point: AI is not simply a more advanced calculator. Its growing role in education and work demands careful use, greater transparency, and more research to ensure that it enhances human learning rather than undermining it.
By Vugar Khalilov