Saturday, July 19, 2025

How AI changes the way you think

 

[Editor’s Note: Over the last decade, business owners and leaders have routinely commented about the lack of critical thinking skills in young people entering the workforce. This article from The Economist examines the impact of AI on critical thinking. -dpm]

How AI changes the way you think (The Economist, July 19, 2025) 

AS ANYBODY WHO has ever taken a standardised test will know, racing to answer an expansive essay question in 20 minutes or less takes serious brain power. Having unfettered access to artificial intelligence (AI) would certainly lighten the mental load. But as a recent study by researchers at the Massachusetts Institute of Technology (MIT) suggests, that help may come at a cost.

Over the course of a series of essay-writing sessions, students working with (as well as without) ChatGPT were hooked up to electroencephalograms (EEGs) to measure their brain activity as they toiled. Across the board, the AI users exhibited markedly lower neural activity in parts of the brain associated with creative functions and attention. Students who wrote with the chatbot’s help also found it much harder to provide an accurate quote from the paper that they had just produced.

The findings are part of a growing body of work on the potentially detrimental effects of AI use for creativity and learning. This research points to important questions about whether the impressive short-term gains afforded by generative AI may incur a hidden long-term debt.

The MIT study augments the findings of two other high-profile studies on the relationship between AI use and critical thinking. The first, by researchers at Microsoft Research, surveyed 319 knowledge workers who used generative AI at least once a week. The respondents described undertaking more than 900 tasks, from summarising lengthy documents to designing a marketing campaign, with the help of AI. According to participants’ self-assessments, only 555 of these tasks required critical thinking, such as having to review an AI output closely before passing it to a client, or revising a prompt after the AI generated an inadequate result on the first go. The rest of the tasks were deemed essentially mindless. Overall, a majority of workers reported needing either less or much less cognitive effort to complete tasks with generative-AI tools such as ChatGPT, Google Gemini or Microsoft’s own Copilot AI assistant, compared with doing those tasks without AI.

Another study, by Michael Gerlich, a professor at SBS Swiss Business School, asked 666 individuals in Britain how often they used AI and how much they trusted it, before posing them questions based on a widely used critical-thinking assessment. Participants who made more use of AI scored lower across the board. Dr Gerlich says that after the study was published he was contacted by hundreds of high-school and university teachers dealing with growing AI adoption among their students who, he says, “felt that it addresses exactly what they currently experience”.

Whether AI will leave people’s brains flabby and weak in the long term remains an open question. Researchers for all three studies have stressed that further work is needed to establish a definitive causal link between elevated AI use and weakened brains. In Dr Gerlich’s study, for example, it is possible that people with greater critical-thinking prowess are just less likely to lean on AI. The MIT study, meanwhile, had a tiny sample size (54 participants in all) and focused on a single narrow task.

Moreover, generative-AI tools explicitly seek to lighten people’s mental loads, as many other technologies do. As long ago as the 5th century BC, Socrates was quoted as grumbling that writing is not “a potion for remembering, but for reminding”. Calculators spare cashiers from computing a bill. Navigation apps remove the need for map-reading. And yet few would argue that people are less capable as a result.

There is little evidence to suggest that allowing machines to do users’ mental bidding alters the brain’s inherent capacity for thinking, says Evan Risko, a professor of psychology at the University of Waterloo who, along with a colleague, Sam Gilbert, coined the term “cognitive offloading” to describe how people shrug off difficult or tedious mental tasks to external aids.

The worry is that, as Dr Risko puts it, generative AI allows one to “offload a much more complex set of processes”. Offloading some mental arithmetic, which has only a narrow set of applications, is not the same as offloading a thought process like writing or problem-solving. And once the brain has developed a taste for offloading, it can be a hard habit to kick. The tendency to seek the least effortful way to solve a problem, known as “cognitive miserliness”, could create what Dr Gerlich describes as a feedback loop. As AI-reliant individuals find it harder to think critically, their brains may become more miserly, which will lead to further offloading. One participant in Dr Gerlich’s study, a heavy user of generative AI, lamented “I rely so much on AI that I don’t think I’d know how to solve certain problems without it.”

Many companies are looking forward to the possible productivity gains from greater adoption of AI. But there could be a sting in the tail. “Long-term critical-thinking decay would likely result in reduced competitiveness,” says Barbara Larson, a professor of management at Northeastern University. Prolonged AI use could also make employees less creative. In a study at the University of Toronto, 460 participants were instructed to propose imaginative uses for a series of everyday objects, such as a car tyre or a pair of trousers. Those who had been exposed to ideas generated by AI tended to produce answers deemed less creative and diverse than a control group who worked unaided.

When it came to the trousers, for instance, the chatbot proposed stuffing a pair with hay to make half of a scarecrow—in effect suggesting trousers be reused as trousers. An unaided participant, by contrast, proposed sticking nuts in the pockets to make a novelty bird feeder.

There are ways to keep the brain fit. Dr Larson suggests that the smartest way to get ahead with AI is to limit its role to that of “an enthusiastic but somewhat naive assistant”. Dr Gerlich recommends that, rather than asking a chatbot to generate the final desired output, one should prompt it at each step on the path to the solution. Instead of asking it “Where should I go for a sunny holiday?”, for instance, one could start by asking where it rains the least, and proceed from there.

Members of the Microsoft team have also been testing AI assistants that interrupt users with “provocations” to prompt deeper thought. In a similar vein, a team from Emory and Stanford Universities have proposed rewiring chatbots to serve as “thinking assistants” that ask users probing questions, rather than simply providing answers. One imagines that Socrates might heartily approve.

Potential measures to keep people’s brains active  

Such strategies might not be all that useful in practice, however, even in the unlikely event that model-builders tweaked their interfaces to make chatbots clunkier, or slower. They could even come at a cost. A study by Abilene Christian University in Texas found that AI assistants which repeatedly jumped in with provocations degraded the performance of weaker coders on a simple programming task.

Other potential measures to keep people’s brains active are more straightforward, if also rather more bossy. Overeager users of generative AI could be required to come up with their own answer to a query, or simply wait a few minutes, before they’re allowed to access the AI. Such “cognitive forcing” may lead users to perform better, according to Zana Buçinca, a researcher at Microsoft who studies these techniques, but will be less popular. “People do not like to be pushed to engage,” she says. Demand for workarounds would therefore probably be high. In a demographically representative survey conducted in 16 countries by Oliver Wyman, a consultancy, 47% of respondents said they would use generative-AI tools even if their employer forbade it.

The technology is so young that, for many tasks, the human brain is still the sharpest tool in the toolkit. But in time both the consumers of AI and its regulators will have to assess whether its wider benefits outweigh any cognitive costs. If stronger evidence emerges that AI makes people less intelligent, will they care?

                                                            _________________________

What do you think?

Best regards,      

Dave Mead
