
AI and the cognitive misers: The productivity problem caused by the rise of convenience


22.12.2025

Is desktop AI ruining our brains?

Possibly the most concerning part of the rise of AI is exactly one of the things that makes it so popular… its widespread, free availability. Most people can identify someone in their life who relies on it perhaps a little too much, be it for writing emails or even generating talking points in meetings. However, as with most quick fixes, the short-term benefit is threatening to be overshadowed by the medium- and long-term problems it can cause, most notably the rise of the ‘cognitive misers’ and the ‘cognitive sloths.’

What are cognitive misers and cognitive sloths?

Cognitive misers, a term coined by Susan Fiske and Shelley Taylor, are people who tend to conserve mental energy by relying on shortcuts (known as heuristics) and simplified strategies rather than more analytical ones, favouring efficiency over accuracy.

Cognitive sloth is a term we can use to describe the persistent tendency of some individuals to avoid effortful thinking altogether, even when a situation clearly demands care, reflection or analysis. The instinct to default to the easy, habitual or superficial response, the path of least resistance, can be problematic when it comes to dealing with more complex problems and issues.

Fiske and Taylor argue that the social world is overwhelmingly complex and human cognitive capacity is limited, so people economise effort by using these simplified strategies. Instead of carefully analysing all available information, people default to quick, efficient processing that favours ease.

The slower mode, by contrast, involves individuated, data-rich analysis of people and situations, and is used only when stakes, motivation or accountability are high. This dual-mode view shows that cognitive sloth is conditional: people shift to more effortful processing when sufficiently motivated or when shortcuts clearly fail.

So ultimately, is AI creating more of these misers and sloths?

Freely available AI might on the surface sound like a good thing for tackling productivity problems, allowing workers to expedite tasks and achieve more in the same amount of time. However, this is only part of the story. While individual tasks may feel faster, organisational learning, judgement and clever human thinking are eroding. The short-term gain of a rapidly completed task might be its own reward in terms of efficiency, but it can leave workers without the deeper understanding of how a particular activity actually works. It’s something of a workplace version of a ‘sugar fix’: pleasant in the short term but leading to a crash later, and organisations as a whole risk being dazzled by short-term gains at the expense of longer-term capability.

There is also the issue of accuracy. While AI systems grow more capable all the time, they remain liable to hallucinate or simply get things wrong. This can be problematic across all sectors and industries and could in theory have disastrous consequences if left unsupervised.

AI use isn’t going anywhere, but how we monitor it is going to be key. So how do we do it?

As we step ever further into an AI-augmented world, organisations must redesign the human checkpoints that act as safeguards. Most businesses have invested in AI technology over the past few years, so there’s no doubt that it’s here to stay, but they need to be explicit about where the ‘human in the loop’ comes in.

Many companies explicitly ban the use of certain AI platforms such as ChatGPT, especially where sensitive information and data are part of everyday work, since such platforms can store and use input data to help build out their intelligence.

But it’s not all just about putting safeguards in.

A new report, “AI skills for the UK workforce,” from Skills England finds that AI is transforming jobs across sectors, but many employers cannot keep pace with the skills needed to use it effectively.

When the productivity benefits are so tangible and extensive, proper integration is more important than ever.

Clearly there is a need to manage the behavioural element of AI usage in the workplace. This is where organisations can step in and map out the reality of the situation: who is a detractor, who is a supporter, and who presents a risk profile. Identifying these key traits is important in managing overall AI use.

For example, the Zoomer generation (born roughly from the late 90s to the early 2010s) are digitally native and the most likely to utilise emerging technologies like AI. They are also a high-risk profile. Their heavy use of AI makes them especially vulnerable to classic cognitive miser tendencies. Their comfort with delegating tasks to systems means AI often becomes the default first step for thinking, rather than a support for thinking already underway. When a tool is always available, fast and frictionless, the temptation is to outsource effortful reasoning, drafting or problem-solving instead of engaging deeply with the material.

If AI is always there to summarise, explain and generate, there is less perceived need to memorise or to build internal knowledge and capacity. This encourages what’s known as surface familiarity: people can navigate tasks and assignments effectively without truly owning the underlying knowledge. Over time, this can erode confidence in their unaided abilities, reinforcing a loop where AI is consulted for even simple tasks that they could manage alone. This in turn brings risk, because if the AI makes a mistake, they won’t be cognitively equipped to spot it.

Hence, we need to step in and identify these people, and crucially that is where organisations can pinpoint the training opportunity. This provides a chance to strengthen the ‘human in the loop’ element and to embed checkpoints as standard operating procedure. That helps prevent the loss of analytical thought, because human involvement remains in place.

There is another training opportunity too: using prompts effectively, and teaching people to spot the bias that can be inherent in AI systems. An AI literacy programme could work as follows:

Structured AI literacy programme

Technical: learn the fundamental systems themselves, and the differences between them. The starting block for any AI learning journey.

Prompt: how to write prompts, and understanding the ways to get the best out of AI systems.

Reflection: possibly the most important part – the human element. This deals with detecting bias, stress-testing outputs, looking for hallucinations, and knowing when to turn the tool off completely and use the human brain.

How much of each segment of the training each stakeholder requires will depend on their experience.

AI has immense power in the journey towards solving productivity problems. Ensuring we use it, and it doesn’t use us, is critical in shaping that journey.

Luciana Rousseau leads the development of human-centred strategies that connect behavioural research with organisational transformation. With deep expertise in the psychology of work, Luciana helps leaders understand the motivations, behaviours, and cultural dynamics that shape performance. Get in touch with her at Luciana.rousseau@morson.com

Let’s talk about how we can help you shape smarter, more inclusive ways to attract and retain specialist talent. Because at the sharp end, there’s no time to stand still.
