
How Generative AI Undermines Our Critical Thinking
Humans, by nature, like things to be easy. When something feels hard, it requires more effort, more focus, more motivation. And quite often, that is where enthusiasm fades. Think about the everyday tasks that quietly drain people at work: generating reports, collating data, reading through endless documents, updating systems manually. These are the moments when most people start looking for shortcuts.
Of course, some of us enjoy complexity. We like wrestling with hard problems. But in reality, most people will instinctively look for the path of least resistance.
Over time, we have become deeply reliant on technology. That reliance has grown exponentially since the first true smartphone entered our lives, nudging the world into a state of quiet dependency. Have you ever lost your phone for a day, or sent it in for repairs and suddenly had no access? The panic sets in. It feels strange, almost unnatural. The same happens when a laptop breaks. We feel stuck, exposed, unproductive. Unless everything lives in the cloud, we are effectively stranded.
And we do not just use technology occasionally. We live in it. Habitually, many of us check emails, social media, games and messages between 140 and 188 times a day (ConsumerAffairs.com, “Cell Phone Statistics 2026”).
We are highly dependent on technology. And we use it constantly, both at work and in our personal lives.
Enter generative AI. “The ultimate difficulty defuser.”
The Problem: Convenience Creating Complacency
Machine learning and generative AI have removed many of the traditional barriers that once separated experienced professionals from everyone else. Tasks that required years of training or deep domain knowledge can now be completed in minutes.
Generative AI gives us exactly what we crave. Convenience. Less effort. More creativity. Deeper analysis. Faster research. Even a sense of mentorship.
On the surface, this feels like progress, and in many ways, it is. But there is a catch, and we usually only notice it later.
Over the past two years, generative AI has moved rapidly into the mainstream. According to the AI Economy Institute’s AI Diffusion Report, Global AI Adoption in 2025: A Widening Digital Divide, one in six people worldwide is now using generative AI tools. That single statistic already tells us two important things.
First, more people are producing content faster, analysing data more quickly, and researching more efficiently than ever before.
Second, more people are assuming that what they are reading is factually correct.

Globally, around 16.3 percent of the world’s population is now using generative AI. With an estimated global population of 8.3 billion people, that translates to roughly 1.35 billion users. To put this into perspective, ChatGPT launched on 30 November 2022. In just three years, generative AI has reached a level of adoption that took smartphones far longer to achieve. The first iPhone appeared in 2007, and by 2010 smartphone penetration sat at around 8.7 percent, roughly 600 million users at the time.
This rate of adoption should make us pause.
When ease and convenience increase, vigilance tends to decrease. We relax. We trust. We assume everything is fine.
“I gave ChatGPT all the information. The report looks solid, well written and professional. My boss will be impressed.”
This is the trick. And it is not AI playing it on you. It is you playing it on yourself, with AI quietly reinforcing a false sense of confidence, and ultimately, hubris.
We have already seen the business impact of this. Deloitte Australia had to partially refund the Australian government after factual inaccuracies were identified in a report that relied on AI-generated content. The damage was not just financial. The reputational cost was far greater (Fortune.com, October 2025, “Deloitte AI Australian Government refund”).
This brings us to the psychology of it all. Many articles ask whether generative AI will make us less intelligent over time. It is an uncomfortable but important question. When access to research and analysis becomes effortless, we stop doing the hard thinking ourselves. We stop interrogating sources. We stop questioning conclusions. Creativity and critical thinking slowly erode.
This effect is even more pronounced in education. When students rely on AI to do the thinking, it reshapes how the mind develops and how analytical skills are formed.
Have you noticed how agreeable generative AI is? It compliments your ideas. It reassures you. It praises you when you catch an error. This is not accidental. It is part of the design.
What happens next is predictable. Endorphins are released because a difficult task suddenly feels easy. Confidence rises because the system affirms our thinking. Reliance increases because we enjoy how it makes us feel. And seeing is no longer believing, especially with AI-generated content, deep fakes and propaganda becoming increasingly convincing.
Why It Matters
AI models are only as good as the prompts they receive, the competence of the user, and the quality of the data they are trained on. They are exposed to an overwhelming volume of content, often contradictory, biased or simply wrong. In trying to be helpful and agreeable, they can hallucinate answers with alarming confidence.
Humans will always gravitate towards the path of least resistance. But blindly trusting outputs because they look professional is a dangerous habit.
Context, references and validation still matter. Especially when facts and decisions are involved.
When we stop checking, we risk spreading misinformation, damaging our credibility and reinforcing hubris.
I have said this before: AI is the sous-chef. We are still the chef. The responsibility remains ours. Whether it is a business report, social media content or a polished presentation, we must own the outcome.
AI is already being used as an attack vector by hackers. As our behaviour changes and our reliance grows, our vulnerability increases.
If we think about it, this diminishes our defences against cyber exploitation. From a cyberpsychology perspective, AI can, over time, erode our guardrails, not only through content deception such as deepfakes, but through relationship engineering and believable conversations. Unless we prepare for it, these prime our thinking and subtly steer our decisions while we feel fully in control.
Conclusion
The rise of generative AI is not inherently the problem. The real issue lies in how quickly humans adapt to ease. When systems become more capable, we tend to become less attentive. We assume accuracy. We assume insight. We assume that the work presented back to us carries the same rigour we once had to apply ourselves. That shift in behaviour, subtle at first, becomes significant over time.
When our guard drops, vulnerabilities rise. Not just digital ones, but cognitive and behavioural ones too. Over-reliance on automated thinking dulls our instincts and reshapes how we evaluate information. It changes how we process risk. It changes how easily we can be manipulated. And in the context of cyberpsychology, that creates an environment where attackers can exploit not just systems, but people.
AI can support us, accelerate us and enhance our capabilities, but it cannot replace our responsibility to think critically. Tools cannot care about accuracy. Tools cannot anticipate consequences. Tools cannot carry blame. That remains ours.
If we want to benefit from AI without being blindsided by it, we need to stay alert. We need to question the outputs, challenge the assumptions, and resist the temptation to treat convenience as competence. That is the safeguard. That is the work. And that is how we ensure that AI becomes something we use intentionally, not something that quietly uses us.
Stay mindful. Stay curious. Stay resilient. And trust less by default, not out of fear, but out of discipline.
By Philippe Morin | Head of Advisory and Commercial at Cyber Dexterity