
Speed vs Thought: How Real-Time AI is Short-Circuiting Reflection 

The message flashes across your screen: “I need that proposal completed by close of business today.”

Your heart rate spikes. Another request: “We’re presenting to the board tomorrow. Have you analysed the reports and compiled the presentation?”

This is the reality for 83% of US workers who suffer from work-related stress, according to a study by the American Institute of Stress (AIS). In these moments, adrenaline and cortisol flood your system, and fight-or-flight kicks in. You either push yourself to rush to completion, or you freeze, procrastinating until the anxiety becomes unbearable.

We’ve always wished for that silver bullet. The magic solution that could make everything easier, faster, better. 

And now we have it: AI. 

When deadlines loom and pressure mounts, the need for speed creates an irresistible pull toward the first answer. We are, after all, wired to trust. With AI readily accessible on every device, information arrives at a pace that would have seemed impossible just a few years ago.

AI enables instant responses. Instant summaries. Instant opinions. 

Reflection? That becomes optional. A luxury we can’t afford. 

But surely the AI is correct, isn’t it? 

These instant responses give us hope. Hope that we can meet impossible deadlines. Hope that we’ll impress our peers, our bosses, our clients. So, we dive in headfirst. We read quickly, or not at all, copy-paste into Word, PowerPoint, Google Docs. Submit. Done. 

Except it’s not really our work, is it? 

Recent research tracking 1,372 participants across 9,593 decision trials reveals something startling: people now engage in what scientists call “cognitive surrender.” When participants consulted AI assistants, their accuracy rose by 25% when the AI was correct, but plummeted by 15% when the AI made errors (Shaw & Nave, 2026).

Daniel Kahneman introduced the two systems of cognitive processing and decision-making we traditionally recognise: System 1 is fast, emotional, automatic, and intuitive; System 2 is slow, logical, deliberate, and effortful.

We’ve created what researchers now term “System 3”: artificial cognition that operates externally to us but increasingly replaces our internal reasoning. This poses a fundamental question: when we rely on intelligent systems to analyse our behaviours and mental states, can we truly understand ourselves? Can AI replicate the complexity of subjective experience?

The unsettling answer is that we’re finding out in real-time, often without realising the experiment is happening. 

According to Drainpipe’s “The Reality of AI Hallucinations in 2025” report, the cost we’re not counting is that knowledge workers now spend an average of 4.3 hours per week fact-checking AI outputs (Roman & Gaskins, 2025). Four hours. Every week. Undoing the damage of blind trust. More alarming is that 47% of enterprise AI users admitted to making at least one major business decision based on hallucinated content in 2024. That’s nearly half. Making critical decisions. Based on fabrications. 

Urgency is a manipulation tool. It pushes us toward poor judgment we recognise only in hindsight. “I could’ve done it better.” “I should’ve checked those figures.” “I forgot something crucial.” 

Constant AI assistance doesn’t just reduce pause, doubt, and challenge. It eliminates them. We trust so completely that critical thinking becomes an afterthought—if it happens at all. 

Most AI models express zero uncertainty in their answers, even when they’re completely wrong, according to an article by Jenna Ross on Visual Capitalist from November 2025. Confidence and correctness have been decoupled. The system doesn’t say, “I’m not sure.” It states fiction as fact.

In my early days using generative AI, I was guilty of this. Completely, embarrassingly guilty. 

I was swept up in the magic of it, this incredible interface that understood my questions, provided articulate answers, and seemed utterly confident in its conclusions. It’s AI. Advanced technology. Trained on vast datasets. How could it be wrong? 

The revelation came slowly, then all at once. I needed to slow down. To remember that I am the expert. I am the chef in the kitchen, and AI is the sous-chef, not the master. 

There is a need now, an urgent, paradoxical need, to slow our thinking deliberately in an AI-accelerated world. 

Professors Riva and Ubiali warn, in their article on Neurosciencenews.com, “If we passively accept the solutions offered by AI, we might lose our ability to think autonomously and develop innovative ideas”.  

This is the critical insight: the trick isn’t in the content AI provides, it’s in the pace at which it arrives. 

Information that appears instantaneously bypasses “cognitive friction”, the very resistance that produces wisdom and makes us smarter, more critical thinkers.

When everything seems true, seems accurate, promises to eliminate stress and pressure, we need to do the hardest thing imaginable: 

Stop. Take stock. Think. Think critically, because if you never stop to think, you never question. And if you never question, you’ve surrendered the one thing that makes you irreplaceable: your judgment. 

The future of human-AI collaboration won’t be determined by how fast AI can generate answers. It will be determined by how wisely we choose when to slow down, when to question, and when to trust the expertise we’ve spent years developing. 

In an age of instant everything, our most radical act may be the decision to pause. To reflect. To think. 

This may all sound like doom and gloom about the pitfalls of generative AI in a world of pressure and stress.

Generative AI is an incredible tool when used correctly and for the right use cases. Try not to think of AI as a “silver bullet” to solve an immediate problem you’re faced with. Because it is so easily accessible, it is hugely tempting to use AI to alleviate that stress. Instead, break up how you prompt, rather than asking for all the answers in one go.

What do I mean by this? In your mind, plan what you need to do first. This sets the framework for your critical thinking and starts the process without diving in too fast. 

Provide AI with the context of what you want to achieve. Make it concise, clear, and control the tone of the output. Then ask for specific elements that enhance or bolster your task. 
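The staged approach above can be sketched in a few lines. This is a minimal illustration only: `ask_llm` is a hypothetical placeholder for whatever AI assistant or API you actually use, and the prompts are invented examples.

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical placeholder: swap in your real AI assistant call."""
    return f"[model response to: {prompt}]"

# Step 1: plan first (in your head), then set context and tone up front,
# instead of asking for the finished product in one go.
context = ask_llm(
    "I'm drafting a proposal for tomorrow's board meeting. "
    "Tone: concise and formal. Summarise the three decisions below."
)

# Step 2: ask for one specific element at a time, reviewing each
# response before moving to the next.
outline = ask_llm("Suggest a five-section outline for that proposal.")
sources = ask_llm("List sources I should verify for the market figures.")

# Step 3: you assemble, verify, and own the result. The AI bolsters
# the task; it does not replace your judgment.
draft = "\n\n".join([context, outline, sources])
```

The point of the staging is the pause between steps: each review is a moment of the cognitive friction the article argues for.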

Know what you need AI for and where it can enhance what you are doing rather than replace it.

This is a balance: you use AI for efficiency in tasks that would otherwise take a long time; research is a good example. Get citations, then validate the quotes and statistics. In doing so, you are still learning and creating cognitive friction. This is what will make you more effective in the long run.

Efficiency and efficacy should never be sacrificed for speed. Yes, there is a time-management aspect, but that also changes behaviour. Use the tools at your disposal to make you better at what you do, not to do it for you, often less well than you could.


Philippe Morin | Head of Advisory and Commercial
