The Slow Death of Thinking

Written by: Faryal Batool
INTRODUCTION
I watched the sitcom The Office years ago. In one episode, the protagonist, Michael Scott, drives his car into a lake because his GPS tells him to. His colleague yells at him to stop, but Michael insists: “The machine knows something we don’t.” We laughed because it was absurd, an exaggeration played for laughs.
However, maybe it’s not that funny anymore. Fast-forward to today: a man travels to Chile without a visa because ChatGPT told him Australians don’t need one. He believed it. Why? Same reason: “because the machine knows something we don’t.”
We can still laugh and call it stupidity, but sadly it isn’t an isolated mistake. It’s part of a growing pattern.
We are gradually outsourcing basic cognitive functions to artificial systems without even noticing. The cost? Mental laziness, eroded self-reliance, and diminished critical thinking.
Let’s unpack what’s actually happening and how it might affect our futures.
THE HIDDEN COST OF CONVENIENCE
AI is useful, which makes its side effects easy to overlook. But efficiency aside, it brings risks that touch everything from how we think to how much energy we consume. Here are some of the downsides worth paying attention to:
- AI Hallucinations: AI can produce false or fabricated information with total confidence. The problem is not just that AI makes errors, but that those errors look convincing, so users are less likely to double-check.
- Algorithmic Complacency: Instead of questioning, users accept what they are given. This reduces scrutiny and encourages passivity in thinking.
- Echo Chambers: Personalized algorithms and AI summaries reinforce biases, narrowing exposure to diverse perspectives and limiting critical thought.
- AI as Fact-Checker: Many now rely on AI to verify truth, even though we know these models are prone to hallucinations. Using a fallible system as an authority on accuracy is risky.
- AI-Induced Dependency: Some users have developed emotional reliance on chatbots, treating them as companions or advisors. This can distort judgment and blur boundaries between tool and authority.
- Environmental Burden: Training and running large AI models require massive computing power. A single AI query can use up to ten times more energy than a standard Google search, and data centers consume enormous amounts of water for cooling. These hidden costs rarely enter everyday conversations but represent a significant impact.
However, one of the most damaging side effects of AI, second only to its environmental footprint, is its role in eroding our ability to think. The very act that defines us as humans is being dulled.
THINKING BECOMES OPTIONAL WITH AI
Why use my fingers to count when a calculator answers instantly? Why memorize phone numbers when a smartphone does it for me? Why remember a shopping list when I can jot it in an app? These are all forms of cognitive offloading, and on their own, they’re not harmful.
The problem is that with AI, offloading goes deeper. It doesn’t just store facts or crunch numbers; it drafts arguments, makes decisions, and generates strategies. In other words, it doesn’t support critical thinking; it starts replacing it.
A recent study published by Dr. Michael Gerlich found that frequent cognitive offloading to AI tools is significantly linked to reduced critical thinking. Participants who relied heavily on AI for decision-making showed lower performance in tasks requiring independent analysis and judgment.
Similarly, a study highlighted in Psychreg reported that 58.1% of users admitted greater dependence on AI for decision-making. One of the researchers, Dr. Sham Singh, warns that this reliance fragments attention, weakens memory, and erodes problem-solving skills over time.
To drive the point home, MIT’s recent “Your Brain on ChatGPT” study makes the risk even clearer. It tested three groups: one wrote essays with no tools, another used search engines, and the third used ChatGPT. The results: the AI group showed the lowest brain activity, the least originality, and the poorest recall of their own work.
In short, we are making thinking itself optional, and once a habit of offloading forms, recovery is slow. The result is a long-term decline in reasoning, creativity, and independent judgment. Short-term gains, long-term atrophy. That’s cognitive debt.
STAYING SHARP IN THE AGE OF AI
Let’s face it, AI is not going anywhere, but professionals and organizations can use it without giving up their own capacity to think.
- Use AI as a draft, not a decision: Start with your own outline or idea, then let AI refine. Don’t let it replace the act of generating original thoughts.
- Build checks into the process: Require verification against reliable sources. Encourage second looks, especially on factual or high-stakes content.
- Separate tasks: Don’t make a single AI system responsible for everything: writing, researching, and validating. Dividing tasks between people and tools keeps humans engaged and preserves collaboration.
- Alternate AI and non-AI work: Rotate between tasks done with and without AI. Like physical exercise, mental strength requires practice.
- Train awareness: Teach employees how AI works, its limitations, and where it fails. Awareness itself reduces blind trust.
- Reward critical effort: Value original thinking and questioning, not just clean, AI-polished outputs. Incentivize employees to think first, check second.
USE THE TOOL, DON’T BECOME IT
When Black Mirror first came out, it felt like dystopian fiction. Today, many of those scenarios read less like fantasy and more like case studies.
AI can process data and automate tasks, but it cannot replace human judgment. Handing over too much thinking to machines erodes the very skills that make us effective: critical thinking, creativity, and independence.
The answer isn’t to reject AI but to draw the line wisely: let it handle the repetitive work, while you safeguard the decisions that require judgment and originality. That balance is the only way to keep your edge in a world where it’s easier than ever to stop thinking.