I know ChatGPT can sometimes feel like a superpower. But a new study from Carnegie Mellon and Microsoft might rain on our parade.
It asks a blunt question: are we letting these tools do too much of our thinking? Are we outsourcing our brains?
I'm talking about critical thinking – that ability to really dig into information, analyze it, and make smart decisions.
This study surveyed 319 knowledge workers and looked at almost a thousand real examples of how they use generative AI at work. And what they found is pretty darn interesting, and maybe a little unsettling.
Here's How We're Actually Using AI (and Where We're Slipping)
The researchers found that we're using critical thinking in three main ways when we interact with AI:
Setting the Goal and Prompt: We're thinking about what we want the AI to do. We're crafting those prompts. Instead of just saying, "Write an email," we're (hopefully) thinking, "Write an email to this audience, about this topic, with this tone." That's step one.
Checking the Output: This is where things get crucial. We should be evaluating what the AI spits out. Is it accurate? Does it make sense? Does it fit our needs? This often means fact-checking against other sources. We can't just blindly trust it.
Integrating the Output: We're not (or shouldn't be) just copy-pasting. We're taking the AI's work, tweaking it, rewriting parts, adding our own insights, and making it fit into the bigger picture. We're shaping the information.
The Confidence Trap: This is Where It Gets Tricky
Here's the kicker. The study found that confidence plays a HUGE role, but not in the way you might expect. They looked at two kinds of confidence:
Confidence in the AI: If you really trust the AI, you're less likely to think critically. You assume it's right. You don't question it. That's a problem.
Confidence in Yourself: If you're confident in your own abilities, you're more likely to challenge the AI. You see its output as a starting point, not the final word. You're the boss, not the AI.
See the difference? It's a trap! The better the AI gets, the more we're tempted to just let it do its thing. And that's where we risk losing our edge. We get lazy. We stop remembering things for ourselves (hello, "digital amnesia").
We're Shifting Where We Think, Not Eliminating Thinking (Yet)
The study also shows that AI is changing where we apply our critical thinking. We're spending less time on:
Finding Information: AI is a search engine on steroids.
Initial Problem Solving: AI can brainstorm solutions.
Doing the Task: AI can write, code, analyze – you name it.
But we're spending more time on:
Checking Facts: We have to verify what the AI tells us.
Making it Fit: We need to adapt the AI's output to our specific needs.
Being the Boss: We're managing the AI, not the other way around.
So, we're not not thinking. We're just thinking differently. The danger is that we might be doing less of the deep, challenging thinking that really matters.
The "Ironies of Automation" and Why Everything Starts to Sound the Same
In 1983, Lisanne Bainbridge warned in Ironies of Automation that automation could make operators’ jobs harder by removing opportunities to practice essential skills.
Forty years later, generative AI might be proving her right.
Her argument was simple: the more we automate, the less we practice those skills ourselves. And if we're not practicing, we get rusty. We lose the ability to do those things well.
And then there's "Mechanised Convergence." Think about it: if everyone's using the same AI to write, won't everything start to sound the same? Where's the originality? Where's the creativity?
What This Means for Students (and Everyone Else)
This is especially important for students.
If you're just using ChatGPT to write your essays, you're cheating yourself.
You're not learning. You're not developing those critical thinking muscles. Educators need to figure out how to use AI in a way that helps students learn, not just get the answers faster.
And for the rest of us, in the workplace, we need to be aware of this too. We need to train ourselves (and our teams) to question the AI, to challenge it, to use it as a tool, not a crutch.
The Bottom Line: Don't Outsource Your Brain
This study is a wake-up call. Generative AI is powerful. It can make us more productive. But it can also make us less… well, us. It can erode our ability to think for ourselves.
The key is to be mindful. To be intentional. To use AI to enhance our thinking, not replace it. We need to treat generative AI as a tool that helps us reach information, not as the provider of it. Don't just copy and paste. Question everything. Be critical. Don't let ChatGPT steal your ability to think. Your brain is your most valuable asset. Don't let it get lazy.
Hoff
xx