Does AI Make Smart People Smarter - and Dumb People Dumber?
What happens when you give a thinking tool to people who’ve stopped thinking?
I was at a lecture the other day on the potential impacts of AI on human intelligence. The speaker worked through charts of reaction times, brain scans highlighting neural pathways, studies thick with statistical significance. Then, forty minutes in, he stopped mid-sentence and said: "…basically this all shows one thing - AI makes smart people smarter and dumb people dumber". He was clearly fishing for his viral moment - and it worked. The room erupted in uncomfortable laughter.
But that soundbite has stuck with me for days (damn him) - not just because it was catchy, but because I wondered whether it might actually be true. And if it is, what would it mean for us? What if this throwaway line wasn’t just a flippant summary - but a structural truth?
This may be one of the more common misunderstandings of the early AI era: the belief that AI makes you smarter simply by using it. But does it? What if it simply amplifies what’s already there? Which, depending on who you are, is either very good news or very bad.
This week’s article explores the question I’ve been chewing on ever since: does AI truly improve our cognitive abilities - and if so, what exactly is it improving? Does it make us more intelligent, or simply mirror what’s already there? Does it help us think more clearly, or merely reinforce our existing mental habits - good or bad?
The aim is to explore how this dynamic might play out in real-world settings, why it matters more than most realise, and what it could mean for society, leadership, and the very act of thinking in an AI-saturated world.
Is AI Flattening the Curve – or Making It Steeper?
Generative AI was sold to the public on the promise of democratising knowledge. A kind of silicon socialism for the intellect. No longer would insight be confined to experts - now anyone with a keyboard could summon the wisdom of the ages.
Except it hasn’t played out that way.
Like calculators in the hands of those who never learned math, Photoshop in the hands of the creatively challenged, or me with a microphone - tools don’t equal talent. They expose it. Or (in the case of my singing ability) its absence.
Recent studies suggest that users with stronger metacognitive skills and strategic reasoning extract substantially more value from LLMs. Prompting, far from being a syntactic trick, is increasingly recognised as a cognitive and reflective process (Knothe et al., 2024). And if your thinking is muddled, shallow, or in some people’s case - simply absent, AI will just echo that back to you. But faster and at scale.
Crucially, AI doesn't help you structure an argument if you don't already know what you're trying to say. It doesn’t resolve ambiguity - it magnifies it. It doesn’t enforce clarity - but it is great at polishing vagueness (or even utter rubbish) until it sounds convincing. The result is often a flood of plausible-sounding mediocrity, indistinguishable from insight (especially if you’re using Copilot) - until you press for justification, or ask it to withstand a little scrutiny.
The Smart Get Smarter…
High-agency users treat AI as a second brain - not a replacement for the first. They view it as a sounding board, a collaborator, and a stone that sharpens rather than flatters.
They understand its limitations. They approach it with epistemic humility. They know when to challenge it, when to constrain it, when to discard its answers entirely. To them, AI is a co-pilot - not a chauffeur.
For these users, the productivity gains compound. One recent study found that users with higher metacognitive accuracy - a trait closely related to cognitive reflection - were significantly better at integrating correct information and filtering out errors when using AI tools like ChatGPT, while those with poorer self-monitoring gained less benefit (Urban et al., 2025).

A strategist with clear mental models can use AI to test scenarios in seconds. A designer with strong creative intuition can generate dozens of concepts before lunch. A philosopher can cross-reference Schopenhauer, Nietzsche, and Aristotle before the kettle boils.
And perhaps most importantly, they don’t mistake output for understanding. They use AI not just to produce, but to provoke. To interrogate ideas, clash perspectives, and clarify their own.
These people are not being replaced by AI. They’re being augmented by it. And increasingly, they're running laps around those who can't or won't keep up.
…While the Rest Fall Further Behind
Meanwhile, the cognitively underprepared - the "clickers", not the thinkers - find in AI a seductive crutch.
Why wrestle with a problem when you can get a fluent paragraph in seconds? Why study a domain when you can fake surface-level knowledge with a few prompts and a LinkedIn post? Why reflect, when the machine flatters your preconceptions?
The danger here is not stupidity, but performative competence - a growing class of workers who look like they know what they’re doing because the AI makes their output presentable. But scratch the surface, and you find no depth. No rigour. No actual model of the problem being solved.
We’re not just dealing with automation of low-level tasks. We’re witnessing the automation (and increasingly the monetisation) of intellectual posturing. A sea of strategy decks, blog posts, and whitepapers that sound smart - and say ab-so-lutely nothing.
This isn’t just inefficient. It’s dangerous. Organisations are now filled with pseudo-analysts and strategic parrots - confidently delivering plausible nonsense at unbelievable speeds and at industrial scale.
A New Cognitive Aristocracy
Historically, intelligence has been a slow-burn advantage. Useful, yes - but not always rewarded. Often resented.
Now, with AI as a lever, that intelligence scales. It compounds.
Which means we’re potentially witnessing the emergence of a new hierarchy - one not based on wealth, family, or title, but on the ability to think with machines.
Those who can’t? They risk becoming cognitively outsourced. Fed a stream of pre-chewed content. Spoon-fed whatever they want to hear by an algorithm determined to keep the dopamine flowing. They may even end up puppeted by predictive suggestion systems without ever realising it.
This isn’t just about job titles. It’s about participation. Who gets to steer the system? Who gets to build it? Who gets to understand it well enough to dissent? As Schopenhauer might have put it: Intelligence and social life are in conflict - a tension now smoothed over and hidden behind friendly interfaces and algorithmic design. What once required debate and discernment is now navigated through UX flows optimised for ease, not depth.
But Who Gets to Decide What’s Worth Amplifying?
Here sits the quiet ethical elephant in the room.
If AI amplifies what’s already there, how do we cultivate the traits worth amplifying - and who decides what those traits are?
In education, do we prioritise curiosity over compliance? In leadership, do we reward first-principles thinking or risk aversion? In politics, do we favour truth-seeking or tribal allegiance?
The machines don’t care - they’ll amplify whatever they think makes their users happiest. But societies can’t afford to be passive or indifferent - not when the costs of bad thinking scale just as quickly as the benefits of good thinking. We must actively choose what we value, what we elevate, and what we reinforce, especially in a world where amplification can quickly turn noise into norm.
It raises uncomfortable questions. Will we build cultures that reward discernment? Or will we descend into cognitive populism - where the loudest, fastest prompts get the most engagement, regardless of merit?
If the fame and popularity of the Kardashians is anything to go by - my guess is that we’re all in a lot of trouble.
On the Flip Side - Is AI Making Us Dumber?
Before we get too carried away with the promise of AI as a thinking partner, we should ask a harder question: what if it’s not just amplifying what’s already there, but subtly eroding our capacity to think altogether?
One recent study from Stanford and Google DeepMind tested students on an essay-writing task with and without AI assistance. Those who relied on AI showed lower neural engagement, had poorer recall of their own material, and felt less ownership over what they’d written. In short, their minds had checked out. The researchers called it "cognitive offloading" - a kind of neural outsourcing, where the machine does the lifting and the user coasts along.

Critics of the study rightly pointed out its limitations: small sample size, short duration, and narrow scope. A single essay task hardly captures the complexity of real-world thinking. But even sceptics concede the core concern: the more we let AI handle the hard bits - synthesis, evaluation, judgment - the less mental muscle we build ourselves. When machines took over our physical labour, many of us grew soft and overweight; will the same happen to our minds if machines take over our intellectual labour as well?
This isn’t a reason to ban AI - but it is a reason to use it with greater awareness. Because the line between cognitive enhancement and cognitive atrophy might not be as clear as we think.
So - What Can You Actually Do?
What does this all mean in practice? Here are three takeaways worth thinking about:
Audit your own thinking: Do you use AI to accelerate your reasoning - or to avoid it? Are you co-creating with it, or just asking it to do your homework?
Invest in epistemic fitness: Read the text/article/book first - then ask the AI what it thinks about it, rather than having it summarise everything for you. Argue with it. Get it to question your own assumptions. Prompting is just performance; thinking is the real game.
Design for discernment: If you’re leading a team or building systems, reward rigour. Build in pauses for debate and group reflection. Require justification. Make room for real thinking in workflows that increasingly reward output over insight.
Because if we don't create space for thought, the system will reward speed over common sense and practical wisdom (phronesis) - and polish over truth.
Closing Thought
In a world where the cost of intelligence is approaching zero and can be scaled at the push of a button, perhaps the real divide is not access - it’s aptitude. Because if that speaker was right - if AI really does make smart people smarter and dumb people dumber - then the real question isn’t what the technology does. It’s what we’re becoming.
However, if you’ve ever felt that your thinking style doesn’t quite fit the corporate herd… good. You may be part of the last generation that remembers how to actually think before we all start reacting by reflex, with AI finishing our sentences.
And if you suspect this amplification trend is only just beginning - well, you’re probably not wrong. But what happens next depends on what, and who, we choose to amplify.
Until then, keep thinking for yourself.