
For centuries, we’ve believed that the act of thinking defines us.
In what is widely considered a major philosophical turning point, marking the beginning of modern philosophy, secular humanism, and the epistemological shift from divine to human authority, the French philosopher and mathematician René Descartes (1596–1650) famously concluded that everything can be doubted except the fact that we think: “Cogito, ergo sum” (I think, therefore I am).
Fast-forward a few hundred years, however, to an age in which generative AI can produce emails, vacation plans, mathematical theorems, business strategies, and software code on demand, at a level generally indistinguishable from or superior to most human output, and perhaps it’s time to update the Cartesian mantra: “I don’t think . . . but I still am.”
Indeed, the more intelligent our machines become, the less we are required to think. Not in the tedious, bureaucratic sense of checking boxes and memorizing facts, but in the meaningful, creative, cognitively demanding way that once separated us from the rest of the animal kingdom. The irony, of course, is that only humans could have been smart enough to build a machine capable of eliminating the need to think, which is perhaps not a very clever thing.
Thinking as Optional
Large segments of the workforce, especially knowledge workers who were once paid to think, now spend their days delegating that very function to AI. In theory, this is the triumph of augmentation. In practice, it’s the outsourcing of cognition. And it raises an uncomfortable question: if we no longer need to think in order to work, relate to others, and carry out so-called “knowledge work,” what is the value we actually provide, and will we forget how to think?
We already know that humans aren’t particularly good at rationality. Nobel laureate Daniel Kahneman and his collaborator Amos Tversky showed that we mostly operate on heuristics (fast, automatic, and error-prone judgments). This is our default “System 1” mode: intuitive, unconscious, lazy. Occasionally, we summon the energy for “System 2” (slow, effortful, logical reasoning). But it’s rare. Thinking is metabolically expensive. The brain consumes 20% of our energy, and like most animals, we try to conserve it. In that sense, as neuroscientist Lisa Feldman Barrett noted, “the brain is not for thinking”; it’s for making fast, cheap, economical predictions about the world, guiding our actions on autopilot, in low-energy mode.
So what happens when we create, courtesy of our analytical and rather brilliant “System 2,” a machine that allows us to never use our brain again? A technology designed not just to think better than us, but instead of us?
It’s like designing a treadmill so advanced you never need to walk again. Or like hiring a stunt double to do the hard parts of life, until one day, they’re doing all of it, and no one notices you’ve left the set.
The Hunter-Gatherer Brain in a High-Tech World
Consider a parallel in physical evolution: our ancestors didn’t need personal trainers, diet fads, or intermittent fasting protocols. Life was a workout. Food was scarce. Movement was survival. The bodies (and brains) we’ve inherited are optimized to hoard calories, avoid unnecessary exertion, and repeat familiar patterns. Our operating model and software are made for hungry cavemen chasing a mammoth, not for digital nomads editing their PowerPoint slides.
Enter modernity: the land of abundance. As Yuval Noah Harari notes, more people today die from overeating than from starvation. So we invented Ozempic to mimic a lack of appetite and Pilates to simulate the movement we no longer require.
AI poses a similar threat to our minds. In my last book I, Human, I called generative AI the intellectual equivalent of fast food. It’s immediate, hyper-palatable, low effort, and designed for mass consumption. Tools like ChatGPT function as the microwave of ideas: convenient, quick, and dangerously satisfying, even when they lack depth or nutrition. Indeed, just as you wouldn’t try to impress your dinner guests by telling them it took you just two minutes to cook that microwaved lasagna, you shouldn’t send your boss a deck with your three-year strategy or competitor analysis if you created it with genAI in two minutes.
So don’t be surprised when future professionals sign up for “thinking retreats”: cognitive Pilates sessions for their flabby minds. After all, if our daily lives no longer require us to think, deliberate thought might soon become an elective activity. Like chess. Or poetry.
The Productivity Paradox: Augment Me Until I’m Obsolete
There’s another wrinkle. A recent study on the productivity paradox of AI shows that the more we use AI, the more productive we become. But the flip side is equally true: the more we use it, the more we risk automating ourselves out of relevance.
This isn’t augmentation versus automation. It’s a spectrum where extreme augmentation becomes automation. The assistant becomes the agent; the agent becomes the actor; and the human is reduced to a bystander . . . or worse, an API. Note that for the two decades preceding the recent launch of contemporary large language models and generative AI, most of us knowledge workers spent much of our time training AI to predict us better: like the microworkers who teach AI sensors to label objects as trees or traffic lights, or the hired drivers who teach autonomous vehicles how to navigate a city, much of what we call knowledge work involves coding, labeling, and teaching AI to predict us, to the point that we are no longer needed.
To be sure, the best case for using AI is that other people use it, so we are at a disadvantage if we don’t. This produces the typical paradox we have seen with other, more basic technologies: they make our decisions and actions smarter, but they generate a dependency that erodes our adaptive capabilities, to the point that when we are detached from our tech, our incompetence is exposed. Ever had to spend an entire day without your smartphone? You probably wouldn’t know what to do, other than talk to people (though they are probably on their smartphones). We’ve seen this before. GPS has eroded our spatial memory. Calculators have hollowed out basic math. Wi-Fi has made knowledge omnipresent and effort irrelevant. AI will do the same to reasoning, synthesis, and yes, actual thinking.
Are We Doomed? Only If We Stop Trying
It’s worth noting that no invention in human history was designed to make us work harder. Not the wheel, not fire, not the microwave, and certainly not the dishwasher. Technology exists to make life easier, not to improve us. Self-improvement is our job.
So, when we invent something that makes us mentally idle, the onus is on us to resist that temptation.
Because here’s the philosophical horror: AI can explain everything without understanding anything. It can summarize Foucault or Freud without knowing (let alone feeling) pain or repression. It can write love letters without love, and write code without ever being bored.
In that sense, it’s the perfect mirror for a culture that increasingly confuses confidence with competence: something that, as I’ve argued elsewhere, never seems to stop certain men from rising to the top.
What Can We Do?
If we want to avoid becoming cognitively obsolete in a world that flatters our laziness and rewards our dependence on machines, we’ll need to treat thinking as a discipline. Not an obligation, but a choice. Not a means to an end, but a form of resistance.
Here are a few ideas:
- Be deliberately cognitively inefficient. Read long-form essays. Write by hand. Make outlines from scratch. Let your brain feel the friction of thought.
- Interrupt the autopilot. Ask yourself whether what you’re doing needs AI, or whether it’s simply easier with it. If it’s the latter, try doing it the hard way once in a while.
- Reclaim randomness. AI is great at predicting what comes next. But true creativity often comes from stumbling, wandering, and not knowing. Protect your mental serendipity. Use genAI to know what not to do, since it’s mostly aggregating or crowdsourcing the “wisdom of the crowds,” which is generally quite different from actual wisdom (by definition, most people cannot be creative or original).
- Teach thinking, not just prompting. Prompt engineering may be useful, but critical reasoning, logic, and philosophical depth matter more. Otherwise, we’re just clever parrots.
- Remember what it feels like to not know. Curiosity starts with confusion. Embrace it. Lean into uncertainty instead of filling the gap with autocomplete. As Tom Peters noted, “if you are not confused, you are not paying attention.”
Thinking Is Not Yet Extinct, But It May Be Endangered
AI won’t kill thinking. But it might convince us to stop doing it. And that would be far worse.
Because while machines can mimic intelligence, only humans can choose to be curious. Only we can cultivate understanding. And only we can decide that, in an age of mindless efficiency, the act of thinking is still worth the effort, even when it’s messy, slow, and gloriously inefficient.
After all, “I think, therefore I am” was never meant as a productivity hack. It was a reminder that being human starts in the mind, even if it doesn’t actually end there.