The Question of the Millennium: Can Machines Think?
Time has stopped, and humanity holds its breath, waiting for the answer to a question on which our very existence depends.
The original (Spanish) version of this article can be found here.
I propose to consider the question: “Can machines think?” This should begin with definitions of the words “machine” and “think.” The definitions might be framed to reflect, as far as possible, the normal use of the words. But this attitude is dangerous: if the meaning of the words “machine” and “think” is to be drawn from their common usage, we would have to conclude that both the meaning and the answer to the question “Can machines think?” should be determined by a poll, perhaps a Gallup survey. But that is absurd. Instead of attempting such a definition, I shall replace the question by another, which is closely related to it and expressed in relatively unambiguous terms. The new form of the problem can be described in terms of a game that we call the “imitation game.”
— Alan Turing, Computing Machinery and Intelligence, 1950.
The world is holding its breath. Since the debut of ChatGPT in the final days of 2022, humanity has lived in a state of alert, hanging on the answer to a single question: Can machines think?
Because if the answer is yes — if what we call “artificial intelligence” can truly produce a machine that thinks like a human being — it would mean the end of the world as we know it.
To begin with, millions of jobs would evaporate. The economy would be transformed in ways we can’t yet imagine. Tax revenues would disappear, along with the funds that pay for pensions and unemployment benefits. It would feel like a monstrous global earthquake: overnight, markets and nations would shake in unison.
Military power would never be the same. Governments that obtained such technology could go to war with unlimited armies of robots and drones; those that didn’t would be left at the mercy of the new technological titans.
And what if it wasn’t a country that gained access to that AI? Criminals, rogue states, and terrorist groups could find themselves on equal (military) footing with the most powerful nations.
The potential impact of thinking machines is so vast that many wonder whether our species could survive their emergence. If the 20th century lived in fear of nuclear war, thinking machines are our updated version of self-destruction. No wonder we’re so obsessed with the answer: our very existence is at stake.
And yet, three years after the launch of ChatGPT, none of this has come to pass. We have not reached “artificial general intelligence” (AGI), nor do robot armies exist. For now, the only thing that AI has truly revolutionized is the stock market, where colossal expectations have inflated a massive bubble now threatening to burst into a financial crisis as bad as, or worse than, that of 2008.
Still, even if the chatbots have fizzled, the fact that this may be a bubble doesn’t settle the matter. Some voices remain convinced that AGI is still possible. Perhaps, they say, what’s happening mirrors the railways of the 19th century or the Internet of the 1990s: two revolutions that also sparked stock bubbles because investors rushed in too soon. The bubbles popped, but the transformations eventually arrived.
And so we remain suspended in the question of the millennium: Will this technology ever produce a new kind of intelligence?
In this article, we’ll answer that question in a way that allows anyone — guided by common sense — to form their own judgment.
To begin with, we should say that there are actually two different questions.
One is whether the technologies on the rise today that we call “AI” (chatbots, so-called “agents,” and the like) could ever become intelligent.
And the second is broader: it asks whether any conceivable machine — even if it does not exist today — could, now or at some point in the future, come to think.
Can today’s AI produce true artificial intelligence?
The idea of a thinking machine has accompanied humanity for millennia. But defining “machine” and “intelligence” was never easy. “What does it mean to think?”, “What is the mind?”, “What is consciousness?” — these are some of the hardest questions we’ve ever asked ourselves. For thousands of years, they were the domain of philosophy, the discipline of difficult questions.
Until, in 1950, an English mathematician named Alan Turing formalized what a future computational intelligence might consist of. He proposed a game with three players: A, B, and C.
A and B were a man and a woman. In a separate room sat C, an interrogator who could communicate with A and B only by sending messages through a teletype. Their conversation took the form of a series of text exchanges — like an early chat. The interrogator’s goal was to figure out which participant was the man and which was the woman. B’s goal was to help him; A’s was to fool him.
But what if, Turing wondered, we replaced A with a machine? Would the interrogator be deceived just as often as before? “The best strategy [for the machine],” Turing concluded, “would be to try to provide answers that would naturally be given by a person.”
Thus, the question of whether machines can think was reduced to something far simpler and easier to measure: “Are there imaginable digital computers that could succeed in the imitation game?” In other words: Can machines deceive humans into believing they are one of us?
Software, after all, is a branch of mathematics — it evolves by solving concrete, well-defined problems. Unlike philosophical speculation, the imitation game was exactly the kind of problem engineers loved: something they could build, test, and measure.
In 1990, Hugh Loebner began offering a prize to whoever could create the software that best played this game. It was no longer called “the imitation game” — too quaint — but the Turing Test.
Despite many criticisms, over the next thirty years various institutions (including Cambridge University, Dartmouth College, the London Science Museum, and the Universities of Reading and Ulster) kept the award alive. The prize offered a bit of money, a lot of prestige, and plenty of publicity to anyone who could make a machine convincingly pose as a human. Turing’s game of deception became the yardstick of “artificial intelligence.”
The format was simple: a program had to chat with a panel of humans and fool them into thinking it was one of them. This is how a strand of AI research became obsessed with conversational software — a path that eventually led to ChatGPT and its relatives.
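To make that yardstick concrete, here is a minimal sketch, in Python, of what such a contest measures. The ScriptedJudge, its verdict heuristic, and the canned contestant are all invented for illustration; the real prize used human judges and live conversation. The point is only that the score rewards deception, not understanding.

```python
from dataclasses import dataclass

@dataclass
class ScriptedJudge:
    """A toy judge: asks a fixed list of questions, then delivers a verdict."""
    questions: list

    def ask(self, transcript):
        return self.questions[len(transcript)]

    def guess(self, transcript):
        # Hypothetical heuristic: very long answers feel machine-like.
        return "machine" if any(len(a) > 200 for _, a in transcript) else "human"

def score_imitation_game(machine_reply, judges):
    """Return the fraction of judges who mistake the program for a human."""
    fooled = 0
    for judge in judges:
        transcript = []
        for _ in judge.questions:
            q = judge.ask(transcript)
            transcript.append((q, machine_reply(q)))
        if judge.guess(transcript) == "human":
            fooled += 1
    return fooled / len(judges)

# A "contestant" that answers every question with a short canned deflection.
judges = [ScriptedJudge(["How was your day?", "What are you afraid of?"])]
print(score_imitation_game(lambda q: "Fine, I suppose. And you?", judges))
```

Nothing in that score asks whether the program understood a single question; it asks only whether the judges were fooled.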
And this is also how modern chatbots inherited their defining trait: they were never built to be intelligent. Not even now. No one developing them has seriously grappled with what intelligence is; their goal has always been to trick us — to appear intelligent, to make us believe they think.
What we call “AI” today is really a set of new ways to process information — powered by leaps in computing power and fresh approaches to language modeling — whose goal is not to be intelligent, but to seem human. To seem like a smart human. AI is, as Julio Gonzalo brilliantly put it, a stochastic brother-in-law: a contraption that pretends to be intelligent by spewing out random things it’s read somewhere in its database.
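To see what “stochastic” means here, consider this deliberately crude sketch. It is a caricature, not how ChatGPT is built: real chatbots are neural networks trained on vast corpora. But the generating principle is the same: pick the next word from probabilities learned from text someone else wrote, with no model of the world behind it. The toy corpus and function names are invented.

```python
import random
from collections import defaultdict

corpus = "the machine thinks the machine talks the human thinks and doubts".split()

# Count which word follows which in the text the system has "read".
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

def babble(start, length=8):
    """Generate fluent-looking text by sampling the next word at random."""
    word, output = start, [start]
    for _ in range(length):
        candidates = following.get(word)
        if not candidates:
            break
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

print(babble("the"))   # plausible word salad, with no understanding behind it
```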
Months ago, I wrote about why I don’t believe any system based purely on verbal language could ever match human intelligence:
“AI creators take a tiny, limited, archetypal part of consciousness — verbal language — and confuse it for the whole. They ignore that words are just a narrow representation of the vast, interconnected notions that inhabit our brain like chords on a piano with billions of keys.”
But the real reason this AI will never be intelligent is that it doesn’t even try. That’s why, when it doesn’t understand something, instead of saying so — the truly “intelligent” response — it makes something up, because that’s what makes it seem intelligent. It doesn’t actually hallucinate; it simply blurts nonsense, cheerfully and shamelessly, because its purpose isn’t to think but to deceive — to make us believe it thinks.
So the answer to our first question is no: this so-called AI will never produce true intelligence.
But that doesn’t quite put our fears to rest. There could be another technology just around the corner that actually manages to do it. The truly important question is the second one:
Can humans create an artificial intelligence that could replace us?
That depends on what we mean by artificial.
In fact, humans have always created “artificial” intelligences: the animals we breed and train don’t exist in nature. Sheepdogs, carrier pigeons, and mules, for example, are forms of artificial intelligence that replace us in certain tasks.
With our current knowledge, it wouldn’t be impossible to breed or genetically modify chimpanzees to make them more efficient at mining minerals or delivering packages.
But of course, none of these things fit today’s definition of artificial intelligence. Nor do they inspire civilizational fear, or make investors or warlords salivate.
Because animals, like people, have needs and desires. They get sick, age, and require care. More importantly, they suffer and enjoy. Therefore, how we treat them matters — they have moral value.
In the expression “artificial intelligence,” it’s really the word “artificial” that does all the work, concealing what we truly mean.
What we actually mean when we talk about artificial intelligence is immoral intelligence: a kind of intelligence that has no moral value and doesn’t care about ours. One that’s indifferent to life — its own or anyone else’s — unbound by experience, incapable of attachment. A being that could kill or be killed, harm or be harmed, without pain, without remorse, without asking questions. A creature that could be neglected, unplugged, altered, violated, or enslaved — and no one would care. One that obeys blindly because it cannot feel.
Can such a creature — intelligent yet incapable of suffering — exist? That is the question that truly terrifies us. Because if it could, it might fall into the hands of the soulless, and end us all — because it simply wouldn’t care.
Can they exist?
In truth, forms of “immoral” artificial intelligence already exist: the yeast we use to ferment beer and the bacteria we engineer to produce insulin, though not conscious like us, can be seen as intelligences we’ve domesticated to perform tasks.
Similarly, the modified viruses used in viral-vector vaccines act as programmed biological systems: they enter our cells and “teach” them to produce a specific protein. They don’t think, but they execute instructions with precision and autonomy — like biological algorithms.
If we were to evolve such organisms toward sentience, we’d find that as their intelligence grew, so would their consciousness, along with primitive moral instincts like empathy, fairness, and cooperation.
Because thinking, at its core, means making decisions through a lens that includes moral reasoning and a worldview. It means judging what benefits or harms me, how my actions affect others, what value I assign to things and beings, who is ally and who is enemy — in short, discerning what is right and what is wrong. Intelligence is a sequence of moral acts.
We see this clearly in the dilemma of self-driving cars. Building a car that drives itself is far easier than creating a general intelligence, because roads are closed systems with a small, well-defined set of rules. A car only needs to decide whether to brake or accelerate, to stop at a red light or not, to turn left or right. And yet, manufacturers remain stuck on moral decisions that can’t be preprogrammed: What if the car must choose between risking its passenger or a pedestrian? What if it must choose between two pedestrians?
As long as there’s a rule — “if the light is red, stop” — a machine can follow it. That rule substitutes for a moral system. But when there is no rule, how does a machine decide without morality?
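A minimal sketch of that gap, with invented situation labels rather than any real autonomous-driving interface: as long as a case appears in the rule table, the machine “decides”; the moment it doesn’t, there is nothing to fall back on, because a lookup table is not a moral system.

```python
# Invented situations and actions, for illustration only.
RULES = {
    "red_light": "stop",
    "green_light": "proceed",
    "pedestrian_ahead": "brake",
}

def decide(situation):
    if situation in RULES:
        return RULES[situation]           # a rule stands in for judgment
    # No preprogrammed rule covers this case: the machine has nothing to decide with.
    raise ValueError(f"no rule for {situation!r}")

print(decide("red_light"))                # "stop"

try:
    decide("harm_passenger_or_pedestrian")
except ValueError as err:
    print(err)                            # the dilemma falls outside the rule table
```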
Even the most selfish creatures, like sharks, have a biological moral program — one that tells them the only important beings are themselves. Something similar happens in humans with narcissistic personality disorder.
Consciousness — the essence of who we are — arises from recognizing the effects of the world on oneself and oneself on the world. This self-awareness evolves alongside intelligence, which is why the most advanced beings are also the ones that understand themselves and their actions best. Without that feedback loop, life simply couldn’t exist.
So, to the second question — once reformulated correctly — the answer is also no: there will never exist a being that can think at the level of a human without also feeling as we do. Intelligence cannot exist without a moral system proportionate to its consciousness, because the two are inseparable.
The takeaway is this: we shouldn’t be distracted for a second by the siren songs of those selling cheap brooms under the name of “AI.” There are many technologies transforming the world today — but this, thankfully, will not be one of them.
I return to this newsletter after months locked away finishing a book that I’ve finally, finally delivered — thanks to the invaluable help of my dear friend Paloma Abad. It will be published by Debate in February, and I’ll tell you more in the coming months.
The book has been an intellectual triathlon, forcing me to refine many ideas and explore new territories. I come back with a head full of topics — and none of them are small. :p
My commitment now is to publish at least once a week, hopefully twice. If there’s a topic you’d like me to write about, I’d love to hear your suggestions :-)
Thank you all for your patience!