The original (Spanish) version of this post can be found here.
Last week, Ezra Klein, the star columnist of The New York Times, released an episode of his podcast titled "The Government Knows A.G.I. Is Coming," in which he interviewed Ben Buchanan, the Biden administration's top adviser on AI.
In it, Klein claimed that he had spoken with many people deeply knowledgeable about the subject, and all had confirmed to him that Artificial General Intelligence, which until recently was thought to be at least 15 years away, was actually just around the corner. It would arrive in a matter of months or just a few years; in any case, within Trump's second term.
And to cut through the semantic confusion surrounding what exactly Artificial General Intelligence (AGI) means, he defined it as follows:
"An intelligence capable of doing basically anything a human can do with a computer—but better."
And to give a sense of the scale of the issue, he stated:
"I think we are on the cusp of an era in human history unlike any we have experienced before. And we are not prepared, in part because it’s unclear what preparedness would even look like. We don’t know what this will look like, what it will feel like. We don’t know how labor markets will respond. We don’t know which country will get there first. We don’t know what it will mean for war. We don’t know what it will mean for peace."
I'm sure he was trying to be grandiloquent, but I think he fell short.
What would happen if there really were a technology that, for a fraction of the cost of "producing" a worker (which, let’s remember, means raising a child, educating them, and covering all their needs for ~80 years), could perform the same tasks—even better? What if, on top of that, this happened suddenly, in less than three years?
First of all, the entire economic system would collapse on day one. Companies—first in the U.S., then everywhere—upon hearing rumors of this technology’s existence, would start mass layoffs: call center operators, insurance brokers, programmers, lawyers, copywriters, creatives, financiers…
Panic would spread—not just among workers but also among companies, which wouldn’t know how to react to such an unprecedented market shock. The stock markets would crash, because layoffs wouldn’t just mean reduced spending but also massive losses of customers. Riots would break out in countries without a social safety net to support the newly unemployed—likely in Latin America and Asia.
Since this technology is inherently decentralized and already in the hands of multiple actors across many countries, before we even realized it, there would be an open war to hack the enemy using AIs (which, by this definition, would be far more efficient than today’s hacker groups) and, conversely, another war to defend against them.
There’s an internet meme that asks what happens when an unstoppable force meets an immovable object. That’s exactly the situation we’d be in: What happens when two technically identical AIs compete to destroy each other? The answer: everything depends on computational power. Governments would have to militarize data centers to ensure that all available computing capacity served their own AI systems.
There would be shortages, chaos, and financial collapse. Society would fracture. Millions would lose their purpose overnight, and humanity would face an existential question: Are we still the masters of the world, or merely its spectators?
This has happened before: with frozen bread.
You might remember that until about 15 years ago (if I’m not mistaken), almost all the bread available in stores (in Spain) was the famous pistolas: loaves made in industrial ovens and distributed by van to bakeries. Then, someone invented a small oven and a type of frozen bread that could be baked in any store. And that was the bread apocalypse.
Overnight, traditional oven-baked pistolas disappeared, and the country was flooded with those chewy baguettes that smelled amazing for the first ten minutes and then became inedible. Before we knew it, they were being given away; real estate agencies were even handing them out for free. I'd bet that not even 1% of the old pistola factories survived, nor did the delivery drivers.
Years later, after the market had been flooded with fake bread, a new industry emerged—the artisan sourdough bread movement, which can only be made by hand and cannot be sold frozen. But that’s another story.
I’m not joking (well, maybe a little :p), but the economic mechanism that would play out with AI would be exactly the same.
Seriously though, the reason I allow myself to joke about it is that I’m completely convinced that none of this is actually going to happen. And to show that I’m putting some skin in the game—I’ll bet anyone in the comments a dinner that we won’t have Artificial General Intelligence in the next 36 months. There! (And hey, if I end up paying, considering the world will have collapsed, getting a free dinner will be even more valuable… :p).
Why There Won’t Be AGI, Explained for Kids
What we call "AI" is a broad term that includes several branches of software development that converge around some new ideas.
One of these ideas is what’s behind what we call machine learning. Traditionally, the way to analyze a database followed the scientific method. Humans would propose a hypothesis about the data and ask the computer to verify or reject that hypothesis.
For example, if we take a database about population movements in a city, we could ask the machine what the average distance traveled by women is (if we have that data) or whether women travel longer distances on average than men.
The science of databases is still in its infancy—before computers became widespread, we didn’t even have large data repositories. So, this field has been advancing rapidly. In recent years, researchers have discovered that it’s much more efficient to let the computer find relationships on its own. Instead of using a predefined hypothesis, machine learning models analyze vast amounts of data, identify patterns, and then make inferences based on those patterns. The key difference is that instead of humans dictating the questions, the algorithm itself formulates them by uncovering underlying relationships that weren’t obvious before. This has led to major advances in complex tasks like speech recognition, medical image analysis, and behavioral prediction.
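To make the difference concrete, here is a minimal sketch in Python. The trips table, its columns, and the numbers are invented for illustration: the first query tests a hypothesis we supply ourselves, while the second hands the data to an algorithm and lets it find groupings on its own.

```python
# Hypothesis-driven analysis vs. letting the machine find patterns.
# The data below is invented for illustration.
import pandas as pd
from sklearn.cluster import KMeans

trips = pd.DataFrame({
    "gender":      ["F", "M", "F", "M", "F", "M", "F", "M"],
    "distance_km": [3.2, 5.1, 2.8, 12.4, 3.5, 11.9, 2.9, 4.8],
    "hour":        [8, 7, 18, 9, 17, 8, 19, 18],
})

# 1) The classical question: do women travel shorter distances on average than men?
#    We decide what to ask; the machine only checks it against the data.
print(trips.groupby("gender")["distance_km"].mean())

# 2) The machine-learning approach: the algorithm groups the trips on its own,
#    without being told what relationship to look for.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    trips[["distance_km", "hour"]]
)
print(trips.assign(cluster=clusters))
```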
Then there's natural language processing, which powers ChatGPT, DeepSeek, and all those chatbots. A few years ago, a group of researchers came up with a new approach to analyzing language. Instead of processing words sequentially, like those early online translators that always got things completely wrong, they found it made more sense to feed all the words of a text into a model called a transformer, represent each word as a set of numbers, and compare every word against all the others. The result is a matrix of weights that relates every word in the text to every other word.
And they were right, because that’s actually how language works. A word isn’t just related to the ones right before and after it—it’s connected to all the words in the text. The only reason language appears to be sequential is that we physically can’t pronounce multiple words at the same time.
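To make that concrete, here is a toy sketch in Python of that word-by-word matrix. It is not the real architecture: the vectors are random numbers standing in for the representations a real model would learn during training.

```python
# Toy illustration of comparing every word against every other word.
# The vectors below are random stand-ins for learned word representations.
import numpy as np

words = ["the", "cat", "sat", "on", "the", "mat"]
rng = np.random.default_rng(0)
vectors = rng.normal(size=(len(words), 4))    # one small vector per word

scores = vectors @ vectors.T                  # compare each word with all the others
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # rows sum to 1

# weights[i, j] says how strongly word i "attends to" word j:
# a full matrix relating every word in the text to every other word.
print(np.round(weights, 2))
```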
By combining this approach with everything that had already been developed in machine learning, another breakthrough emerged: humans no longer had to manually fine-tune each of these tiny “weights” assigned to words—which would be impossible anyway. Instead, they could train a computer to infer those relationships on its own.
That's why AI models need to be trained. They require a massive set of examples, inputs paired with the answers considered correct, so the training process can adjust all those weights and values until the model learns to produce the expected output.
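As a deliberately tiny sketch of that mechanism (the task and the numbers are invented: the model has to learn that the answer is always twice the input plus one), this is what "adjusting the weights until the output matches" looks like. Real models do something analogous, only with billions of weights and incomparably more data.

```python
# A bare-bones picture of training: nudge the weights until the model's
# answers match the expected ones. Task and data are invented for illustration.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])   # the "questions"
y = np.array([1.0, 3.0, 5.0, 7.0])   # the "correct answers" (here, y = 2x + 1)

w, b = 0.0, 0.0                       # the weights, starting from nothing
for step in range(5000):
    pred = w * x + b                  # what the model currently answers
    error = pred - y                  # how far off it is
    w -= 0.01 * (error * x).mean()    # adjust each weight a little...
    b -= 0.01 * error.mean()          # ...in the direction that reduces the error

print(round(w, 2), round(b, 2))       # ends up near 2.0 and 1.0: the pattern was learned
```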
It’s like discovering a new way of cooking. Imagine we’ve been roasting chicken the same way for centuries, and suddenly we find out that we can make it faster and tastier in an air fryer.
But no matter how much we improve the air fryer, we’ll never be able to put in a chicken and get a cucumber salad out of it. And that’s exactly what’s happening here.
As we've explored in other articles, intelligence is far more complex than language or images, especially language from the internet, which is what these machines are "feeding" on. It's like the chicken we're putting into the oven. Natural language and images are, in reality, a highly simplified, filtered, and socially mediated representation of human thought. Unless we find a way for a machine to analyze human thought in all its complexity, it won't be able to replicate intelligence, or become, as Ezra Klein described it, an artificial general intelligence.
So, by its very nature, AI—no matter how much it advances—will never turn one thing (language) into another (intelligence). That’s why I’m convinced that AGI will never exist.
What AI Will Really Be Used For
Beyond all this, people who are following the topic with real expertise have been observing another major issue with this technology for a while now: it isn’t profitable.
Two years have passed since ChatGPT skyrocketed to fame, and there is still no clear strategy for generating sustainable profits from this technology. The operating costs of generative AI are extremely high, and revenue from users covers only a fraction of them. OpenAI loses money at an astonishing rate on every query, whether from free users or paying subscribers.
And the business case or use case—that is, the exact purpose these applications will serve to justify their enormous costs—still isn’t clear. This is very different from internet technologies that do work, which quickly find multiple applications (think of GPS or email).
I do believe there will be some use cases for AI. I also believe—though explaining this would require another article—that its extremely high costs aren’t a result of the technology itself, nor are they inherent to it. The same results could be achieved with far less computing power, as DeepSeek has recently demonstrated. These sky-high costs are a consequence of the financial model behind AI, which requires making people believe that all these massive investments are necessary—because otherwise, the industry would become uninvestable. And right now, AI companies survive on investor funding, not sales, and they don’t seem to have any other plan.
In other words: AI’s intensive use of chips is not a bug; it’s a feature.
But—sorry, I’m getting sidetracked. What I was saying is that there will be use cases for AI.
The first and most obvious one is translation between languages, where AI is already outperforming humans. Perhaps this will open the door to a level of cultural connection that surpasses all the barriers imposed by language. And honestly, if I were a translator, I’d be rethinking my career—those jobs will disappear.
The second use case is less obvious.
Humans are not built for a hyperconnected society. For hundreds of thousands of years, our brains only needed to process a limited number of inputs in relatively simple environments, within small social groups. Our cognitive tools are adapted to those needs. That’s why, as Harari explains, we conquered the world through gossip, and all our social strategies—love, power, strength—work much better in small groups.
But today, we live in a world flooded with data, at a scale we simply can’t comprehend. The internet is like an endless ocean of information and connections, and our cognition is not designed to handle it—we are not prepared.
Until AI came along, the best interface we had for making sense of this universe of data and social connections was the search engine, and it was a very primitive one. If someone wanted to know what the world thought about a certain topic (the equivalent of traditional gossip, but with millions of people), they had to spend hours or even days digging through link after link, and even then, reaching a conclusion wasn't always possible. With social media it was even harder, and the few tools designed to capture the "sentiment" of online conversations were wildly inaccurate.
I believe that what we’re calling AI will actually become the first interface that truly allows us to connect with—or translate—this vast, formless mass of data and adapt it to human cognition.
Some of the most promising applications, like DeepSearch, are moving in this direction: making it easier for individuals to access and understand massive amounts of information.
And that alone is no small thing. Even just this could mark a giant leap in integrating the human brain into the digital society. It would offer an entirely new way to think about and understand the world, one in which we could finally interrogate the sum of human knowledge and receive an answer tailored to our own cognitive limits. The opportunities this could unlock would be immense! For the first time, we could move beyond the need to store knowledge ourselves, a task to which we currently devote huge amounts of time and effort.
Let’s just hope that when this future arrives, it won’t be as trashy as that rubbery frozen bread.