[Wide Angle] The Truth of AI
The foundations, promises, and intuitions about the future of artificial intelligence.
“In a time of deceit telling the truth is a revolutionary act.”
― George Orwell
Human beings have two conflicts with truth: one is that we cannot reach it, the other is that we cannot live without it.
The stimuli we receive from reality pass through our senses and are then filtered through the language and mental models that structure our thinking — and in the process, they are distorted. Our inner “truth” is therefore always a translation of reality, never reality itself.
And yet human societies cannot exist without a shared truth. A modern city would be impossible — unworkable — if half its population lived by the laws of the Aztecs. Democracy could not exist if the vast majority of citizens did not share the principles of political liberalism. To have common rules and institutions, we need a single shared worldview.
In the absence of an external reality to hold onto, the history of our species can be told as a relentless effort to invent something resembling a common truth — and to extend it, impose it if necessary, to the edges of the group.
In the beginning, that truth was God. Religions were nothing more than a communal “myth” from which a set of rules emanated — rules that had to be followed regardless of each individual’s inner truth. Later, in the first version of humanism, the ancient Greeks believed that poetry was the instrument through which human beings discovered what was true about the world: if the gods had given you the gift of beauty, it meant you had something important to say.
But as social groups grow more complex, they also need more sophisticated truths — and one day religion and beauty were no longer enough to explain the world. That is when the method that has carried us to the present day asserted itself: mathematics, the language in which the laws of the universe are written.
The great virtue of mathematics was — like almost everything that succeeded afterward — that it put large numbers of people to work thinking together. It was a toolbox that anyone could use on equal terms, provided they had acquired enough knowledge. That is how it made it possible for thousands — and later hundreds of thousands — of people to reach the same conclusions. That is how truth emancipated itself from power and became everyone’s property.
From that possibility, a new horizon opened for humanity. Mathematics unlocked scientific and economic collaboration, gave rise to commerce and the development of medicine, and made possible the transition from absolutism to democracy. Thanks to mathematics, we could print newspapers, apportion seats in parliament, collect taxes, launch rockets into space, cross the Atlantic, and cure smallpox. Every discovery of the last 500 years rests upon its shared truths.
Modernity replaced absolute kings with absolute truths, and the twentieth century became — perhaps — the only moment in history when truth seemed universal and beyond dispute. A time when, despite the existence of opposing blocs, there was agreement on the constants that made civilization possible: progress, science, the nation-state. Even class struggle, if you like.
Almost a century ago, a mathematician named Claude Shannon had an idea.
Despite the spectacular development of the exact sciences, the domain of words had remained practically unchanged since long before the Industrial Revolution. In 1935, print was still virtually the only medium for storing and transmitting the written word, and the only way to access the information in a book was to read it sequentially, one page after another.
Shannon proposed that any type of information — words, music, images — if broken down into sufficiently small pieces, could be encoded as a collection of ones and zeros (or equivalently, of trues and falses), and that those binary states could be transmitted as electrical impulses.
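A minimal sketch of that idea in Python, using today’s UTF-8 byte encoding (which postdates Shannon and stands in here purely as an illustration):

```python
# Shannon's insight, in miniature: text reduced to ones and zeros, and back.
message = "truth"

# Break the word into sufficiently small pieces (bytes) and encode each
# piece as eight binary states.
bits = "".join(f"{byte:08b}" for byte in message.encode("utf-8"))
print(bits)  # -> 0111010001110010011101010111010001101000

# Each 0 or 1 could travel as an electrical impulse. At the other end,
# the original word is perfectly recoverable.
recovered = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8)).decode("utf-8")
assert recovered == message
```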
That idea — which was, incidentally, the same one behind the telegraph — is what today allows your phone to store photos of your children, and what still powers the fiber-optic cables running across the Atlantic Ocean. The mathematization of information made computation possible first, and networks afterward.
And it killed truth.
Drowning in Information
From the late twentieth century to the present day, computing and networks triggered an explosion in the quantity of information we produce and consume. One figure to put this into perspective: in the more than 5,000 years between the invention of writing and the advent of computers, roughly 130 million books were written. Today, the internet creates the equivalent of 400,000 trillion books every year. Every year.
If it were water, the entire written output of humanity up to the age of computing would fit in a teaspoon. The information we generate annually in the twenty-first century would fill roughly three Olympic swimming pools. And all that liquid keeps pouring, year after year, into the ocean of the network.
The monumental challenge we face is this: all of that information is still stored as zeros and ones — like an ocean filled with indistinguishable drops of water. The internet is an unmanageable mass of information without meaning.
Computing solved how to store and copy the form of words, but not their meaning — much less the relationships between them. Meanwhile, the only way to understand the meaning of all that information remained, until very recently, conventional reading. And yet it is impossible for any single human being to read even a millionth of the information available on the internet.
This poorly digested digital revolution is what has left us, in the 2020s, drowning in information and starved of meaning. Without any way of making sense of the information circulating daily on the internet, networks have become factories of chaos, polarization has made genuine debate impossible, and the pervasive feeling is that no one — not experts, not governments, not the media — understands anything anymore.
From a world with a single truth, consumed every morning from the front pages of newspapers and tucked in with you at night after the evening news, we have moved to a world without truth. Or with millions of individual truths, impossible to reconcile with one another.
The widespread confusion in which we live today — from the inability of economic structures to make sense of what is happening to us, to the anxiety of people who have more information than they can process — originates here.
In an attempt to find a way out of this tangle, contemporary society is immersed in the challenge of mapping this new reality: of understanding the internet, which is the same thing as understanding ourselves in all our globalized complexity.
Google was a first attempt. Wikipedia has been another. Even blockchain has at its core the ambition to create a truth beyond the reach of power. Big data techniques and modern statistics are two more approaches, as are the algorithms of social media platforms. From their different angles, all of these things are attempts to draw a map of that ocean — to find a shared truth once more, but this time for 8 billion human beings.
And what we have called “artificial intelligence” is a new map.
All You Need Is Attention
Mathematics is a “formal language”: a code with strict, defined, and universal rules that allow operations to be performed with precision and reproducibility. A system designed to eliminate ambiguity. Any “speaker” of the mathematical language can know exactly what the relationships between its signs are.
Natural languages — human tongues — are the opposite. They have no manual. They are organic: they emerge within a community, with ambiguous and shifting rules. There is no universal norm that explains how the meanings of words relate to one another.
And yet speakers understand each other.
For decades, linguists had tried to crack the secret code that connects the meanings of words — without success. They were stuck. That is why, as you’ll remember, Google Translate was terrible before what we’ve come to call AI.
In 2017, a team of Google researchers proposed trying something different. Instead of analyzing the words in a text sequentially — one after another — they proposed building a map of relationships between all of them simultaneously. The links between each word and every other word formed a kind of mesh, where each connection could carry a different, variable value depending on the rest.
Those values wouldn’t come from a pre-existing equation — the way mathematical rules do — but would emerge from observing the actual use of language: from reviewing millions of texts written by human beings in real contexts. Once that immersion (“training”) was complete, those values (“weights”) would constitute a representation of all those interactions.
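A toy sketch of that mesh, with invented numbers (a real model learns these vectors from millions of texts during training; they exist here only to show the mechanism):

```python
import numpy as np

# Three words, each represented as a small vector. In a real LLM these
# vectors are learned weights; these values are invented for illustration.
words = ["cats", "can", "think"]
vectors = np.array([
    [0.9, 0.1, 0.3],   # "cats"
    [0.2, 0.8, 0.1],   # "can"
    [0.4, 0.3, 0.9],   # "think"
])

# Every word is compared with every other word simultaneously...
scores = vectors @ vectors.T / np.sqrt(vectors.shape[1])

# ...and the scores become a mesh of connections, where each link carries
# a different, variable value depending on the rest.
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)

for word, row in zip(words, weights):
    print(f"{word:>5} attends to", dict(zip(words, row.round(2))))
```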
And it worked. Suddenly, this kind of software — known in technical jargon as Large Language Models, or LLMs — was able to process the meaning of words with unprecedented precision. Five years later, ChatGPT arrived, along with all the other applications we know today. That is the source of the extraordinary translations AI produces, and of that uncanny sensation of talking to a human that these applications create.
Although it has often been said that they are token-prediction machines, I think the image that best defines them is a different one: what we popularly call AI “models” are maps. Maps of the relationships of meaning present in the dataset on which they were trained. In the specific case of the models that have become famous, they are representations of the internet — as if they were a cartography of the ocean floor.
Nothing more. To this day, all models — ChatGPT, DeepSeek, Claude, and the rest — are variations of this same technology. In their more advanced forms, they are combinations of several LLMs given different instructions, or systems that blend this mechanism with traditional symbolic programming.
Can we find in these maps the truth that lies hidden within the internet? Do LLMs hold the key to restoring the horizon of lost certainties?
The Fickle Truth of Words
Unlike mathematics, the responses produced by LLMs trained on the internet cannot be deterministic. The square root of 9 is 3 — but the answer to the question “Can cats think?” is neither exact nor binary. Natural language does not admit the precision of algebra. That is why LLMs don’t calculate the correct answer — they calculate the most probable one.
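A minimal illustration with invented numbers (a real model derives its probabilities from billions of weights; the point here is only the mechanism):

```python
import random

# After a question like "Can cats think?", the model does not hold one
# answer. It holds a probability distribution over continuations.
# These probabilities are invented for illustration.
next_token_probs = {"Yes": 0.41, "No": 0.34, "Maybe": 0.25}

tokens, probs = zip(*next_token_probs.items())
for _ in range(5):
    # Each run may return a different answer. None of them is "the" truth;
    # each is merely probable.
    print(random.choices(tokens, weights=probs)[0])
```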
It has often been said that they “hallucinate,” as if producing a nonsensical response were an exception, a deviation from their logic, or a system failure. It isn’t. They simply do not have, built into their own architecture, a notion of truth like the one that exists — and is central — in symbolic programming or mathematics.
That is why LLMs cannot converge on a single solution when the ocean of information they represent contains none. This is not a problem that will be “fixed” with more technology — it is a direct consequence of the reality they represent. LLMs cannot converge on a single truth because on the internet, just as in society, no such thing exists.
Is this a problem? If we recognize the technology for what it is, I would say no. Google’s search algorithm is also wrong sometimes. Its results are often not as accurate as one might hope, and users have grown accustomed to rephrasing their search when the expected answer doesn’t appear. We have also learned to distinguish a reliable source from an unreliable one. In the same way, the millions of users who are successfully using this technology to find meaning on the internet are learning to work around its limitations.
It only becomes a problem when AI is used as a substitute for the coordination mechanisms of the economy — mechanisms that cannot function without certainty.
The Truth of the Economy
The industrial economy is the application of mathematical logic to the production of goods and services: a mechanism for coordinating countless parts to generate determined, reproducible, and verifiable outcomes — that is, certain outcomes.
What underpins that entire mechanism is not production, but guarantee: the promise that a machine will work in a predictable way, or that a contract will be honored. Appliance warranties, corporate legal liability, and quality standards are the true expression of the productive system.
Imagine the opposite: that when you buy a plane ticket, there is no expectation of reaching your destination — only a probability of doing so. Or that a home appliance is designed not to work, but to “probably” work. In that world, there would be no industrial economy as we know it.
To solidify those certainties, modern legal systems impose very high costs on deviations. If a plane doesn’t fly, the airline must compensate passengers substantially. Electronics manufacturers are liable for the damage their products cause, just as lawyers are liable for the consequences of an error in a trial or a due diligence process. The industrial economy doesn’t just produce — it assumes the cost of guaranteeing what it has produced.
In that machinery, LLMs are a bomb: a technology that by design works with probabilities cannot sustain a system that runs on certainties. That is why AI shines in contexts where ambiguity is acceptable — drafting a document, summarizing a text, exploring ideas, even mining data — and grinds to a halt where what is at stake is someone’s money, life, or liberty, as in law, finance, or medicine.
For AI to play a meaningful role in the economy, it would need to achieve something that is neither easy, nor obvious, nor resolved: producing certain outcomes.
For this reason, the entire field of “artificial intelligence” today is dedicated to a single underlying goal: building a bridge between the probabilistic thinking of language models and the symbolic thinking of mathematics and classical programming, where explicit rules of true and false exist.
To try to close that gap, researchers are exploring several paths: breaking a question down into simpler sub-problems where prior truths do exist; pitting multiple models against each other so that some correct the deviations of others; or connecting LLMs to external symbolic tools capable of verifying results against formal criteria of correctness.
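As a sketch of that third path, here is a hypothetical pipeline in which a model’s free-text claim is re-derived by a symbolic checker. The function `llm_propose` is a stand-in for a real model call, and the wrong answer it returns is invented:

```python
import ast
import operator

def llm_propose(question: str) -> str:
    """Hypothetical stand-in for an LLM call: returns a plausible,
    confident and (here, deliberately) wrong claim."""
    return "19 * 23 = 447"

OPS = {ast.Mult: operator.mul, ast.Add: operator.add, ast.Sub: operator.sub}

def verify(claim: str) -> bool:
    # Symbolic verification: re-derive the result under explicit rules
    # of true and false, instead of trusting the most probable answer.
    expr, claimed = claim.split("=")
    node = ast.parse(expr.strip(), mode="eval").body
    value = OPS[type(node.op)](node.left.value, node.right.value)
    return value == int(claimed)

claim = llm_propose("What is 19 times 23?")
print(claim, "->", "verified" if verify(claim) else "rejected")  # rejected: 19 * 23 = 437
```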
If they don’t succeed, the technology risks remaining what it is today in its purest form: a highly sophisticated language generation system — but with limited economic utility in every context where what matters is not what is plausible, but what is true.
The Special Case of Software
In recent months, Anthropic’s advances with a coding tool called Claude Code have led many to believe that LLMs might actually be capable of achieving that goal.
Software is a special case. Programmers have spent decades storing the code they write on a public platform (GitHub) and debating the criteria for how to write it on another (Stack Overflow). A vast portion of humanity’s collective knowledge about code is stored, neatly organized, corrected, improved, and expanded across successive versions on two websites that are, moreover, freely accessible to everyone.
This makes software the only domain in which formal, defined, and precise languages — at least partially; there are still different architectures, approaches, and coding styles — coincide with a cultural repository of information where something resembling a single truth actually exists. As a result, the degree of ambiguity in the dataset on which Claude Code and similar tools operate is lower than that of legal or medical databases. If there is any domain in which an LLM can come close to producing a “correct solution,” this is it.
For comparison: this same condition doesn’t even exist across mathematics as a whole. Mathematical papers are still published in natural language, and there is no “Claude Code” equivalent for mathematics today. Only in recent years have mathematicians begun trying to formalize the entire field in order to move in this direction.
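To give a flavor of what that formalization means, here is a trivially small statement in Lean, the proof assistant at the center of those efforts; real formalized mathematics runs to thousands of such lines:

```lean
-- In a formal language, a claim is not prose but a statement the machine
-- itself verifies. If the proof fails, the file does not compile:
-- there is no "probably true".
example : 3 * 3 = 9 := rfl
```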
For an LLM to achieve the same level of success in another domain — finance, legal practice, architecture, medicine — that entire domain would first need to be fully formalized.
Which raises the question: what role is left for AI if it cannot become a fundamental pillar of the economy?
The Truth Behind the AI Bubble
The companies selling AI products are, without question, the ones with a truth problem.
Sam Altman, the CEO of the company that launched ChatGPT — and someone many people are now describing publicly as a pathological liar — is the one who created the monster. In 2022, Altman wasn’t content with becoming the next Google. He had a far more ambitious plan. He wanted to convince the world that his technology wasn’t just a new way of mapping information — it was a form of intelligence. An entity capable, simultaneously, of destroying us all and giving us everything we need. Of ending the economy, waging war, supercharging productivity, and curing cancer. AI was God. Altman was Noah. And only those who climbed aboard his ark would be saved from the flood.
That is how the companies selling LLMs became the new Seventh-day Adventist sects, making the apocalypse their sales strategy — exactly like Donald Trump, incidentally.
That is how they began to promise that AI would destroy millions of jobs, that it would surpass human intelligence and render people “unnecessary”, that it would boost productivity, cure cancer, wipe out entire professions, drive better than humans, kill the film industry, and “put to the test what we are as a species” — among many other things.
Even today, Dario Amodei, CEO of Anthropic — who copied everything from Sam Altman, including the compulsive dishonesty — continues to claim that AI will “eliminate half of all jobs within five years.”
The reality is that none of these things have happened. And none of them are going to. Quite the contrary. Today, the AI era dominates newspaper front pages and social media feeds, yet it is conspicuously absent from productivity statistics. Meanwhile, 8 out of 10 knowledge workers say they don’t use it at all.
The AI companies, meanwhile, are caught between a rock and a hard place. OpenAI, which runs ChatGPT, closed its latest funding round at a valuation of $850 billion. Anthropic is expected to list above $1 trillion. And the big tech companies — Google, Microsoft, Meta, and others — have invested $776 billion in data centers to power this technology, with no plans to stop. They have inflated a bubble sustained entirely by expectations of total economic transformation — by the promise of true artificial intelligence.
To live up to those expectations, they need AI not merely to interpret the truth of society, or even to create new truths, but to produce things of genuine economic value.
But what does it actually mean to be of economic value?
The Truth of Productivity
Not all value is economic. In fact, the most valuable things — air, love, harmony, mathematics — have no price. For something to carry economic value, two conditions must be met. First, it must be scarce: if there are infinite copies of something, its price tends toward zero. Second, it must be exchangeable: there must be someone willing to give you something in return for what you have. And for that exchange to happen, both parties need to agree on what is being exchanged. They need, in other words, a shared truth.
That is why economic value is, at its core, a form of truth: the agreed truth between producer and buyer about what a thing is, what it is worth, and what can be expected from it.
But not all truth carries the same economic value. Truth is the entry condition for exchange; what sets the price is how many people share it. An idea, an industrial design, a work of art, or a computer program is worth something to the extent that many people want it and few people can produce it.
The ideas in this newsletter have real value — because only I can write them, and many people find them worth reading. By contrast, we could use AI to generate a million Substacks full of content and publish a post every minute for the rest of our lives. If we can’t find anyone who wants to read them, they will produce zero economic value.
(Subscribe now, before you forget!)
For this reason, information technologies — email, digital photography, cloud computing, social media, search engines, AI — increase the individual capacity of knowledge workers, but not the aggregate productivity of an economy. In fact, they reduce it. What they actually do is multiply the number of available truths, while making each one less economically viable than what existed before. By making large swaths of human activity abundant and cheap, they reduce the need to exchange in markets. And when there is less exchange, the economy contracts.
The same will happen with AI. It will increase the individual capacity of knowledge workers — in journalism, consulting, programming, publishing — and raise the standards of those professions. But it will not produce a productivity revolution like the railways, the automobile, or electricity. It will produce more truths than ever before. But each one will be worth less.
The Truth About Good Information
In my view, the great risk facing AI is that the value it can produce doesn’t depend on the technology itself — it depends on the information it feeds on. As is abundantly clear with Google Search and Google News, this technology cannot exist without an internet full of quality content.
The paradox is that this data is the product of the 25 years we spent online without any other way of finding information: Reddit and Wikipedia exist, and there are 10,000 answers to the question “how do you make chicken soup?”, precisely because we weren’t able to find a single definitive answer.
But if AI finds the answers and — as seems to be happening — people stop visiting websites; if it fills the internet with low-quality content regurgitated by a chatbot, the technology itself will run out of material to learn from. What will follow — and is already beginning to happen — is that LLMs will start producing increasingly poor results, because the information available to them is manipulated or simply impoverished.
This technology, which is by its very nature regressive*, needs a resource from which to extract value — and at the same time, threatens to destroy that very resource.
My intuition is that this game will ultimately be won by the best content creators. I’ve called it somewhere “the frozen bread theory”: the idea that an overabundance of information creates a new demand for the highest-quality information. And I believe AI will end up as a tool in service of whoever holds a strong repository of genuine knowledge.
The Truth About Human Value
There is one final idea I don’t have space to develop here — I’ll save it for next week’s Wide Angle. (Hit the button to have it delivered to your inbox.)
There is no single correct way to write a computer program, or to argue a legal case. There is no perfect answer to what an article should say, or a book. The reason AI cannot create images that truly move us — beyond piles of images worse than stock photography, if that’s even possible — is that there is no such thing as a “good image” without the intention of the person who made it.
Value still resides in human beings. Because we are the only thing that truly interests us.
To be continued…
If this content interests you, you’ll love my first book. It’s called Hijos del Optimismo (Children of Optimism) — a thesis on the great transformations of the knowledge economy and what comes next.
*Correction: this was a slip — the precise term used to describe these models is not “recursive” but “autoregressive.” The sentence above doesn’t work with that terminology, though, so I’m leaving “regressive” as is.