The original (Spanish) version of this article can be found here.
A few days ago, on the recommendation of Pablo Gavilán, I came across two interviews with Liang Wenfeng, the founder of DeepSeek. They're quite technical, on both the technological side and the financial one. (DeepSeek's parent company is a "quantitative" investment fund, one that uses mathematical models to make investment decisions.) But one answer really caught my attention, because I think it explains perfectly where the whole project we've been calling "artificial intelligence" has gone astray, and why I believe it will turn out to be a bluff.
As we've already discussed, the whole commotion around artificial intelligence revolves around the idea that these companies will eventually reach "artificial general intelligence." It's a way of acknowledging that current large language models are not intelligent without abandoning the expectation of arriving at true AI: the problem is framed as one of development, as if they weren't there yet but were heading in the right direction.
And this is exactly what Liang states openly in one of the interviews:
“We understand that the essence of human intelligence may be language, and human thinking may be a language process. You think you're thinking, but you're actually weaving language in your head. This means that human-like AGI may be born from LLMs.”
But that's simply not true. Intelligence is not a linguistic process, as demonstrated by the fact that humans can think before they have language: infants reason about objects, people, and intentions long before they can speak.
"As animals, our primary form of intelligence is emotional. We think and understand with emotions, and we use language as a vehicle to organize those emotions and share them with others."
Confusing intelligence with language is like confusing love with poetry.
And it's simply not plausible that no one in Silicon Valley has spoken with any of the many psychologists, neurologists, and neuroscientists who have shown that intelligence is infinitely more complex than language.
It makes far more sense to think that a sizable group of enthusiastic engineers like Liang, intoxicated by expectations of their own work, has converged with a host of vested interests that had been waiting for a technology to come along and deliver mountains of profit.
And while it's true that large language models are a giant leap in the interpretation of language, they will most likely never go beyond that.