Claude Code: The AI Bubble is Over
It's the moment of truth — time to prove whether there was any real economic value behind all of this.
As everyone knows, there’s no better sex than the kind you have in your imagination. Because when you actually go to bed with someone and reality arrives — with all its textures, its rough edges, and crucially, the involvement of another person — however fantastic the experience turns out to be, it can never match that perfect expression of your desires that could only ever exist in your head.
Nobody knows this better than Silicon Valley. During the years of cheap money that followed the 2008 crisis, the Valley invented a way of building companies that consisted, essentially, of avoiding any confrontation between their fantasies and the reality of running a business. Someone called it blitzscaling. The idea was simple: in a hyper-connected world, whoever gets there first takes everything. So what mattered wasn’t being a functional company — meaning, one that actually made money. That would come later, on its own. What mattered was growth. Growth at any cost, even if it meant bleeding out at a loss.
Venture capital — the specialist startup investors who rock this particular cradle — had no incentive then, and has none now, to build profitable companies for the long term. That’s not their business. Their business is buying cheap shares and selling them higher. And for that, it’s far more profitable to buy and sell before reality catches up with expectations: pre-money, pre-profit, pre-product-market-fit. A good fantasy is a far more valuable asset than cash flow.
The result was an entire generation of “unicorns”: startups valued in the billions that existed primarily in the imagination of their investors. Uber was one of them — it racked up $33 billion in losses before turning a profit. Spotify too, taking 17 years to post its first profitable year. WeWork reached a valuation of $47 billion before anyone bothered to read its accounts — and when they did, the company nearly ceased to exist. These are the success stories. Of the rest, little is known. In 2021 alone, 354 companies achieved unicorn status. Only six have gone public since.
The mega-bubble of the misleadingly named “artificial intelligence” is the latest chapter of this adventure. Except that after so much growth, we’ve run out of mythological creatures to compare it to. Five years ago, a company needed to reach a valuation of $1 billion to be considered a unicorn. OpenAI, the AI frontrunner, closed its latest funding round at a valuation of $850 billion. Anthropic, hot on its heels, stands at $350 billion. Meanwhile, the big tech companies — Google, Microsoft, Meta, and others — have collectively invested $776 billion in data centers to power this technology, with no plans to stop.
These are astronomical figures. Unthinkable ones. Impossible to square with any model of profitability on a spreadsheet. Like the sex you had in your head, they are incompatible with reality. They can only make sense in the fantasy world where these companies succeed in creating a new intelligent species.
For that reason — to justify this kind of madness — over the past three years OpenAI, Nvidia, Anthropic, and the rest of the AI industry have reinvented themselves as the new Seventh-day Adventists. They have proclaimed that this technology would destroy 300 million jobs and transform two in three roles worldwide; that it would surpass human intelligence and render people “unnecessary”; that it would boost productivity by 7%; that it would cure cancer before 2025; that it would wipe out radiologists, then doctors, then programmers, then lawyers; that it would drive better than humans by 2023; that it would replace the film industry with one made up of amateurs; and that it would “put to the test what we are as a species”. The latest apocalypse — this week’s edition — even has a name: “Vulnpocalypse”, the prophecy that an AI model will trigger a cybersecurity armageddon.
None of this has happened. But until very recently, AI companies and their investors could keep living off the expectation that it eventually would.
Until Claude Code.
Three months ago, Anthropic released Claude Code, a coding tool built on its large language models and adapted to the needs of software developers. It has been well received in that sector. Some of the most respected programmers in the world — including Linus Torvalds and David Heinemeier Hansson — have acknowledged taking it seriously. Claude Code is an AI product that could, ideally, find what is known as “product-market fit”: it could land as a concrete product with a potential buyer, carry a price tag, and actually have its sales tested in the real world.
Oh, no!
It’s time for reality. AI now has a product to sell. It is no longer just a promise. Now it will have to show how much developers are actually willing to pay for its solution, and how many developers in the world are interested in the tool.
Suddenly, the AI companies have a problem. How much does a software company need to charge to justify a valuation in the hundreds of billions? And another: what does it cost to deliver that service to developers? Because if the computing cost is very high — and it is — it’s possible that companies will only want to pay for it in a handful of specific use cases.
And another: what does Anthropic have that protects its product from competitors who want to copy its business model? What stops 20 or 30 other companies from building something exactly like Claude Code?
And if all those companies pile in to develop similar models, won’t they have an obvious advantage over Anthropic — precisely because they don’t carry the weight of having paid all the development costs to get here?
And one more: software development has the enormous advantage of being a domain where two things these models need to function actually coexist — well-established rules, and a repository of knowledge built up over decades by thousands of developers sharing their solutions. Can Claude Code be exported to other industries, or is it going to remain confined to this particular corner of programming?
And if it stays confined to programming, how do the hyperscalers justify the investment they’ve made in data centers — an investment premised on AI replacing hundreds of millions of workers?
None of these questions is extraordinary. They’re the ones any company asks when its business isn’t buying and selling shares, but actually creating value for someone willing to pay for its product. What the AI companies will be forced to answer in the coming months is the big question hiding behind the bubble: is there a real business in AI? Enough of one to justify investing trillions of dollars in a handful of companies?
And so the AI bubble ends here. With Claude Code. For better or worse, the companies that have poured money into this and tried to convince us it was going to work — now that something actually works — will have to show their hand. Either they produce a product that justifies the investment, or they scale the investment down to match what they can actually sell.
I can’t help thinking that Anthropic’s latest move — last week’s press release announcing it would not be releasing its newest model, “Mythos,” because it “could have terrible consequences for the banking sector” — is a cheap ploy to stay in the hype cycle and avoid the harsh reality of having to sell a product.
Because it is rather curious that, at the exact moment when one of your models actually seems to be useful for something, you decide not to publish the next one — just in case someone notices it has a practical application and isn’t simply an existential threat to life on Earth.
But it’s a futile effort, I think. The AI fantasy will end the only way fantasies always end: with the encounter with reality, its textures, and its rough edges.
In the next article I’ll tell you what I think is going to happen…
Heads up! Changes to the newsletter.
I owe regular readers an apology — once again, I’ve neglected this newsletter. The launch of Hijos del Optimismo (Children of Optimism) collided with the opportunity to make a significant investment to get a new project off the ground, plus a few personal matters, and life simply took over.
But I’ve used these past few days to think through a format change. From now on, during the week I’ll be publishing shorter pieces tied more closely to current events — like today’s — and on Saturdays a deep-dive piece called “Wide Angle.”
Let’s see if I can pull it off!
Photo by Madison Oren on Unsplash