The Heist of the Century
The big tech platforms have taken something far more important than our data. It’s time to get it back.
The original (Spanish) version of this article can be found here.
When Clive Humby coined the phrase “data is the new oil,” he didn’t mean that data would become the next raw material to replace oil as a source of wealth.
It was 2006, and this British mathematician wanted to highlight something quite different: just like crude oil, raw data is worthless. Only once selected, analyzed, and refined does it gain utility. Nevertheless, the phrase has traveled through history as a metaphor for a supposed business of buying and selling people’s personal information, on which the contemporary reign of digital platforms would be built.
According to this oft-repeated belief, big tech companies such as Google, Facebook, Twitter, or TikTok have been taking our personal data and selling it to advertisers. This data is supposedly so valuable that some politicians have even proposed creating a "data dividend" that companies would pay their users for access to it.
This narrative, apart from evoking other historical episodes where ordinary people’s wealth was absorbed by large corporations, is powerful because it connects to a widespread suspicion: if these companies have amassed so much power and wealth, it can’t be just by offering a free service. How could they have become the giants they are — more like contemporary empires than multinational corporations — by providing a service that is apparently free? Their enormous growth forces us to ask what they are really extracting from us and why their model works.
So, what are tech companies trading? Where does their power come from? Would it make sense for them to be required to pay us for using our data? And what exactly do we mean when we talk about “data”?
In the early 2000s, when we were still unaware of how big all this would become, plenty of opportunists collected personal data (names, phone numbers, email addresses) to build databases that were bought and sold so that third parties could send advertising. That gave rise to spam: the mass sending of unwanted emails.
Fortunately, both application providers and governments reacted fairly quickly. Within a few years, spam filters were built into most email clients, and the servers that route Internet traffic began blocking senders who used harvested addresses. Regulations were passed to restrict the unauthorized use of personal data, and the practice was largely stamped out. Today, many of the apps we associate with "data theft" don't even hold that kind of personal information about the user: they rely on third-party authentication (as when you log in to an app with your Google or Facebook account).
So, in 2025, when we talk about “data,” we are not talking about selling your email to a spammer who will send you Viagra ads. It’s about something else.
All applications that require you to identify yourself, that is, to create an account, record user activity. This is not a malicious practice; it is normal behavior in digital technology. Every program keeps a log of each tiny transaction and change that happens within the system. Even your own computer records everything you do.
The difference is that, unlike someone running a physical business who meets customers face to face, the managers of a digital app never see their users: they don't know who they are, what they need, or how to serve them better. So they study user behavior through its traces, the "data," analyzing every small action we take on each site: how long we stay on a page, which links we click, which content we favor, whom we interact with and how often, which page we leave the site from, and so on.
And everyone does this, from public administrations to your bank’s website, to your favorite newspaper.
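To make this concrete, here is a minimal sketch of the kind of event record such analytics pipelines store. The field names are hypothetical, not any real platform's schema; the point is that each record describes an action, not a person.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ClickEvent:
    """One hypothetical analytics event: a single user action on a site."""
    user_id: str          # pseudonymous account identifier, not a name or email
    action: str           # e.g. "click", "scroll", "page_view"
    target: str           # what was acted on: a link, a button, an article
    seconds_on_page: float
    timestamp: str

# A browsing session is just a stream of these tiny records.
event = ClickEvent(
    user_id="u_48151623",
    action="click",
    target="/articles/the-heist-of-the-century",
    seconds_on_page=42.5,
    timestamp=datetime.now(timezone.utc).isoformat(),
)

# Each event becomes one row in a behavior log that can later be
# aggregated into audience segments ("readers of political articles", etc.).
print(asdict(event)["action"])
```

Aggregated over millions of users, rows like these are what "data" actually means in this context: behavioral traces, not address books.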
Platforms that sell advertising (like social networks, but also often newspapers and digital magazines) use that information to let advertisers segment the audience and target specific groups — for example, by age range or by certain content interests. With these filters, platforms assure advertisers their ads will be more effective, and thus deserve to charge more for them.
Until these platforms appeared, advertising was like swatting flies with a cannon. Without the ability to segment the market and reach a specific customer, and with inventory in the hands of the big mass-media publishers, only those with enormous budgets could advertise. "Half the money I spend on advertising is wasted," a famous American businessman once said. "The trouble is, I don't know which half."
Thanks to this segmentation, digital tools opened up the advertising market to new advertisers looking for niche customers, offering a better service than traditional formats that cannot discriminate among their audiences, like billboards or TV ads.
So, could we say that platforms profit from the information they get from their users to increase the price and variety of advertising options they offer? I suppose so. But if that is “theft,” it is no different from what newspapers were doing 50 years ago — only better executed.
Moreover, this data is not universal: it has no real value outside those platforms. That is what Clive Humby meant when he said data needs to be refined, like crude oil. Valuable as it is for audience segmentation, it is contextual: you cannot transfer it to another app and use it elsewhere. Data, in short, is not the real prize these platforms are capturing.
So where do they get their power and the enormous amounts of money they generate?
From your identity.
Every time you open a profile on a platform, you leave a version of your identity there. A version of yourself that, over time, stores an image, a collection of stories, and a network of contacts. A huge amount of value that takes time and effort to build.
In the attention economy, that identity has become people's true asset. It is within our identity that we accumulate the trust we generate, the recognition we receive, the status we project, and even the knowledge we share. In the 21st century, that symbolic capital determines who gets hired, who gets listened to, who finds collaborators, and who gets followed. It is the basis on which economic decisions are made in the digital world. Every move we make in public draws on the asset we have built up in our identity. That is why it is worth so much money.
Today that identity is fragmented. One piece is on Twitter, another on Airbnb, another on Wallapop or LinkedIn. Each of these platforms retains a portion of us and forces us to interact within their boundaries if we want to use that symbolic capital.
So, if you want to rent out your house, you have to do it through Airbnb because that’s where you’ve built up positive reviews. If you want to sell something, you must do it on Wallapop because that’s where you have earned credibility. If you want to talk about politics, any random blog won’t work: you need the reach and legitimacy you have built among your followers on Twitter.
This is why the platforms can afford to turn hostile and stir up polarization: leaving is not easy when it means abandoning the relational capital you have built with so much work. We are at the mercy of their owners' interests.
What does this mean economically? Imagine that every time you changed jobs, you had to leave behind your prior experience, your skills, your references, as if you had never worked before. Starting from zero. Where would you be today? How much would it cost to rebuild that invisible capital from scratch? That is what happens to us with digital identity. Every time we want to leave a platform, we have to start again from zero.
But if we don’t leave, we have no choice but to keep accumulating capital in a space that isn’t ours. Every action within each of those platforms reinforces that digital identity, but also chains it down. It’s as if we were building a house on land that isn’t ours, a house we can only enter when they let us, for what they let us, and knowing they will charge a commission — or a levy — every time someone comes to visit.
The business of big tech is not the theft of data: it is a kind of neo-feudal society, where a bunch of serfs “work the land” of their digital identity on the estates of a few lords with whom they signed a contract forbidding them to leave. The heist of the 21st century, which has been going on for 25 years, has nothing to do with our data, but rather with the hijacking of identity and the identity capital of billions of people.
That is why it makes little sense for tech companies to pay back a portion of what they are taking from all these serfs. What makes sense is to leave.
Claim what is yours
Fortunately, all this is about to change. Just as technology got us into this mess, it has the capacity to get us out.
A protocol is a set of rules that dictate how information is transmitted between nodes on a network. Unlike private applications like Facebook or Twitter, protocols are open standards, common to any user — something like the commons of the digital world.
For example, “http” — those letters you see in front of every web address — is the protocol that allows any browser to display any web page. If it didn’t exist, neither would the common web space, where anyone can have their own domain and their own page.
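To make "a set of rules" concrete: an HTTP/1.1 request is nothing more than lines of text in an order the standard fixes, which any client can produce and any server can parse. A minimal sketch follows; the host `example.com` is just a placeholder.

```python
def build_http_request(host: str, path: str = "/") -> str:
    """Compose a minimal HTTP/1.1 GET request by hand.

    The format is dictated by the protocol, not by any vendor:
    a request line, then headers, then a blank line.
    """
    return (
        f"GET {path} HTTP/1.1\r\n"   # method, resource, protocol version
        f"Host: {host}\r\n"          # required header since HTTP/1.1
        f"Connection: close\r\n"
        f"\r\n"                      # blank line ends the headers
    )

request = build_http_request("example.com", "/index.html")
print(request.splitlines()[0])  # → GET /index.html HTTP/1.1
```

Because these rules are public and shared, no single company owns the web's common space: any browser and any server that follow them can talk to each other.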
In recent months, a new protocol has been growing rapidly with the potential to return our identity to us. It is called “atproto,” short for Authenticated Transfer Protocol, and it is designed to build decentralized social networks.
Developed by the team behind Bluesky, its goal is to let different social platforms interact with each other without relying on a single company or centralized server. Unlike traditional networks, atproto separates the technical infrastructure (the network and the data) from the user interface, allowing users to keep their identity, their followers, and their posting history even if they change applications or providers.
Atproto rests on three pillars: portable identities (users control their digital name and identity), personal repositories (each user holds their own archive of their activity), and open algorithms (users can choose how their information is organized). Instead of one company deciding which content is shown and which is hidden, atproto allows multiple algorithms to compete for users' attention. This architecture promotes a diverse, resilient, and transparent ecosystem, where trust is built into the design of the system rather than imposed by corporate control.
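One concrete consequence of this design shows up in how atproto addresses content. An AT URI names a record by the author's portable identity (a DID) plus a collection and record key; no server hostname appears anywhere in it, so the record stays addressable even if the author moves to another provider. A minimal parser sketch, with a made-up DID and record key:

```python
from typing import NamedTuple

class AtUri(NamedTuple):
    """The three parts of an AT URI: who, what kind of record, which one."""
    authority: str   # the author's portable identity (a DID or handle)
    collection: str  # record type, e.g. app.bsky.feed.post
    rkey: str        # record key within that collection

def parse_at_uri(uri: str) -> AtUri:
    """Split an at:// URI into authority, collection, and record key.

    Note what is absent: a server hostname. The authority is an
    identity, so the same URI keeps working if the author changes host.
    """
    if not uri.startswith("at://"):
        raise ValueError("not an AT URI")
    authority, collection, rkey = uri[len("at://"):].split("/", 2)
    return AtUri(authority, collection, rkey)

# A made-up post URI: the DID identifies the author, not a company.
uri = parse_at_uri("at://did:plc:examplexyz/app.bsky.feed.post/3jwxyz")
print(uri.authority)   # → did:plc:examplexyz
```

Contrast this with a tweet's URL, which is welded to twitter.com: delete the account or leave the platform, and the address, along with the identity behind it, dies with it.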
The possibilities opened up by atproto are immense. If digital identity and reputation could move freely between platforms, a new trust-based economy could emerge. We could follow the same content creator, or the same media outlet, across apps without depending on any one algorithm; swap homes or sell goods without a platform in the middle; and build social networks to share knowledge, support, or experiences without passing through a commercial filter. In a world where trust is no longer privatized, new forms of community could flourish beyond the control of the tech giants, in countless areas: from dating to commerce to many forms of work.
And best of all, we can all be part of the solution.
Here are three very simple ideas you can put into practice:
— Open an account on Bluesky, the first atproto application and its flagship (it already has 40 million users), and encourage others to do the same.
— Donate to the "Free Our Feeds" campaign, which is raising funds so that more developers can build apps on top of atproto.
— Spread the word by sharing this article or others.
See you on the (free) networks :D