What Autism Explains About Artificial Intelligence
The original (Spanish) version of this article can be found here.
Humans operate in two distinct “modes of thought” (and no, they’re not “fast and slow” :p). There are moments when we seek the truth. For example, if you want to know what time a train departs, the most natural thing is to check the train company's website. You're looking for a specific answer — a reliable piece of information. Your brain is in true/false mode, to put it as simply as possible. The train leaves at 9:36 a.m., so any other answer is invalid.
However, if someone else asks you that question, a different thinking system kicks in — one where you're no longer just interested in the factual truth about the trains. Maybe the person asking wants to gauge your knowledge about train schedules because it’s your job to know them. Or maybe they’re mocking you. Or maybe you're trying to impress them. Or trying to go unnoticed. Or it’s a teacher and getting it right has consequences. Maybe you don't even want to answer.
When we interact with others, truth takes a backseat, and a social mechanics of behavior begins to operate.
Most people (those we call “neurotypical” because they are not on any neurodivergent spectrum such as autism or ADHD) spend most of their time operating within that social mechanics of thought. If someone in a group of friends asks a question, the answers are often not about what’s true, but about what position each person wants to occupy: whether they want to align with one person or another, or reinforce their identity. Above all, it depends on who holds the power within the group. What does this person expect me to say? What is expected of me? Who should I be in this context? How should I respond?
So, most people operate much of the time seeking belonging, reputation, power, or safety within the group.
For that reason — because people care more about fitting in than about telling the truth — we often end up believing things that aren’t true. Like calories, or capital. That’s also why it’s so hard to dismantle a lie, even if proving it false is easy: most people don’t really care if something is true or false, as long as it helps them reach their goals. It’s not that they’re liars or cynics — it’s just that the very notion of “truth” doesn’t come into play in this mode of thought.
Autistic people don’t work that way: they don’t give the same weight to others’ reactions and tend not to process what others are thinking. That’s why we often live in the truth-focused system that neurotypicals only enter when no one is watching. If you ask an autistic person a question, they won’t beat around the bush: they’ll tell you what they actually think, without giving much weight to how their words might affect your feelings or your relationship. That’s why we’re sometimes seen as awkward or inappropriate.
Software — at least until the arrival of what we now call “artificial intelligence” — functioned very much like an autistic person. In fact, most likely, a large portion of the software we use was designed by autistic people in their own image. So it was built to be precise, unforgiving of mistakes, unambiguous, and as succinct as possible. Its purpose was to understand what is true and to verify it. That’s why the programs we know are constantly checking whether something is true or false, or whether a condition is met, in order to determine the strict consequences of that outcome.
So traditional software had no trouble telling you what time a train leaves — as long as you’d entered the right data into a database — but it was completely incapable of answering a vague or generic question about train schedules.
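A minimal sketch of that strict, true/false style (in Python; the routes, times, and function name are invented for illustration): either the query matches a record exactly and you get a precise answer, or you get nothing at all.

```python
# Hypothetical timetable lookup in the strict true/false style of traditional software.
# The routes and times below are made up for illustration.
timetable = {
    ("Madrid", "Valencia"): "09:36",
    ("Madrid", "Sevilla"): "10:15",
}

def departure_time(origin: str, destination: str) -> str | None:
    # Either the exact key exists and we return a precise answer,
    # or the condition fails and we return nothing.
    # There is no "probably what you meant" in between.
    return timetable.get((origin, destination))

print(departure_time("Madrid", "Valencia"))        # 09:36
print(departure_time("Madrid", "somewhere sunny")) # None: a vague question gets no answer here
```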
Until AI came along. Large Language Models (LLMs) are the first form of software that functions like a neurotypical human, seeking the socially acceptable response. For the first time in history, a piece of software aims not at truth but at the answer its interlocutor is most likely expecting.
LLMs are trained on massive datasets, which include vast amounts of information about train schedules. If you ask one of these bots how to get from one city to another, it might give you lots of general information — not because you asked for it explicitly, but because the LLM infers it might be useful based on what other people typically want in similar situations. That’s why LLMs are “stochastic uncles” — experts in commonplaces, in offering the most expected answer, filtered through political correctness, popularity, and repetition.
The problem these models are now facing is that the world has been promised they can deliver truth. For example, that they can reliably handle customer service tasks, like humans do. Or that they can review case files or read books coherently. Or that they can understand the complex nature of entries in a database. In reality, they are incapable of any of this. Truth cannot come from a probabilistic model.
A system that fails, say, 10% of the time, will — when asked a question requiring 10 independent decisions — make a mistake about 65% of the time. For this reason, LLMs will only work for producing ideas, inspiration, or results where precision isn’t vital — like social media posts, images, or creative drafts. They’re also helpful for people who already know how to do something — like writing articles or coding — and can use the LLM to handle some of the grunt work, as long as they carefully review everything afterward.
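To spell out the arithmetic behind that figure: if each step independently succeeds 90% of the time, the chance that all 10 steps come out right is 0.9^10 ≈ 0.35, so the chance of at least one mistake is roughly 65%. A quick check in Python, using the illustrative numbers from the paragraph above:

```python
# Probability that a chain of independent steps contains at least one error,
# using the illustrative numbers from the text: 10 steps, 10% error per step.
per_step_failure = 0.10
steps = 10

p_all_correct = (1 - per_step_failure) ** steps   # ≈ 0.349
p_at_least_one_error = 1 - p_all_correct          # ≈ 0.651

print(f"P(all {steps} steps correct) = {p_all_correct:.3f}")
print(f"P(at least one error)       = {p_at_least_one_error:.3f}")
```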
For an LLM to produce truth at the level we expect from a person or a code script, it would need to be trained on a dataset that also aims exclusively at the truth, systematically discarding everything else. And to do that, we’d first need to define what truth even means — for a large group of people and objects. And that’s no easy task.
In this fascinating interview, Lex Fridman and mathematician Terence Tao discuss this problem. As of 2025, mathematics still operates in a completely analog manner. Research is still published in natural language papers that other humans have to read, review, and process. A group of people is trying to “formalize” all mathematical objects into a system that, if successful, would allow new contributions to be made directly within this formal model — a system that already understands each object precisely and can manipulate them the way a calculator manipulates numbers.
Additionally, you could then run an LLM on top of that system to generate as many proofs as computation would allow.
But to get there, they first need to build a database — a model — that is ruthless with errors. One that’s written in a formal language, not in the permissive natural language we humans use. And that’s a colossal task, because much of mathematical knowledge, contrary to how it may seem, is informal, intuitive, and hasn’t been formalized anywhere.
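To give a sense of what “ruthless with errors” looks like in practice, here is a tiny sketch in Lean, one of the proof assistants used in this kind of formalization effort (the theorem name is invented; the statement itself is elementary). The checker either accepts the proof in full or rejects the file; there is no socially acceptable “close enough.”

```lean
-- A formally stated fact and its proof, checked mechanically (Lean 4).
-- `Nat.add_comm` is the library lemma that addition of naturals commutes.
theorem swap_add (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- Change the statement to something false, e.g. `a + b = b + a + 1`,
-- and the file simply fails to compile: the system is ruthless with mistakes.
```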
In the meantime, as long as LLMs continue to feed on the social thinking mechanics of neurotypical humans found across the internet, “AI” will never be able to systematically find the truth.