We wanted Superman-level AI. Instead, we got Bizarro.

By Michael Buckley, November 2025

The illusion of intelligence is the new frontier of deception.


Image: Bizarro, Superman’s warped duplicate, wearing a distorted Superman logo on a purple suit. Source: dcau.fandom.com/wiki/Bizarro

I was a huge Superman fan as a kid. In fact, my first tattoo was of the Superman symbol — I know, cheesy. But there was something about that idea of strength guided by purpose that stuck with me. The character that truly captured my imagination, though, was Superman’s failed copy: Bizarro.

Bizarro is a botched experiment by the genius villain Lex Luthor to replicate Superman. He sort of looks like the hero we know, has his powers, and even tries to do good — but everything he does comes out wrong. He saves people by endangering them, speaks in twisted opposites, and mistakes harm for help. He isn’t evil — just reversed. That inversion — an imitation of greatness that misunderstands its essence — is a fitting metaphor for modern AI.

But this metaphor isn’t just poetic—Apple’s 2025 paper The Illusion of Thinking backs it up. In the study, Apple researchers tested a class of models they called “Large Reasoning Models,” essentially Large Language Models (LLMs) retuned for reasoning, using puzzles like the Tower of Hanoi and Blocks World. At first the models performed well, but as the puzzles grew more complex, their reasoning began to fail.

Instead of increasing their effort, the models produced shorter and less coherent thought chains. They often stopped trying even when more computation time was available. The researchers observed that these systems were not reasoning at all. They were matching patterns that looked like reasoning.

As complexity increased, their logic collapsed into pure prediction. The models recognized the shape of thought without ever truly thinking. The result sounded intelligent but felt hollow.
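For context, the Tower of Hanoi is a useful benchmark precisely because it has an exact, well-known recursive solution, while the number of moves required explodes as 2^n - 1. Here is a minimal Python sketch of that textbook algorithm (my own illustration, not the paper’s test harness), showing how quickly the workload grows:

```python
def hanoi(n, source, target, spare, moves):
    """Append the moves that transfer n disks from source to target."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # clear the n-1 smaller disks
    moves.append((source, target))              # move the largest disk
    hanoi(n - 1, spare, target, source, moves)  # restack the smaller disks on top

for n in (3, 7, 12):
    moves = []
    hanoi(n, "A", "C", "B", moves)
    print(f"{n} disks -> {len(moves)} moves")   # 7, 127, 4095
```

A system that is actually executing the procedure keeps working as the disk count grows; a system that is pattern-matching on familiar examples is exactly the kind that, per Apple’s findings, gives up as the move count climbs.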

At the end of the day, these “intelligent” machines are little more than glorified autocorrect systems — predicting, not thinking. That’s dangerous, because it blurs the line between intelligence and imitation.

It’s an illusion sold by powerful tech companies as they rake in billions. If I didn’t know better, I’d call the entire AI movement a modern Ponzi scheme — but that’s a discussion for another day.

Anyone who’s used tools like ChatGPT knows the frustration of arguing with a machine that doesn’t know it’s wrong. And yes, I say “arguing” because I often catch myself yelling (with colorful language) at these conversational AI tools — a fact I’m not proud of.

I know it’s foolish to yell at a machine, but in my defense, maybe it should act more like a machine instead of a compulsive liar.

Just the other day, I asked it to debug a JavaScript snippet it had written. When I said it wasn’t working, it replied, “You’re absolutely correct! The code appears to be flaky.” Code it had just given me. I was actually more surprised by its use of the word “flaky” than by the fact that it had handed me broken code.

That moment perfectly captured the essence of modern AI — confident errors, fluent nonsense, and zero accountability. This is where we’ve landed as a society. We’ve traded honesty for efficiency.

To understand why a model would confidently call its own code “flaky,” we need to look at how we have historically defined intelligence in machines, and how that definition has shifted.

The earliest approach in AI, symbolic intelligence, encoded human knowledge in explicit rules and logic. It relied on structured reasoning — if-then statements, symbol hierarchies, and clear inference paths. The advantage: every conclusion is transparent and traceable.

But the disadvantage: when faced with messy, real-world inputs (such as image recognition or natural language), rule-based systems struggle to scale or cope with ambiguity.
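To make that concrete, here is a toy symbolic system in Python. The facts and rules are invented for illustration; the point is the shape: knowledge lives in explicit if-then rules, and every conclusion can be traced back to the rule and facts that produced it.

```python
# Toy forward-chaining rule engine. Facts and rules are invented for illustration.
rules = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),
    ({"is_bird", "cannot_fly"}, "is_flightless_bird"),
]

def infer(facts):
    """Keep applying rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                print(f"{sorted(conditions)} -> {conclusion}")  # every step is traceable
                facts.add(conclusion)
                changed = True
    return facts

infer({"has_feathers", "lays_eggs", "cannot_fly"})
```

The trace is the transparency; the brittleness shows up the moment an input arrives that no rule anticipates, which is most of the real world.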

As the limitations of rule-based symbolic methods became clear, the field shifted to statistical artificial intelligence. Instead of encoding explicit logic, this approach relies on probability, large datasets, and pattern recognition.

The system doesn’t “reason” in human terms — it correlates. It learns statistical relationships between inputs and outcomes, then uses those relationships to predict. The trade-off: high scalability and flexibility, but weak transparency and little genuine “understanding.”
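At its simplest, that looks something like the toy classifier below (training data invented for illustration). Nothing in it encodes what spam is; it just counts which words travel with which label and predicts from the counts.

```python
from collections import Counter

# Toy "statistical AI": no rules, only word-label counts from made-up examples.
training = [
    ("free money click now", "spam"),
    ("meeting moved to monday", "ham"),
    ("click here for your free prize", "spam"),
    ("lunch on monday?", "ham"),
]

counts = {"spam": Counter(), "ham": Counter()}
for text, label in training:
    counts[label].update(text.split())

def predict(text):
    """Score each label by how often its words co-occurred with that label."""
    scores = {label: sum(c[w] for w in text.split()) for label, c in counts.items()}
    return max(scores, key=scores.get)

print(predict("free prize now"))  # 'spam': a correlation, not an argument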

And then there’s today’s generative artificial intelligence — the kind behind text, images, and music. Large Language Models (LLMs) like ChatGPT are trained on massive datasets scraped from the internet, learning statistical patterns across billions of examples.

LLMs use transformer networks with billions of parameters to learn contextual relationships by predicting the next word, producing language that feels coherent but remains purely probabilistic.
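Stripped of the billions of parameters, the last step of that prediction is just a probability distribution over a vocabulary. A toy illustration with invented numbers (not real model output):

```python
import numpy as np

vocab = ["cat", "sat", "mat", "flew"]
logits = np.array([0.2, 3.1, 1.4, -0.5])  # pretend scores for the word after "the cat ..."

probs = np.exp(logits - logits.max())
probs /= probs.sum()                      # softmax: raw scores become probabilities

for token, p in zip(vocab, probs):
    print(f"{token:>4}: {p:.2f}")
print("next token:", vocab[int(np.argmax(probs))])  # 'sat': the likeliest word, not the truest
```

Everything the model “says” is sampled from distributions like this one, token after token.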

Meanwhile, diffusion models generate images by gradually denoising random noise into structured visuals, guided by patterns learned from vast datasets.
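Schematically, the sampling loop looks like the sketch below. The denoise_step here is a stand-in I invented so the loop runs end to end; in a real diffusion model that step is a trained neural network predicting what noise to remove.

```python
import numpy as np

rng = np.random.default_rng(0)
target = np.full((8, 8), 0.5)       # stand-in for "what the training data looks like"

def denoise_step(x, t, steps):
    """Hypothetical denoiser: nudge the noisy array toward the target."""
    weight = 1.0 / (steps - t + 1)
    return (1 - weight) * x + weight * target

x = rng.normal(size=(8, 8))         # start from pure Gaussian noise
steps = 50
for t in range(steps):
    x = denoise_step(x, t, steps)

print(float(np.abs(x - target).mean()))  # close to 0: noise has been shaped into an "image"
```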

Both systems learn correlations, not concepts — they model likelihoods, not meaning.

The trajectory is clear — the further we’ve moved from the transparency of logic (symbolic AI) toward the fluency of prediction (generative AI), the more convincingly human the output appears — and the more fundamentally alien the process behind it becomes.

This is the paradox of modern AI. In our quest to build Superman, we created Bizarro — a brilliant mimic whose fluency makes the imitation convincing, but whose fundamental lack of reasoning rules out the understanding and accountability we need to trust it.

But maybe it’s not entirely bad. Generative AI feeds on what already exists, which means originality still belongs to us. That gives creators a chance to stand apart — to make what the machine can only imitate.

The more I learn about AI, the less I fear it taking our jobs — and the more I fear its power to distort truth. These systems don’t think—they replicate. They strip away context until meaning becomes optional.

In that sense, Big Tech has become our Lex Luthor — brilliant, self-assured, and convinced it’s saving humanity while quietly trying to own it. Its creations mirror its ambition—powerful, profitable, and fundamentally indifferent to the truth.

That’s the real danger — not that AI will outthink us, but that it will outproduce us, flooding the world with fluent illusions that sound right but aren’t. It’s our own version of Bizarro — except this time, there’s no Superman coming to save us.
