
Everything Sounds Right Now

AI commodified explanations. Understanding didn’t get cheaper.


For most of history, understanding was expensive. You needed access — to teachers, libraries, mentors, or the right conversation at the right time. The bottleneck was supply. Not enough explanations existed for enough people about enough things.

AI just removed that bottleneck. And now we have a stranger problem: what happens when explanations are everywhere, but knowledge isn’t?


There’s a useful distinction the physicist David Deutsch makes between two kinds of explanations. The first kind is structural — every detail matters, and if you change one piece, the whole thing breaks. The second kind is decorative — the details sound good but none of them are doing real work. You can swap any part and the explanation still holds up, which means it was never really explaining anything.

That’s a good lens. But what matters right now isn’t the philosophy. What matters is that we’ve built machines that produce explanations at zero marginal cost, and most of what they produce is the decorative kind. The story sounds right. The structure is optional.


Here’s what AI actually made cheap: fluent, plausible, well-structured accounts of how things work. Multiple angles — economic, psychological, historical — available on demand. Explanations tuned to your vocabulary and your starting point. A first-generation college student can get a clear walkthrough of options pricing in five minutes. A founder can get a crash course on tax structures before a meeting. That access is real, and it matters more than most critics admit.

But there’s an inversion hiding inside the abundance. The things that made explanations valuable were never the explanations themselves. They were the constraints around them.

What’s actually scarce now — scarcer than before, because the volume of plausible-sounding text has exploded — is everything that separates an explanation from a story: the attention to challenge and test it, the contact with reality that breaks it, the instinct for knowing which parts actually matter, and someone paying a price when it’s wrong.

We’ve moved from an information shortage to a criticism shortage. That’s the shift nobody’s pricing in. And by criticism I don’t mean disagreement — I mean the demand that an explanation risk being wrong.


Here’s where it gets dangerous.

Ask an AI to explain why a startup failed, and you’ll get a crisp, confident narrative. Market timing. Founder-market fit. Premature scaling. The explanation will be coherent, pattern-matched from a thousand case studies, and completely impossible to disprove. You could swap “premature scaling” for “insufficient scaling” and the narrative would reorganize itself around the new detail without flinching. Nothing in the explanation actually matters. Every part is interchangeable.

Now challenge it. Say “I don’t think that’s right.” The model won’t defend its position the way someone with real experience would — by pointing to the specific detail you’d have to disprove. It’ll generate a new story that accommodates your objection and still sounds coherent. This looks like open-mindedness. It’s actually the opposite. It’s an explanation with no spine — it doesn’t defend itself, it just reshapes.

This is the new failure mode, and it’s subtler than misinformation. It’s not that AI lies. It’s that it produces infinite “maybe” narratives — stories that are always plausible and never pinned down. Not wrong, just empty. And fluency is the perfect disguise for emptiness.

Think about what this does at scale. Every idea gets a thousand well-written defenders before anyone checks whether it holds up to a single real test. Every position gets enough supporting narrative to feel researched.

Every bad idea now has a compelling origin story. That used to require a movement. Now it requires a prompt.

You get a massive increase in the variety of explanations without any increase in the pressure to get them right.

The person reading the output can’t tell the difference. That’s the whole problem. The feeling of “oh, that makes sense” is identical whether the explanation is real or decorative. And that feeling was already most of what people meant by “understanding.”


The optimistic case is that more ideas mean faster progress — if the world around them rewards testing and refinement. That “if” is doing all the work. The same tool that generates ideas also generates defenses of those ideas, and rebuttals to the criticisms, and counter-rebuttals, and frameworks for thinking about the counter-rebuttals. The whole back-and-forth that’s supposed to filter good thinking from bad can now be simulated without ever touching reality. You can have a complete intellectual debate inside a chat window, reach a satisfying conclusion, and never once check whether any of it is true.

That’s not progress. That’s theater.


So what actually works?

I asked an AI why most diets fail. It gave me a clean story: metabolic adaptation, willpower depletion, hormonal shifts. Three mechanisms, clearly explained, each one plausible on its own. Then I asked: “If someone switched from calorie restriction to just cutting processed food, which of those three mechanisms would still apply?” The answer shifted — not because it was wrong, but because it had never committed to how the pieces were connected. They were three separate stories in a trench coat, pretending to be a model.

That’s when I pushed harder: “What would someone actually experience in the first two weeks that would tell me which mechanism matters most?” Now it had to make a bet. Not “it depends” — a specific, observable prediction. And the answer got cautious. Which was more useful. Because caution means the explanation has hit the point where it has to be right, not just smooth.

That’s the move. Push every explanation until it has to predict something specific. Until you can’t swap a detail without consequences. Most explanations collapse at that point. The ones that survive aren’t the smoothest. They’re the ones that had something to lose.


The gap between feeling like you understand and actually understanding has never been wider. AI didn’t create that gap — it existed every time someone nodded along to a TED talk and walked away with a warm glow and zero usable knowledge. But AI widened it into a canyon. You can spend an entire afternoon in deep, satisfying intellectual conversation and come out the other side with nothing that would survive contact with a spreadsheet, a lab result, or a customer who doesn’t care about your framework.

The scarce resource was never explanations. It was the willingness to break them. The technology is built to make you feel like you already understand. Criticism starts with the admission that you don’t.

Everything sounds right now. The question is whether any of it holds up.

