On the difference between knowing and understanding, and why it costs more than you think
In 1751, Denis Diderot and Jean le Rond d’Alembert began publishing the Encyclopédie, a twenty-eight-volume attempt to compress all of human knowledge into a format any literate person could access. The project was radical in a specific way: it was designed to eliminate the middleman. You would no longer need a priest to explain theology, a guild master to explain a trade, or a professor to explain natural philosophy. The knowledge would be right there, organized alphabetically, illustrated with engravings, available to anyone who could read French.
The establishment lost its mind. The Catholic Church condemned the project. The French government twice suspended publication. The Jesuits campaigned for its suppression. The fear was explicit: if ordinary people could access expert knowledge directly, they would stop deferring to experts. Authority would collapse. Dangerous ideas would spread unchecked.
They were right about the dangerous ideas. They were wrong about the experts. Two and a half centuries later, we have more credentialed specialists than at any point in human history. The Encyclopédie did not kill expertise. It killed a particular business model of expertise — the model where the expert’s value lay primarily in possessing information that others lacked. What survived, and grew, was every form of expertise that could not be flattened into an encyclopedia entry: judgment, verification, accountability, the generation of new knowledge, and the diagnosis of problems that the patient did not know they had.
We are running the same experiment again, at enormously greater speed and scale. Large language models are the Encyclopédie of our era — except they do not merely store explanations. They generate them, on demand, personalized to the question, with infinite patience and a confident tone. The question everyone is asking is whether this finally kills the expert. I think the answer is the same one Diderot accidentally discovered: it kills the expert as content, and it resurrects the expert as assurance.
But the transition between those two roles is rougher than it sounds, and the casualties along the way are real.
I. The Behavior Has Already Changed
The shift is not theoretical. It is measurable and it is happening now.
For a growing share of questions, people have moved from a multi-step process — find sources, read them, synthesize — to a single step: ask, receive synthesis, move on. When search engines surface AI-generated summaries, users click through to the underlying sources far less often. The answer is right there. Why would you leave?
This is not laziness. It is rational economizing. If the synthesis is good enough for your immediate purpose — and it often is — the old process of reading three articles and triangulating is a luxury, not a necessity. The person who asks an LLM “what is the hedonic treadmill” and gets a clear, tailored two-paragraph explanation has, for most practical purposes, gotten what they needed. They are unlikely to then buy a psychology textbook.
This creates what you might call a new breed of learner: someone who explores a question until they have a fluent answer, and then stops. They are not ignorant. They have a real, if shallow, understanding of a wide range of topics. They move fast. They are curious. And they are, in a specific and consequential way, vulnerable — because the feeling of understanding and the fact of understanding have come apart in ways they may not notice.
II. The Illusion Machine
Here is the core risk, and it is subtle enough that most discussions of AI and learning miss it entirely.
LLMs make people more accurate on average. The research is fairly consistent — give someone access to a good language model and their performance on knowledge tasks improves. But LLMs also make people less calibrated. They become worse at knowing what they know and what they do not know, because fluency feels like comprehension. A well-written paragraph explaining quantum decoherence feels like understanding quantum decoherence, the same way watching a cooking video feels like knowing how to cook. The gap between reception and mastery gets papered over by the quality of the presentation.
Psychologists call this the illusion of explanatory depth. You think you understand how a zipper works until someone asks you to explain it from scratch, and you discover that what you had was a vague sense of familiarity dressed up as knowledge. LLMs are industrialized producers of this illusion — a problem I explore in “Everything Sounds Right Now.” They deliver explanations so fluent, so patient, so tailored to your level that the normal cues for “I don’t actually understand this” — confusion, frustration, the feeling of being lost — never fire.
And the models themselves compound the problem. LLMs are systematically overconfident: they present uncertain claims with the same declarative confidence as established facts. They do not hedge in proportion to their reliability. Research suggests this confidence is contagious — exposure to authoritative AI output ratchets up human certainty, not because the user evaluated the evidence but because the answer sounded like it came from someone who had. In medical contexts, models are more likely to accept and propagate misinformation when it appears in realistic clinical notes. In scientific summarization, LLM-generated summaries tend to overgeneralize, smoothing over the caveats and confidence intervals that distinguish a finding from a headline.
The result: access to plausible answers has never been cheaper. Access to trustworthy answers has not gotten cheaper at all. That gap is where the surviving form of expertise lives.
This matters enormously for the question of whether experts get devalued. A huge driver of expert demand has always been the awareness of one’s own ignorance. I go to a doctor because I know I do not know what this symptom means. I hire a lawyer because I know the contract has traps I cannot see. If people feel they understand — even when they do not — they consult experts less. Not because the experts became less valuable, but because the felt need for them evaporated. The correction comes later, when reality punishes the mistake. It always does. The question is how much damage accumulates before it arrives.
III. What Gets Commoditized
Not all expertise is the same, and cheap answers do not erode all of it equally. The clearest way to see the shift is to ask, for each layer of what experts do, whether an LLM can substitute for it.
Knowing things — definitions, summaries, comparisons, “tell me about X” — is almost fully commoditized. An LLM does this at the level of a competent generalist, instantly, for free, at any hour. The expert who was primarily valued for possessing information that others lacked is in the same position as the encyclopedia salesman after Wikipedia: not wrong, just redundant.
Explaining things — personalized instruction, worked examples, patient re-explanation — is substantially commoditized. An LLM is, for many topics, a better first-pass tutor than an overworked human teacher. It never loses patience, it adapts to your level, and it is available at three in the morning.
Synthesizing — pulling together ideas from multiple domains, comparing frameworks, generating options — is partially commoditized. The LLM is fast and wide-ranging, but its synthesis is only as good as its training data and its ability to represent genuine uncertainty, which, as we have seen, is not great.
But three things remain stubbornly expensive, and they are the things that define expertise in its surviving form.
Verification: knowing what is true at the edges, in the hard cases, in the situations where the textbook answer is wrong or outdated. This requires not just knowledge but judgment — the ability to distinguish “this is the kind of claim that is probably right” from “this is the kind of claim that requires checking.” LLMs cannot make this distinction because they cannot tell the difference between what they know well and what they are guessing at. Experts can. That distinction is worth paying for.
Judgment under uncertainty: making decisions when the data is incomplete, the stakes are high, and someone has to commit. An LLM can generate a list of options with pros and cons. It cannot sit across the table from you, weigh the things that cannot be quantified, and say “given everything, I would do this — and here is why.” The expert is valuable precisely because they are on the hook.
Generating new knowledge: running experiments, doing fieldwork, developing novel theory, producing evidence that did not previously exist. LLMs recombine existing text. They do not produce new observations about the world. The frontier of knowledge remains a human enterprise, and the ability to cheaply synthesize existing knowledge makes the ability to produce new knowledge more valuable, not less.
The pattern is clean: commoditize the lower layers, and the upper layers get more expensive. Experts get cheaper as content. They get more expensive as assurance.
IV. The Counter-Case, Honestly
There is a strong argument that experts become more valuable in a world of cheap answers, and it deserves to be stated rather than buried as a caveat.
LLMs flood the zone with plausible-sounding text, analysis, and opinion. In a world drowning in content, the demand for filters — for someone who can tell you what to trust, what matters, and what is noise — goes up, not down. Experts are those filters. Their scarcity value increases precisely because the thing they are filtering has become abundant.
Experts who use LLMs raise their own throughput dramatically, widening the gap between experts and non-experts rather than closing it. A good lawyer with an AI drafting tool is not replaced by the AI — they become a faster, more prolific good lawyer. The junior associate who previously competed on volume is the one who gets squeezed. The senior partner who provides judgment and client trust becomes more leveraged.
And as reliance on AI grows, failures become more salient. Every high-profile hallucination, every confidently wrong medical suggestion, every legal filing that cites cases that do not exist pushes institutions toward credentialing, provenance, and audit trails. The “expert-verified” label becomes a premium product — the organic certification of the knowledge economy.
There is also early evidence that people use AI heavily to get oriented in new topics, which sometimes increases rather than decreases their appetite for deeper material. The student who uses an LLM to get their bearings may be more likely to seek out serious sources afterward, not less. Cheap answers can be the gateway drug to expensive understanding.
But honesty requires naming the scenario where this optimistic case fails. It is not the scenario where experts become irrelevant. It is the scenario where the correction never comes — not because reality stops punishing bad understanding, but because people stop connecting the punishment to the cause. If the gap between a fluent answer and real comprehension is invisible to the person experiencing it, they never learn to demand better. They do not reject the expert. They simply never discover they need one. The illusion of understanding does not announce itself. It feels exactly like the real thing, which is precisely what makes it dangerous.
Which scenario dominates is not a technology question. It is a question about whether cheap understanding generates enough visible failures to create demand for the real thing — or whether it is good enough, often enough, that most people never notice the difference.
V. The Honest Reckoning
The Encyclopédie did not destroy expertise. It destroyed the information monopoly that a particular class of experts had enjoyed. What replaced it was not a world without experts but a world where expertise was valued for different reasons — for judgment rather than memory, for accountability rather than access, for the ability to generate new knowledge rather than to recite old knowledge.
We are living through the same transition at a speed Diderot could not have imagined. LLMs have made the “explain it to me” layer of expertise nearly free. This is, on balance, a remarkable good. More people will understand more things at a surface level than at any previous point in history. The democratization of explanation is real, and it matters.
But the same force that democratizes explanation also industrializes the illusion of understanding. People will feel they know things they do not know. They will make decisions — medical, legal, financial, personal — on the basis of fluent summaries that omit the caveats that matter most. And the correction will come, as it always does, from reality — from the moment when the confident answer turns out to be wrong and someone has to deal with the consequences.
This is where the expert re-enters, not as a source of information but as a source of warranted belief. Not as someone who tells you things, but as someone who can be held responsible for the things they tell you. The distinction sounds small. Economically, it is enormous.
Diderot wanted to put all knowledge in a book so that ordinary people could think for themselves. He succeeded beyond his wildest projections — not with a book, but with a machine that thinks for them, fluently, confidently, and without any skin in the game. The experts he tried to displace are still here. They have just moved upstairs — from the ground floor of explanation to the penthouse of accountability.
The answers have never been cheaper. Knowing which ones to trust has never been more expensive.