On what technology makes abundant, what it leaves scarce, and how to tell which one you are
She has been writing code for eleven years. She knows the architecture of her company’s systems the way a surgeon knows anatomy — not just what connects to what, but why it was built that way, what would hurt if you cut it wrong. She can hold a complex distributed system in her head while she’s in the shower. That isn’t a metaphor. She actually solves problems while showering, surfaces them into the day already half-solved, hands them to her colleagues with the kind of confidence that comes from having earned it slowly, over a decade of being wrong in instructive ways.
Six months ago, something happened. Not a dramatic firing. Something quieter and harder to name. She started using AI tools for the first time and noticed — gradually, then suddenly — that a significant portion of what she did all day could be generated in seconds by something that had never taken a shower in its life.
She still has her job. Her judgment is still valued, her colleagues still come to her, her salary hasn’t changed. And yet. Something shifted in how the room reads her. Or maybe just in how she reads herself. She is, for the first time in her career, uncertain about what exactly she is worth — and uncertain, which is worse, about what exactly she is.
This essay is about that feeling. Not as a personal crisis but as an economic signal. Because that feeling — of having your value suddenly made uncertain, of the ground shifting beneath something you’d built your identity on — is what a technological redistribution feels like from the inside.
Technology tends to relocate value rather than destroy it.
That tendency is real — I trace it through the history of electrification in Cheap Thinking, and it holds well enough to treat as a working assumption. But “tends to” is doing serious work in that sentence. Treat it as a law and you get comfortable. And comfort is the thing this shift most reliably punishes.
Every major technology is, at its core, an abundance machine. It takes something scarce — and therefore precious — and makes it cheap. The printing press didn’t end writing. It ended the scribe. It made copying, the specific thing scribes were paid for, trivially inexpensive. Value didn’t vanish. It migrated. The new scarcity wasn’t reproduction — it was having something worth reproducing. The writer emerged from the ashes of the scribe.
The internet did the same thing to information. For most of human history, knowing things was genuinely expensive. Expertise took decades to accumulate, libraries were rare, the bottleneck was access. Then the bottleneck dissolved overnight. But the economy didn’t collapse — it reorganized around the new scarcity, which turned out to be human attention. There was suddenly too much to read, too much to decide about. The valuable thing shifted from producing information to filtering it. Entire industries — search, social media, newsletters, streaming — exist because someone read the new scarcity correctly before most people recognized it had changed.
The sharper question, in any technological shift, is never what can this do? It’s: what does this make abundant? And what does it leave scarce?
Now AI is making cognitive labor abundant. Not all of it, not perfectly — but directionally and unmistakably. The cost of generating a competent first draft, a functional piece of code, a market analysis, a legal summary, is collapsing. Which means the question presses again, harder than it has in decades. And this time, the answer to what does it leave scarce? is less obvious than it was with the printing press or the internet. The tendency might hold. It might not — and that uncertainty is where the real question lives.
This is where most analyses reach for a tidy list of the things AI can’t replicate — creativity, empathy, judgment — and then stop, reassured. I don’t trust that reassurance. The list is too convenient. It always seems to contain exactly the things the person writing it happens to do.
So let me try to be more precise.
What seems genuinely to survive — what AI fails to replicate in ways that matter — isn’t raw skill. It’s the things that skill was always secretly in service of. The things underneath.
Consider accountability. An AI system can produce a recommendation, but it cannot be wrong in the way a person can be wrong. It cannot feel the specific weight of having told a room full of people that the acquisition would work, and then watching it not work, and having to live with that, and carry it into the next recommendation, and be shaped by it. That weight is not a bug. It’s the source of a particular kind of trustworthiness that turns out to be irreplaceable in the domains where it matters most. When a doctor tells you what she thinks is wrong with you, part of what you’re paying for is the fact that she has been wrong before, in ways that cost her something, and has adjusted accordingly. Her judgment comes with history attached. Her history is inseparable from her credibility.
The objection writes itself: doctors also learn from textbooks. Medical students spend years processing case descriptions before they treat a live patient. What exactly distinguishes that from a model trained on the same cases at far greater scale?
The difference is what happens when consequences pass through a person rather than through a training pipeline. A resident who misdiagnoses and then sits with the family at midnight, who carries that conversation into her next shift and the one after, who now pauses at certain presentations in a way her colleagues have quietly come to read as rigor — she has processed something that can’t be captured in any document she might later write about it. The knowing is inseparable from what it cost to acquire. It lives in reflexes, in the quality of attention, in what she notices before she knows why she’s noticing it. Consequences that pass through a person don’t just update their beliefs. They restructure the person.
A model trained on case reports has access to the description of the consequence but not to the consequence’s effect on the knower. That isn’t a quantity-of-data problem. It’s a different category of thing — knowing-as-having-been-changed versus knowing-as-having-processed-information-about-change. In the domains where accountability matters most, those are not the same. This isn’t a limitation more data will fix. It’s structural.
This is why the developer’s situation is more nuanced than it first appears. She’s not being replaced. She’s being clarified. The AI is separating out the components of her expertise and assigning them different valuations. The part that was execution — the syntactical, the structural, the translating-of-problem-into-code — is being made cheaper. But the part that was judgment, accountability, and the hard-won knowledge of what not to do is not. If anything, it’s more visible now, because the execution is no longer obscuring it.
That visibility is uncomfortable. It requires her to make a claim she’d previously never had to make explicitly: this, specifically, is what I’m worth.
Here is where the pattern breaks.
Not every scarcity migrates. Sometimes abundance doesn’t relocate value — it eliminates it.
Stock photography is the case I keep coming back to. When AI image generation arrived, the redistribution story would predict that stock photographers would find their value migrating to some higher-order function — curation, perhaps, or concept development, or the kind of trust that comes from knowing a specific human took a specific photograph at a specific moment. Some of that has happened. But a significant portion of the stock photography market was simply destroyed. The value didn’t move to a new scarce thing. It evaporated, because it turned out what buyers wanted was cheap images, not photographers. The scarcity that had made stock photography valuable wasn’t a proxy for something deeper. It was the thing.
The scribe analogy flatters us. The stock photo case does not.
So the honest question — the one worth sitting with rather than rushing past — is this: is the thing I do actually in service of something deeper, or is it the thing itself that people want? The developer can probably give the first answer honestly. Her systems knowledge is in service of judgment, accountability, and institutional understanding that doesn’t transfer to a model. But a junior developer whose primary contribution is writing boilerplate CRUD operations? That’s closer to the stock photographer. The value wasn’t a proxy. It was the execution, full stop.
That’s the standard: judgment constituted by consequence, restructured by what it cost to be wrong. Which is exactly why this argument cuts deeper than the junior/senior framing suggests. The question isn’t whether someone is senior. It’s whether their seniority was actually built that way. And often, it wasn’t. Some of what gets called wisdom is pattern-matching that has been intellectualized — by the person holding it — into something weightier than it is.
Some senior developers have been stock photographers in scribe’s clothing for a decade, and the execution bottleneck was generous enough to let them get away with it. The AI doesn’t only threaten entry-level roles. It threatens the credibility of seniority itself in any domain where seniority was partly constructed from the execution gap — from being able to do the thing when doing the thing was the scarce part. The most uncomfortable application of this argument isn’t to the junior developer writing boilerplate. It’s to the senior developer whose accumulated “judgment” is mostly aesthetic preferences and institutional memory that was never genuinely tested, just never needed to be.
I don’t think we know yet, for most knowledge work, which side of that line we’re on. Anyone who tells you they do know is selling confidence they haven’t earned. The honest position is uncertainty, held carefully, while watching for signals.
Which brings me back to her, in the shower, not solving problems the way she used to.
When I said she’s uncertain about what she is worth, I was being slightly imprecise. What she’s actually uncertain about — and this is the sharper thing — is what she is. Her expertise didn’t just have economic value. It had ontological value. It was the story she told about herself to herself. When she said I’m a software engineer, she meant something specific: a person who can hold complexity, who has earned the right to be trusted with systems, whose decade of accumulated scar tissue makes her uniquely capable of certain things. That story was real. It still is. But its legibility has changed.
What AI has done, in a way that the economic framing keeps missing, is make her expertise legible in a way she didn’t choose. The components have been separated out. Some have been repriced. And that process of separation — of having the thing you thought you were disassembled and examined — is not merely financially threatening. It’s existentially disorienting. Not because the conclusion is necessarily bad, but because the examination was not invited.
There’s a further layer that’s harder to name. Professional identity isn’t assembled solely from capability. It’s assembled from capability plus the fact that others couldn’t easily see its components. The expert’s authority was never purely a product of what she could do — it was also a product of what others lacked the instruments to price separately. When you had to evaluate a senior engineer, you largely accepted the package, because disaggregating it was impractical. That opacity wasn’t fraud. It was just the normal condition of expertise operating in environments where the evaluation tools weren’t precise enough to see the seams.
AI changes the instruments. And changing the instruments doesn’t just reprice the components — it changes what it means to know what you are.
People build self-knowledge partly through what others believe about them. When the market’s assessment of your components shifts, it doesn’t only revise your income. It contests the self-story at the level of evidence. She knew she was worth something. Now she’s being asked to know what, specifically — and the answer is both more granular and less certain than the original story permitted.
This is the texture of anxiety that jobs-and-wages framing consistently fails to capture. People aren’t only worried about being replaced. They’re worried about being seen — evaluated, disaggregated, found to be partly what they thought and partly something cheaper — in a process they can’t opt out of and didn’t consent to.
The people who navigate these shifts well aren’t the ones who successfully defend the old scarcity. They’re the ones who get curious about the new one before the window closes. The scribes who became editors. But also — and this second part is almost always omitted — the scribes who simply stopped being scribes and discovered that the rest of what they were was enough to build on. The transition isn’t always about finding a clever reapplication of the same skill. Sometimes it’s about finding out which parts of yourself you’d been systematically undervaluing because the execution was doing so much of the visible work.
The deepest mistake, in moments like this, is treating the technology as the thing to respond to. It isn’t. The technology is the mechanism. The real event is the redistribution — and redistributions are hard to see because they arrive first as feelings. A vague unease. The sense that the old moves don’t land the same way. Confidence shifting in rooms without anyone announcing that it has.
The skill that survives disruption isn’t adaptability in the generic motivational-poster sense. It’s the willingness to ask, with genuine openness and without rushing to a reassuring answer, where has value gone? And then to follow that answer even when it leads somewhere inconvenient — including, sometimes, to the conclusion that your particular value hasn’t migrated. That it was the thing itself, not a proxy for something deeper.
That question is nearly impossible to ask honestly about yourself. The answers come pre-loaded.
It’s easier to assume you’re the scribe who became the editor. It’s harder to reckon with the possibility that you’re the stock photographer.
Harder still to sit with not knowing which one you are, and to keep watching for signals rather than reaching early for a verdict you can live with.
The question doesn’t have a permanent answer. But it always has a current one. And the current one is changing faster than most people are looking — including, sometimes, the person in the shower, waiting for a solution that used to come.