islam.ninja

Attention Isn't Enough


The AI economy runs on three scarce resources. We’ve only been pricing one.

You probably mass-produced something today. A message drafted by AI before you tweaked two words and hit send. A summary of a document you didn’t read, written by a system that didn’t understand it, forwarded to people who won’t either.

None of this felt like mass production. It felt like productivity. And that’s the trick — because the same force that makes you more productive makes everyone else more productive too, which means the total volume of polished, plausible, ready-to-consume output flooding the world just increased by an order of magnitude, and the thing it’s all competing for — your attention — didn’t grow at all.

For twenty years, the internet’s economic logic has been simple: information is abundant, attention is scarce, and whoever captures attention wins. That logic built the algorithmic feed, the creator economy, programmatic advertising, and the entire infrastructure of modern media. It was a good model. It explained a lot. It’s no longer sufficient.


Plausible and True Used to Mean the Same Thing

Here is the change that matters most, and it has nothing to do with productivity metrics.

A well-written article used to carry an implicit guarantee. Someone invested time, expertise, and reputation in producing this — so it’s probably worth your attention. The quality of the output was a proxy for the reliability of the source. Not a perfect proxy, but a functional one. The effort required to produce something polished was high enough that polish itself was a signal.

AI broke that signal. The cost of producing plausible text has decoupled from the cost of producing true text, and everyone can feel it even if they can’t name it. A flawless paragraph could have been written by an expert who spent a week researching, or by someone who typed three words into a prompt and didn’t check the output. The paragraph looks the same either way. The investment behind it doesn’t.

This is not a minor calibration problem. It’s a structural break in how information markets work. When polish was expensive, attention allocation was rough but functional: you could skim, assess quality, and make decent bets about what deserved deeper engagement. Now polish is free, and the signal it used to carry is gone. You’re pattern-matching on a pattern that no longer correlates with what you’re trying to detect.

The internet already had a noise problem. AI gave it a plausibility problem. And plausibility is harder to filter than noise, because noise is obviously bad and plausibility looks fine until you check — and checking is exactly the kind of slow, effortful work that attention scarcity makes people skip.


The Flood From the Inside

Most commentary about AI-generated content focuses on the public internet — the slop in social feeds, the fake news sites, the SEO farms. That’s real, and it’s ugly. But the less-discussed version of the same problem is happening inside organizations.

When creation is cheap, people create more. More memos, more decks, more emails, more “quick summaries,” more documents generated to look like work was done. Researchers have a name for this now: workslop.

AI saves you twenty minutes writing a document that then costs five other people fifteen minutes each to read, evaluate, and determine whether any of it is trustworthy. Twenty minutes saved, seventy-five minutes spent: the net effect on the organization’s attention is a loss.

This is important because it means AI’s productivity story has a second chapter that most boosters aren’t reading. Yes, individual task completion gets faster. Yes, support agents resolve more tickets per hour. Yes, junior employees close the gap with senior ones on first-draft quality. All of that is real and measurable.

But at the system level, the volume of output that demands attention grows faster than the attention supply, and the reliability of that output becomes harder to assess. The individual gets more productive. The organization gets noisier. These can both be true at the same time, and they are.


What Actually Becomes Scarce

If content is effectively unlimited and attention is still finite, then the binding constraint has moved. It’s no longer “who gets noticed?” That’s still necessary, but it’s no longer sufficient. The question that matters now is: who gets believed?

The answer is: whoever can demonstrate provenance, consistency, and accountability.

Provenance means you can verify where something came from and what happened to it. This used to be trivially easy — if it was published in the New York Times, it came from the New York Times. In a world of synthetic media, remixed content, and AI-generated everything, origin is no longer obvious. The fact that entire technical standards bodies now exist to build chain-of-custody infrastructure for digital content tells you how far the problem has gone. You don’t build expensive plumbing for problems that aren’t real.
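The core idea behind that chain-of-custody plumbing can be shown in miniature. This is a toy sketch, not any real standard’s API: a publisher signs a digest of the content, and a reader checks that the bytes they received are the bytes the publisher vouched for. Real systems use public-key signatures and rich manifests; the shared secret and all names here are illustrative.

```python
import hashlib
import hmac

SECRET = b"publisher-signing-key"  # stand-in for a real private signing key


def sign(content: bytes) -> str:
    """Publisher attaches this tag when releasing content."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(SECRET, digest, hashlib.sha256).hexdigest()


def verify(content: bytes, tag: str) -> bool:
    """Reader checks that the bytes match what was actually signed."""
    return hmac.compare_digest(sign(content), tag)


article = b"A flawless paragraph."
tag = sign(article)

assert verify(article, tag)                     # untouched: provenance holds
assert not verify(article + b" [edited]", tag)  # altered: provenance breaks
```

The point of the sketch is the asymmetry: producing a plausible copy is free, but producing a copy that passes verification requires access to the signer’s key — which is exactly the property that polish used to provide and no longer does.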

Consistency means the signal holds up over time. One good output means nothing when good outputs are free. A track record of reliable judgment — publicly visible, independently verifiable — becomes the asset that separates trusted sources from plausible ones. This is why “identity + audience” is becoming more valuable than “content volume.” Content is the commodity. The person behind it is the scarce resource, but only if their history is legible.

Accountability means someone is on the hook. When the cost of being wrong approaches zero — because an AI generated it, not you; because you just forwarded it, you didn’t write it; because everyone is doing it — then willingness to stake your name on a claim becomes a genuine differentiator. Accountability is expensive precisely because consequences are real, and that expense is what makes it a credible signal.

Together, these three things amount to a single concept: credibility. And credibility, not attention, is the scarce resource that the AI economy is reorganizing around. Attention gets you in the room. Credibility determines whether anyone acts on what you say once you’re there.


The Judgment Paradox

There’s a strange consequence of all this that’s worth sitting with — one I’ve explored from different angles in Cheap Thinking and When the Code Writes Itself. Those essays asked what happens to craft when execution gets cheap. This one asks what happens to markets.

As knowledge work gets cheaper, judgment becomes the job. Everything that can be templated, summarized, first-drafted, reformatted, translated, or explained can now be done by a machine for approximately nothing. The supply of “explain this,” “rewrite that,” “draft a version of the other thing” has gone to infinity. The market-clearing price for generic cognitive labor is collapsing — not to zero, not yet, but the direction is unmistakable and the pace is accelerating.

What remains expensive is knowing which question to ask, recognizing what “good” looks like, deciding what to ignore, and being willing to commit to a direction when the information is ambiguous. None of these are things AI does well, and all of them are things that most knowledge workers were rarely asked to do explicitly, because they were bundled with the execution work that filled their days.

The unbundling is the disruption. When AI strips away the execution layer, what’s left is the judgment layer — and a lot of people are discovering that judgment was a smaller portion of their job than they thought. The ones who were mostly executing will need to find new ways to be valuable. The ones who were mostly judging just got leverage they’ve never had before.

This is not equally distributed, and it won’t be. The returns to good judgment are increasing at the same time that the returns to competent execution are decreasing, which means the gap between “person who knows what to build” and “person who builds it” is widening. AI is a progressive tax on the mediocre and a subsidy for the decisive. That’s an uncomfortable sentence, and it’s probably the most important economic sentence in this essay.


When the Machines Start Acting

But judgment assumes a human is still in the loop. The newest generation of AI doesn’t.

Everything so far has been about AI that generates — text, images, code, slides, the raw material of knowledge work. But the most important shift isn’t about generation. It’s about action.

In the last year, AI has crossed from “systems that produce content” to “systems that do things.” Book flights. Send messages. Monitor inboxes. Order groceries. File forms. Execute purchases. Negotiate prices. Not in demos — in production, for real users, with real consequences.

This is the break that matters, because it disrupts the foundational assumption of the entire attention economy: that the participants are humans. When content competed for human attention, the constraint was mental focus. You had sixteen waking hours, one stream of consciousness, and a finite capacity for processing. Everything — advertising, media, platforms, the creator economy — organized itself around capturing some share of that budget.

But an AI agent doesn’t need to pay attention. It doesn’t get tired, distracted, or bored. It can browse, click, purchase, post, and interact at machine speed without any human in the loop. When agents start acting on behalf of users, the scarce resource is no longer “did a human notice?” It’s “does this agent have permission to act, and should it?”

The scarcity shifts from attention to agency: authenticated permissions, tool access, payment credentials, and trusted action pathways. Who gets to act, on whose behalf, with what authority, under what constraints. These are the questions the next economy will be organized around, and almost nobody is thinking about them carefully yet.
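One way to picture “who gets to act, on whose behalf, under what constraints” is a scoped-permission check gating every agent action. The sketch below is purely illustrative — the `AgentGrant` structure, scope names, and spending cap are all invented for this example, not drawn from any real framework:

```python
from dataclasses import dataclass


@dataclass
class AgentGrant:
    """Permissions a user delegates to an agent (all fields illustrative)."""
    scopes: set[str]       # actions the agent is allowed to take
    spend_cap_cents: int   # hard ceiling on cumulative purchases
    spent_cents: int = 0

    def authorize(self, action: str, cost_cents: int = 0) -> bool:
        """Gate every action: must be in scope and within the spending cap."""
        if action not in self.scopes:
            return False
        if self.spent_cents + cost_cents > self.spend_cap_cents:
            return False
        self.spent_cents += cost_cents
        return True


grant = AgentGrant(scopes={"read_inbox", "order_groceries"}, spend_cap_cents=5000)

assert grant.authorize("read_inbox")                            # in scope, free
assert grant.authorize("order_groceries", cost_cents=4200)      # within cap
assert not grant.authorize("order_groceries", cost_cents=4200)  # cap exceeded
assert not grant.authorize("send_money", cost_cents=1)          # never granted
```

Even this toy version makes the shape of the problem visible: the interesting design questions are not in the check itself but in who issues the grant, how it is revoked, and what happens when the agent is compromised — which is exactly where the real infrastructure is still missing.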

The security implications alone should give people pause. When an agent can read files, run commands, send messages, and move money, a compromise isn’t a data breach — it’s an action breach. Someone doesn’t just see the wrong thing; a machine does the wrong thing, at machine speed, at machine scale. The researchers working on this problem are clear-eyed about how hard it is to secure always-on autonomous systems, and how far we are from solving it.

And there’s a deeper problem. If a growing share of web traffic, clicks, views, and transactions is generated by agents rather than humans, then the metrics the attention economy runs on — the views, the engagement, the click-through rates — increasingly measure machine activity, not human interest. The currency isn’t just being debased by synthetic content. It’s being debased by synthetic attention. At some point, a meaningful fraction of the economy’s visible activity is machines interacting with machines, and the human whose preferences were supposedly being served is somewhere else entirely, doing something the metrics will never capture.


The Three Scarcities

So here is the picture, as cleanly as I can draw it.

The attention economy didn’t end. It grew a basement.

Attention is still the top layer — still scarce, still zero-sum for humans, still the entry point for everything. Without it, nothing else follows. AI has massively increased the supply of things competing for attention without expanding the resource itself.

Beneath attention sits credibility — the ability to convert someone’s noticing into someone’s believing. When output is cheap and provenance is unclear, credibility is where value concentrates. It’s built slowly, on consistency, accountability, and verifiable track records. It cannot be generated on demand.

Beneath credibility sits agency — the ability to turn belief into action, increasingly without the human’s continuous involvement. As AI systems gain the power to act, the scarce resource becomes the right to execute: permissions, credentials, trust frameworks, and safe orchestration.

Each layer depends on the one above it. You need attention before you can establish credibility. You need credibility before anyone grants you agency. And at every layer, AI is simultaneously creating value and destroying the signals that used to make allocation work.

The attention economy trained us to ask: who gets noticed?

The credibility economy asks: who gets believed?

The agency economy will ask: who gets to act?

We’re living through the transition between the first question and the second. The third is arriving while the permission systems, liability frameworks, and security models it requires are still sketches on a whiteboard.


