For thirty years, we built the web for one species of reader. Now a second species has arrived, and it doesn’t have eyes.
HTML was never really a language for encoding meaning. It was a language for encoding appearance — and for three decades, that was fine. The reader had eyes. The reader had a mouse. The reader understood that the thing in the sidebar wasn’t the content. But appearance was never just aesthetic. It was economic. You visited a page, you saw an ad, you lingered on a design that built trust, you maybe clicked a button. The entire business model of the internet was wired through the assumption that a pair of human eyes would land on the page.
That assumption is breaking down. And what’s replacing it is stranger than most people realize.
The Second Species
In 2024, bot traffic surpassed human traffic on the web for the first time — though that headline includes every kind of bot, from spam crawlers to ad fraud scripts. The more interesting number is the subset growing fastest: AI agents. Their share of web requests has been climbing all year, and unlike traditional bots, they aren’t just harvesting data. They’re reading on behalf of people.
When you ask an AI a question and it pulls a live answer from the web, that’s an agent reading a page the way you once would have. When a coding assistant requests documentation to debug your project, that’s an agent doing research in real time. The websites are still getting read. Just not by the person who wanted the information.
These agents don’t care about your dropdown menus. They don’t need your sticky header. What they need is meaning — stripped of decoration, delivered efficiently, structured enough to reason over. They want the content without the experience. The letter without the envelope.
Which is a problem, because on the web, the envelope is where the money is. The ads, the layout, the conversion funnel — all of it lives in the visual experience the agent bypasses entirely.
Translated in Place
A typical web page is roughly 20% content and 80% packaging. Navigation, scripts, ads, modals, footers — all essential for a human visual experience. All dead weight for a machine. An agent processing a web page has to do something like what your eyes do: ignore most of what’s there. Except it doesn’t have effortless pattern recognition. It parses the entire document, guesses what’s content and what’s decoration, and burns through its working memory on cookie consent popups and footer links.
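To make that concrete, here is a minimal sketch of that stripping step, written against the real jsdom, @mozilla/readability, and turndown npm packages. The URL is a placeholder, and no actual agent is obliged to work this way; the point is only how much of a page disappears once you keep nothing but the content.

```typescript
// Illustrative only: fetch a page, keep what a readability pass considers
// content, convert it to markdown, and report how much survived.
import { JSDOM } from "jsdom";
import { Readability } from "@mozilla/readability";
import TurndownService from "turndown";

async function extractAsMarkdown(url: string): Promise<string> {
  const html = await (await fetch(url)).text();

  // Parse the full document, then let Readability guess which parts are
  // content and which are navigation, ads, modals, and footers.
  const dom = new JSDOM(html, { url });
  const article = new Readability(dom.window.document).parse();
  if (!article) throw new Error("no readable content found");

  // Convert the surviving HTML fragment to markdown.
  const markdown = new TurndownService().turndown(article.content ?? "");

  const kept = ((markdown.length / html.length) * 100).toFixed(1);
  console.log(`kept ${markdown.length} of ${html.length} characters (${kept}%)`);
  return markdown;
}

extractAsMarkdown("https://example.com/some-article"); // placeholder URL
```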
The obvious fix would be to build a separate machine-readable web. Structured APIs. Clean data feeds. Something purpose-built. But that’s not what’s happening. Instead, the web is being translated in place.
The pattern looks like this: a machine requests a page and, instead of receiving the full visual version, gets back a clean text version of the same content. Same address. Same information. Just stripped of everything a sighted reader would need and a machine reader wouldn’t. Some sites publish an llms.txt file — a kind of table of contents written for machines — that points agents to the most important content in a format they can process efficiently. Others convert pages on the fly: when an agent asks for a page in plain text, the server translates the HTML to markdown at the moment of the request.
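The llms.txt proposal is itself deliberately modest: a plain markdown file at the site root with a title, a one-line summary, and curated lists of links to machine-friendly versions of the pages that matter. The names and URLs below are invented, but a minimal file looks roughly like this:

```markdown
# Example Docs

> Developer documentation for the Example API, with markdown versions of every page.

## Docs
- [Quickstart](https://docs.example.com/quickstart.md): install, authenticate, make a first request
- [API reference](https://docs.example.com/reference.md): endpoints, parameters, error codes

## Optional
- [Changelog](https://docs.example.com/changelog.md): release history, for agents with room to spare
```

An agent that finds this file can skip the crawl entirely and fetch two or three clean documents instead.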
Cloudflare recently shipped this as a standard feature. Any AI agent can now request pages across its network in markdown, and the infrastructure will convert and serve them automatically. Human visitors see the same page they always have. No one had to build anything new. The translation just happens, silently, at the layer between the server and the reader. Cloudflare serves roughly a fifth of all websites. This isn't a niche experiment.
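Mechanically, the on-the-fly version is ordinary content negotiation: the reader says which representation it wants, and the server, or the CDN in front of it, answers in that dialect. The sketch below is a toy Node server using the turndown package, not Cloudflare's actual implementation, and it assumes the agent signals its preference with an Accept: text/markdown header, which is one convention among several.

```typescript
// Toy translation layer: same URL, two representations.
// Not Cloudflare's implementation; purely illustrative.
import http from "node:http";
import TurndownService from "turndown";

const turndown = new TurndownService();

// Stand-in for whatever the origin would normally render.
const html = `<html><body>
  <nav>Home | Products | Pricing | About</nav>
  <article><h1>Hello</h1><p>The actual content lives here.</p></article>
  <footer>© Example Inc. Cookie settings. Careers.</footer>
</body></html>`;

http.createServer((req, res) => {
  const wantsMarkdown = (req.headers.accept ?? "").includes("text/markdown");

  if (wantsMarkdown) {
    // Machine reader: translate the page and hand back plain markdown.
    res.writeHead(200, { "Content-Type": "text/markdown; charset=utf-8" });
    res.end(turndown.turndown(html));
  } else {
    // Human reader: serve the page exactly as before.
    res.writeHead(200, { "Content-Type": "text/html; charset=utf-8" });
    res.end(html);
  }
}).listen(8080);
```

A real translation layer would also strip the nav and footer before converting, readability-style; the point here is only that one address can answer in two formats depending on who is asking. Request the page with an Accept: text/markdown header and you get the markdown; a browser gets the HTML it always did.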
We’re not building a second web. We’re installing a translation layer on top of the existing one. And that sounds elegant until you notice what the translation leaves out.
Content Without the Deal
Every translation is an interpretation. A set of choices about what matters and what doesn’t.
When a page is converted to markdown, the navigation goes. Fair enough — an agent doesn’t need it. But the visual hierarchy goes too. The carefully designed reading experience. The way a human eye was guided from headline to subtext to call-to-action. The persuasion goes. What’s left is the semantic content, clean and extractable, perfectly optimized for an entity that has no relationship with your brand, no sense of your design language, and no reason to ever send a human your way.
Every website is a deal: I give you content, you give me attention. Attention I can convert to ad revenue, or brand affinity, or a signup, or a sale. The visual web — the one built for eyes — is where this deal gets made. The markdown version is the content with the deal removed.
The agents aren’t stealing content. They’re just reading it in a format where nobody gets paid.
The industry’s response has been to attach permissions. Signals that travel alongside the machine-readable content, declaring how it may be used — for training, for search, for real-time answers. Publishers can set their own rules. This is the successor to robots.txt, the decades-old file that tells bots what they can and can’t access, and it inherits the same fatal flaw: compliance is voluntary. Research already shows that AI crawlers ignore access rules roughly a third of the time. Some of the most active bots ignore them nearly half the time.
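What those signals look like on the wire is still unsettled. One concrete direction simply extends robots.txt: alongside the old per-crawler allow and disallow rules, the file declares what the content may be used for. The file below is illustrative, loosely in the style of the Content Signals proposal; the exact field names differ between proposals, and none of them are enforced by anything.

```
# Old-style access control: per-crawler, voluntary.
User-agent: GPTBot
Disallow: /premium/

# Newer usage signals: fine for search and live answers, not for training.
# (Illustrative syntax; these conventions are proposals, not standards.)
User-agent: *
Content-Signal: search=yes, ai-input=yes, ai-train=no
Allow: /
```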
So publishers build machine-readable front doors while the most prolific machine visitors climb through the windows. The permission framework isn’t a governance solution. It’s a prayer formatted as a protocol.
The real leverage publishers had was never a policy file. It was the page itself — the fact that you had to visit to get the content. The visual web was a tollbooth. Not because anyone designed it that way, but because seeing the content meant seeing everything around it. The machine-readable web doesn’t just remove the toll. It removes the road. The agent never arrives at your site in any meaningful sense. It receives a text extraction and moves on.
Making your site agent-friendly doesn’t just accommodate this. It optimizes for it. You’re packaging the goods for a customer who will never walk into your store.
Two Economies, One Revenue Model
The web is splitting into two economies, and only one of them has a revenue model.
In the first, humans visit websites. They see ads. They build intent. They convert. This is the economy that funds almost everything online, and it's shrinking. Human visits are declining. Click-through rates from AI platforms are falling. Not dramatically yet, but the direction is clear and the pace is quickening.
In the second economy, agents consume content on behalf of humans. The content is just as valuable — maybe more, because the agent is extracting the exact piece that matters. But the value capture mechanisms don’t exist yet. There’s no ad impression in a markdown response. There’s no conversion funnel when a machine reads your page. The content fuels the agent’s answer, the human gets what they needed, and the publisher gets a log entry.
This new infrastructure is being built with tools and logic from the old economy. Publishers are investing in machine readability using the same reasoning they once applied to search engine optimization: if you don’t optimize, you’ll be invisible. But search optimization at least delivered traffic. Agent optimization delivers legibility — you become a source the machine can read cleanly, cite properly, maybe even prefer. And there’s a real case for that mattering: if agents start citing sources, as some already do, the legible publisher might become the preferred one. But that advantage is denominated in a currency no one accepts yet — attribution without visitation, credit without commerce.
The bet publishers are making is that this currency will eventually convert to something real. The honest version of that bet is: we don’t know if it will, but invisibility is worse. That’s not a strategy. It’s a prayer with better branding.
The Confession
The agent-friendly web isn’t a feature. It’s a confession.
It’s the web admitting that its most important reader may no longer be human. That the thirty-year deal — content in exchange for attention — is being renegotiated by a party that doesn’t experience attention and can’t be charged for it. That the entire visual apparatus we built around human perception — the thing that made the web feel like a place — is becoming a legacy layer that a growing share of traffic would prefer to bypass entirely.
People will say micropayments or revenue sharing from AI platforms will fix this. They won’t — or at least, not in any form currently proposed. Micropayments have been the web’s perpetual almost-solution for twenty-five years, failing every time because the transaction costs exceed the value of any individual page view. Revenue sharing requires AI platforms to voluntarily reduce their margins to compensate sources they can already access for free. The economic logic isn’t there.

The early attempts to solve this are telling. Anna’s Archive — a piracy site, of all things — recently published an llms.txt file that welcomes AI agents, explains how to bulk-download its collection efficiently, and then asks them to donate via cryptocurrency. If the agent can’t pay directly, it’s encouraged to persuade its human to do so. It reads like satire, but the infrastructure is real: autonomous agents with crypto wallets already exist, and the request is formatted for machine consumption, sitting on the open web, waiting. The first entity to seriously attempt “content generates value from a machine reader” is a pirate library passing the hat to robots. That’s where we are.

What might eventually work at scale is something we can’t clearly see yet, because the problem it needs to solve — how content generates value without a human visit — has no precedent. What’s clear is that the current arrangement, where publishers provide the raw material for the agent economy and receive nothing in return, isn’t a transitional phase. It’s a subsidy. And subsidies that no one agreed to don’t tend to last.
What we’re watching is the early stage of something the web has never done before: adapting not to a new kind of human reader — not mobile users, not screen reader users, not users in different languages — but to a fundamentally non-human one. A reader that has no loyalty, no visual experience, no concept of brand, and an infinite appetite for clean text.
We’re translating the web for an audience that arrived uninvited, grew faster than anyone expected, and isn’t leaving. The translation is going well. The terms are going badly.