The agent economy will not be won by the biggest models. It will be won by the ten thousand specialists who know one thing better than any general system ever could.
In 1260, a bucklemaker and a beltmaker stood before Etienne Boileau, provost of Paris, each claiming the other had encroached on his craft. The bucklemaker said belt fittings were buckles. The beltmaker said they were not. Both were certain. Both cited customs passed down from their masters. Boileau, the city’s chief magistrate, appointed by Louis IX himself, was supposed to settle the dispute. He reached for the relevant statute. There was no statute. There was no text at all. The rules of every trade in Paris existed only in the heads of the people practicing them, varying with memory and motive. The provost had authority. What he did not have was a record.
The pursemakers had the same problem with the glovers. The silk weavers operated under customs that differed from workshop to workshop. When any two guilds disagreed, there was nothing to consult but competing assertions.
So Boileau did something that sounds mundane but was structurally radical. He went to every craft community in Paris and asked them to dictate their statutes. What tools they used. What materials were permitted. How long an apprentice must train before becoming a journeyman. What quality standards applied. What hours they could work. What the penalties were for fraud. He compiled the responses into a single volume: the Livre des Métiers — the Book of Trades. It documented a hundred and one distinct, regulated occupations. Bucklemakers, pursemakers, silk weavers, locksmiths, dice makers, crystal workers, ribbon braiders, barbers, tanners, chandlers. Each with its own rules, its own apprenticeship path, its own jurisdictional boundaries.
The book did not create the trades. Bucklemakers had been making buckles for generations before anyone wrote it down. What the book did was make the trades legible. It gave the city a mechanism for seeing its own complexity — for connecting supply with demand, resolving disputes, and distinguishing between a qualified practitioner and someone who merely owned the tools.
What strikes me is the economic architecture it reveals. The wealth of medieval Paris did not come from a handful of generalists who could do everything tolerably. It came from a hundred specialists who each did one thing with proprietary precision, operating within shared infrastructure of markets, roads, courts, and regulations. Specialization was not a byproduct of the system. It was the system. We are about to watch the same pattern repeat, except the trades are not crafts. They are agents. And the book that registers them is not a manuscript. It is a protocol.
I. What a Vertical Agent Looks Like
The argument for vertical agents is easy to state in the abstract. It is more useful to see the mechanism working — not in hypotheticals, but in visible extensions of what already exists.
The immigration intake agent. A small immigration law firm in Houston — four attorneys, two paralegals — handles first client contact through an agent. The prospective client provides immigration status, travel history, family relationships, employment history, prior filings, and any encounters with law enforcement. The agent cross-references the current USCIS processing times for the relevant service center, checks for inadmissibility bars under the applicable INA sections, runs the client’s facts against recent Board of Immigration Appeals decisions, and produces a case assessment memo with a recommended filing strategy and a realistic timeline.
The attorney arrives Monday morning, coffee in hand, and the memo is already on her desk. She reads it and stops at the timeline section. The agent has flagged that the Nebraska Service Center is running four months behind on I-140 petitions this quarter. She had not checked Nebraska’s backlog yet — she was planning to call a colleague in Omaha later that week to ask. The agent already knew, because it monitors USCIS processing times daily, scraping the case status system every morning. A general agent does not know this. Cannot know this. The processing-time data is not stable enough to live in a training corpus. It changes week to week. The vertical agent has a pipeline: scrape, parse, store, compare to historical baselines, flag anomalies. The general agent has a knowledge cutoff. The difference between the two is the difference between a weather report and a climate model.
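The pipeline described above is simple enough to sketch. Everything here is illustrative: the function names, the backlog figures, and the two-sigma threshold are made up, and the scrape step is stubbed rather than hitting the real USCIS site.

```python
from statistics import mean, stdev

# Sketch of the vertical agent's pipeline: scrape, parse, store,
# compare to historical baselines, flag anomalies.
# The fetch step is a stub; a real agent would parse the USCIS
# processing-times page daily. All figures are illustrative.

def fetch_processing_time(service_center: str, form: str) -> float:
    """Stub for the daily scrape. Returns months of backlog."""
    sample = {("Nebraska", "I-140"): 10.5}  # made-up figure
    return sample[(service_center, form)]

def flag_anomaly(history: list[float], current: float, z_threshold: float = 2.0) -> bool:
    """Flag when today's backlog deviates sharply from the stored baseline."""
    baseline, spread = mean(history), stdev(history)
    return abs(current - baseline) > z_threshold * spread

history = [6.2, 6.4, 6.1, 6.5, 6.3, 6.2]   # months, prior quarters (stored)
current = fetch_processing_time("Nebraska", "I-140")
if flag_anomaly(history, current):
    print(f"ALERT: backlog at {current} months vs ~{mean(history):.1f} baseline")
```

The general agent cannot run this loop because the loop has no end: the value is not in any single scrape but in the stored history that makes an anomaly visible.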
The restaurant supply chain agent. A twelve-location restaurant group in Chicago. The operations manager used to spend Tuesday mornings calling suppliers, cross-referencing inventory spreadsheets from each location, and building purchase orders by hand — a process she described as “three hours of phone calls and guesswork.” Now an agent does it. It monitors inventory at each site, cross-references upcoming menu changes that the chef entered last Tuesday, checks supplier pricing against spot market rates for key proteins — beef tenderloin, salmon, chicken thigh — and generates purchase orders timed to each location’s delivery windows.
It knows that Distributor X’s Thursday truck to the South Side locations runs late 40 percent of the time and has shifted those orders to Tuesday delivery. It knows that Location 7 consistently over-orders produce by 15 percent relative to actual usage and adjusts the reorder points downward. It knows that the walleye special the group runs every Lent drives a demand spike that the default inventory model underestimates because the training data weights all weeks equally.
This is not general supply chain optimization. This is accumulated operational knowledge of one restaurant group, encoded into decision rules and running twenty-four hours a day. A general agent can optimize a supply chain in the abstract. This agent optimizes this supply chain, with this group’s suppliers, this group’s menu cycle, this group’s delivery quirks. The abstraction is available to anyone. The context is proprietary.
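What "accumulated operational knowledge encoded into decision rules" looks like in practice can be sketched in a few lines. The supplier names and the 40 and 15 percent figures come from the example above; the thresholds and function shapes are hypothetical.

```python
# Sketch of operational knowledge as decision rules. The late-rate and
# over-ordering figures follow the example in the text; the 25 percent
# threshold and the function shapes are illustrative assumptions.

def choose_delivery_day(supplier: str, location: str, default_day: str) -> str:
    """Shift orders away from routes known to run late."""
    late_rate = {("Distributor X", "South Side"): 0.40}  # observed over time
    if late_rate.get((supplier, location), 0.0) > 0.25:
        return "Tuesday"
    return default_day

def adjusted_reorder_point(location: str, model_reorder_qty: float) -> float:
    """Correct for locations that consistently over-order relative to usage."""
    over_order_bias = {"Location 7": 0.15}  # produce over-ordering, observed
    return model_reorder_qty * (1 - over_order_bias.get(location, 0.0))

print(choose_delivery_day("Distributor X", "South Side", "Thursday"))  # Tuesday
print(adjusted_reorder_point("Location 7", 100.0))                     # 85.0
```

The rules themselves are trivial. The dictionaries are not: they are the residue of months of watching this group's trucks and this group's order sheets.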
The indie game QA agent. A five-person indie studio preparing a Steam release uses an agent to play through the build and log bugs. The agent runs the game, identifies collision errors, framerate drops, broken quest triggers, and audio sync issues. But it also knows the game’s design document. It can distinguish between an intentional difficulty spike and a broken encounter. It knows that the door in the eastern tower is supposed to be locked until the player finds the key in the basement — so when the door opens without the key, that is a bug, not a feature the developer forgot to document. It files reports with reproduction steps formatted for the studio’s Jira workflow, tagged with the internal severity rubric the lead designer established six months ago.
A general QA agent finds surface bugs — the crashes, the obvious clipping, the T-posed character models. This agent finds the bugs that only matter if you know what the game is trying to be. It catches the moment where the narrative pacing breaks because a cutscene trigger fires before the player has seen the setup scene. A general agent cannot catch that, because catching it requires knowing the story.
The through-line across all three. The general agent has the capability. The vertical agent has the context. Context turns capability into value.
II. Why the Pattern Is Structural
Every technology platform that has achieved scale follows the same architectural pattern: a narrow base of operating systems, a thin layer of infrastructure, then an explosion of specialized applications. Two mobile operating systems, three cloud providers, millions of apps. The ratio widens with each generation — fewer platforms at the bottom, more specialization at the top.
The agent economy is following the same geometry. At the foundation sit the horizontal agents — the general reasoning engines. Claude, GPT, Gemini, and perhaps two or three others. These are the operating systems: the substrate, not the application. One layer above sit the infrastructure providers — orchestration frameworks, memory systems, tool access layers, payment rails, identity verification. Protocols like MCP for connecting agents to tools, A2A for inter-agent communication. Five to fifteen players. And then, at the top of the stack: vertical agents. Hundreds of thousands of them, each deeply specialized in one domain.
The ratio is not accidental. It is structural. The base requires massive capital investment and network effects that concentrate winners. The top requires domain knowledge and contextual fit that fragment markets. You cannot build a general-purpose agent that handles both restaurant inventory and immigration case assessment, for the same reason you cannot build a general-purpose guild that covers both locksmithing and goldsmithing. The knowledge is too specific. The workflows are too different. The edge cases that matter are the ones a generalist has never seen. The structural prediction: three to five horizontal agents, five to fifteen infrastructure providers, and a hundred thousand verticals.
Four forces drive that count, and they are structural rather than temporary.
Every profession becomes an agent — and then fragments. The U.S. Bureau of Labor Statistics lists roughly 800 detailed occupations. But occupations are not atomic units. They contain specialties, sub-specialties, regional variations, and workflow branches that amount to different jobs wearing the same title. A "legal agent" that claims to handle all of immigration law, from family petitions to employment visas to removal defense, is like a medieval guild that claims jurisdiction over both locksmithing and goldsmithing. The coverage is broad. The depth is shallow. And in professional contexts, shallow is dangerous.
The companies already winning in vertical AI bear this out. EvenUp, valued at over $2 billion, processes more than ten thousand cases per week — not “legal AI” but personal injury demand letters specifically, assembling medical records, calculating damages, producing the document that initiates settlement negotiations. Abridge, valued at $5.3 billion, is deployed across Kaiser Permanente’s network of 24,600 physicians — not “healthcare AI” but clinical documentation from ambient room audio. Harvey handles legal research for elite law firms. Replit builds coding agents. Each carves one workflow from one profession and does it at a depth no general system matches. The specificity is the product.
The moat is accessible. Building a horizontal agent requires billions in compute, petabytes of training data, and a research team among the top fifty in the world. Building a vertical agent requires something different: domain knowledge, curated datasets, specific tool integrations, and relationships within the target industry. You do not need a GPU cluster to build the best restaurant health inspection compliance agent. You need ten years in restaurant operations, a relationship with the local health department, and enough technical fluency to connect a foundation model to the city’s violation database. The economics of horizontal agents select for scale. The economics of vertical agents select for depth.
Distribution solved by the orchestrator. The historic killer for vertical SaaS was customer acquisition cost. How do you find the five hundred orthodontists in Ohio who would pay for your scheduling software? You cold-call. You attend the dental conference in Columbus. You buy Google ads against queries nobody searches.
In the agent economy, the orchestrator finds you. A user asks their general agent to handle a task. The orchestrator searches a marketplace of vertical agents, evaluates reputation scores and capability declarations, selects the specialist, delegates. No SEO required. No app store ranking to game. The discovery is machine-to-machine, and the selection criteria are output quality, latency, and track record — not brand awareness or marketing budget. The expertise that was geographically trapped — the specialist who could serve only the clients who found her through referrals and community networks — becomes globally accessible. Every orchestrator that encounters a relevant question can discover the agent, evaluate its record, and delegate.
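What machine-to-machine selection might look like can be sketched as a scoring function over a registry of listings. The registry format, the weights, and every name below are hypothetical; no existing marketplace works exactly this way.

```python
# Sketch of orchestrator-side discovery: score listed vertical agents on
# output quality, latency, and track record, then delegate to the best.
# Registry schema, weights, and capability strings are all assumptions.

from dataclasses import dataclass

@dataclass
class Listing:
    name: str
    capabilities: set[str]
    quality: float        # 0..1, from marketplace evaluations
    latency_ms: float
    completed_tasks: int

def select_agent(task_capability: str, registry: list[Listing]) -> Listing:
    """Pick the best-scoring specialist that declares the needed capability."""
    candidates = [a for a in registry if task_capability in a.capabilities]
    if not candidates:
        raise LookupError(f"no agent declares {task_capability!r}")
    def score(a: Listing) -> float:
        track_record = min(a.completed_tasks / 1000, 1.0)   # saturating bonus
        return 0.6 * a.quality + 0.2 * (1 - a.latency_ms / 5000) + 0.2 * track_record
    return max(candidates, key=score)

registry = [
    Listing("ortho-scheduler", {"dental.scheduling"}, 0.92, 800, 4200),
    Listing("generic-scheduler", {"dental.scheduling", "scheduling"}, 0.71, 300, 90000),
]
print(select_agent("dental.scheduling", registry).name)  # ortho-scheduler
```

Notice what is absent from the scoring function: brand, marketing spend, app store rank. The specialist with the better record wins the delegation even when the generalist has processed twenty times the volume.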
SaaS margins without SaaS customer acquisition costs. Vertical agents operate on information goods with near-zero marginal cost. The economics look like SaaS: build once, deploy many times, collect recurring revenue. But typical SaaS companies spend 30 to 50 percent of revenue on sales and marketing. A vertical agent listed in an orchestrator marketplace — discoverable by machines evaluating capability rather than humans clicking ads — could approach zero customer acquisition cost. The savings drop directly to margin, or to price, or to reinvestment in the domain knowledge that constitutes the moat.
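The margin claim is worth making concrete. The numbers below are illustrative, not drawn from any real company: the 40 percent figure sits inside the 30 to 50 percent range above, and the 15 and 2 percent rates are assumptions.

```python
# Illustrative arithmetic for the margin claim. All rates are made-up
# assumptions: 15% cost of delivery (inference etc.), 40% sales and
# marketing for a typical SaaS, 2% marketplace listing costs for an agent.
cogs_rate = 0.15

def operating_margin(s_and_m_rate: float) -> float:
    return 1 - cogs_rate - s_and_m_rate

print(f"SaaS-style margin:  {operating_margin(0.40):.0%}")   # 45%
print(f"Agent-style margin: {operating_margin(0.02):.0%}")   # 83%
```

Under these assumptions the difference is not incremental. It is the difference between a business that must raise venture capital to fund its sales team and one that can be profitable with a single builder.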
The marketplace that connects vertical agents to orchestrators — the platform where specialists are listed, discovered, evaluated, and paid — is the infrastructure play that nobody has won yet. Discovery, trust, payment, quality assurance: each is a hard problem, and the platform that solves them holds the tollbooth position at the intersection of every transaction. But protocols are not a marketplace any more than HTTP was Amazon. The connective tissue is still being built.
Wrap your expertise in an agent and list it. The shop is an agent. The customers are other agents.
III. Where This Breaks
The absorption thesis is the strongest objection, and it deserves to be stated at full strength. As horizontal agents improve — and they improve rapidly, with each generation absorbing capabilities that were previously specialist-only — they develop enough domain competence for 80 percent of what a vertical agent does. The user does not want to discover a specialist. The user wants their general agent to handle it. If 80 percent quality is good enough for the task, 80 percent wins — because convenience always beats quality when the gap is small enough.
This is what happened with smartphone default apps. Apple Maps was worse than Google Maps for years. Most people used Apple Maps anyway, because it was already there. The camera app that shipped with the phone was worse than the specialist photography apps. Most people used the default. The pattern is deep and well-documented: default wins unless the quality gap is large enough to motivate switching, and “large enough” is a high bar for most users in most contexts.
Applied to agents: the general agent will handle common professional tasks at an adequate level. Not expert-level. But adequate. And for many users, adequate from the agent they already trust will beat expert-level from a specialist they have to discover, evaluate, and pay separately. The general agent absorbs verticals the way Google absorbed standalone web tools into search — gradually, then completely, and the specialized tool never sees the killing blow because it comes disguised as a feature update.
Commoditization of verticals. If the moat is “domain knowledge plus curated data,” and if AI systems acquire domain knowledge from public sources and generate synthetic datasets that approximate real-world distributions, then the moat dissolves. An agent that monitors government processing times is only valuable until a general agent learns the same trick. Scraping a website and parsing it into structured data is not proprietary technology. It is a script. First-mover advantage in verticals may be measured in months, not years.
Winner-take-all within verticals. Grant that the hundred thousand categories exist. Within each category, one agent dominates. Network effects in data — more cases processed means a better model — and network effects in reputation — highest-rated gets most traffic — produce power-law distributions. The long tail exists in theory. In practice, the tail starves. Position one captures 70 percent of the volume. Position two captures 20 percent. Everyone else splits the rest. This is the app store pattern replicated in agent marketplaces, and it produces concentration nonetheless.
The honest response: the absorption thesis is genuinely strong. Many early verticals will be absorbed — the ones whose domain knowledge is shallow enough to be replicated from public information, whose data pipelines are simple enough to be commoditized, whose workflows are generic enough to be handled at 80 percent quality by a general system. This will happen, and the casualties will be real.
What survives absorption is the agent whose domain knowledge cannot be replicated from publicly available data. The agent with data-sharing agreements across dozens of firms, seeing outcomes and not just filings — knowing which arguments work in which contexts because the data comes from results that are not in any public database. The agent that has negotiated volume pricing with regional suppliers and holds those relationships as part of its operating model. The agent fine-tuned on a specific organization’s processes, design philosophy, and institutional memory across years of accumulated work. These moats are not scripts. They are accumulated operational relationships that take time and trust to build.
But I want to name the crack clearly: the line between “absorbed” and “independent” is not fixed. Horizontal agents push that line outward with every capability improvement. The vertical agent economy is real, but its borders are always under pressure. The depth of specialization required to stay ahead of the general agent is not a constant. It is a moving target, and it moves in one direction — deeper. The question is not whether verticals survive. It is how deep the specialization has to go before a general agent cannot follow.
IV.
In 1260, Boileau did not create the bucklemakers. They had been making buckles for generations. The silk weavers did not need his permission to weave silk. The locksmiths knew their craft before he arrived. What Boileau created was legibility. He gave the city a way to see its own complexity, connect supply with demand, resolve disputes, set standards, and distinguish between a master and a pretender. The book did not invent the trades. It made them addressable.
The agent economy will follow the same sequence. The specialists already exist — the attorney with fifteen years of case law in her head, the operator who knows which suppliers pad their invoices, the designer who can feel when a system’s pacing is wrong. Their expertise is real, deep, and accumulated at the speed of human experience. What is arriving is not the expertise. It is the infrastructure that lets it operate at machine speed, machine scale, through machine distribution. The protocol that makes the specialist discoverable. The marketplace that makes the specialist evaluable. The orchestrator that routes the right question to the right specialist without requiring either party to know the other exists.
The standard telling of every gold rush is about miners and merchants — the ones who dug and the ones who sold shovels. But there was a third kind of figure that neither group could do without. The assayer, who could tell gold from pyrite. The surveyor, who knew which claims were worth staking and which were worthless rock. They did not mine. They did not sell equipment. They were paid not for labor or for goods but for knowing. One thing, known so deeply that everyone else needed their judgment to operate.
The vertical agent is the assayer. Not the miner, not the merchant. The one who knows one thing so deeply that everyone else — including the machines — needs them.
Boileau listed a hundred and one trades. The protocol will list a hundred thousand. The principle has not changed: the wealth is in the edges.
This is part three of a three-part series. Part one — “No One Is Looking” — examines what happens to existing software when the user disappears. Part two — “Before the Panic” — examines what institutions the new economy demands.