
The Last Bottleneck


What happens to open source when writing code is free but trusting it isn’t.

For decades, software has been built on a simple bargain: writing code is expensive, so we share it. One person builds a useful tool, publishes it, and ten thousand others use it instead of building their own. That’s the economic engine behind open source.

AI is breaking that bargain. Not by making shared code worse, but by making writing new code so cheap that the reason to share starts to erode.

A developer with Codex or Claude Code can build in minutes what used to take days. A non-technical founder can prompt their way to a working prototype before lunch. When production is nearly free, why learn someone else’s tool when you can generate exactly what you need?

This sounds like an open source death spiral. It isn’t. But something stranger is happening—and it starts with a bottleneck nobody was watching.

The Flip

Open source has always had two scarce resources: people who write code and people who review it. For most of its history, the first constraint dominated. Getting enough contributors to build and maintain a project was the hard part.

AI obliterates that constraint. What it doesn’t touch is the second one.

Reviewing code, deciding whether to trust it, understanding its security implications, maintaining it across years—these are judgment tasks that remain stubbornly expensive. And AI has made them harder, not easier, by flooding the zone with volume.

This isn’t theoretical. In early 2025, the maintainer of a widely used open source project publicly described being overwhelmed by AI-generated bug reports—plausible-sounding but wrong, each one burning hours of triage time. The Linux kernel team tightened submission rules after a wave of low-quality AI-assisted contributions. Multiple security bounty platforms started flagging or rejecting reports that showed obvious signs of AI generation without human verification.

The pattern is consistent: AI increases the supply of code while doing nothing to increase the supply of qualified attention. The bottleneck flips from production to judgment.

And when you follow that flip to its logical conclusion, the entire shape of open source rearranges.

The Layer Cake

Here’s the thing about throwaway code: it’s never fully throwaway. Even the most disposable prompt-generated script runs on something. It needs an operating system, a programming language, a web browser. It probably touches security or user authentication somewhere. It gets packaged, deployed, scanned for vulnerabilities.

So while AI might replace the need to adopt someone else’s software at the application layer, it intensifies dependence on everything beneath it. Operating systems, programming languages, web frameworks, security tools—all of it becomes more critical, not less.

Think of it as a layer cake. At the bottom, shared infrastructure stays shared and long-lived. AI changes nothing here except to increase demand. In the middle, frameworks persist but shift roles—they become the foundation AI builds on, rather than systems developers study and conform to. At the top, application code fragments into a million personal variants that never get shared publicly. And above even that, a new layer emerges: AI-made microtools, used once and forgotten, like Post-it notes that happen to be software.

Open source doesn’t shrink. It sinks. The gravity pulls value downward—toward infrastructure, toward the foundations, toward the layers that can’t be prompt-generated because they have to actually be right. And the people maintaining those layers become more essential precisely as the people building on top of them stop thinking about them.

The Closing of the Bazaar

Eric Raymond’s famous essay “The Cathedral and the Bazaar” described open source as a bazaar—noisy, decentralized, anyone can set up a stall. For decades, that metaphor held. The magic of open source was its radical openness: no credentials required, no gatekeepers, just show up with something useful.

AI is turning the bazaar into a guild. Not by ideology, but by immune response.

When anyone with a chatbot can produce plausible-looking code, the question for maintainers shifts from “is this useful?” to “can you explain what you just submitted?” Projects respond by building quality walls. Contributions require explanations, not just code changes. Tests aren’t optional. Provenance matters: where did this code come from, and can you demonstrate you understand it? You have to prove you belong before you’re allowed to shape what gets built.
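Those quality walls are, in practice, often just automated checks. A toy sketch of one, in Python: a pre-merge gate that refuses contributions lacking a human explanation. The required section names here are invented for illustration—real projects encode rules like these in PR templates and CI bots.

```python
# A toy "quality wall": check that a contribution carries a human
# explanation, not just code. Section names are hypothetical.

REQUIRED_SECTIONS = ["What changed", "Why", "How it was tested"]

def passes_gate(pr_description: str) -> list[str]:
    """Return the required sections missing from a PR description.

    An empty list means the gate passes.
    """
    text = pr_description.lower()
    return [s for s in REQUIRED_SECTIONS if s.lower() not in text]

desc = "What changed: fixed off-by-one in parser.\nWhy: crash on empty input."
print(passes_gate(desc))  # the "How it was tested" section is missing
```

The point isn’t the mechanism, which is trivial; it’s that gates like this shift the burden of proof onto the contributor, which is exactly the guild dynamic described above.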

This is a real loss. The old model—where a curious newcomer could submit a small fix and gradually earn trust—was one of open source’s genuinely beautiful features. It was the on-ramp that turned users into contributors and contributors into maintainers. AI doesn’t destroy that path. It buries it under noise. The newcomer’s genuine small fix now sits in a queue behind fifty AI-generated submissions that look just like it on the surface but aren’t genuine.

The community narrows. Not because anyone chose this, but because the economics demand it. Fewer contributors, more scrutiny, higher bars. The software stays open. The process of shaping it becomes a guild—merit-based in theory, but with the gate pulled tighter than anyone intended.

There’s a deep irony here. Open source won by outcompeting closed development on a simple principle: more eyes on the code meant fewer bugs. But that principle had a hidden assumption—that the eyes were attached to brains. Flood the system with synthetic eyes and you don’t get better oversight. You get a million confident glances and no one actually looking.

The Provenance Fog

There’s a quieter tension running underneath all of this, and it may matter more than anything else.

Nobody’s quite sure where AI-generated code comes from.

AI coding tools were trained on vast quantities of public open source. When they generate a function, is it an original composition? A statistical remix? A lightly shuffled copy of something published under a license that requires attribution or sharing? The honest answer is that neither the tool makers nor the users can say with certainty. This creates a fog around intellectual lineage that makes the entire licensing system—the legal backbone of open source—harder to trust.

There’s something almost parasitic about the dynamic. Open source created a commons. AI companies harvested that commons to build proprietary tools. Those tools now generate code that flows back into the commons with no clear provenance, potentially contaminating the very licenses that made the commons possible in the first place. The ecosystem is being asked to absorb the outputs of a machine trained on its own body — the same extraction-and-enclosure dynamic playing out in the cognitive commons at large.

Some maintainers are already getting nervous about accepting AI-assisted contributions. Not because the code looks bad, but because the lineage is unknowable. If you can’t trace where code came from, how do you know your freely licensed project isn’t quietly absorbing fragments of something with incompatible terms?

And the fog extends beyond code. “Open source” itself is being stretched to cover AI models, where companies share the finished product but nothing about how it was made—claiming the label while hollowing out its meaning. The word “open” spent thirty years accumulating trust. Now everyone wants to inherit that trust without honoring the obligations that built it.

Can the Bottleneck Break?

There’s a counterargument worth taking seriously: AI eventually makes reviewing as cheap as it has made writing.

And in some areas, it’s already happening. Automated tools can scan for known security vulnerabilities across thousands of dependencies in seconds—work that would take a human reviewer weeks. AI-assisted code analysis catches common bugs, flags outdated patterns, enforces style consistency. For the mechanical layer of review, the bottleneck is genuinely dissolving.
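The mechanical layer of review really is automatable, because it reduces to lookups. A minimal sketch in Python: auditing pinned dependencies against a known-vulnerability list. The advisory data here is invented for illustration—real tools query databases like OSV or the GitHub Advisory Database.

```python
# A toy dependency audit: flag pinned versions that fall below a
# known fix version. Advisory entries are hypothetical.

ADVISORIES = {
    # package name -> list of (first_fixed_version, advisory_id)
    "examplelib": [((2, 3, 1), "HYPO-2025-001")],
    "othertool": [((0, 9, 0), "HYPO-2025-002")],
}

def parse_version(v: str) -> tuple:
    """Turn '2.2.0' into a comparable tuple (2, 2, 0)."""
    return tuple(int(part) for part in v.split("."))

def audit(dependencies: dict) -> list:
    """Return advisory findings for any vulnerable pinned dependency."""
    findings = []
    for name, version in dependencies.items():
        for fixed_in, advisory_id in ADVISORIES.get(name, []):
            if parse_version(version) < fixed_in:
                findings.append(f"{name}=={version}: {advisory_id}")
    return findings

print(audit({"examplelib": "2.2.0", "othertool": "1.0.0"}))
# flags examplelib (below 2.3.1); othertool 1.0.0 is already patched
```

This kind of check scales effortlessly, which is the point: it is judgment-free pattern matching, and nothing in it touches the harder questions the next paragraph raises.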

But mechanical review was never the hard part. The hard part is determining whether a design decision is wise. Whether a solution will hold under pressure. Whether a contribution moves a project in a direction its community actually wants to go. Whether something that works today creates problems that won’t surface for years.

Because review isn’t just verification. It’s taste.

It’s the accumulated judgment of someone who has maintained a project for years and can feel when something is wrong before they can explain why. It’s knowing that a contribution is technically correct but structurally corrosive. It’s the difference between code that works and code that belongs. That’s the hardest thing to automate—and the last thing AI will replace, if it ever does.

AI can tell you if code is correct. It can’t yet tell you if code is wise. And that’s the gap the entire bottleneck lives in.

So the optimistic future is real but conditional on a capability that doesn’t exist yet. Meanwhile, the bottleneck is here, the noise is rising, and open source is adapting the only way it can: by raising the walls.

What Follows

The code got cheap. The judgment didn’t. And that gap is the story of open source for the next decade.

The version that survives will look different from the one we romanticize. Less bazaar, more guild. Less “anyone can contribute,” more “prove you understand what you’re touching.” The infrastructure layer will matter more than ever. The application layer will dissolve into private, personal, disposable code that nobody shares because regenerating is cheaper than collaborating.

The maintainers—those chronically underfunded, overworked, quietly essential people holding up the digital world—will find themselves in an even stranger position. More essential than ever, more overwhelmed than ever, guarding the last bottleneck that AI can’t route around.

But here’s what makes this more than a software story. Open source was the proof of concept for a radical idea: that strangers could collaborate at scale, in public, and produce something better than any company could build alone. If AI makes that model harder to sustain—not by breaking the software, but by drowning the humans who hold it together—the loss extends far beyond code.

We built a digital commons that worked. Now we have to figure out whether it survives the machines we trained on it.

