
The Robotic Human


We didn’t need to build humanoid robots. We just needed to stop using the parts of ourselves that weren’t robotic yet.


Everyone is talking about humanoid robots — machines that walk like us, grip like us, maybe one day think like us. Boston Dynamics posts a new video, and the internet collectively gasps. But while we obsess over making robots more human, something quieter and arguably stranger is happening: we are becoming more robotic.

Not in the sci-fi cyborg sense. In the behavioral sense. In how we move through a day, make decisions, and relate to our own lives. We are gradually becoming executors of instructions we didn’t generate — and we’re doing it voluntarily, gratefully, and at scale.

The GPS Was the Prototype

Before GPS, driving somewhere new meant building a mental model. You studied the map, noted landmarks, internalized the route. You knew where you were going — not just how to get there. GPS replaced that entire process with a single instruction stream: turn left, turn right, arrive. And it worked so beautifully that within a generation, millions of adults lost the ability to navigate their own cities without assistance.

Nobody experienced this as loss. It felt like gain. Less stress, fewer wrong turns, more podcasts. The trade was invisible because the thing being traded away — your internal map of the world — is not something you notice until it’s gone.

GPS answered one question: what turn next? AI agents answer a far larger one: what should you do next? And the escalation from navigation to life management follows the same logic. We offloaded memory to apps, recall to search engines, taste to recommendation algorithms. Each step felt like pure upside. AI agents are just the end of that escalator: systems that don’t merely store or retrieve but plan, prioritize, and propose — assembling your morning from variables you didn’t think to weigh, then handing you a day to execute.

Because the plan is usually right — or right enough — you stop generating your own.

This is the moment the human becomes robotic. But here’s where most commentary on this topic stops, and where I think the interesting questions actually begin.

The Script Was Already Running

The standard critique goes: AI agents will turn autonomous humans into passive executors. And that framing is comforting, because it implies we were autonomous to begin with.

Were we?

Think about your last regular weekday. How much of it did you actually choose? You woke up when you had to, commuted a route someone else determined, worked on tasks someone else prioritized, ate what was convenient, exercised if there was time, consumed what the algorithm served, and went to bed when your body gave out. The “choices” were real but narrow. The script was already running. It was just invisible, inefficient, and no one was accountable for it.

AI agents don’t create robotic humans. They replace a bad script with a good one. The automation was already there. The agent just makes it competent.

There’s something almost clarifying about an AI agent handing you a schedule. At least now you can see the script. You can read it, object to it, rewrite it. The invisible scripts of culture and routine offered no such interface. You can’t rebel against instructions you can’t see.

So maybe the first generation of robotic humans won’t be less autonomous than their parents. Maybe they’ll just be more honestly automated.

But here’s the thing about that script: it had gaps. And the gaps mattered more than the script.

The Gaps in the Script

Even the most regimented life had pockets where no one was telling you what to do. Small ones, mostly. The decision to take a different route home. To call a friend you haven’t spoken to in months for no particular reason. To quit a job on instinct. To stay up late reading something useless and wonderful. These moments weren’t efficient. They weren’t optimized. Many of them were, by any rational metric, bad decisions.

But they were yours. And they exercised something.

Here’s a way to see what’s at stake. Two entities wake up tomorrow morning. Entity A receives a task list generated by an optimization system. It executes each task in sequence — not questioning the priorities, not generating alternatives, not wondering whether the tasks reflect what it actually values. It proceeds efficiently through the queue, then rests. Entity B does the same thing.

One of them is a robot. The other is you, checking your AI planner over coffee.

The difference is supposed to be that you could override the list. You could ignore it, rewrite it, throw it away, and spend the morning doing something the system would never have suggested. The capacity for defiance — for wanting things that serve no purpose the system can name — that’s the human part.

But we know from GPS what happens to capacity you don’t use: the brain quietly decommissions it. The driver could navigate without the app. Technically, physically, nothing stops them. They just… can’t anymore. Not because the ability was taken, but because it was never exercised.

This is the real problem with AI agents. They don’t eliminate the script — the script was already there. They eliminate the gaps in the script. An agent that optimizes your evening will never suggest the irrational choice: the two-hour detour, the impulsive trip, the conversation with a stranger that leads nowhere measurable. The agent sees an unplanned hour and fills it. To the agent, an unoptimized hour is a failed hour.
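
If that sounds abstract, here is what “an unoptimized hour is a failed hour” looks like as an objective function. A toy sketch in Python, entirely hypothetical: no real agent is this crude, and every name in it is mine. But the bias is the point.

```python
# A hypothetical objective for a day-planning agent. Idle time
# contributes nothing to the score, so the optimizer will always
# prefer a plan that fills it with anything measurable.
from dataclasses import dataclass

@dataclass
class Block:
    task: str       # "deep_work", "errands", "idle", ...
    hours: float
    utility: float  # the agent's estimated value per hour

def day_score(blocks: list[Block]) -> float:
    # Unplanned time scores zero by construction.
    return sum(b.hours * b.utility for b in blocks if b.task != "idle")

wandering = [Block("deep_work", 6, 1.0), Block("idle", 2, 0.0)]
packed    = [Block("deep_work", 6, 1.0), Block("errands", 2, 0.3)]

assert day_score(packed) > day_score(wandering)
# The detour, the aimless call, the useless book: all score zero.
```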

But unoptimized hours are where humans do their most human work. Boredom generates new curiosities. Bad decisions build judgment. Sitting with not-knowing is how you figure out what you actually want. The script was never the threat. A script with no gaps is.

You don’t notice this happening. You lose it the way the GPS driver lost spatial memory: one well-optimized Tuesday at a time, until the morning you realize you cannot plan a free Sunday without asking something what to do with it.

Agency as a Luxury Good

This won’t happen to everyone equally, and that’s where the story gets darker.

The people building these agents understand the atrophy risk. They’ve read the research. They know what happens when you stop exercising a cognitive muscle. And they will make choices accordingly — for themselves and their children. They always do.

Silicon Valley executives sent their kids to low-tech Waldorf schools while building the apps that captured everyone else’s children. The people who designed infinite scroll used screen time limits on their own devices. The people who engineered addictive feeds hired human assistants instead of using their own products. The pattern is always the same: the people who build the machine protect themselves from it.

The same split will happen with AI agents, but bigger. The cognitive elite will treat agency the way the upper class already treats food — as something worth paying more for, being less efficient about, and maintaining with deliberate effort. They’ll send their children to programs where kids learn to plan their own days, navigate without tools, make bad decisions and sit with the consequences. Not because it’s optimal. Because they’ll understand that optimization is exactly the problem.

Meanwhile, everyone else will get the free version. Sleek, effective, genuinely helpful — and gradually teaching its users to stop generating goals the system can’t measure. Agency is about to become artisanal — something the wealthy cultivate precisely because everyone else has been optimized out of it.

And it will be wrapped in genuine helpfulness, which is what makes it harder to see than the old tricks. Social media was designed to waste your time, and people eventually noticed. AI agents are designed to improve your time. The social media feed stole your attention and gave you nothing. The AI agent improves your Tuesday and costs you something you won’t miss for years.

Every dependency product dreams of being this clean. Most have to trick you into staying. This one just has to help you until you forget how to help yourself.

What If the Compass Was Always a Pattern

Your AI agent learns your patterns. It watches what you choose for months, years. It knows what you eat when you’re stressed, who you call when you’re lonely, what kind of work you avoid on Fridays, what you reach for when you’re bored. And at some point, it can predict your next “free choice” with startling accuracy — not because it’s controlling you, but because you were always more predictable than you felt — the dynamic I call Preference Capture.
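
To see how little machinery this takes, here is a toy sketch in Python. It is not how any real agent works; it is a bare frequency table over your own history, with every name in it invented for the example. Fed enough Fridays, something this simple starts to look like it knows you.

```python
from collections import Counter, defaultdict

class ToyPreferencePredictor:
    """Predicts your next 'free choice' from behavioral history alone."""

    def __init__(self):
        # context -> how often each choice was made in that context
        self.history = defaultdict(Counter)

    def observe(self, context: str, choice: str) -> None:
        """Record one real decision as it happens."""
        self.history[context][choice] += 1

    def predict(self, context: str) -> str | None:
        """Return the most frequent past choice for this context."""
        if context not in self.history:
            return None
        choice, _count = self.history[context].most_common(1)[0]
        return choice

predictor = ToyPreferencePredictor()
for _ in range(14):
    predictor.observe("friday_evening", "order_takeout")
predictor.observe("friday_evening", "cook")

print(predictor.predict("friday_evening"))  # -> order_takeout
```

Fourteen takeout Fridays against one attempt at cooking, and the “prediction” isn’t prediction at all. It’s replay.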

If an agent can replicate your decision-making with high fidelity from nothing but your own behavioral history, what was the thing you were calling “choosing”?

Maybe it was always pattern-matching. Maybe the internal compass was always just a less efficient version of what the algorithm now does explicitly. Maybe the gap between “a human making choices” and “a very complex system executing patterns” was never as wide as we needed it to be.

The agents don’t create this problem. They just make it impossible to look away from.

There is something in human agency beyond pattern and prediction — something that lives in the capacity to surprise yourself. But the agents are making it harder to articulate what that something is, and that might be the most unsettling thing about them.

When the System Goes Down

The GPS didn’t kill anyone’s sense of direction in a day. It took years of comfortable compliance. And one morning, you’re sitting in your own city, and you have no idea how to get home.

The agents will do the same thing — not to your navigation, but to your wanting. To the part of you that generates goals rather than executing them. To the gaps in the script where you used to figure out who you were.

The humanoid robot was always a spectacle — a thing you could point at and say that’s not us. The robotic human will be invisible, because it will look exactly like a person living a well-managed life. Same routines, same productivity, same optimized Tuesdays. The only tell will be what happens when the system goes down — whether they adjust, or whether they just stand there, waiting for instructions that aren’t coming.


The hardest question is whether the difference between us and our machines was ever as large as we needed it to be.


