The AI Prisoner's Dilemma

Published at 09:44 PM

AI coding tools went from experimental to seemingly everywhere in just a few years. Each incremental step in AI adoption feels reasonable, but we might all be stuck in what I’m calling the AI prisoner’s dilemma.

If we avoid using AI, we may miss out on productivity gains and fall behind. If we embrace AI, we get those benefits, but we might also accelerate toward outcomes we can’t fully predict. It could be a future where many jobs change significantly, as Dario Amodei recently suggested, or something we haven’t even considered yet.

What happens to our craft when we use tools built to replace it? It’s like the classic coordination problem where everyone makes sensible individual choices that add up to something nobody wants. And like the prisoner’s dilemma, we can’t easily avoid this pattern.

The AI Ratchet

Once you’re on the AI path, it feels inevitable. GitHub Copilot launched, and suddenly AI handled more boilerplate and common patterns. As tools become more sophisticated, the developer’s role shifts from musician to conductor.

Developers still need to read, review, and own every line, even when they’re not the ones typing it. The work that humans do tends to be higher-leverage: architecture, tricky logic, and directing. Basically, the parts that benefit from taste and experience.

There’s a possible future where developers don’t write code directly, and instead manage teams of AI agents that implement what they describe. The major tools are all moving toward an agent orchestration model: OpenAI’s Codex, Claude Code with a Claude Code MCP, Cursor’s Background Agents. The direction seems clear to me.

The Sports Gear Analogy

A friend recently compared AI adoption to sports equipment evolution: “Do you really want to play baseball in 1930s gear? Or basketball in what they wore in the 70s?” He mentioned that ultramarathons had to be made harder over time because gear improvements made the original courses too easy.

The analogy misses how big this change feels. Imagine if cycling went from Tour de France bicycles to e-bikes to motorcycles, all within five years. At some point, you’re not cycling anymore.

Magic Hour

There’s a moment in photography called magic hour. That brief period after sunset when the sky is still light and night approaches. The light glows, and you know it’s temporary.

Magic hour at Pike Place Market

I think that’s where we sit with AI and knowledge work. We’re in this moment where AI multiplies our capabilities rather than replacing them. The tools feel genuinely helpful. AI can handle tedious tasks, which creates space for more creative problem-solving.

Daytime represents how we’ve historically written code - humans in control. Sure, our tools have changed significantly over time. We’ve gone from punch cards to the internet to mobile and cloud computing. Meanwhile, we stayed in the driver’s seat. Night represents a future where AI codes while we tell it what we want.

We’re in between right now, in that beautiful but uncertain transition. Night approaches, and I genuinely don’t know what’s on the other side of it.

Where this might go

I’m not sure where this all goes. That said, here are a few possible futures:

Soft Landing: AI becomes a powerful collaborator and doesn’t fully replace human cognitive work. We find a new state where humans and AI work together, similar to how spreadsheets made calculations easier and created new types of analysis work.

Hard Transition: Displacement happens quickly, and we figure it out. Maybe universal basic income, shorter work weeks, or new economic models emerge. Humans have adapted to big changes before, even when they felt overwhelming at the time.

Post-Hype Crash: AI hits limits faster than expected. The technology plateaus, investment dries up, and we’re left with useful but incremental tools rather than significant change. The dilemma resolves itself as adoption naturally slows.

Overshoot: We automate faster than we can adapt. Social and economic systems strain under the change. A small group captures the benefits of AI while the effects spread through society faster than we can build safety nets.

What’s in our control

When I feel overwhelmed by all this, I try to focus on what’s actually in my control versus what isn’t.

Out of my control: Industry direction, macroeconomic changes, whether AI development slows down or speeds up, what other companies or developers choose to do.

In my control: My curiosity about these tools, willingness to learn how to use them effectively, staying informed, and how I adapt to changes.

Even with that perspective, the bigger question remains.

Living in the question

Honestly, I don’t know which future we’re heading toward, and I’m not sure how much our individual choices matter. We’re all making decisions that make sense for us personally, though I’m not sure any of us really knows where it goes.

Recognizing this pattern doesn’t mean we’re stuck with it. If we’re going to keep adopting AI tools anyway, we might as well do it intentionally rather than drifting into whatever happens next.

The choices we make now are shaping the future in ways we can’t really see yet. What gives me hope is that some of the best innovations have come from periods of uncertainty, when we had to figure things out as we went along.

Magic hour doesn’t last forever, and neither does the uncertainty. Eventually, we’ll find our footing in whatever comes next.


Thanks to Brian Sandoval, Noam Katz, and Andrew Nickerson for their feedback.

