

“Set your heart upon your work, but never on its reward.”
Bhagavad Gita

I remember the exact moment. Not the first time I used an AI coding tool — that was exciting, novel, impressive in the way a magic trick is impressive. I mean the other moment. The one nobody talks about at conferences.
I was building a webhook handler. Nothing glamorous — validate the payload, normalize some fields, fan out to a queue. The kind of thing I'd written a hundred times. The kind of thing that has a rhythm to it when you do it by hand: set up the types, sketch the validation, handle the edges, test the flow. Twenty minutes of quiet, satisfying work.
I prompted it instead. The AI wrote it in eight seconds. It was correct. Clean, even. I looked at the code, verified it did what I needed, and moved on to the next task.
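For the curious, the shape of that handler is roughly this — a minimal sketch with hypothetical field names and an in-memory list standing in for the real queue, not the actual code:

```python
# Hypothetical webhook handler: validate -> normalize -> fan out.
# Field names and the queue are illustrative stand-ins.
from dataclasses import dataclass

REQUIRED_FIELDS = {"event", "user_id", "timestamp"}

@dataclass
class Event:
    event: str
    user_id: str
    timestamp: int

def validate(payload: dict) -> None:
    # Reject payloads missing any required field.
    missing = REQUIRED_FIELDS - payload.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")

def normalize(payload: dict) -> Event:
    # Coerce loosely typed inbound JSON into a typed record.
    return Event(
        event=payload["event"].strip().lower(),
        user_id=str(payload["user_id"]),
        timestamp=int(payload["timestamp"]),
    )

def handle(payload: dict, queue: list) -> Event:
    # The rhythm described above: set up types, validate, fan out.
    validate(payload)
    event = normalize(payload)
    queue.append(event)  # stand-in for publishing to a real message queue
    return event
```

Twenty minutes by hand, eight seconds by prompt. The gap between those two numbers is what the rest of this essay is about.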
And I felt absolutely nothing.
That's when the grief started — not with a dramatic failure or an existential crisis, but with a small absence. The quiet disappearance of a satisfaction I hadn't realized I was depending on.
For as long as I've been writing software, I've wished for the same thing every developer wishes for: let me skip the boring parts and focus on the interesting ones.
I remember my first Rails project — the moment scaffold generated an entire CRUD interface from a single command. It felt like cheating. It felt like the future. I chased that high through every abstraction that followed: frameworks that hid the DOM, ORMs that hid the SQL, CI pipelines that hid the deployment. Each one a small liberation. Each one celebrated as progress.
And it was progress. Nobody wants to go back to hand-rolling authentication or writing raw TCP sockets. The direction was always right. The wish was always the same: free me from the tedious parts so I can do the real work.
But there was a subtle lie embedded in the wish. The tedious parts were the craft's root system. Not the point of it — nobody's nostalgic for segfaults — but the soil it grew in. You don't internalize how a database thinks by reading documentation. You internalize it by writing a hundred migrations and watching half of them fail. The muscle memory that made you fast didn't come from skipping the work. It came from doing it until it became invisible.
We wished for tools that would write our code. We got them. And it turns out that getting exactly what you wanted is its own kind of loss.
You already know the Kübler-Ross model. Here's what it actually looks like when it visits someone who chose this career because they loved building things.
Denial sounds like "I'll just use it for the boring stuff." You draw a careful boundary: AI handles the glue code, you handle the architecture. Then you notice the boring stuff was 80% of your day. The boundary doesn't hold because there wasn't as much interesting stuff as you thought — or rather, the interesting stuff was distributed through the boring stuff in ways you didn't notice until it was gone.
Anger shows up sideways. A pull request lands, full of AI-generated code. It works. It ships. Nobody can explain why it handles a particular edge case the way it does. You raise this in review, and the response is a shrug: "It passes the tests." You're the only one bothered, and you can't fully articulate why.
Bargaining is the most seductive stage. "I'll be the architect. I'll design the systems and let AI fill in the implementation." It sounds like a promotion. It feels like one, for a while. Then you realize that architecture without implementation experience is just drawing boxes on whiteboards. The judgment that made you a good architect came from years of writing bad code and learning why it was bad. Cut that feedback loop and the judgment starts to decay.
Depression is quiet. You open your editor on a Saturday — the thing you used to do for fun — and close it after ten minutes because the voice in your head says "this would be faster if you just prompted it." The voice is right. That's the problem. You've optimized the joy out of your own hobby.
Not-quite-acceptance is where most of us live now. Your output has never been higher. You're shipping features at a pace that would've seemed impossible two years ago. Your manager is thrilled. And some mornings, you sit down at your desk and feel like a fraud — not because you're bad at your job, but because your job has become something you didn't sign up for.
Here's what makes this different from every other automation story.
We've watched it happen before. Factory workers, taxi drivers, translators, graphic designers — each wave came with comfortable narratives about "upskilling" and "moving up the value chain." Developers told those stories too, often from the comfortable side of the disruption.
But developers were supposed to be the upskill. We were the people building the automation. The implicit social contract was: learn to code, and you'll always be on the right side of the wave. That contract didn't have an expiration date. Nobody read the fine print.
And here's the thing nobody says out loud: coding was never just a job for most of us. It was a way of thinking. The satisfaction of a clean abstraction. The flow state of tracking down a gnarly bug through four layers of indirection. The quiet pride of writing something that works — not just functionally, but elegantly, in a way only you and maybe two other people would appreciate. These aren't productivity metrics. They're reasons people chose this life over easier, better-paying paths.
A developer on the Stack Overflow podcast said it plainly: "I used to be a craftsman...now I feel like a factory manager of Ikea, shipping low-quality chairs."
I don't want to editorialize that. Either you feel it in your chest or you don't.
There's a cruel paradox that nobody warned us about: reviewing AI-generated code is cognitively harder than writing it yourself.
Think about what code review used to be. Someone on your team wrote a function. You could trace their reasoning — the variable names reflected their mental model, the structure showed you how they thought about the problem, the comments told you what they were worried about. Even bad code was legible as a human artifact. You could say "I see what you were going for, but here's why this breaks under load."
AI-generated code has no thought process. There's nothing to trace. You're pattern-matching against your own experience to verify that a system with no experience produced something correct. You're not reviewing reasoning — you're auditing an output. It's the difference between editing a colleague's essay and proofreading a translation from a language you don't speak.
It demands more effort and delivers less meaning. More cognitive load, less human connection. And the bitter kicker: the better AI gets at writing code, the harder this review becomes, because the bugs get subtler and the surface gets more convincing.
That's not a productivity problem. That's a human one.
I don't want to write the paragraph where I tell you it all works out. The one where I pivot to optimism and list the exciting new opportunities on the other side of disruption. You've read that essay. Everyone's written it. It's mostly cope.
But I also don't want to pretend the tools aren't genuinely remarkable. They are. I've watched an AI reason through a distributed systems problem that would've taken me a full afternoon to untangle. I've seen it generate test suites that caught edge cases I wouldn't have thought of. There are moments of real wonder in working with these systems — the sense that you're collaborating with something that understands patterns at a scale no individual human can. That wonder is real, and holding it alongside the grief is part of the honesty I'm trying to practice here.
So what remains, once you stop fighting it?
Not the things you'd expect. Not "higher-level thinking" or "strategic work" — those are LinkedIn platitudes. What remains is quieter than that.
It's the moment you choose what to build — not how, but why. AI can generate a solution to any problem you can frame. It cannot tell you which problems matter. That judgment comes from years of building the wrong thing and recognizing the feeling before the retrospective confirms it. No model has that scar tissue.
It's the conversation with a teammate where you realize the spec is wrong — not the code, the spec. That's not implementation. That's accumulated wisdom from shipping things that technically worked but fundamentally missed the point.
It's taste. Knowing when something is "working" versus right. AI optimizes for plausible. The gap between plausible and elegant is everything, and it's invisible to anyone who hasn't spent years closing it.
Maybe the craft was never really about typing code into an editor. Maybe it was about the thinking that happened while you typed — and the real question is whether you can preserve that thinking when the typing disappears. Some days I can. Some days I open my editor, stare at the cursor, and reach for the prompt instead. That's the honest answer, and I'm suspicious of anyone who claims otherwise.
I still build things on Saturdays sometimes. Not because I need to. Not to prompt anything. Just something small and pointless and mine. A toy. A dumb little script that does something no one asked for. It feels different now. Smaller, maybe. But it's still there — that flicker when something you made does what you meant it to.
Most days, it's enough.
The grief isn't a bug in the system. It's not a failure to adapt, or a skill issue, or a lack of imagination about what comes next. It's the natural cost of having cared deeply about a craft that is changing faster than our identities can follow.
If you feel it, it's because you loved the work — not the title, not the comp, not the abstraction of "being in tech," but the actual work. The building. The solving. The making. And that love is now meeting a world that found a faster way to approximate what you do, without any of the reasons you did it.
The tools got better. That part was always the plan. Nobody warned us about the rest.
