Developers, AI Changed Everything. No, You're Not Doomed
Why the best developers are getting better, not obsolete
There’s an essay making the rounds by Simon Højberg called “The Programmer Identity Crisis.” If you haven’t read it, the core argument is one that resonates with many developers right now: AI is fundamentally disrupting the craft of programming, and for those who identify deeply with that craft, it feels like an existential threat.
I empathize with Simon. His concerns are valid, and his experience is real. For developers who find deep satisfaction in the artisanship of code, in the elegance of a well-crafted function and the pleasure of refactoring toward clarity, AI’s “autocomplete on steroids” approach can feel like vandalism.
If you’re like me, and you came away from that essay feeling more anxious than before, I want to offer an additional lens. Not as a rebuttal to Simon’s perspective, but as an alternative frame that might ease some of the dread you’re carrying.
Yes, This Time Is Different
Simon’s essay expresses a real fear: that AI is collapsing the quality floor in software development. When he describes AI-generated code that “works” but violates basic craft principles, he’s articulating something many of us feel viscerally.
My take is less dire, but let’s be honest about what’s happening. This disruption operates across two dimensions that we need to understand:
Dimension 1: The Scale of Who Can Build
We’ve seen this before. The Stack Overflow era gave us the “copy-paste developer” meme. But even then, you needed baseline knowledge to use Stack Overflow effectively. You needed to know what to search for, evaluate solutions, and integrate the pieces. Stack Overflow offered missing Legos; you still had to build the structure.
AI collapsed that barrier entirely. The entry requirement dropped from “understand code syntax” to “describe your problem.” The population operating with complete trust in code they did not write didn’t just grow; it exploded. People with minimal technical understanding are now shipping applications.
Dimension 2: The Risk Awareness Gap
The deeper problem is that many of these new builders don’t understand that a spectrum of rigor has always existed in software development. They’re shipping AI-generated code to production systems handling customer data, financial transactions, and authentication. These are contexts where “mostly works” creates real risk.
This spectrum isn’t new. We’ve always matched our rigor to context. YOLO for weekend projects, architectural discipline for payment systems. What changed is the scale. The barrier to building dropped, but awareness of when different levels of rigor matter didn’t keep pace.
And yet AI is also a tide that lifts capable boats. It lowered the quality floor for those operating on blind trust, but it raised the ceiling for developers who understand this spectrum. The leverage available to those who can navigate between approaches and apply the right level of rigor to the right context is genuinely transformative.
The Trust Spectrum We’ve Always Navigated
What’s changed is that AI amplifies both ends. It dramatically expands who can build software (the access end) while simultaneously increasing the potential impact when things go wrong (the risk end). This amplification demands we become more intentional about where we operate on that spectrum at any given moment.