Every engineering job description I read in 2026 still lists the same things. Python. React. AWS. “5+ years experience.” Bonus points for Kubernetes.
These are the ingredients of a role that is disappearing. Not because those skills don’t matter, but because they’re no longer the thing that separates good from great. They’re table stakes. And we’re still hiring as if they’re the test.
The Ratio Has Flipped
Here’s what my day actually looks like now. Roughly 80% of it is judgment work: assessing plans, reviewing architecture, catching output that reads as plausible but is wrong underneath. 20% is hands-on intervention. I’m not writing code. I’m deciding whether the code the AI produced is actually right. And when it isn’t, I’m not fixing the code. I’m fixing the instruction so the AI gets it right next time.
That’s a fundamentally different job.
The Engineer as Decision-Maker
I’ve started thinking about this role in three zones.
In the first zone, AI helps me do deep discovery fast. Requirements that used to take weeks of workshops now get synthesised in hours. Architectural decisions that are hard to reverse if you get them wrong get identified and prioritised early, not discovered in month three when the codebase has calcified around a bad assumption. The human’s job here is judgment about what to build and why. AI accelerates the thinking. It doesn’t replace it.
In the second zone, the AI agent builds. But not autonomously. It follows a structured process: plan, decompose, assess confidence, execute, review. At every gate, the human decides whether to approve, reject, or re-plan. This is where experience earns its keep. A junior engineer looks at an AI-generated plan and thinks it looks reasonable. A senior engineer spots that the integration pattern will collapse under production load and sends it back before a line of code is written.
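The gated process above can be sketched in code. This is a hypothetical illustration, not a real framework: the stage names, the Gate outcomes, and the human_review() hook are all assumptions I’ve made to show the shape of the loop, with the human decision point at every gate.

```python
# Hypothetical sketch of a gated AI build loop: plan, decompose,
# assess confidence, execute, review, with a human decision at each gate.
from enum import Enum

class Gate(Enum):
    APPROVE = "approve"
    REJECT = "reject"
    REPLAN = "replan"

STAGES = ["plan", "decompose", "assess_confidence", "execute", "review"]

def run_stage(stage, context):
    # Placeholder for the AI agent's work at this stage.
    return f"{stage} output for {context['task']}"

def human_review(stage, output):
    # In practice this is the senior engineer's judgment call.
    # Auto-approved here so the sketch runs end to end.
    return Gate.APPROVE

def gated_build(task, max_replans=3):
    context = {"task": task, "history": []}
    i, replans = 0, 0
    while i < len(STAGES):
        output = run_stage(STAGES[i], context)
        decision = human_review(STAGES[i], output)
        if decision is Gate.APPROVE:
            context["history"].append((STAGES[i], output))
            i += 1
        elif decision is Gate.REPLAN and replans < max_replans:
            replans += 1
            i = 0  # back to planning, carrying what was learned
        else:
            return None  # rejected: nothing ships without sign-off
    return context["history"]
```

The point of the structure is that progress is impossible without a human verdict at every stage; the re-plan path is where a senior engineer sends a flawed plan back before any code is written.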
In the third zone, the human validates that the output matches the intent. Did we build the right thing? Not just “does it work?” but “does it work correctly, securely, and in a way that won’t create problems we can’t see yet?”
AI is present in all three zones. The human’s contribution is judgment at every stage. What changes is the type of judgment: what to build, how to build it, and whether it was built right.
The Craft Hasn’t Died. It’s Moved.
There’s a version of this story that sounds like “engineering is being deskilled.” I don’t think that’s true. I think the craft has moved.
The old craft was writing elegant code. Knowing the quirks of a framework. Having memorised the right design pattern for the right situation. Those things took years to develop and they mattered.
The new craft is knowing what good looks like at a systems level. It’s the ability to hold an entire architecture in your head while reviewing a single component. It’s spotting when each individual piece is fine but the whole system is slowly losing coherence. It’s having the discipline to reject output that passes every test but solves the wrong problem.
That last one is the hardest. AI is extraordinarily good at producing plausible output. It looks right. It reads right. The tests pass. But it’s subtly wrong in a way that only someone with deep experience can detect. That detection is the craft now.
Your Job Description Is Probably Broken
If you’re hiring engineers today, ask yourself what you’re actually testing for.
If your interview is a timed coding exercise, you’re testing typing speed and pattern recall. That’s the 20%. You’re ignoring the 80% that determines whether this person can do the job.
What you should be testing: give them an AI-generated architecture with deliberate issues. See if they find the structural problems, not the cosmetic ones. Present them with an AI-generated build plan and ask whether they’d approve it. See if they ask the questions the AI didn’t think to ask.
The response tells you everything. The wrong hire says “AI wrote it and the tests pass, so it’s fine.” The right hire says “the tests pass, but are we testing the right things?”
The Experience Paradox
The engineers best suited to this way of working are often the most experienced ones. They’ve built the judgment over years. But many of them resist the shift because it looks like their craft is being devalued.
It isn’t. Those years of experience didn’t become irrelevant when AI started writing code. They became the bottleneck. Not in a bad way. In the way that matters: the ceiling on what AI can deliver is set entirely by the quality of the human steering it.
What Comes Next
This is the role. But the role only works if the process around it is right. Getting AI to build fast is easy. Getting it to build the right thing is the hard part. That starts long before the first line of code: with discovery, with architecture decisions, with building the context that determines whether the AI produces something valuable or something that merely compiles.
James Weeks is the founder of Code Velocity Labs. He builds data platforms for retailers and tracks what’s actually possible when AI-native delivery methods replace traditional approaches. Connect on LinkedIn or email to compare notes.