Qualifying, Not Outsourcing
Something I'd been putting off for months took two hours.
Setting up an ISMS (an information security management system). Claude project with context loaded from local files. Copilot pulling from email threads. Monday.com for the board. Risk register populated. Documentation underway. Actually moving.
That should have felt good. It felt unsettling.
What the speed revealed
What surprised me wasn't how fast it went. It was what the speed made visible.
I wasn't the person doing the work anymore. I was the person checking it. Knowing what good looks like. Knowing what to push back on. Knowing which questions to ask the humans who actually specialise in this, rather than asking them to do the whole thing.
That's a different kind of value, and it took me a moment to sit with it.
I've spent years building expertise partly so I could do things myself. So I could go deep, understand the domain, and produce something that held up to scrutiny. The doing was part of how I justified the expertise.
Now the doing is increasingly not the point.
The distinction that matters
The point is knowing when it's right. Knowing when it's wrong. Knowing which questions to ask and which outputs to trust.
Qualifying, not outsourcing.
That distinction matters more than it might seem. Because anyone can prompt. Not everyone knows what the output should feel like when it's genuinely good — when the risk register actually reflects the threat model, when the documentation reads like something people will use rather than something written to check a box.
The difference between a useful ISMS and a compliant-looking one is not something the model can evaluate. That judgement comes from exposure, from having seen what happens when these things fail, from understanding the gap between the letter and the intent.
The model doesn't have that. I do.
What this means for how we position ourselves
For most of my career, expertise and execution went together. You demonstrated what you knew by doing things with it. The output was the evidence.
That's shifting. The output is increasingly table stakes — producible by anyone with a decent prompt and a few tools. The differentiation is in the qualification: the ability to shape the work before it starts, evaluate it as it comes back, and push it in directions the model wouldn't find on its own.
This is closer to how senior people in any field actually operate. The architect doesn't draw every line. The senior lawyer doesn't draft every clause. The value is in the judgement, the pattern recognition, and the understanding of what matters and what doesn't.
AI is accelerating that shift across more roles, earlier in people's careers. Which is both an opportunity and a pressure.
The honest question
I'm still working out what this means for how I position myself day to day.
What I do know is that "experienced human in the loop" is a more accurate description of the role than most current job titles admit. And as the tools get better, that role matters more, not less — but only if the human in the loop is actually bringing something the model can't supply on its own.
The question worth sitting with is not whether AI can do your job. It's whether you can tell the difference between a good output and a convincing one.
That's the bit that doesn't get prompted.