The Human in the Loop: Why Understanding Still Matters in the Age of AI

October 23, 2025

After a couple of years working with large language models, building on top of them, using them daily, and watching others do the same, I’ve developed some strong opinions about how we should be using these tools. This isn’t about whether AI will replace humans or any of that philosophical stuff. This is about right now, today, and what I think we’re getting wrong.

The Problem with Black Box Outputs

Here’s what bugs me: I keep receiving emails, technical arguments, and even code from people who clearly don’t understand what they’re sending me. It’s obvious they’ve copied and pasted something from ChatGPT or Claude, and look, I’m not judging that part; I do the same thing. We’d be fools not to use these tools to be more productive.

But here’s the thing: if you send me something, I don’t care where it came from. As far as I’m concerned, you generated it. Which means you need to be able to defend it and understand every line of it.

I’ve gotten technical solutions from people who are definitely not technical. They’ll throw some jargon at me, make some technical argument, and it’s painfully clear they have no idea what they’re talking about. They asked an LLM, it spit out a response, and they just forwarded it along without the skills to understand it, refine it through follow-up questions, or verify that it’s actually correct.

And honestly? At that point, why do I need them at all? Why don’t I just ask the AI myself?

What I’m Actually Looking For

What I want is simple: talk to the AI. Please, use it. Get more productive. But then look at what comes out. Really look at it. Do a thorough review. Make sure you agree with it and understand it. Then, and only then, send it to me.

When you do that, you own the document. You’re the one who sent that data, and you can stand behind it. That’s accountability.

The Coding Question

Let’s extend this to code. Yes, we have a lot of people coding now who weren’t before, and that’s brilliant. AI has democratized programming in a really exciting way. But, and this is important, that approach works great for basic websites, simple single-layer applications, maybe a Next.js app that grabs data, processes it, and displays it nicely.

But what about enterprise-ready software? Software that other people will maintain, troubleshoot, and extend? That’s a different story.

For that, you need to understand what you’re doing. You need to make sure your code follows certain patterns and practices. And here’s the kicker: if you don’t know what SOLID principles are, you’ll probably never think to ask the LLM to implement them. And even if you do ask, you still understand your business logic better than any model, so you should be able to evaluate whether those services are right and whether those interfaces make sense.
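To make that concrete, here’s a minimal sketch in TypeScript of the kind of structure I’m talking about. Everything in it is hypothetical (the names MessageSender, ConsoleSender, and ReportGenerator are made up for illustration); the point is the dependency inversion idea from SOLID, a class depending on an interface rather than on a concrete implementation:

```typescript
// Hypothetical sketch: dependency inversion (the "D" in SOLID).
// The business logic depends on an abstraction, not a concrete sender.
interface MessageSender {
  send(recipient: string, body: string): Promise<void>;
}

// One concrete implementation; it could be swapped for SMTP, Slack,
// or a test fake without touching the business logic below.
class ConsoleSender implements MessageSender {
  async send(recipient: string, body: string): Promise<void> {
    console.log(`To ${recipient}: ${body}`);
  }
}

class ReportGenerator {
  // The dependency is injected, so this class keeps a single responsibility.
  constructor(private readonly sender: MessageSender) {}

  async publish(recipient: string, figures: number[]): Promise<void> {
    const total = figures.reduce((sum, n) => sum + n, 0);
    await this.sender.send(recipient, `Quarterly total: ${total}`);
  }
}

// Wiring it together at the edge of the application.
new ReportGenerator(new ConsoleSender()).publish("boss@example.com", [120, 340, 95]);
```

If an LLM hands you something like this, you don’t need to have typed it yourself, but you should be able to say why the interface exists and whether it matches your actual business logic.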

I’m not saying you need to be an expert. But you need to know what’s happening.

My Current Reality

I’m working on a complex desktop application right now, with multiple dependencies and a lot of moving parts. I don’t write a single line of code myself; the AI does the heavy lifting. But I own every line of code. I understand what’s happening, at least at the higher level: how the design patterns work, why I’m using SOLID, how dependency injection fits in, and why it should be done that way.

Because eventually this project will have thousands of lines of code, and I need to make sure it does what it’s supposed to do and stays maintainable. I’m using AI to generate code that’s maintainable not just by humans, but by future AI agents too. If you really mess things up, even the best AI tools will struggle to help you later.

The “But You Can Just Instruct the AI” Argument

I know what some of you are thinking: “But you can write instructions for the AI! Give it your architectural principles, and it’ll follow them!”

Sure, to a degree. But it’s not perfect, and here’s why: no matter how much you write, you still need to keep nudging it. “Use this file with the architectural principles.” “Remember the coding standards.” And that’s not really a criticism of the AI: it has limited context, it’s trained in a generic way, and it doesn’t know which specific approach your particular scenario calls for.

So you have to keep telling it what to do, and that’s fine. But here’s the simple truth: if you’ve written architectural instructions or coding standards, you need to understand what they are. You need to be able to check whether those things are actually being followed.
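For instance, a project instructions file might look something like this (purely illustrative; the file name and every rule in it are made up):

```
# ai-guidelines.md (hypothetical)
- Follow SOLID: every service is exposed through an interface.
- Use constructor-based dependency injection; no service locators.
- Keep business logic out of UI components.
- Every public method gets a unit test.
```

If you can’t tell whether the generated code actually honors each of those lines, the file is just decoration.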

The Bottom Line

Look, we’re not at the level where AI has general intelligence and just knows what to do without us. If we were, I wouldn’t be sitting here telling it what to do; it would figure it out itself. But we’re not there. There’s still a human in the middle.

And that human in the middle needs to not be dumb.

Use AI. Absolutely use it. Be more productive. Let it generate your emails, your arguments, your code. But own the output. Understand it. Be able to defend it. Because the moment you can’t, you’ve become just a pass-through for a black box, and honestly, we don’t need that.

We need humans who use AI as a tool while still bringing their own understanding, judgment, and accountability to the table. That’s the sweet spot. That’s how we make this work.