Engineers at Anthropic now say that nearly all of their production code is AI-generated. The head of Claude Code hasn't written a line by hand in months. And they're still hiring engineers. Aggressively.
If that seems contradictory, you may be misunderstanding what engineering actually is, or what value an agency like Centarro can bring.
We're seeing this tension play out in real time with our clients. People are sending AI-generated pull requests, AI-suggested architecture decisions, and AI-produced business requirements to our team for review. It's accelerating, and it's not going to slow down. But there's a gap between what AI produces and what a complex eCommerce project actually needs.
And that gap is where things get expensive.
Code that looks “mostly good”
Recently, we reviewed client-provided code that added a new API endpoint. It looked reasonable on the surface, though it was obviously AI-generated. It worked. But it didn't follow the patterns established in the rest of the codebase.
That kind of inconsistency is how technical debt compounds. Not from code that breaks, but from code that works differently than everything around it. Multiply that across dozens of contributions, and you've got a maintenance nightmare.
AI answers the question it was asked. It doesn't know what questions weren't asked, and it doesn't know whether you asked the right question for a given context. In this case, an endpoint that accomplished the same thing might already have existed, and the AI would have spent time writing a duplicate anyway.
Writing code is not the most valuable thing a good software engineer does.
An uninformed consultant
Another time, a client consulted an LLM, which recommended a specific feature. The LLM told the client it would be simple. It wasn't. Significant portions of the backend needed to be restructured to support the requirement, something we could have gotten ahead of had we been involved in the conversation earlier.
AI oversimplified the problem because it didn't understand the existing architecture, the business context, or the downstream implications.
Know what “good” tastes like
You can dip a mass spectrometer into a glass of wine, and it will tell you more about its chemical makeup than any human ever could. But it can't tell you whether it's a good wine, or the best wine for a given occasion.
The same is true of code. AI can analyze syntax, generate endpoints, and follow instructions. But it lacks taste. It doesn't notice when code smells off, the way experienced developers do when something looks technically correct but is architecturally wrong. It lacks the subtle instinct that a solution is over-engineered, that it's solving the wrong problem, or that it will create friction six months later.
A trained cheesemonger can smell a Roquefort and know it's exactly where it should be—ripe, pungent, and perfect. Someone without that training just smells something terrible. Knowing the difference requires domain knowledge. It requires years of experience. Code is no different. An eCommerce platform, and all its connecting systems and components, is no different.
When Anthropic explains why they're still hiring engineers even as AI writes their code, they point to exactly this. The hardest parts of engineering were never typing code. They're architecture. They're understanding what to build. They're talking to customers and translating ambiguous requirements into sound technical decisions. They're reviewing output with the discernment that comes from years of building systems that had to survive contact with reality.
Use AI, but let us help you use it better
We're not here to tell you to stop using AI. You're going to keep using it, and you should. But bring the experts into the conversation earlier. Don't reduce our role to last-mile cleanup, right before something gets merged into production. Otherwise, you're asking us to force a puzzle piece into a spot you've already chosen, when it may be the wrong piece. Or worse, the piece may belong to another puzzle entirely, and no one knew to ask or check.
Reviewing code is the least valuable place for us in the process. And it means you, the client, never actually get better at using the tool. You don't learn what to include in your prompts to ensure the output follows your codebase's patterns. You don't learn which questions to ask before the AI starts generating solutions.
Let us help you use AI more effectively. We can help you craft better prompts, establish architectural guardrails, and build the context that AI needs to produce code that actually fits your system. Everyone benefits.
Bring us in earlier.
Proper AI use matters for Drupal Commerce
Drupal Commerce was built with well-defined patterns, a modular architecture, and clear extension points. That structure is actually more valuable in an AI-driven world, not less. When you want to rely more on prompt engineering than code engineering, you need a platform that's already robust and well-organized. AI works better when the codebase it's building on has consistent patterns to follow.
But even with the best-structured platform, someone needs to know the domain. Someone needs to understand why an order management system works the way it does, why certain entities are separated, why a particular integration approach was chosen over another.
That knowledge comes from two decades of building eCommerce systems, hundreds of conversations with merchants (and experience being a merchant), and the hard-won understanding of what actually works when the orders start flowing.
AI is a powerful tool. But a tool without taste is just fast. And fast in the wrong direction is the most expensive kind of mistake.