Architecture matters more with AI

In an earlier post I wrote that code communicates to the computer and to future readers. With AI coding agents, there’s a third audience: the agent itself. The agent reads your code to understand how to extend it. Good architecture makes this communication clearer. Bad architecture makes the agent confidently generate more bad code.

AI agents are very good at using well-designed components. They are not very good at designing them. They can implement against a clear interface, follow established patterns, and generate code that fits into an existing structure. They struggle with deciding what the interfaces should be, knowing which abstractions will age well, and understanding the domain deeply enough to decompose it correctly.
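To make "implement against a clear interface" concrete, here is a minimal sketch. The names (`RateLimiter`, `FixedWindowLimiter`) are hypothetical, invented for illustration: the point is that once a human fixes a narrow interface, producing a conforming implementation is the kind of mechanical work an agent does well.

```python
from typing import Protocol


class RateLimiter(Protocol):
    """A narrow, human-designed interface. An agent can implement
    against this without understanding the rest of the system."""

    def allow(self, key: str) -> bool:
        """Return True if the request identified by key may proceed."""
        ...


class FixedWindowLimiter:
    """One concrete implementation. Writing this is mechanical
    once the interface above is fixed; deciding that the interface
    should be a single allow(key) -> bool was the hard part."""

    def __init__(self, limit: int) -> None:
        self.limit = limit
        self.counts: dict[str, int] = {}

    def allow(self, key: str) -> bool:
        self.counts[key] = self.counts.get(key, 0) + 1
        return self.counts[key] <= self.limit
```

The design judgment lives in the `Protocol`, not the class below it.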

This is the same gap that separates a strong junior engineer from a senior architect. The difference is that an agent can produce code at a pace that makes both good and bad architecture decisions compound faster than before.

Architecture as force multiplier

Good architecture has always been a force multiplier. What’s different now is the magnitude of that multiplication.

When an agent can generate implementations quickly, the rate at which code accumulates against your interfaces accelerates. If the architecture is solid, this is pure gain. The agent produces code that fits cleanly, the system grows in maintainable ways, and you spend your time on design decisions rather than typing.

If the architecture is weak, the same acceleration works against you. The agent will happily generate code that fits a bad design. It will propagate the bad patterns. It will build on shaky foundations without hesitation. And because more code gets written faster, you end up with more technical debt accumulating in less time.

The feedback loop tightens in both directions.

The agent is a pattern amplifier

In my post about power tools I called agents “pattern amplifiers”—they observe the patterns in your codebase and reproduce them. Good patterns get reproduced. Bad patterns get reproduced. Inconsistent patterns get reproduced inconsistently.

This means the first few implementations of any pattern carry more weight than they used to. If you establish a clean way to handle errors, the agent will follow it. If you establish a messy way, the agent will follow that too. The agent won’t clean it up. It won’t refactor toward consistency. It will do what it sees.
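As an illustration of what an established pattern looks like from the agent's point of view, here is a hypothetical error-handling convention (`ConfigError` and `read_port` are invented names). Once a few call sites follow this shape, an agent asked to read another setting will generate the same shape, for better or worse.

```python
class ConfigError(Exception):
    """One error type for configuration problems, with context attached.
    This is the pattern the agent will observe and reproduce."""

    def __init__(self, key: str, reason: str) -> None:
        super().__init__(f"config key {key!r}: {reason}")
        self.key = key
        self.reason = reason


def read_port(config: dict) -> int:
    """An existing call site that establishes the pattern: validate,
    then raise ConfigError with the key and a specific reason."""
    raw = config.get("port")
    if raw is None:
        raise ConfigError("port", "missing")
    try:
        return int(raw)
    except ValueError as exc:
        raise ConfigError("port", f"not an integer: {raw!r}") from exc
```

If the first few readers had instead returned `None` on failure, or printed and continued, the agent would reproduce that just as faithfully.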

In my post about testing I described how test suite design affects agent productivity. The same principle applies to codebase design more broadly. The codebase itself becomes a specification. The agent reads it and generates more of the same.

This has always been true of human developers too, especially new team members. But humans eventually develop judgment. They start to notice when a pattern is awkward and propose improvements. The agent doesn’t—at least not yet. It optimizes for fitting in, not for improving the neighborhood.

Where human judgment matters

The best engineers have always focused on problem definition, abstractions, and component boundaries first. With agents handling implementation, human focus can tighten on the areas where judgment and taste have the most impact.

The agent can read code and understand behavior. It can’t infer intent from behavior alone. And it can’t tell you whether the abstraction you chose will still make sense in six months. Capture the constraints, trade-offs, and reasoning behind decisions—when circumstances change, that’s what lets you revisit them.
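One lightweight way to capture that reasoning is to record it next to the decision itself. A hypothetical sketch (the constant and the scenario are invented): the comment states the constraint that forced the choice, so a future reader, human or agent, knows when the decision is up for revisiting.

```python
# Decision: cache invalidation is time-based, not event-based.
# Why: the upstream service exposes no change feed, so polling with
#      a short TTL was the only option. Revisit if a change feed ships.
# Trade-off: up to 60 seconds of staleness, in exchange for simplicity.
CACHE_TTL_SECONDS = 60
```

The code alone says "60"; only the comment says why 60 was acceptable and what would make it wrong.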

Plan to throw one away

Fred Brooks wrote that you should plan to throw one away, because you will anyway. The advice has been contentious for decades—throwing away code feels wasteful, and iterative development was supposed to make it unnecessary.

With agents, the calculus changes. In the power tools post I noted that cheap prototypes change the design process. Writing one to throw away is cheap now. The agent can produce a working implementation in hours. You learn from it. You see where the abstractions chafe, where the boundaries are wrong, where you misunderstood the problem. Then you throw it away and build it right.

This isn’t the same as iterative refinement. Iterative refinement assumes the foundation is sound and you’re polishing the details. Throwing one away means recognizing that the foundation was wrong and starting over with what you learned. The cost of starting over used to be high enough that people avoided it even when they knew the foundation was shaky. That cost has dropped dramatically.

I’ve found this liberating. When I’m uncertain about a design, I can just try it. Have the agent build it out. See how it feels. If it’s wrong, I’ve invested minutes or hours instead of weeks, and I’ve gained understanding.

The design phase doesn’t have to happen entirely in your head anymore. You can design by building prototypes, as long as you’re willing to throw them away. The key is recognizing when to throw it away rather than continuing to build on a bad foundation. The agent won’t tell you. That judgment is still yours.

Working with the grain

Good architecture becomes a precondition for getting maximum value from agents. Agents work with the grain of your architecture, amplifying whatever’s there.

I’ve been framing this series around the idea that the code is not the point; solving problems is. Architecture is how you organize the solution. With agents handling more of the code, the architecture becomes a larger fraction of the human contribution.

This is not a new skill. Software architects have been doing this work for decades. What’s new is that good architecture is becoming table stakes for effective development. The teams that invest in clean interfaces, clear boundaries, and well-documented intent will see multiplicative gains from agents. The teams that don’t will find agents multiplying their problems instead.

The craft relocates again.