Modern large language models (LLMs) are a new kind of power tool that disrupts software engineering. Our field has shifted several times before, but this one is a doozy. A conversation with a friend led him to ask, “What would you encourage new computer science graduates or current CS students to focus on to make themselves successful?” It’s a natural question when it feels like a sea change is transforming the field.
Pattern pickup in practice
In my post about AI agents as power tools I wrote that agents are good at picking up existing patterns. Here’s a concrete example of what that looks like—both the strengths and the limits.
The setup
I have a project using SQLite with a bespoke database versioning and migration system. Nothing fancy, just a version number stored in the database and a set of migration functions that upgrade from one version to the next. The kind of thing you write when you need migrations but don’t need a framework.
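The version-and-migrations idea can be sketched in a few lines. This is a hypothetical reconstruction, not the project’s actual code: it assumes SQLite’s built-in `user_version` pragma holds the version number, and a `MIGRATIONS` dictionary of single-step SQL upgrades stands in for the migration functions.

```python
import sqlite3

# Hypothetical migrations: each entry upgrades the schema by exactly one version.
MIGRATIONS = {
    1: "CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT)",
    2: "ALTER TABLE notes ADD COLUMN created_at TEXT",
}

def migrate(conn: sqlite3.Connection) -> int:
    """Apply pending migrations in order; return the final schema version."""
    version = conn.execute("PRAGMA user_version").fetchone()[0]
    while version + 1 in MIGRATIONS:
        version += 1
        with conn:  # each step commits (or rolls back) atomically
            conn.execute(MIGRATIONS[version])
            conn.execute(f"PRAGMA user_version = {version}")
    return version

conn = sqlite3.connect(":memory:")
print(migrate(conn))  # → 2
```

Nothing fancy, as the post says: the loop is the whole framework, and re-running `migrate` on an up-to-date database is a no-op.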
The agent is not your colleague
In my post about AI agents as power tools I ended with “don’t confuse the tool for the craftsperson.” This is worth expanding, because the confusion runs in both directions, and both directions cause problems.
The mentorship trap
AI agents communicate in natural language. They say “I” and “I think” and “let me try.” They apologize when they make mistakes. This makes it easy to treat them like junior colleagues who work very fast.
Architecture matters more with AI
In an earlier post I wrote that code communicates to the computer and to future readers. With AI coding agents, there’s a third audience: the agent itself. The agent reads your code to understand how to extend it. Good architecture makes this communication clearer. Bad architecture makes the agent confidently generate more bad code.
AI agents are very good at using well-designed components. They are not very good at designing them. They can implement against a clear interface, follow established patterns, and generate code that fits into an existing structure. They struggle with deciding what the interfaces should be, knowing which abstractions will age well, and understanding the domain deeply enough to decompose it correctly.
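To make “implement against a clear interface” concrete, here is a minimal sketch with a hypothetical `KeyValueStore` protocol. The interface is the human design decision; filling in an implementation against it is the pattern-following work agents do well.

```python
from typing import Optional, Protocol

class KeyValueStore(Protocol):
    """The interface is the design decision; implementations are mechanical."""
    def get(self, key: str) -> Optional[str]: ...
    def put(self, key: str, value: str) -> None: ...

class InMemoryStore:
    """One implementation of the protocol; an agent could generate
    a SQLite-backed or Redis-backed sibling that fits the same shape."""
    def __init__(self) -> None:
        self._data: dict = {}

    def get(self, key: str) -> Optional[str]:
        return self._data.get(key)

    def put(self, key: str, value: str) -> None:
        self._data[key] = value

store: KeyValueStore = InMemoryStore()
store.put("greeting", "hello")
print(store.get("greeting"))  # prints "hello"
```

The hard part, deciding that a key-value abstraction is the right decomposition at all, happened before any of this code was written.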
Testing for AI coding agents
AI coding agents can move fast. The constraint on their productivity is correctness. They are always confident, but they need a clear, automatic signal about correctness; otherwise a human ends up providing all of that signal (slowly). A robust test suite can ensure the confidence aligns with correctness. The shape of the test suite matters as much as its existence.
Interface tests vs internal tests
Tests that cover publicly exposed interfaces without depending on internal implementation details are a force multiplier for AI agents (and humans). These tests define what correct behavior looks like without dictating how that behavior is achieved. An agent can refactor freely, restructure internals, rewrite implementations entirely—and as long as the tests stay green, the changes are probably safe. A “perfect” test suite would cover the entire set of visible behavior, so a green suite would mean correct software. Don’t let the difficulty of a perfect suite prevent building a good one, and consider how lower development costs and higher ROI on tests may mean aiming closer to “perfect” than you once would have.
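As an illustration of the distinction, here is a sketch with a hypothetical `Counter` class: the first test exercises only the public interface, while the second reaches into internal state and would fail a behavior-preserving refactor.

```python
class Counter:
    def __init__(self) -> None:
        self._counts = {}  # internal representation: a plain dict

    def add(self, word: str) -> None:
        self._counts[word] = self._counts.get(word, 0) + 1

    def count(self, word: str) -> int:
        return self._counts.get(word, 0)

def test_counts_via_interface():
    # Interface test: only public behavior. An agent can swap the dict
    # for collections.Counter and this still passes.
    c = Counter()
    c.add("a")
    c.add("a")
    assert c.count("a") == 2

def test_counts_via_internals():
    # Internal test: couples to the dict. The same safe refactor now
    # fails this test, punishing a change that was actually fine.
    c = Counter()
    c.add("a")
    assert c._counts == {"a": 1}

test_counts_via_interface()  # passes
test_counts_via_internals()  # also passes today, but pins the internals
```

A suite full of the first kind lets an agent restructure with a real safety net; a suite full of the second kind turns every refactor into a wall of spurious red.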
AI agents are power tools
In my last post I called AI coding agents “power tools for software developers.” The characteristics of power tools explain the capabilities, current limitations, and exciting opportunities of coding agents.
A table saw doesn’t know what you’re building. It doesn’t care if you’re making a bookshelf or a coffin. It will cut whatever you feed it, exactly where you guide it, with tremendous speed and force. The saw has no judgment. It has no taste. It won’t tell you that your design is ugly or that the joint you’re about to cut won’t hold weight. It does precisely what you tell it to do, including cutting your fingers off if you put them in the wrong place.
Solving problems
Years ago I wrote about my enthusiasm for automation of toil. The advent of coding agents is the first time I’ve faced automation of a task I enjoy. I enjoy developing software, and I enjoy coding.
For fun projects, the ‘return’ on the investment is ‘fun’—and sometimes the fun is in the coding, sometimes it’s in solving the problem, and sometimes it’s in solving the problem by coding.
I found programming fairly young. I was lucky to have a computer and a technologist father who encouraged my interest. I enjoyed coding: making something “go”. I also enjoyed solving problems. Initially these were problems like “how do I make it do what I want” or pursuing an interest in a language or tool.
MAVEN Launch
It was Monday, November 18, 2013, around 10:00 a.m., and I was standing in a humid Florida parking lot in the midst of a large crowd of people. We were all waiting to get on one of the many buses that had gathered there.
We all had good reason to be waiting in that parking lot. The buses were going to the NASA Causeway where we would get to see a rocket launch a spacecraft on its way to Mars.
Mock service dependencies
Suppose you’re building a service that depends on several other services to work. You write a bunch of code, carefully including error handling, with a plan for what happens if each service your new service calls fails. Naturally, you want to test your code. These services are invoked over a network. Perhaps they’re web services, but they may use some other network protocol. Suppose further your code is nicely factored so there’s a “client” class that presents the network service as a library API to the rest of the service.
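Here is a minimal sketch of that factoring, with hypothetical names (`PaymentClient`, `checkout`): the client class is the seam, and a test double that raises the same errors lets you exercise the failure-handling plan without touching a network.

```python
class PaymentClient:
    """The 'client' class: the real version would make a network call here."""
    def charge(self, user_id: str, cents: int) -> bool:
        raise NotImplementedError("network call goes here")

class FlakyPaymentClient(PaymentClient):
    """Test double that simulates the dependency failing."""
    def charge(self, user_id: str, cents: int) -> bool:
        raise TimeoutError("simulated outage")

def checkout(client: PaymentClient, user_id: str, cents: int) -> str:
    # The error-handling plan: degrade gracefully when the dependency fails.
    try:
        return "paid" if client.charge(user_id, cents) else "declined"
    except TimeoutError:
        return "retry-later"

print(checkout(FlakyPaymentClient(), "u1", 500))  # prints "retry-later"
```

Because `checkout` depends only on the client’s interface, substituting the double is trivial, and the failure path gets tested as routinely as the happy path.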
Communicating in code
Code is communicating. Communicating with the computer to make it do something useful. Communicating with the future people who will read and maintain the code.
The former doesn’t care how clever you are. The latter may know where you live. The latter may be you.