In my post about AI agents as power tools I ended with “don’t confuse the tool for the craftsperson.” This is worth expanding, because the confusion runs in both directions, and each direction causes its own problems.
The mentorship trap
AI agents communicate in natural language. They say “I” and “I think” and “let me try.” They apologize when they make mistakes. This makes it easy to treat them like junior colleagues who work very fast.
Consider how you’d work with a junior engineer on a new feature. Good mentorship involves calibrated feedback. You don’t give them all the sharp criticism you see. You give enough to ensure quality while leaving room for their design to remain theirs. The goal is to ship the feature and develop the engineer.
Sometimes you let them go down a path you think won’t quite work. Not because you want the feature to fail, but because some lessons only stick when you experience them. You’ve mitigated the business risk. The slightly longer timeline is an investment in their growth.
You temper criticism so they feel ownership and accomplishment. A junior engineer whose work is constantly torn apart and rebuilt stops contributing ideas. They need wins to build confidence. They need to see their designs ship, even imperfect ones, to develop judgment about what matters.
This is good mentorship. It produces senior engineers.
It’s also exactly wrong for agents.
Why mentorship instincts fail
Agents don’t learn from experience. They don’t build confidence. They don’t need to feel ownership. They won’t become senior agents if you nurture them carefully.
Every interaction with an agent starts mostly fresh. Some tools persist context between sessions, but that’s recall, not growth—the agent doesn’t develop better judgment from the experience. The calibrated correction you gave yesterday might be retrievable, but it didn’t teach the agent anything. The lesson you let it discover by going down a wrong path taught it nothing—you just spent extra time getting to the same place.
When you see a design flaw, say so directly. When the approach is wrong, have it rebuild. The cost of rebuilding is minutes now, not the days or weeks it would cost a human. There’s no growth happening that justifies the slower path. There’s no relationship to preserve. There’s no confidence to build.
Blunt feedback with agents isn’t harsh—it’s efficient. “This abstraction is wrong. The database layer shouldn’t know about the HTTP layer. Restructure it with the repository pattern.” A junior engineer might need that delivered with more care. The agent will just use the information.
The instincts that make you a good mentor can make you a worse agent user. Letting flawed work ship because “it’s good enough and they need a win” makes no sense when there’s no one who needs a win. If you’re driving the agent, the human in the picture is you. Softening feedback to preserve the relationship preserves nothing. Going slow to let lessons sink in wastes time on lessons that won’t be retained.
Where the learning happens
Agents don’t learn between sessions, but your codebase does. Your guidance files do. Your prompts and patterns do.
When you discover that the agent repeatedly makes a certain kind of mistake, the right response isn’t patient correction each time. It’s updating your guidance so the mistake stops happening. Encode it in a linter rule. Write a test that catches it. Establish a pattern in the code that demonstrates the right approach.
This is the only way agents “learn.” Not through experience, but through the constraints and examples you provide. The investment is improving the rails the agent runs on.
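As one sketch of what “encode it” can look like: a test that mechanically enforces a layering rule, so the recurring mistake is caught by the suite instead of re-corrected by hand. The layer names (`db`, `http`) and the single-rule setup are assumptions for illustration; adapt them to your package layout.

```python
import ast
from pathlib import Path

# Hypothetical rule: modules under db/ must never import the http layer.
FORBIDDEN = {"db": {"http"}}


def layering_violations(src_root: Path) -> list[str]:
    """Return 'file imports module' strings for forbidden cross-layer imports."""
    violations = []
    for layer, banned in FORBIDDEN.items():
        for py_file in (src_root / layer).rglob("*.py"):
            tree = ast.parse(py_file.read_text())
            for node in ast.walk(tree):
                names = []
                if isinstance(node, ast.Import):
                    names = [alias.name for alias in node.names]
                elif isinstance(node, ast.ImportFrom) and node.module:
                    names = [node.module]
                for name in names:
                    # Compare the top-level package against the banned set.
                    if name.split(".")[0] in banned:
                        violations.append(f"{py_file} imports {name}")
    return violations
```

Wired into CI as `assert layering_violations(SRC) == []`, this turns yesterday’s calibrated correction into a rail the agent can’t drift off of.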
When I find myself giving the same feedback twice, I stop and ask whether this should be encoded somewhere. If the agent keeps putting database logic in the wrong layer, maybe I need a clearer architectural pattern in the code, or an explicit note in my guidance files about where data access belongs. The third occurrence is a signal to fix the system, not keep correcting the output.
Building agents that feel like people
The flip side: designing agents to seem more human is also an antipattern.
Adding personality, warmth, or conversational flourishes to agent interfaces makes them feel more like colleagues. This actively works against effective use. If the interface encourages you to think of it as a person, you’ll interact with it like a person, with all the inefficiencies that implies. Humans anthropomorphize things with pasted-on googly eyes: of course we’ll anthropomorphize entities that communicate in natural, friendly language.
The best tool interfaces are transparent about what the tool does. A table saw doesn’t pretend to be a carpenter. A good agent interface shouldn’t pretend to be a colleague. An agent that says “I’d be happy to help with that!” before every response is training you to anthropomorphize it. An agent that just does the thing is being a tool.
The asymmetry problem
The real damage is asymmetric. If you interact with an agent the way you’d interact with a colleague, you waste some efficiency but the agent doesn’t care. If you interact with colleagues the way you should interact with an agent, you damage relationships.
Blunt, no-context communication works great with agents. It’s efficient. It gets results. Try it with a colleague and you’re being a jerk.
The habits you build in one context leak into the other. Spend all day barking commands at an agent and you might find yourself being curt with people. Spend all day being collaborative and collegial with an agent and you’re being less effective than you could be.
This argues for keeping the mental models distinct. The agent is a tool. Use it like a tool. Colleagues are people. Treat them like people. Using the same approach for both causes problems in both directions.
The distinction isn’t total. Clear writing and complete ideas work everywhere. Articulating deeper, more complete feedback will reduce round-trips with agents and people.
Tools and people
The distinction matters because tools and people both matter, in different ways.
Good tools make you more capable. Good colleagues make you more capable too, but through different mechanisms—collaboration, complementary judgment, shared understanding, trust built over time. Conflating them diminishes both.
With junior engineers, be a mentor. Their growth is real and compounds. With agents, be an operator. Give direct feedback. Demand rebuilds when warranted. Invest your patience in improving the rails—guidance files, patterns, tests—not in nurturing something that can’t grow.
Use the agent like a tool. It’s a very good tool. Treat your colleagues like people. They’re not interchangeable, and you need to develop the skills to work with both.