Agentic coding—where AI agents generate and execute code based on high-level specifications—promises to automate software development but introduces significant tradeoffs in maintainability, skill retention, and cost predictability.
Overview
Agentic coding workflows typically involve a human orchestrator who defines requirements, generates a plan, and iterates with AI agents until the code meets expectations. While this approach accelerates initial output, it creates a growing disconnect between developers and the codebase: the reasoning and the execution both live inside the agent, while developers only review the results. The result is brittle systems that struggle to adapt to changing requirements.
Key Tradeoffs
Skill Atrophy
- Developers who rely on AI agents for implementation risk losing critical thinking and debugging skills. Studies, including one by Anthropic, show a 47% decline in debugging ability among heavy AI users [Lars Faye].
- Junior developers face a steeper learning curve, as reviewing generated code replaces hands-on coding, and review alone accounts for only half of skill development.
- Even senior engineers report diminished mental models of their applications, making it harder to reason about new features [Lars Faye].
Vendor Lock-In
- Teams dependent on AI agents (e.g., Claude Code) become vulnerable to outages and unpredictable token costs. Unlike fixed employee salaries, token expenses fluctuate with model updates, often doubling or tripling without warning.
- Local LLMs lack the scalability to replace cloud-based agents, leaving organizations exposed to pricing shifts or service disruptions.
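The cost-volatility point can be made concrete with simple arithmetic. All figures below are hypothetical, not real vendor pricing; the sketch only shows how a per-token price change propagates directly to monthly spend in a way a fixed salary does not.

```python
# Illustrative comparison of monthly agent spend before and after a
# token price increase. Prices and volumes are made up for this example.

def monthly_cost(tokens_per_day: int, price_per_million: float, days: int = 30) -> float:
    """Total spend for a month at a flat per-million-token price."""
    return tokens_per_day * days * price_per_million / 1_000_000

baseline = monthly_cost(tokens_per_day=5_000_000, price_per_million=3.00)
after_hike = monthly_cost(tokens_per_day=5_000_000, price_per_million=9.00)

print(f"baseline: ${baseline:,.2f}/month")    # baseline: $450.00/month
print(f"after 3x: ${after_hike:,.2f}/month")  # after 3x: $1,350.00/month
```

A tripled token price triples the bill overnight, with no renegotiation step in between.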
Inverted Priorities
- Traditional development prioritizes code understanding, conciseness, and alignment with standards. Agentic coding inverts this, emphasizing speed and volume over clarity.
- Ambiguous prompts lead to hallucinated output, which demands more revision cycles and more token consumption, further distancing developers from the code.
The Orchestrator Paradox
- Effective AI supervision demands the same skills that atrophy from overuse. As Sandor Nyako, LinkedIn’s Director of Software Engineering, noted, critical thinking is eroded when developers outsource problem-solving to agents [Lars Faye].
When to Use It
Agentic coding can be viable for:
- Brainstorming and planning: Generating specs or pseudo-code while humans handle implementation.
- Ad-hoc code generation: Delegating repetitive tasks (e.g., boilerplate, documentation) without sacrificing oversight.
- Educational purposes: Exploring concepts or tutorials, provided the output is discarded afterward.
Best Practices
- Limit generated code to what can be reviewed in a single sitting.
- Avoid using agents for tasks you couldn’t implement manually.
- Treat AI as a secondary tool that supports, rather than replaces, your own reasoning.
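The first practice above, capping generated changes at a reviewable size, can be enforced mechanically. This is a hypothetical guard, and the 400-line budget is an arbitrary placeholder to tune to your team's review capacity.

```python
# Hypothetical pre-merge guard: reject an agent-generated change whose
# total churn (lines added plus removed) exceeds what one person can
# review in a single sitting. The threshold is arbitrary.

def within_review_budget(added: int, removed: int, max_lines: int = 400) -> bool:
    """True if the change is small enough to review in one sitting."""
    return added + removed <= max_lines

assert within_review_budget(120, 40)       # modest change: reviewable
assert not within_review_budget(900, 300)  # too large for one sitting
```

In practice the `added`/`removed` counts would come from something like `git diff --numstat` in a CI step.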