Rosa Del Mar

Daily Brief

Issue 84 2026-03-25

Bottleneck Shift: Throughput Up, Verification And Review Become Limiting

General
Sources: 1 • Confidence: High • Updated: 2026-04-13 03:54

Key takeaways

  • Agentic engineering trends incentivize developers to maximize code output speed with insufficient discipline or regard for downstream consequences.
  • Writing by hand may be insufficient as a mitigation on its own; renewed discipline is still needed because typing is no longer the primary bottleneck.
  • Orchestrating many agents removes the experiential feedback of manual development, letting small mistakes compound into unsustainable complexity that becomes evident only later.
  • Delegating too much agency to autonomous coding agents can leave developers with a degraded understanding of system state as agents generate and amplify complexity.
  • Agent-generated mistakes can accumulate faster than human-generated ones because human throughput previously acted as a bottleneck on error injection.

Sections

Bottleneck Shift: Throughput Up, Verification And Review Become Limiting

  • Agentic engineering trends incentivize developers to maximize code output speed with insufficient discipline or regard for downstream consequences.
  • Agent-generated mistakes can accumulate faster than human-generated ones because human throughput previously acted as a bottleneck on error injection.
  • Coding agents compress delivery timelines: changes that were previously deliberated for weeks can now land within hours.

Process Countermeasures And Dispute Over What Mitigations Are Sufficient

  • Writing by hand may be insufficient as a mitigation on its own; renewed discipline is still needed because typing is no longer the primary bottleneck.
  • To contain agent-driven complexity, the document recommends allocating time for reflection, explicitly rejecting unnecessary work, and limiting daily agent-generated code to what can be competently reviewed.
  • System-defining elements such as architecture and APIs should be written by hand rather than delegated to agents.
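The review-capacity limit above can be made concrete. As a minimal sketch, assuming a team caps merged agent-generated lines per day at what its reviewers can competently cover (the function names and figures are illustrative assumptions, not from the source):

```python
# Hypothetical change-budget check: cap daily agent-generated code at what
# the team's reviewers can competently review. All names and numbers here
# are illustrative assumptions, not part of the source document.

def daily_change_budget(reviewers: int,
                        review_hours_per_reviewer: float,
                        lines_reviewed_per_hour: int) -> int:
    """Lines of agent-generated code the team can competently review per day."""
    return int(reviewers * review_hours_per_reviewer * lines_reviewed_per_hour)

def within_budget(lines_merged_today: int, budget: int) -> bool:
    """True if today's agent-generated changes fit the review budget."""
    return lines_merged_today <= budget

budget = daily_change_budget(reviewers=3,
                             review_hours_per_reviewer=2.0,
                             lines_reviewed_per_hour=200)
print(budget)                       # 1200 lines/day under these assumptions
print(within_budget(900, budget))   # True: generation matches review capacity
print(within_budget(2500, budget))  # False: generation outpaces review
```

The point of the sketch is the direction of the constraint: the budget is derived from verification capacity, not from how fast agents can generate code.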

Feedback-Loop Degradation: Compounding Complexity And Delayed Pain

  • Orchestrating many agents removes the experiential feedback of manual development, letting small mistakes compound into unsustainable complexity that becomes evident only later.
  • Rapid agent-driven evolution can push a codebase beyond developers' ability to reason about it clearly, creating cognitive debt.

Governance Risk: Loss Of Situational Awareness Under Autonomy

  • Delegating too much agency to autonomous coding agents can leave developers with a degraded understanding of system state as agents generate and amplify complexity.

Unknowns

  • What objective pre- and post-adoption metrics (defect density, incident rate, rollback frequency, review latency, test flakiness, build times) support or refute the claimed increase in error injection and complexity acceleration under coding agents?
  • Which specific conditions (team size, codebase maturity, domain criticality, extent of agent autonomy/orchestration) determine when agent-driven speed becomes net harmful versus net beneficial?
  • What is the effective verification capacity needed to safely match increased generation throughput (e.g., review bandwidth, test coverage, automated checks), and how should teams set practical daily change budgets?
  • Does keeping architecture and APIs human-authored measurably reduce long-run complexity and cognitive debt compared with agent-authored design, and under what workflows?
  • How should organizations measure 'cognitive debt' reliably (e.g., onboarding time, incident comprehension, documentation accuracy) and connect it to concrete outcomes?

Investor overlay

Read-throughs

  • Verification and review tooling and workflows may become a higher priority as code generation throughput rises, shifting spending focus from generation to testing, CI, code review automation, and governance controls.
  • Teams may adopt change budgets and stricter process discipline to match verification capacity, favoring products that measure and throttle delivery velocity relative to review and QA bandwidth.
  • Demand may rise for ways to quantify and manage cognitive debt and situational awareness as agent autonomy increases, emphasizing observability of development process health rather than raw output speed.

What would confirm

  • Organizations report rising review latency, defect density, incident rate, rollback frequency, or test flakiness after adopting autonomous or orchestrated coding agents, alongside higher change throughput.
  • Enterprises formalize policies such as code generation caps aligned to review capacity, mandatory reflection time, or rules keeping architecture and APIs human-authored to limit downstream complexity.
  • Teams begin tracking cognitive debt proxies such as onboarding time, incident comprehension time, and documentation accuracy and tie them to governance decisions on agent autonomy.

What would kill

  • Post-adoption metrics show unchanged or improved defect density, incident rate, rollback frequency, and review latency despite higher change throughput, suggesting verification is not the limiting factor.
  • High-autonomy agent workflows do not reduce situational awareness, as measured by stable incident comprehension, maintenance throughput, and developer understanding of system state over time.
  • Keeping architecture and APIs human-authored does not measurably affect long-run complexity or outcomes compared with agent-authored design under comparable workflows.

Sources

  1. 2026-03-25 simonwillison.net