Rosa Del Mar

Daily Brief

Issue 74 • 2026-03-15

Core Definitions And Scope Of Agentic Engineering

6 min read
General
Sources: 1 • Confidence: High • Updated: 2026-04-13 03:59

Key takeaways

  • Agentic engineering is the practice of developing software with the assistance of coding agents.
  • While LLMs do not learn from past mistakes, coding agents can improve if humans deliberately update instructions and tool harnesses based on lessons learned.
  • An agent runs tools in a loop to achieve a goal.
  • The guide "Agentic Engineering Patterns" is a work in progress that will add chapters and update existing ones as techniques and understanding evolve.
  • Software engineering still requires navigating many solution options and tradeoffs to decide what code to write for specific circumstances and requirements, even if agents can write working code.

Sections

Core Definitions And Scope Of Agentic Engineering

  • Agentic engineering is the practice of developing software with the assistance of coding agents.
  • Coding agents can both write and execute code.
  • Claude Code, OpenAI Codex, and Gemini CLI are examples of coding agents.

Execution And Verification As Enablers And Bottlenecks

  • While LLMs do not learn from past mistakes, coding agents can improve if humans deliberately update instructions and tool harnesses based on lessons learned.
  • Code execution is the defining capability enabling agentic engineering because it allows iteration toward demonstrably working software rather than unvalidated outputs.
  • Getting strong results from coding agents requires giving them appropriate tools, specifying problems at the right level of detail, and verifying and iterating on outputs until they are robust and credible.

Agent Loop Mechanism: LLM Plus Tools

  • An agent runs tools in a loop to achieve a goal.
  • In an LLM-based agent, software calls an LLM with a prompt and tool definitions, executes requested tools, and feeds tool results back into the LLM.
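The loop described above can be sketched in a few lines of Python. This is an illustrative skeleton under stated assumptions: `call_llm` is a stub standing in for any real LLM API, and the single `run_shell` tool is hypothetical, not taken from the source.

```python
# Minimal agent loop: call an LLM with a prompt and tool definitions,
# execute any tool it requests, feed the result back, and stop when
# the model returns a final answer. `call_llm` is a stub for a real API.

def run_shell(command: str) -> str:
    """Illustrative tool: pretend to execute a shell command."""
    return f"(output of: {command})"

TOOLS = {"run_shell": run_shell}

def call_llm(messages, tools):
    """Stub LLM: requests one tool call, then produces an answer."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "run_shell", "args": {"command": "pytest"}}
    return {"answer": "Tests pass; goal achieved."}

def agent_loop(goal: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        reply = call_llm(messages, TOOLS)
        if "answer" in reply:           # model is done: exit the loop
            return reply["answer"]
        tool = TOOLS[reply["tool"]]     # model asked for a tool call
        result = tool(**reply["args"])  # execute the requested tool
        messages.append({"role": "tool", "content": result})
    return "Gave up after max_steps."

print(agent_loop("Make the test suite pass"))
```

The `max_steps` bound is the usual safeguard against a model that never converges; production harnesses add logging and sandboxing around the tool-execution step.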

Expectations And Watch Items

  • The guide "Agentic Engineering Patterns" is a work in progress that will add chapters and update existing ones as techniques and understanding evolve.
  • Used effectively, coding agents enable teams to be more ambitious and should help produce more and higher-quality code that solves more impactful problems.

Human Role: Problem Framing And Tradeoffs

  • Software engineering still requires navigating many solution options and tradeoffs to decide what code to write for specific circumstances and requirements, even if agents can write working code.

Watchlist

  • The guide "Agentic Engineering Patterns" is a work in progress that will add chapters and update existing ones as techniques and understanding evolve.

Unknowns

  • What measurable changes in throughput, defect rates, and maintenance burden occur when teams adopt coding agents under a verification-first workflow?
  • What specific tooling and harness components are required to make agent code execution safe and auditable in real development environments?
  • Which kinds of software tasks benefit most from an agent loop versus simpler LLM-assisted workflows without execution or looping?
  • How should organizations operationally distinguish prototype-only "vibe coding" from production-intended agentic engineering in policy and review processes?
  • What update cadence and substantive changes will occur in the evolving "Agentic Engineering Patterns" guidance, and which recommendations remain stable over time?

Investor overlay

Read-throughs

  • Verification-first workflows could shift software spend toward testing, evaluation harnesses, and execution sandboxing since iterative execution and verification are framed as key enablers and bottlenecks.
  • Demand may rise for auditable tool interfaces and safe agent execution boundaries because agents run tools in loops, making tool definitions and execution controls central design elements.
  • Ongoing updates to an evolving patterns guide could create recurring need for process changes and enablement as teams iterate instructions and harnesses based on lessons learned.

What would confirm

  • Published, measurable changes in throughput, defect rates, or maintenance burden from teams using coding agents under verification-first workflows.
  • Clear, repeatable reference components for safe and auditable agent code execution in real development environments, including tool harness and logging expectations.
  • Stable guidance emerging from the evolving patterns guide, with consistent recommendations that organizations can operationalize beyond prototype-only "vibe coding".

What would kill

  • Evidence that verification-first agent loops fail to improve outcomes, or that they raise maintenance burden despite disciplined harness updates.
  • Practical inability to make agent tool execution safe and auditable at scale, leading teams to avoid execution and revert to non-looping LLM assistance.
  • Guidance remains highly unstable or contradictory across updates, preventing standardization of policies distinguishing prototype workflows from production agentic engineering.

Sources

  1. 2026-03-15 simonwillison.net