Definitions-And-Scope-Of-Agentic-Engineering
Sources: 1 • Confidence: High • Updated: 2026-03-17 15:15
Key takeaways
- The term "agent" is difficult to define and has frustrated AI researchers since at least the 1990s.
- An agent runs tools in a loop to achieve a goal.
- Code execution is the defining capability enabling agentic engineering because it allows iteration toward demonstrably working software rather than unvalidated outputs.
- Even if agents can write working code, software engineering still requires navigating solution options and tradeoffs to decide what code to write for specific circumstances and requirements.
- It is a mistake to define vibe coding as any LLM-generated code; the term is most useful for unreviewed, prototype-quality code not yet brought to production-ready standards.
Sections
Definitions-And-Scope-Of-Agentic-Engineering
- The term "agent" is difficult to define and has frustrated AI researchers since at least the 1990s.
- Agentic engineering is the practice of developing software with the assistance of coding agents.
- Coding agents can both write and execute code.
- Examples of coding agents include Claude Code, OpenAI Codex, and Gemini CLI.
Agent-Loop-Architecture-And-Core-Mechanism
- An agent runs tools in a loop to achieve a goal.
- In an LLM-based agent, software calls an LLM with a prompt and tool definitions, executes the tools the model requests, and feeds the tool results back into the LLM, repeating until the model produces a final answer.
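The loop described above can be sketched in a few lines. This is a minimal illustration, not any vendor's API: `call_llm` is a hypothetical stand-in for a real model call, and the tool dispatcher holds one toy tool.

```python
def run_tool(name, args):
    # Hypothetical tool dispatcher: map tool names to plain functions.
    tools = {"add": lambda a, b: a + b}
    return tools[name](*args)

def agent_loop(goal, call_llm, max_steps=10):
    # The conversation history carries the goal and all tool results.
    history = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        reply = call_llm(history)  # model sees history plus tool definitions
        if reply["type"] == "final":
            return reply["content"]  # goal reached, exit the loop
        result = run_tool(reply["tool"], reply["args"])
        history.append({"role": "tool", "content": result})  # feed result back
    raise RuntimeError("step budget exhausted")

def fake_llm(history):
    # Stand-in for a real model: request one tool call, then finish.
    if len(history) == 1:
        return {"type": "tool", "tool": "add", "args": [2, 3]}
    return {"type": "final", "content": history[-1]["content"]}
```

The `max_steps` budget matters in practice: without it, a model that never signals completion would loop forever.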
Execution-And-Verification-As-Enabling-Conditions
- Code execution is the defining capability enabling agentic engineering because it allows iteration toward demonstrably working software rather than unvalidated outputs.
- Getting strong results from coding agents requires providing appropriate tools, specifying problems at an appropriate level of detail, and verifying and iterating on outputs until they are robust and credible.
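The verify-and-iterate step above is what distinguishes execution-backed agents from one-shot generation. A minimal sketch, assuming a hypothetical `generate` callable that takes the problem plus the previous attempt's error output:

```python
import subprocess
import sys
import tempfile

def verify(code):
    """Run candidate Python code in a subprocess; return (ok, stderr)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    proc = subprocess.run([sys.executable, path], capture_output=True, text=True)
    return proc.returncode == 0, proc.stderr

def iterate(problem, generate, max_attempts=3):
    feedback = ""
    for _ in range(max_attempts):
        code = generate(problem, feedback)
        ok, feedback = verify(code)  # execution is the check, not model confidence
        if ok:
            return code  # demonstrably working, not merely plausible
    raise RuntimeError("no passing candidate within budget")

def demo_generate(problem, feedback):
    # Stand-in generator: a broken first attempt, fixed once it sees the error.
    return "1/0" if not feedback else "print('ok')"
```

Real harnesses substitute a test suite or linter for the bare `verify` run, but the shape is the same: execute, capture the failure, feed it back.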
Human-Role-And-Compounding-Improvement-Through-Harnesses
- Even if agents can write working code, software engineering still requires navigating solution options and tradeoffs to decide what code to write for specific circumstances and requirements.
- LLMs do not learn from past mistakes, but coding agent performance can improve if humans deliberately update instructions and tool harnesses based on lessons learned.
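Because the model itself retains nothing between sessions, the harness has to carry the memory. One common shape, sketched with a hypothetical `AGENT_LESSONS.md` file (the filename and prompt format are illustrative, not a standard): humans append lessons after reviewing failures, and every prompt is assembled from the current file.

```python
from pathlib import Path

LESSONS = Path("AGENT_LESSONS.md")  # hypothetical file, typically versioned in git

def record_lesson(text):
    # A human adds a lesson after reviewing a failure or a bad output.
    with LESSONS.open("a", encoding="utf-8") as f:
        f.write(f"- {text}\n")

def build_prompt(task):
    # The model never learns; the harness prepends accumulated lessons each run.
    lessons = LESSONS.read_text(encoding="utf-8") if LESSONS.exists() else ""
    return f"Project lessons:\n{lessons}\nTask: {task}"
```

Keeping the lessons file in version control gives the feedback loop an audit trail: each improvement to agent behavior is a reviewable diff.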
Vibe-Coding-Versus-Production-Governance-Boundaries
- It is a mistake to define vibe coding as any LLM-generated code; the term is most useful for unreviewed, prototype-quality code not yet brought to production-ready standards.
- The term "vibe coding" was coined by Andrej Karpathy in February 2025 to describe prompting LLMs to write code while the user forgets that the code exists.
Watchlist
- The guide "Agentic Engineering Patterns" is a work in progress that will add chapters and update existing ones as techniques and understanding evolve.
Unknowns
- What measurable changes in throughput, defect rates, and rework (before/after) occur when teams adopt coding agents with execution and verification loops?
- What specific verification harnesses, tests, or evaluation gates are required to make agent-written code production-credible, and how costly are they to build and maintain?
- What are the dominant bottlenecks when scaling agentic engineering: tool reliability, execution environment constraints, review capacity, or requirements/specification quality?
- How should organizations operationally distinguish prototype "vibe coding" from production-ready agentic engineering (e.g., required reviews, tests, documentation, ownership)?
- How should teams structure the human feedback loop for improvement (instruction updates, harness/tool changes), and what artifacts should be versioned and audited over time?