Rosa Del Mar

Daily Brief

Issue 87 2026-03-28

Workflow Framing And Labor Shift Toward Architecture Decisions

General
Sources: 1 • Confidence: High • Updated: 2026-04-13 03:54

Key takeaways

  • Matt Webb describes his current practice as "vibing" rather than "coding" or "vibe coding."
  • Matt Webb claims that agentic coding tends to grind problems away through exhaustive iteration, which can incur extremely high token and compute costs.
  • Matt Webb states that the desired outcome for AI coding agents is fast solutions that remain maintainable, adaptive, and composable so that improvements elsewhere can lift the whole stack.
  • Matt Webb argues that high-quality libraries with interfaces that make the correct approach the easiest approach are a strong foundation for agentic and developer productivity.
  • Matt Webb claims that in a "vibing" workflow, developers may read fewer lines of code while making more architecture-level decisions.

Sections

Workflow Framing And Labor Shift Toward Architecture Decisions

  • Matt Webb describes his current practice as "vibing" rather than "coding" or "vibe coding."
  • Matt Webb claims that in a "vibing" workflow, developers may read fewer lines of code while making more architecture-level decisions.

Agentic Iteration Drives High Compute/Token Cost

  • Matt Webb claims that agentic coding tends to grind problems away through exhaustive iteration, which can incur extremely high token and compute costs.
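
The compute-cost concern above can be made concrete with a back-of-the-envelope estimate. This is a hypothetical illustration, not data from the source: the iteration count, tokens per iteration, and per-million-token price are all assumptions.

```python
# Hypothetical back-of-the-envelope estimate of the cost of an agentic
# loop that iterates until its checks pass. All numbers are assumptions.

def loop_cost(iterations: int, tokens_per_iteration: int,
              usd_per_million_tokens: float) -> tuple[int, float]:
    """Return (total tokens, dollar cost) for an iterative agent run."""
    total_tokens = iterations * tokens_per_iteration
    return total_tokens, total_tokens / 1_000_000 * usd_per_million_tokens

# A run that grinds through 40 iterations at ~50k tokens each, priced
# at an assumed $10 per million tokens:
tokens, cost = loop_cost(40, 50_000, 10.0)
print(tokens, cost)  # → 2000000 20.0
```

Even this modest hypothetical run burns two million tokens, which is the kind of exhaustive iteration the takeaway flags.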

Success Criteria Shift From Task Completion To Lifecycle Quality

  • Matt Webb states that the desired outcome for AI coding agents is fast solutions that remain maintainable, adaptive, and composable so that improvements elsewhere can lift the whole stack.

Libraries And Interfaces As A Leverage Point For Reliability And Productivity

  • Matt Webb argues that high-quality libraries with interfaces that make the correct approach the easiest approach are a strong foundation for agentic and developer productivity.
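
As a hedged sketch of what "the correct approach is the easiest approach" can mean in practice (the helper and its behavior are illustrative assumptions, not drawn from the source): a library whose one obvious entry point bakes in the safe behavior, so callers get it without thinking about it.

```python
# Illustrative sketch: the library's obvious entry point is also the safe
# one. Here the only way to write a file is atomically (temp file + rename),
# so callers never ship the subtle partial-write bug.
import os
import tempfile

def save_text(path: str, text: str) -> None:
    """Write text to path atomically; readers never observe a partial file."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(text)
        os.replace(tmp, path)  # atomic rename on POSIX and Windows
    except BaseException:
        os.unlink(tmp)
        raise

# The easy call is the correct call; there is no unsafe variant to reach for.
target = os.path.join(tempfile.gettempdir(), "notes.txt")
save_text(target, "hello")
```

An agent (or a human) pointed at such an interface cannot easily pick the wrong pattern, which is the leverage the takeaway describes.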

Unknowns

  • What is the actual distribution of token usage and wall-clock time for agentic coding runs in real workflows, and how often do long iterative loops occur?
  • Under what conditions do agent-produced solutions remain maintainable, adaptive, and composable over multiple iterations of change?
  • Do high-quality shared libraries and interfaces measurably reduce defects, rework, and variability in agent-generated code compared to ad-hoc implementations?
  • How prevalent is the "vibing" workflow across teams, and what is its impact on review practices, incident rates, and long-term codebase coherence?
  • Is there any direct decision-readthrough (operator, product, or investor) supported by this corpus beyond general suggestions to monitor cost and quality metrics?

Investor overlay

Read-throughs

  • Rising demand for observability and governance of AI coding agents, specifically tooling that measures token usage, wall-clock time, and iterative-loop frequency to control cost and reliability.
  • Greater emphasis on maintainability, adaptability, and composability as evaluation criteria for AI-generated code, creating a need for lifecycle quality metrics, automated reviews, and long-term code health tooling.
  • Shift of developer effort from line-level coding to architecture decisions could increase the value of shared libraries and interface design systems that make correct patterns the easiest path for humans and agents.

What would confirm

  • Teams publish or report standardized metrics for agent runs such as tokens per task, time to completion, and frequency of long iterative loops, and treat them as cost or productivity KPIs.
  • Engineering orgs adopt acceptance gates for AI-generated code tied to maintainability and composability outcomes, such as measurable rework rates, defect rates, or change success over multiple iterations.
  • Increased investment in shared libraries and interface constraints, with internal reports showing reduced variability, fewer defects, or less rework in agent-assisted code versus ad-hoc implementations.
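
The agent-run KPIs sketched above could be computed from run logs roughly as follows. The record fields and the long-loop threshold are assumptions for illustration; the source defines no standard schema.

```python
# Hypothetical KPI rollup for agent runs. The record fields ('tokens',
# 'seconds', 'iterations') and the 20-iteration "long loop" threshold
# are illustrative assumptions, not an established schema.
from statistics import mean

LONG_LOOP_ITERATIONS = 20  # arbitrary threshold for a "long" loop

def run_kpis(runs: list[dict]) -> dict:
    """Roll per-run records up into tokens, time, and loop-rate KPIs."""
    return {
        "tokens_per_task": mean(r["tokens"] for r in runs),
        "seconds_per_task": mean(r["seconds"] for r in runs),
        "long_loop_rate": sum(
            r["iterations"] >= LONG_LOOP_ITERATIONS for r in runs
        ) / len(runs),
    }

runs = [
    {"tokens": 120_000, "seconds": 300, "iterations": 4},
    {"tokens": 900_000, "seconds": 2_400, "iterations": 35},
]
kpis = run_kpis(runs)
```

Tracking numbers like these over time is what would turn the confirmation signals above from anecdotes into KPIs.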

What would kill

  • Real-world measurements show that agentic runs rarely enter long iterative loops and that token and compute costs are consistently bounded, reducing urgency for specialized cost observability.
  • Longitudinal change data shows agent-produced solutions degrade maintainability or composability over iterations, and teams revert to heavier manual coding and review practices.
  • No measurable improvement from shared libraries and interface design on defects, rework, or variability in agent-assisted code, implying limited leverage from this intervention.

Sources

  1. 2026-03-28 simonwillison.net