Rosa Del Mar

Daily Brief

Issue 78 2026-03-19

Verification Becomes The Binding Constraint

General
Sources: 1 • Confidence: Medium • Updated: 2026-04-11 17:42

Key takeaways

  • AI agents can produce large volumes of work quickly, but the output is often subtly flawed in ways that are easy to miss without careful review.
  • Tasks tied to Knightian uncertainty and unknown unknowns remain hard to automate because future states are not measurable enough to assign reliable probabilities.
  • As coding becomes easier to automate, software engineering work shifts from writing code toward verification and alignment with business and user goals.
  • Creating labels, evaluations, and training data for verification can accelerate displacement of peers performing similar work.
  • Cryptographic provenance and identity can reduce verification costs and restore trust in digital information, making crypto complementary to AI.

Sections

Verification Becomes The Binding Constraint

  • AI agents can produce large volumes of work quickly, but the output is often subtly flawed in ways that are easy to miss without careful review.
  • As coding becomes easier to automate, software engineering work shifts from writing code toward verification and alignment with business and user goals.
  • Extremely low automation costs increase systemic risk because AI-generated artifacts are rationally shipped with unverified errors that humans cannot fully review at scale.
  • Firms face a near-term incentive conflict where investing in verification tooling and cryptographic primitives is expensive and slows shipping while benefits accrue later.
  • Automation improves as tasks become measurable, and verification is the residual work of ensuring outputs match intent under real-world nuance and exceptions.
  • A useful way to prioritize automation is to split work into measurable tasks with low verification cost versus non-measurable tasks where correctness depends on complex or consensus-driven judgment.
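
The triage rule in the last bullet can be sketched as a simple filter: automate a task when its outcome is measurable and checking an AI-generated output is meaningfully cheaper than doing the work by hand. The task names and cost fields below are illustrative assumptions, not part of the source.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    measurable: bool      # can success be checked against an objective signal?
    verify_cost: float    # est. human minutes to verify one AI-produced output
    generate_cost: float  # est. human minutes to produce the output manually

def triage(tasks):
    """Split tasks into automation candidates vs human-judgment work.

    A task is a candidate when its outcome is measurable and verification
    is cheaper than manual generation; everything else stays with humans.
    """
    automate, keep_human = [], []
    for t in tasks:
        if t.measurable and t.verify_cost < t.generate_cost:
            automate.append(t)
        else:
            keep_human.append(t)
    return automate, keep_human

tasks = [
    Task("format release notes", measurable=True, verify_cost=2, generate_cost=30),
    Task("name the new product", measurable=False, verify_cost=0, generate_cost=120),
]
auto, human = triage(tasks)
```

In this toy run the measurable, cheap-to-verify task lands in the automation bucket, while the consensus-driven naming task stays with humans.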

Automation Boundaries: Measurability, Lived Experience, And Uncertainty

  • Tasks tied to Knightian uncertainty and unknown unknowns remain hard to automate because future states are not measurable enough to assign reliable probabilities.
  • Some tasks are not improvable via measurement because they are social constructs or status games where groups coordinate on shared meaning rather than objective correctness.
  • A key automation boundary is whether the signal that matters exists outside a human brain, where it can be measured, or only inside an individual's lived experience.
  • As people carry devices that capture richer audio/video and other signals, the barrier to externalizing and measuring individual experience will decrease.
  • Seasoned engineers differ from code-reading machines because lived struggles create uniquely trained internal models that help weight rare or hard-earned examples.

Org And Role Redesign Around Agents

  • As coding becomes easier to automate, software engineering work shifts from writing code toward verification and alignment with business and user goals.
  • An 'AI sandwich' firm structure can be organized as a top layer of human directors setting intent, a middle swarm of agents executing, and a bottom layer of expert human verifiers ensuring outputs match intent at scale.
  • Incremental improvements in AI agents have crossed a threshold where agents can complete long-running tasks with much less step-by-step guidance, making them feel like coworkers rather than tools.
  • A motivated individual can buy the functional leverage of multiple employees for roughly a few hundred dollars per month using AI tools.

Labor Dynamics: Displacement And The Verifier Treadmill

  • Creating labels, evaluations, and training data for verification can accelerate displacement of peers performing similar work.
  • Top verifiers must continually move up the value stack to stay ahead as automation expands, a dynamic described as the 'codifier's curse'.
  • Many jobs that are easy to automate and easy to verify will disappear soon, creating a large need for retraining that pushes people up the knowledge frontier.
  • AI systems will increasingly verify other AI systems, and human verification will become a final step that is sometimes unnecessary depending on the task.

Trust, Provenance, And Crypto As Verification Infrastructure

  • Cryptographic provenance and identity can reduce verification costs and restore trust in digital information, making crypto complementary to AI.
  • A high-trust future with cheap automation requires a strong verification stack to maintain ground truth and resist fake identities and coordinated manipulation such as Sybil attacks.
  • In at least one reported case, switching from legacy payments to stablecoin rails made an agentic commerce system more reliable because transaction signals were fully on-chain rather than behind brittle APIs and intermediaries.
  • As AI fragments activity into many more small companies and weakens traditional moats, credibly neutral blockchain networks become more attractive coordination rails for identity, reputation, provenance, payments, and insurance among agents and firms.
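
A minimal sketch of the provenance idea in the bullets above: bind a content hash to a signer's identity so downstream consumers can verify origin and integrity without re-auditing the content's history. HMAC-SHA256 with a shared key stands in here for a real public-key signature scheme such as Ed25519; the identifiers and key are illustrative assumptions.

```python
import hashlib
import hmac

def attest(content: bytes, signer_id: str, key: bytes) -> dict:
    """Produce a provenance record binding content to an identity."""
    digest = hashlib.sha256(content).hexdigest()
    tag = hmac.new(key, f"{signer_id}:{digest}".encode(), hashlib.sha256).hexdigest()
    return {"signer": signer_id, "sha256": digest, "sig": tag}

def verify(content: bytes, record: dict, key: bytes) -> bool:
    """Check that content matches the record and the record is authentic."""
    digest = hashlib.sha256(content).hexdigest()
    if digest != record["sha256"]:
        return False  # content was altered after signing
    expected = hmac.new(key, f"{record['signer']}:{digest}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["sig"])

key = b"demo-key"  # hypothetical shared key for illustration only
rec = attest(b"quarterly report v1", "agent-7", key)
```

Verification then costs one hash and one comparison, regardless of how many intermediaries handled the content, which is the cost reduction the bullets describe.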

Unknowns

  • What is the measured relationship between AI tool spend (hundreds of dollars per month) and realized output per person across representative teams and tasks?
  • How autonomous are current agents in practice (task duration, supervision frequency, completion rate) on real long-running workflows?
  • What are defect, security, and incident rates attributable to AI-generated artifacts versus human-only baselines when deployed to production?
  • Are organizations increasing verification investment (tooling, process, headcount) fast enough to match the growth in AI-generated output volume?
  • To what extent will machine-on-machine evaluation pipelines replace human verification in high-stakes settings, and what failure modes remain?

Investor overlay

Read-throughs

  • As AI generation scales, spend and urgency shift toward verification layers such as testing, evaluation, monitoring, security assurance, and governance, since correctness checking becomes the bottleneck and risk concentrates in validation.
  • Organizations redesign roles toward intent setting, agent supervision, and expert verification, creating demand for tooling and process that supports long-running delegated work, measurable outcomes, and structured handoffs between humans and agents.
  • Trust erosion in digital information increases interest in cryptographic provenance and identity as verification infrastructure, making crypto-related rails complementary to AI by reducing verification cost and restoring reliability in coordination and commerce.

What would confirm

  • Budgets and headcount move toward verification functions such as eval creation, QA, security review, incident response, and AI output auditing, and verification investment growth matches or exceeds AI tool spend growth.
  • Real-world agent deployments show multi-hour tasks with meaningful autonomy and a clear supervision cadence, and organizations adopt formal templates separating intent, agent execution, and expert verification.
  • Broader adoption of provenance and identity workflows that reduce disputes and fraud in AI-mediated content or transactions, with measurable reductions in verification time or cost versus prior processes.

What would kill

  • Verification does not become a binding constraint because AI outputs reach consistently high correctness on representative workflows, and organizations reduce human review without rising defect, security, or incident rates.
  • Agent autonomy fails to materialize in practice, requiring near-constant supervision and yielding low completion rates on long-running workflows, limiting the need for verification-oriented org redesign.
  • Cryptographic provenance and identity do not reduce verification costs or improve trust outcomes in practice, and adoption remains niche with no observable reliability or coordination benefits.

Sources

  1. 2026-03-19 a16z.simplecast.com