Rosa Del Mar

Daily Brief

Issue 82 • 2026-03-23

Attention-Economics Framing For AI Output Quality

Sources: 1 • Confidence: Medium • Updated: 2026-04-13 03:52

Key takeaways

  • In this corpus, "slop" is defined as content that requires more human effort to consume than it took to produce.
  • In this corpus, sending raw Gemini output to a coworker is characterized as disrespecting the recipient's time rather than exercising creative freedom.

Sections

Attention-Economics Framing For AI Output Quality

Unknowns

  • What workflows are being discussed (e.g., requirements, design docs, code review notes, status updates), and how does the cost-asymmetry definition apply differently across them?
  • How is "effort to consume" measured (time, number of clarification cycles, error rate, rework, cognitive load), and what thresholds define unacceptable slop?
  • Does requiring summarization/cleanup before sharing AI output actually reduce downstream review time or errors in practice in the referenced context?
  • What is the baseline comparison for production effort (human-only vs AI-assisted), and does AI use change total system effort when including verification and iteration?
  • Are there explicit exceptions where sharing raw model output is acceptable (brainstorming, personal notes, early drafts) and how should those be labeled to avoid hidden costs?
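The open measurement question above can be made concrete. A minimal sketch of a "slop ratio" metric, assuming effort is approximated by reading/writing time, clarification cycles, and rework minutes; all field names, weights, and numbers here are illustrative assumptions, not drawn from the source:

```python
from dataclasses import dataclass


@dataclass
class EffortEstimate:
    minutes: float             # reading or writing time
    clarification_cycles: int  # follow-up questions triggered
    rework_minutes: float      # downstream corrections

    def total(self, cycle_cost_minutes: float = 10.0) -> float:
        # Assumed conversion: each clarification cycle costs ~10 minutes.
        return (self.minutes
                + self.clarification_cycles * cycle_cost_minutes
                + self.rework_minutes)


def slop_ratio(consume: EffortEstimate, produce: EffortEstimate) -> float:
    """Ratio > 1 means recipients spent more effort than the producer did."""
    return consume.total() / max(produce.total(), 1e-9)


# Raw model output: cheap to produce, expensive to consume.
raw = slop_ratio(
    consume=EffortEstimate(minutes=25, clarification_cycles=2, rework_minutes=15),
    produce=EffortEstimate(minutes=5, clarification_cycles=0, rework_minutes=0),
)

# Edited output: more production effort, far less consumption effort.
edited = slop_ratio(
    consume=EffortEstimate(minutes=8, clarification_cycles=0, rework_minutes=0),
    produce=EffortEstimate(minutes=30, clarification_cycles=0, rework_minutes=0),
)
```

Under this framing, the corpus definition of slop corresponds to a ratio above 1; any operational threshold would still need the calibration the unknowns above describe.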

Investor overlay

Read-throughs

  • Growing emphasis on recipient time as a cost metric could increase demand for enterprise AI features that enforce summarization, validation, and formatting before sharing outputs.
  • Workplace norms that label raw model output as disrespectful may drive adoption of AI governance and communication standards, including required labeling of draft versus final AI content.
  • If organizations operationalize slop as effort to consume versus effort to produce, vendors enabling measurement of review time, rework, and clarification cycles could see increased interest.

What would confirm

  • Organizations publish or adopt explicit policies requiring AI output cleanup, summarization, or verification before sharing internally, framed around protecting recipient time.
  • Pilots report measurable reductions in downstream review time, clarification cycles, or rework after introducing mandatory summarization or structured AI output templates.
  • Enterprise AI platforms add controls or defaults that block or warn on sharing unedited model output, or require labels indicating draft, brainstorm, or verified content.

What would kill

  • No observable change in review time or error rates after enforcing cleanup and summarization, suggesting the norm adds process overhead without reducing recipient effort.
  • Teams treat raw AI output sharing as acceptable in common workflows without labeling, and no governance standards emerge beyond informal guidance.
  • Measurement of effort to consume proves impractical or too subjective, preventing organizations from setting thresholds for unacceptable slop and limiting the norm to rhetoric rather than implementation.

Sources

  1. 2026-03-23 simonwillison.net