Rosa Del Mar

Daily Brief

Issue 103 • 2026-04-13

AI-Authored Code Share and Adoption Rate

9 min read
General
Sources: 1 • Confidence: Medium • Updated: 2026-04-13 04:04

Key takeaways

  • Dario Amodei was reported as saying that roughly 70–90% of the code written at Anthropic is now written by Claude, and that the effect has been to shift remaining human work toward managing AI systems rather than to reduce headcount.
  • Ramp was reported to have used an internal agent system to identify the 20 most common Sentry issues, spawn 20 agents to fix them in parallel, and open 20 pull requests, all of which reportedly worked.
  • The statement that “AI writes 90% of the code” was described as difficult to evaluate because the term “code” and the measurement scope are ambiguous.
  • A mechanism was proposed that AI agents make experimentation cheaper, both emotionally and operationally, because discarding an agent’s failed output feels less costly than discarding a teammate’s effort.
  • A watch item was raised that AI coding may undermine junior developer skill formation because tools reduce the incentive to learn fundamentals that remain necessary to guide and debug agents.

Sections

AI-Authored Code Share and Adoption Rate

  • Dario Amodei was reported as saying that roughly 70–90% of the code written at Anthropic is now written by Claude, and that the effect has been to shift remaining human work toward managing AI systems rather than to reduce headcount.
  • An Anthropic developer (Boris) reported that 100% of his recent work across 259 pull requests was produced using Claude Code and Opus, and that he now rarely opens an editor.
  • A forecast was made that AI will write about 90% of code within 3–6 months and essentially all code within 12 months.
  • The episode cited figures claiming that roughly 30% of code at Microsoft and over 25% of code at Google was AI-written as of late 2024, along with surveys in which many senior developers report getting at least half their code from AI.
  • A viewer poll was reported to find that 61% of respondents believe Opus 4.5 is a better developer than they are, and the host reported that developer workflows are now turning over roughly every three months, an accelerating pace.
  • An expectation was stated that total code output increased materially year-over-year due to AI code generation tools, potentially doubling from 2024 to 2025 even if much of the generated code is discarded.

Workflow Shift from IDE Coding to Agent Delegation and PR Review

  • Ramp was reported to have used an internal agent system to identify the 20 most common Sentry issues, spawn 20 agents to fix them in parallel, and open 20 pull requests, all of which reportedly worked (a minimal sketch of this fan-out pattern appears after this list).
  • A mechanism was proposed that AI coding use is shifting from novices filling skill gaps to experienced developers filling time gaps by delegating backlog work to models and reviewing the result.
  • A mechanism was proposed that, as engineers become more senior or move toward management, their work shifts from writing code to orchestrating and reviewing, which can feel less productive despite shipping more.
  • A condition was stated that even if AI writes most code, humans must still specify goals, system-design constraints, and integration requirements, and make security judgments.
  • The host reported filing many pull requests by generating code with AI and reviewing it on GitHub rather than spending time in an editor.
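
The Ramp anecdote describes a simple fan-out: query the tracker for the top N issues, hand each one to an agent, and collect one pull request per agent. The sketch below is a minimal illustration of that shape, assuming hypothetical fetch_top_issues and run_fix_agent helpers; neither name, nor any API used here, comes from the episode.

```python
# Minimal sketch of the fan-out pattern attributed to Ramp. All helper names
# are hypothetical stand-ins, not Ramp's internal system or Sentry's API.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass


@dataclass
class Issue:
    id: str
    title: str
    event_count: int


def fetch_top_issues(n: int) -> list[Issue]:
    """Hypothetical stand-in for querying an issue tracker (e.g. Sentry)
    for the n most frequent open issues."""
    return [Issue(id=f"ISSUE-{i}", title=f"placeholder issue {i}", event_count=100 - i)
            for i in range(n)]


def run_fix_agent(issue: Issue) -> str:
    """Hypothetical stand-in for a coding agent that reproduces the issue,
    commits a fix on a branch, and opens a pull request; returns the PR URL."""
    return f"https://example.com/pulls/{issue.id}"


def fan_out_fixes(n: int = 20) -> list[str]:
    issues = fetch_top_issues(n)
    # One agent per issue, run in parallel; each agent opens its own PR,
    # so humans enter the loop only at review time.
    with ThreadPoolExecutor(max_workers=n) as pool:
        return list(pool.map(run_fix_agent, issues))


if __name__ == "__main__":
    for url in fan_out_fixes():
        print(url)
```

The design choice worth noting is that the human appears only at pull-request review, which is exactly the bottleneck shift the next section describes.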

Bottleneck Shift to Measurement, Review, and Verification

  • The statement that “AI writes 90% of the code” was described as difficult to evaluate because the term “code” and the measurement scope are ambiguous.
  • A CodeRabbit analysis was reported as finding that, across 470 pull requests (320 AI-coauthored), AI pull requests had about 1.7× more issues on average and more extreme high-issue outliers, with measurement done per pull request rather than per line (a caveat illustrated in the sketch after this list).
  • A mechanism was proposed that AI coding can enable more extensive testing and verification because agents readily generate large test suites and benefit from tight feedback loops.
  • The host claimed to have produced a roughly 12,000-line code project in a day with Opus.
  • A viewer poll was reported to indicate that many developers dislike code review.
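
The per-PR caveat in the CodeRabbit item is easy to see with arithmetic. In the sketch below, only the PR counts (320 AI-coauthored and 150 human, per the reported 470 total) and the 1.7× per-PR ratio are taken from the episode; the issue and line counts are invented to match that ratio, and they show how per-line normalization can narrow, or even flip, the comparison when AI pull requests are larger.

```python
# Illustrative arithmetic only: issue totals and line counts are hypothetical,
# chosen to reproduce the reported ~1.7x per-PR gap. This is not CodeRabbit's data.
def issue_rates(total_issues: int, prs: int, lines_changed: int) -> tuple[float, float]:
    """Return (issues per pull request, issues per changed line)."""
    return total_issues / prs, total_issues / lines_changed


ai_per_pr, ai_per_line = issue_rates(total_issues=544, prs=320, lines_changed=96_000)
human_per_pr, human_per_line = issue_rates(total_issues=150, prs=150, lines_changed=18_000)

print(f"per PR:   AI {ai_per_pr:.2f} vs human {human_per_pr:.2f}")      # 1.70 vs 1.00 (~1.7x)
print(f"per line: AI {ai_per_line:.4f} vs human {human_per_line:.4f}")  # 0.0057 vs 0.0083 (flips)
```

Nothing here says which normalization is right; it only shows that the headline comparison depends on the one chosen.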

Organizational Levers for Adoption and Iteration

  • Ramp was reported to have used an internal agent system to identify the 20 most common Sentry issues, spawn 20 agents to fix them in parallel, and open 20 pull requests, all of which reportedly worked.
  • A mechanism was proposed that AI agents make experimentation cheaper, both emotionally and operationally, because discarding an agent’s failed output feels less costly than discarding a teammate’s effort.
  • The host reported considering setting a minimum monthly inference spend per team member (for example, $200) to force experimentation with AI tooling.

Labor and Talent Pipeline Implications as Watch Items

  • A watch item was raised that AI coding may undermine junior developer skill formation because tools reduce the incentive to learn fundamentals that remain necessary to guide and debug agents.
  • Dario Amodei was reported as saying that roughly 70–90% of the code written at Anthropic is now written by Claude, and that the effect has been to shift remaining human work toward managing AI systems rather than to reduce headcount.
  • A mechanism was proposed that if software creation becomes much cheaper, demand for custom software may rise enough to increase total engineering work rather than decrease it (Jevons paradox).

Watchlist

  • A watch item was raised that AI coding may undermine junior developer skill formation because tools reduce the incentive to learn fundamentals that remain necessary to guide and debug agents.

Unknowns

  • What operational definition is being used for “AI-written code” (token-level suggestion, accepted completion, AI-authored diff, AI-initiated pull request, or post-review merged code)? The sketch after this list shows how the choice moves the headline share.
  • How generalizable are the Anthropic and Ramp examples to typical enterprise teams, different programming languages, and higher-stakes systems?
  • What happens to post-merge outcomes (production incidents, security vulnerabilities, rollback rates, and maintenance burden) under high AI-assisted throughput?
  • Does increased AI-assisted code generation actually shift time toward verification, or do teams also increase automation enough that human review time does not rise?
  • What is the net effect on headcount and hiring over 12–24 months in organizations reporting high AI code shares?
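
The first unknown is concrete enough to work through. The sketch below applies three of the listed definitions to a single set of invented repository statistics; every number is hypothetical, and the point is only that the same repository can honestly report very different “AI-written” shares.

```python
# Hypothetical repo statistics showing how the "AI writes X% of code" headline
# depends on the definition chosen. Every number below is invented.
stats = {
    "suggested_tokens": 1_000_000,   # tokens the model proposed in-editor
    "accepted_tokens": 550_000,      # suggestions the developer kept
    "ai_diff_lines": 120_000,        # lines appearing in AI-authored diffs
    "total_diff_lines": 160_000,
    "ai_merged_lines": 45_000,       # AI-authored lines surviving review and merge
    "total_merged_lines": 90_000,
}

definitions = {
    "accepted completions / suggestions":   stats["accepted_tokens"] / stats["suggested_tokens"],
    "AI diff lines / all diff lines":       stats["ai_diff_lines"] / stats["total_diff_lines"],
    "AI merged lines / all merged lines":   stats["ai_merged_lines"] / stats["total_merged_lines"],
}

for name, share in definitions.items():
    print(f"{name}: {share:.0%}")  # 55%, 75%, 50% -- same repo, three headlines
```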

Investor overlay

Read-throughs

  • AI-assisted coding shifts engineering bottlenecks from implementation to review, verification, and measurement, implying a rising relative importance of tooling and workflows that manage pull request volume and quality risk.
  • Parallel agent workflows may increase experimentation and throughput by lowering the perceived cost of discarding work, potentially increasing inference spend per developer as a managed productivity input.
  • High AI code-share claims may not translate across enterprises without clear definitions, and could create talent-pipeline risk if junior skill formation degrades, affecting longer-term engineering effectiveness.

What would confirm

  • Organizations report that AI-generated pull requests and diffs rise while human time moves toward code review, acceptance, testing, and incident prevention, with explicit internal metrics defining what counts as AI-authored code.
  • Teams institutionalize agent-delegation patterns, such as spawning many agents to address common issues and opening multiple pull requests in parallel, alongside policies that budget a minimum inference spend per person.
  • Post-merge outcomes remain stable or improve under higher AI-assisted throughput, evidenced by unchanged or lower rates of production incidents, security issues, rollbacks, and maintenance burden.

What would kill

  • AI-written code-share claims remain inconsistent across teams due to incompatible definitions and measurement scopes, preventing operational adoption beyond anecdotes.
  • AI-assisted throughput increases but review and verification do not scale, leading to worse post-merge outcomes such as more incidents, vulnerabilities, rollbacks, or a higher maintenance burden.
  • Adoption stalls because junior capability erosion creates debugging and constraint-setting gaps, causing managers to revert to more manual coding and reduce reliance on agent-based workflows.

Sources

  1. youtube.com