Rosa Del Mar

Daily Brief

Issue 56 2026-02-25

Tests as Spec and AI-Accelerated Reimplementation Risk

General
Sources: 1 • Confidence: High • Updated: 2026-03-02 19:33

Key takeaways

  • A comprehensive test suite can enable a fresh reimplementation of an open-source library from scratch, potentially in a different language.
  • tldraw is described as not technically open source because its custom license requires a commercial license for production use.
  • An issue that seemed to indicate tldraw would move its test suite to a private repository was later revealed to have been intended as a joke.
  • tldraw filed a joke issue proposing translating its source code to Traditional Chinese as a defense against external AI coding agents replicating the project.
  • A tldraw maintainer argued that moving tests to another repository would complicate and slow development, and that development speed is a higher priority.

Sections

Tests as Spec and AI-Accelerated Reimplementation Risk

  • A comprehensive test suite can enable a fresh reimplementation of an open-source library from scratch, potentially in a different language.
  • The risk of test-driven reimplementation is presented as especially concerning for projects that pair open distribution with a commercial business model.
  • Cloudflare reportedly ported Next.js to use Vite in about a week using AI.
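The "tests as spec" idea above can be sketched minimally (the function and test names here are hypothetical, not from any project discussed): a public test file pins down behavior precisely enough that a clean-room reimplementation, written without ever reading the original source, only needs to make the assertions pass.

```python
import re

# Hypothetical public test suite: it specifies slugify()'s observable
# behavior, so it can serve as the contract for a reimplementation.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaces  ") == "spaces"
    assert slugify("a--b") == "a-b"

# A fresh implementation written against the tests alone -- the tests,
# not the original code, define what "correct" means.
def slugify(text: str) -> str:
    text = text.lower()
    text = re.sub(r"[^a-z0-9]+", "-", text)  # collapse non-alphanumeric runs
    return text.strip("-")                   # trim leading/trailing dashes

test_slugify()  # passes: the spec is satisfied
```

Scaled up to thousands of such assertions, the same contract could in principle be targeted in a different language entirely, which is the replication risk the section describes.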

Operational Tradeoffs and Non-Code Moats Under Replication Pressure

  • tldraw is described as not technically open source because its custom license requires a commercial license for production use.
  • A tldraw maintainer argued that moving tests to another repository would complicate and slow development, and that development speed is a higher priority.
  • A tldraw maintainer suggested the project's defensible value is in continually making strong product decisions for users rather than preventing others from recreating the code.

Closed-Tests Narrative Corrected as a Joke

  • An issue that seemed to indicate tldraw would move its test suite to a private repository was later revealed to have been intended as a joke.
  • tldraw filed a joke issue proposing translating its source code to Traditional Chinese as a defense against external AI coding agents replicating the project.

Unknowns

  • Are there documented examples where an independent team produced a close behavioral reimplementation primarily from a public test suite (without source), and how long did it take versus typical development?
  • What were the precise scope, quality bar, and methodology of the reported Next.js-to-Vite port (what was ported, what worked, what was omitted, and what 'using AI' entailed)?
  • Are other commercially oriented projects actually moving tests (or other high-signal artifacts) out of public repositories, and under what conditions?
  • What are the exact terms of the tldraw license described (definitions of 'production use', enforcement posture, and any exceptions)?
  • Does keeping tests public measurably increase AI-enabled cloning risk relative to other publicly available artifacts (docs, examples, type definitions), and what mitigations preserve development velocity?

Investor overlay

Read-throughs

  • Public test suites may be treated as high-signal specifications, increasing the risk that AI tools enable faster behavioral reimplementations of developer libraries and pressuring monetization models that rely on code uniqueness.
  • Commercial, source-available licensing may become more common or more actively enforced in response to perceived AI cloning risk, shifting competition from code access toward legal terms and product execution.
  • Maintainers may prioritize development velocity over secrecy, implying limited near-term adoption of private tests and a continued norm of open development artifacts despite replication concerns.

What would confirm

  • Multiple commercially oriented projects publicly announce moving tests or other high-signal artifacts out of public repositories, explicitly citing AI-enabled reimplementation risk.
  • Documented cases emerge of close behavioral reimplementations built mainly from public tests with materially shorter timelines than typical development, especially when AI is used.
  • Clarified tldraw license terms and enforcement posture show tighter restrictions or more active enforcement around production use, indicating a legal rather than secrecy based defense.

What would kill

  • Evidence shows AI-assisted reimplementations from tests are not reliably faster or more accurate than traditional development, reducing the practical impact of public tests as a spec.
  • A broad review finds few projects actually privatize tests because velocity costs dominate, and the idea remains mostly discourse rather than behavior change.
  • License clarification indicates permissive production use or weak enforcement, implying cloning risk is not meaningfully mitigated by legal terms and thus less investable as a theme.

Sources