Rosa Del Mar

Daily Brief

Issue 86 2026-03-27

AI-Assisted Porting Outcomes And Cost/Time Metrics

General
Sources: 1 • Confidence: Medium • Updated: 2026-04-13 03:54

Key takeaways

  • The write-up claims a first working Go version was built in 7 hours with approximately $400 in token spend.
  • The write-up claims the key enabling factor for the vibe-porting effort was JSONata's existing test suite.
  • The team used a one-week shadow deployment running old and new versions in parallel to confirm the new implementation matched the old behavior.
  • The author states that the “saved $500K/year” framing is somewhat hyperbolic.
  • The write-up frames the project as rewriting JSONata with AI in a day and saving $500K per year.

Sections

AI-Assisted Porting Outcomes And Cost/Time Metrics

  • The write-up claims a first working Go version was built in 7 hours with approximately $400 in token spend.
  • The write-up frames the project as rewriting JSONata with AI in a day and saving $500K per year.
  • The project is presented as a vibe-porting case study that produced a custom Go implementation of the JSONata JSON expression language.

Constraints And Prerequisites: Tests As An Enabler

  • The write-up claims the key enabling factor for the vibe-porting effort was JSONata's existing test suite.

Risk Management Pattern: Shadow Deployment For Equivalence

  • The team used a one-week shadow deployment running old and new versions in parallel to confirm the new implementation matched the old behavior.

Dispute/Qualification: ROI Framing Credibility

  • The author states that the “saved $500K/year” framing is somewhat hyperbolic.

Unknowns

  • What is the underlying cost model behind the $500K/year savings framing (what costs existed before, what changed after, and over what time horizon)?
  • How much additional engineering time and token spend was required after the initial 7-hour working version to reach production readiness and maintainability?
  • What is the scope and quality of the referenced JSONata test suite (coverage, types of tests, oracle quality), and how directly did it translate to the Go implementation’s confidence?
  • What criteria and metrics were used during the one-week shadow deployment to determine behavior matching (exact output equivalence, tolerances, performance, error handling)?
  • What compatibility surface was required for this JSONata reimplementation (Node-RED integration expectations, jq-like semantics, edge-case compatibility) and where did it diverge, if anywhere?

Investor overlay

Read-throughs

  • AI-assisted porting plus a strong test suite may compress time to a first working rewrite, lowering rewrite barriers for mature libraries.
  • Shadow deployment as a validation pattern may be becoming a standard de-risking step for swapping behavior-sensitive components.
  • ROI narratives around AI rewrites may be overstated without cost breakdowns, implying investor attention should shift from headline savings to verifiable engineering metrics.

What would confirm

  • Disclosed breakdown of the claimed annual savings, including baseline costs, post-change costs, and time horizon, aligning with the author’s caveat about hyperbole.
  • Reported total effort from first working version to production readiness, including additional engineering time, token spend, and ongoing maintenance burden.
  • Quantified verification quality, such as test suite coverage and shadow deployment equivalence metrics, demonstrating reliable behavior matching beyond anecdote.

What would kill

  • Evidence that production readiness required substantial additional time or rework, making the 7-hour result unrepresentative of total cost.
  • Shadow-deployment or post-deployment issues that show meaningful behavior divergence or edge-case incompatibilities, undermining the validation approach.
  • Savings claims cannot be substantiated with a transparent cost model, reinforcing that ROI framing is not decision grade.

Sources