AI-Assisted Porting Outcomes And Cost/Time Metrics
Sources: 1 • Confidence: Medium • Updated: 2026-04-13 03:54
Key takeaways
- The write-up claims a first working Go version was built in 7 hours with approximately $400 in token spend.
- The write-up claims the key enabling factor for the vibe-porting effort was JSONata's existing test suite.
- The team used a one-week shadow deployment, running the old and new versions in parallel, to confirm that the new implementation's behavior matched the old one's.
- The author states that the “saved $500K/year” framing is somewhat hyperbolic.
- The write-up frames the project as rewriting JSONata with AI in a day and saving $500K per year.
Sections
AI-Assisted Porting Outcomes And Cost/Time Metrics
- The write-up claims a first working Go version was built in 7 hours with approximately $400 in token spend.
- The write-up frames the project as rewriting JSONata with AI in a day and saving $500K per year.
- The project is presented as a vibe-porting case study that produced a custom Go implementation of the JSONata JSON expression language.
Constraints And Prerequisites: Tests As An Enabler
- The write-up claims the key enabling factor for the vibe-porting effort was JSONata's existing test suite.
Risk Management Pattern: Shadow Deployment For Equivalence
- The team used a one-week shadow deployment, running the old and new versions in parallel, to confirm that the new implementation's behavior matched the old one's.
Dispute/Qualification: ROI Framing Credibility
- The author states that the “saved $500K/year” framing is somewhat hyperbolic.
Unknowns
- What is the underlying cost model behind the $500K/year savings framing (what costs existed before, what changed after, and over what time horizon)?
- How much additional engineering time and token spend was required after the initial 7-hour working version to reach production readiness and maintainability?
- What is the scope and quality of the referenced JSONata test suite (coverage, types of tests, oracle quality), and how directly did it translate to the Go implementation’s confidence?
- What criteria and metrics were used during the one-week shadow deployment to determine behavior matching (exact output equivalence, tolerances, performance, error handling)?
- What compatibility surface was required for this JSONata reimplementation (Node-RED integration expectations, jq-like semantics, edge-case compatibility) and where did it diverge, if anywhere?