Agent-Assisted Porting/Rewriting Into Rust And Claims Of Performance Wins
Sources: 1 • Confidence: Medium • Updated: 2026-04-12 10:09
Key takeaways
- Simon Willison reports he asked Claude Code to build a Rust word-cloud CLI tool and that Claude Code successfully produced it.
- Max Woolf states he believes Opus 4.5 and later models are an order of magnitude better at coding than models from just months earlier.
- Max Woolf acknowledges that publicly claiming Opus 4.5 and later models are an order of magnitude better at coding than models from just months earlier can sound like hype.
- The post is presented as part of a broader narrative claiming coding agents became notably effective around November.
- Max Woolf describes a sequence of coding-agent projects that increase in ambition from simple scripts (e.g., YouTube metadata scraping) to substantially larger builds.
Sections
Agent-Assisted Porting/Rewriting Into Rust And Claims Of Performance Wins
- Simon Willison reports he asked Claude Code to build a Rust word-cloud CLI tool and that Claude Code successfully produced it.
- Max Woolf reports he is using agents to develop a Rust crate named "rustlearn" that implements fast versions of standard machine-learning algorithms such as logistic regression and k-means.
- The post characterizes porting Python's scikit-learn to Rust with comparable features as an extremely ambitious undertaking.
- Max Woolf asserts that a three-step pipeline described in the post can outperform scikit-learn's implementations even for simpler algorithms.
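For context on the kind of algorithm attributed to the crate, the sketch below shows a minimal k-means (Lloyd's algorithm) in plain Rust. This is a hypothetical illustration only; it is not taken from "rustlearn", and the function name and 2-D point representation are assumptions made for the example.

```rust
// Hypothetical minimal k-means sketch; NOT actual "rustlearn" code.
// Points and centroids are fixed-size 2-D arrays for simplicity.

fn dist2(a: &[f64; 2], b: &[f64; 2]) -> f64 {
    // Squared Euclidean distance (no sqrt needed for nearest-centroid tests).
    (a[0] - b[0]).powi(2) + (a[1] - b[1]).powi(2)
}

fn kmeans(
    points: &[[f64; 2]],
    mut centroids: Vec<[f64; 2]>,
    iters: usize,
) -> (Vec<[f64; 2]>, Vec<usize>) {
    let k = centroids.len();
    let mut labels = vec![0usize; points.len()];
    for _ in 0..iters {
        // Assignment step: label each point with its nearest centroid.
        for (i, p) in points.iter().enumerate() {
            labels[i] = (0..k)
                .min_by(|&a, &b| {
                    dist2(p, &centroids[a])
                        .partial_cmp(&dist2(p, &centroids[b]))
                        .unwrap()
                })
                .unwrap();
        }
        // Update step: move each centroid to the mean of its assigned points.
        for c in 0..k {
            let members: Vec<&[f64; 2]> = points
                .iter()
                .zip(&labels)
                .filter(|(_, &l)| l == c)
                .map(|(p, _)| p)
                .collect();
            if !members.is_empty() {
                let n = members.len() as f64;
                centroids[c] = [
                    members.iter().map(|p| p[0]).sum::<f64>() / n,
                    members.iter().map(|p| p[1]).sum::<f64>() / n,
                ];
            }
        }
    }
    (centroids, labels)
}

fn main() {
    // Two well-separated blobs, with one seed placed near each blob.
    let points = [
        [0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
        [5.0, 5.0], [5.1, 5.2], [4.9, 5.1],
    ];
    let (centroids, labels) = kmeans(&points, vec![[0.0, 0.0], [5.0, 5.0]], 10);
    assert_eq!(labels, vec![0, 0, 0, 1, 1, 1]);
    println!("centroids: {:?}", centroids);
}
```

Whether an agent-written version of such routines actually beats scikit-learn is exactly what the Unknowns below flag: the post's three-step pipeline and its benchmark conditions are not described here.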
Anecdotal Step-Change In Coding-Agent Capability And Long-Horizon Robustness
- Max Woolf states he believes Opus 4.5 and later models are an order of magnitude better at coding than models from just months earlier.
- The post is presented as part of a broader narrative claiming coding agents became notably effective around November.
- Max Woolf reports he tried to break Opus and Codex with complex tasks that would take him months alone, but they kept completing them correctly.
Credibility And Adoption Friction From Rapid-Claims Vs. Verification
- Max Woolf acknowledges that publicly claiming Opus 4.5 and later models are an order of magnitude better at coding than models from just months earlier can sound like hype.
- Max Woolf states he believes Opus 4.5 and later models are an order of magnitude better at coding than models from just months earlier.
Watchlist
- The post is presented as part of a broader narrative claiming coding agents became notably effective around November.
Unknowns
- Are the referenced artifacts (e.g., the Rust crate and the Rust CLI tool) publicly available with reproducible build steps and licenses?
- What acceptance tests or correctness criteria were used to judge that complex tasks were completed correctly, and what was the observed failure rate?
- Which exact models/versions and tool configurations correspond to the reported improvement and robustness claims, and what changed across time?
- What is the described three-step pipeline, and under what conditions does it allegedly outperform scikit-learn?
- How representative are the reported tasks of typical production engineering work (integration, legacy constraints, security, deployment, observability)?