Abstraction-Layer Pressure From Vendor API Evolution
Sources: 1 • Confidence: High • Updated: 2026-04-13 03:35
Key takeaways
- The author is working on a major change to the LLM Python library and CLI tool.
- To design a new abstraction layer, the author used Claude Code to review the Python client libraries for Anthropic, OpenAI, Gemini, and Mistral and to craft curl commands that capture the raw JSON each API returns, in both streaming and non-streaming modes, across a range of scenarios.
- The scripts and captured outputs from the author’s LLM API research have been published in a new repository.
- Some vendors introduced features over the past year, including server-side tool execution, that the current LLM abstraction layer cannot handle.
- LLM uses a plugin system that provides an abstraction layer over hundreds of different LLMs from dozens of vendors.
Sections
Abstraction-Layer Pressure From Vendor API Evolution
- The author is working on a major change to the LLM Python library and CLI tool.
- Some vendors introduced features over the past year, including server-side tool execution, that the current LLM abstraction layer cannot handle.
- LLM uses a plugin system that provides an abstraction layer over hundreds of different LLMs from dozens of vendors.
Cross-Vendor API Trace Research To Ground Interface Design
- To design a new abstraction layer, the author used Claude Code to review the Python client libraries for Anthropic, OpenAI, Gemini, and Mistral and to craft curl commands that capture the raw JSON each API returns, in both streaming and non-streaming modes, across a range of scenarios.
- The scripts and captured outputs from the author’s LLM API research have been published in a new repository.
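Comparing streaming and non-streaming traces matters because an abstraction layer must reduce both delivery modes to the same result. The sketch below shows that normalization under assumed payload shapes: the field names (`output.text`, `delta`) are invented for illustration, though the `data:`-prefixed chunk framing loosely follows the server-sent-events style several vendor streaming APIs use.

```python
import json

# Illustrative traces: the same completion delivered as one JSON body
# versus as SSE-style chunks. Payload shapes are hypothetical, not any
# specific vendor's wire format.
NON_STREAMING = '{"output": {"text": "Hello, world"}}'

STREAMING = """\
data: {"delta": "Hello"}
data: {"delta": ", world"}
data: [DONE]
"""

def text_from_response(body: str) -> str:
    # Non-streaming: the full text sits in a single JSON document.
    return json.loads(body)["output"]["text"]

def text_from_stream(stream: str) -> str:
    # Streaming: accumulate the text deltas from each data: chunk.
    parts = []
    for line in stream.splitlines():
        if not line.startswith("data: "):
            continue
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break
        parts.append(json.loads(payload)["delta"])
    return "".join(parts)

# Both paths should yield identical text for the abstraction to be sound.
assert text_from_response(NON_STREAMING) == text_from_stream(STREAMING)
```

Captured raw outputs like those in the author's repository are what let this kind of normalization be checked against real wire formats rather than guessed.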
Unknowns
- What specific breaking changes (API/CLI surface) are planned in the major LLM library/CLI change, and what migration path will be provided?
- Which vendors/features are driving the abstraction gap (beyond the example of server-side tool execution), and what minimal common model is being targeted in the redesign?
- How will the redesigned abstraction represent and execute tool calls when tools run server-side, and how will it expose resulting side effects, logs, and error semantics?
- What is the repository location, how complete is provider coverage, and is there automation/CI to keep the captured outputs current as APIs drift?
- Is there any direct decision-readthrough (operator/product/investor) described for adopting LLM vs. vendor SDKs, and under what conditions the abstraction layer should be bypassed?