Rosa Del Mar

Daily Brief

Issue 75 2026-03-16

Economics and Incentives of Openness

General
Sources: 1 • Confidence: Medium • Updated: 2026-03-17 15:17

Key takeaways

  • Competing to build leading open models can cost billions of dollars, so most businesses lack a direct monetary incentive to do so.
  • Open models are unlikely to win on absolute performance unless a breakthrough is kept from leading labs or frontier models hit a genuine performance wall.
  • There is large, underexplored enterprise demand for highly specific small open models, and the focus on open models catching the frontier distracts from it.
  • Modern AI capabilities increasingly come from systems that combine model weights, tool environments, and a product harness rather than from model weights alone.
  • The most successful open models may be small, highly specific models that offload repetitive subtasks from expensive frontier agentic models and can be 10x to 100x cheaper.

Sections

Economics and Incentives of Openness

  • Competing to build leading open models can cost billions of dollars, so most businesses lack a direct monetary incentive to do so.
  • Releasing a single strong open model can rapidly generate usage and mindshare without enterprise sales or major marketing.
  • Absent clearer economic reasons to build open frontier models, the open frontier will consolidate to a small number of top providers.
  • Capital allocation in open-model ecosystems is expected to shift toward specific, cheap, fast, ubiquitous open models rather than continued emphasis on open frontier model-building.
  • If open models broadly surpass closed labs, profits and revenues across the AI ecosystem will compress.
  • Over the coming years, companies are more likely to become less open than more open, with NVIDIA a notable exception because it benefits from selling more GPUs and from learning about market needs.

Open vs. Closed: Performance Gap and Trajectory

  • Open models are unlikely to win on absolute performance unless a breakthrough is kept from leading labs or frontier models hit a genuine performance wall.
  • Historically, open models have trailed the best closed models by roughly 6 to 18 months.
  • There have been no clear performance walls in frontier-model progress to date, and leading researchers report substantial remaining low-hanging fruit.
  • The model landscape will separate into three classes: true closed frontier models, open frontier models, and small open models used as distributed intelligence that complements closed agents.
  • The open-closed capability gap is more likely to widen than shrink as agentic coding increases the value of proprietary RL environments and prompts that are easy to withhold.
  • As AI shifts to longer-horizon and specialized tasks tied to expensive gatekeepers and non-public data, performance gaps between closed and open models will grow.

Small Open Models as Cost-Saving Delegates

  • There is large, underexplored enterprise demand for highly specific small open models, and the focus on open models catching the frontier distracts from it.
  • The most successful open models may be small, highly specific models that offload repetitive subtasks from expensive frontier agentic models and can be 10x to 100x cheaper.
  • A scalable enterprise pattern is to deploy one small base model with a set of LoRA adapters for internal skills so frontier closed agents can outsource repeated subtasks to it.
  • Intelligence compression and small-model optimization have been explored with less depth and fewer resources than frontier model tracking.
  • Small models can approach much larger models on scoped tasks.
  • If open-model builders primarily chase closed labs on general capabilities, they will largely lose and face earlier funding pain and consolidation; an ecosystem approach that solves different problems can instead be self-reinforcing.
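The delegation pattern above can be sketched as a minimal router (all names and cost figures here are hypothetical, chosen only to illustrate the brief's 10x-100x cheapness claim): a dispatcher sends a subtask to the small model when a matching skill (e.g., a LoRA adapter) is registered, and falls back to the frontier model otherwise, while tracking relative spend.

```python
from dataclasses import dataclass

# Illustrative per-call costs in relative units; the 50x gap is an
# assumption standing in for the brief's "10x to 100x cheaper" range.
FRONTIER_COST = 50.0
SMALL_COST = 1.0

@dataclass
class Router:
    """Routes subtasks to a small model when a matching skill exists."""
    skills: set          # names of adapter-backed skills the small model serves
    spend: float = 0.0   # accumulated relative cost across all calls

    def run(self, subtask: str) -> str:
        if subtask in self.skills:
            self.spend += SMALL_COST
            return f"small-model({subtask})"
        self.spend += FRONTIER_COST
        return f"frontier-model({subtask})"

router = Router(skills={"extract_fields", "classify_ticket"})
for task in ["plan_migration", "extract_fields", "classify_ticket", "extract_fields"]:
    router.run(task)

# One frontier call plus three small-model calls, versus four frontier calls:
baseline = 4 * FRONTIER_COST
print(f"routed spend: {router.spend}, frontier-only: {baseline}")
```

With these illustrative numbers the routed workload costs 53 units against a frontier-only baseline of 200, roughly a 3.8x saving; the realized ratio in practice would depend on what fraction of subtasks the small model can absorb without degrading end-to-end success rates.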

Systems, Not Weights, as the Competitive Unit

  • Modern AI capabilities increasingly come from systems that combine model weights, tool environments, and a product harness rather than from model weights alone.
  • Closed-model providers have a structural advantage because they can vertically integrate chips, inference software, weights, tools, and UI, while open models must work across heterogeneous inference stacks and use cases.
  • More labs may release model weights while keeping key system components closed, since less can be done with weights alone.

Unknowns

  • What objective evidence supports the claimed 6–18 month historical lag between open and closed models, and is that lag changing over time?
  • How much of frontier performance improvement is currently coming from non-releasable system components (private RL environments, proprietary prompts, tool harnesses) versus from model weights and broadly reproducible methods?
  • Is there measurable enterprise demand and realized ROI for highly specific small open models (including maintenance, governance, and routing complexity costs)?
  • In real deployments, how often do agent systems successfully route subtasks to cheaper small models without reducing end-to-end task success rates?
  • Are companies becoming less open in practice (fewer or lower-quality open releases, more partial openness), and is NVIDIA meaningfully an exception?

Investor overlay

Read-throughs

  • Enterprise spend may shift toward small, highly specific open models used as cheap delegates for repetitive subtasks, while frontier models remain the premium layer.
  • Competitive advantage may concentrate in integrated systems combining model weights, tool environments, and product harnesses, making partial openness a common strategy.
  • Economics may reduce the number of organizations attempting open frontier model leadership, with openness becoming more selective or lower quality over time.

What would confirm

  • Procurement or usage data showing measurable ROI from deploying small, specific open models after accounting for maintenance, governance, and routing complexity costs.
  • Deployment evidence that agent systems can route subtasks to cheaper small models without reducing end-to-end task success rates or increasing operational overhead.
  • Observable trend of fewer or less complete open releases, alongside clearer differentiation driven by proprietary tool environments, private RL setups, or product harness quality.

What would kill

  • Reliable measurements showing open models rapidly closing the performance gap with frontier closed models, or the lag shrinking meaningfully over time.
  • Field results showing routing to small models frequently degrades outcomes, adds unacceptable complexity, or negates cost savings through increased human oversight.
  • Evidence that most frontier gains are broadly reproducible from weights and public methods, limiting the advantage of keeping system components closed.

Sources

  1. 2026-03-16 interconnects.ai