Value Capture, Pricing Power, And Monetization Constraints
Sources: 1 • Confidence: Medium • Updated: 2026-03-02 13:17
Key takeaways
- Van Geelen argues AI monetization is uncertain due to price compression while providers still need enough paying customers to achieve ROI on heavy compute spend.
- Van Geelen interprets Anthropic’s release of prepackaged AI tool suites as a way to close the user capability gap by providing simple, ready-made workflows that prompt new use cases.
- The “Citrini scenario” Substack post spread widely enough that sell-side research and economists reported clients asking about it, and it became a major market talking point.
- Van Geelen argues the AI capability curve has continued accelerating rather than leveling off, despite repeated attempts to model it as flattening progress.
- The hosts argue policy response is a major wild card and claim there is virtually no substantive discussion in Washington, D.C. about AI’s real economic impacts despite widespread private-sector concern.
Sections
Value Capture, Pricing Power, And Monetization Constraints
- Van Geelen argues AI monetization is uncertain due to price compression while providers still need enough paying customers to achieve ROI on heavy compute spend.
- Van Geelen argues AI capability improvements are economically constrained by vendors needing paying customers and demonstrable ROI to justify massive upfront compute and training spend.
- Van Geelen argues that even without replacing systems of record, the credible threat of AI-based substitutes can weaken incumbents’ pricing power during renewals.
- Van Geelen claims Minimax is relatively comparable to top models while being about 90% cheaper.
- Van Geelen believes markets are pricing AI-related companies on the assumption that compute capacity will keep expanding to meet demand, though the pace of that expansion is uncertain.
- Van Geelen expects systems-of-record enterprise software may see near-term margin upside because AI lowers coding and maintenance costs, but terms could shift at later contract renegotiations as agentic AI capabilities become demonstrable.
Enterprise Adoption Dynamics: Intensity, Packaging, And Implementation
- Van Geelen interprets Anthropic’s release of prepackaged AI tool suites as a way to close the user capability gap by providing simple, ready-made workflows that prompt new use cases.
- Van Geelen notes enterprises may react more slowly to agentic AI than some portray because large organizations do not change vendors or systems quickly.
- Van Geelen argues that even if adoption breadth follows an S-curve, embedded AI features can drive rapidly rising intensity of use, so consumer-style adoption S-curves can understate enterprise displacement risk (see the toy sketch after this list).
- Van Geelen says agentic AI was mostly a buzzword during early budget resets and then saw a major perceived capability jump by late November.
- Van Geelen argues OpenAI is pursuing a "forward-deployed engineers" enterprise strategy, embedding teams onsite to implement solutions.
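As a rough illustration of the breadth-versus-intensity point above, the toy model below contrasts a logistic adoption-breadth curve with total usage when per-seat intensity also compounds. All parameters (midpoint, steepness, intensity growth) are illustrative assumptions, not figures from the conversation; the sketch only shows that total usage can keep accelerating after breadth looks saturated.

```python
import math

def logistic_breadth(t, midpoint=8.0, steepness=0.6):
    """Share of enterprise seats with AI features enabled at quarter t (toy S-curve)."""
    return 1.0 / (1.0 + math.exp(-steepness * (t - midpoint)))

def usage_intensity(t, start=1.0, quarterly_growth=0.35):
    """Tasks per enabled seat per quarter, compounding as embedded features deepen (illustrative)."""
    return start * (1.0 + quarterly_growth) ** t

for quarter in range(0, 17, 4):
    breadth = logistic_breadth(quarter)          # flattens like a classic adoption S-curve
    total = breadth * usage_intensity(quarter)   # breadth x intensity keeps accelerating
    print(f"q={quarter:2d}  breadth={breadth:4.2f}  total_usage={total:8.1f}")
```

A flattening breadth curve by itself therefore says little about displacement pressure if intensity per adopted seat keeps compounding.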
Narrative Contagion And Reflexivity In Markets
- The “Citrini scenario” Substack post spread widely enough that sell-side research and economists reported clients asking about it, and it became a major market talking point.
- The scenario piece was framed as a narrative connecting year-to-date market moves, including bond rallies and selloffs in software, fintech, and private equity.
- The hosts suggest market reactions to viral AI scenarios and rebuttals indicate elevated uncertainty and stress, with investors appearing highly sensitive to AI impact narratives.
- Van Geelen cites Paul Krugman drawing an analogy between “War of the Worlds” panic during the Depression and viral AI fears resonating in a broader climate of anxiety.
Capability And Cost Trajectory As Discontinuity Triggers
- Van Geelen argues the AI capability curve has continued accelerating rather than leveling off, despite repeated attempts to model it as flattening progress.
- A common rebuttal to claims of rapid AI progress is that the world is short of GPU/wafer capacity, but van Geelen argues algorithmic and infrastructure improvements could expand effective compute and keep capability improving.
- Van Geelen claims AI agent autonomy on intellectually complex tasks rose from about two minutes to roughly 8–16 hours over about two years.
- Van Geelen argues inference cost per cognitive task has fallen roughly 10-30x over the past year, enough to flip tasks from uneconomic to economic within a few quarters (see the arithmetic sketch after this list).
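The two figures above lend themselves to simple arithmetic, sketched below. The autonomy numbers (about two minutes to roughly 8–16 hours over about two years) are the ones quoted in the discussion; the labor value per task and last year's AI cost per task are purely illustrative assumptions used only to show how a 10-30x cost decline can flip a task from uneconomic to economic.

```python
import math

# Implied doubling time for agent autonomy, using the figures quoted above:
# ~2 minutes -> ~8-16 hours over roughly two years.
start_minutes = 2.0
end_minutes_low, end_minutes_high = 8 * 60.0, 16 * 60.0
years = 2.0
doublings_low = math.log2(end_minutes_low / start_minutes)
doublings_high = math.log2(end_minutes_high / start_minutes)
print(f"implied doublings: {doublings_low:.1f}-{doublings_high:.1f} "
      f"(~{12 * years / doublings_high:.1f}-{12 * years / doublings_low:.1f} months per doubling)")

# How a 10-30x inference-cost decline flips a task's economics.
# Assumed, illustrative values: the task substitutes for $40 of human labor.
human_cost_per_task = 40.0
ai_cost_last_year = 90.0   # uneconomic: AI costs more than the labor it replaces
for decline in (10, 30):
    ai_cost_now = ai_cost_last_year / decline
    economic = ai_cost_now < human_cost_per_task
    print(f"{decline}x cheaper -> ${ai_cost_now:.2f} per task, economic={economic}")
```

Under these assumed values, the same task goes from clearly uneconomic to economic within the quoted one-year cost decline, which is the flip dynamic the bullet above describes.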
Macro/Credit Transmission Channels And Regulatory Triggers
- The hosts argue policy response is a major wild card and claim there is virtually no substantive discussion in Washington, D.C. about AI’s real economic impacts despite widespread private-sector concern.
- Van Geelen’s base case is that private credit is less susceptible to bank-run dynamics due to more permanent capital structures, but regulatory changes to private credit treatment on life insurer balance sheets are a key incremental risk.
- Van Geelen posits AI disruption could stress private credit via defaults in disrupted industries and among high-FICO white-collar borrowers, and he notes Apollo reduced software lending earlier (around early 2025) as software risk emerged.
- A potential counter to AI-driven disruption is that productivity-led disinflation and wealth creation could expand government fiscal capacity to stabilize the economy, but only if policymakers prepare a monitoring and response framework.
Watchlist
- Van Geelen’s base case is that private credit is less susceptible to bank-run dynamics due to more permanent capital structures, but regulatory changes to private credit treatment on life insurer balance sheets are a key incremental risk.
- The hosts argue policy response is a major wild card and claim there is virtually no substantive discussion in Washington, D.C. about AI’s real economic impacts despite widespread private-sector concern.
Unknowns
- What are the underlying measurement sources and definitions behind the claimed increase in agent autonomy (minutes to 8–16 hours), and do independent long-horizon evals show similar gains over the same period?
- Do inference costs for real enterprise tasks (not benchmark tokens) actually fall by the claimed 10–30x year-over-year when including orchestration, tooling, and human oversight costs?
- How binding are GPU/wafer constraints for the next 12–24 months, and to what extent can efficiency (distillation, systems, infra) offset physical shortages?
- What telemetry or empirical evidence supports the claim that enterprise displacement risk is better modeled by intensity-of-use increases inside incumbent suites than by adoption-breadth curves?
- Are enterprise renewals and procurement processes already incorporating credible AI-substitution threats in pricing (e.g., reduced uplift, higher discounting), and if so, in which software categories?