Race Dynamics And Safety Coordination Failure
Sources: 1 • Confidence: Medium • Updated: 2026-03-31 04:42
Key takeaways
- Mallaby says ChatGPT's release made the AI competition feel like a war to Hassabis, who described rivals as having 'parked the tanks on our front lawn.'
- DeepMind was acquired by Google in 2014.
- Mallaby says Hassabis initially believed language models were insufficient for general intelligence due to lack of grounding, but GPT-3's 2020 performance forced a reassessment.
- AI progress is framed as a shift from symbolic rule-based methods to inductive, data-driven deep learning.
- Mallaby says he had extensive biography access to Hassabis (over 30 hours of one-on-one interviews) on the condition that Hassabis could veto direct quotes while Mallaby could still use the underlying information.
Sections
Race Dynamics And Safety Coordination Failure
- Mallaby says ChatGPT's release made the AI competition feel like a war to Hassabis, who described rivals as having 'parked the tanks on our front lawn.'
- Mallaby reports that DeepMind and Anthropic were reluctant to release chatbot-style systems due to toxicity and hallucination concerns, while OpenAI released ChatGPT with fewer inhibitions, which rivals viewed as opportunistic and unfair.
- The competitive dynamics of the AI industry are described as making it difficult for any single actor to avoid rapid capability releases and acceleration, even if that actor fears catastrophic risks.
- Mallaby says Hassabis pursued safety theories including a 'singleton' scenario in which one lab develops AGI on behalf of humanity and ensures safety before release, but that ChatGPT undermined this premise by intensifying multi-actor competition.
- Mallaby says the singleton safety idea became implausible once OpenAI was founded in 2015.
- Mallaby says Microsoft publicly celebrated ChatGPT's lead over Google/DeepMind, including Nadella's 'can they dance' remark, which intensified hostility and made the competition a 'bare-knuckle fight.'
Compute Capital Constraints Shape Industry Structure
- DeepMind was acquired by Google in 2014.
- Mallaby says DeepMind's 2014 sale to Google was driven by rising capital needs for compute and talent and by Hassabis wanting to spend less time fundraising and more time on science.
- Mallaby says Hassabis rejected acquisition interest from Facebook and Elon Musk and concluded Google would be the best parent due to resources and fit.
Execution And Prioritization Over Raw Research Origin
- Mallaby says Hassabis initially believed language models were insufficient for general intelligence due to lack of grounding, but GPT-3's 2020 performance forced a reassessment.
- Mallaby says that after the 2017 Transformer breakthrough from Google Research, OpenAI pivoted quickly to apply Transformers to language while DeepMind deprioritized language and lacked a team positioned to exploit the approach.
- Mallaby describes a narrative arc in which OpenAI became the leading lab after ChatGPT's release, but says that by late 2025 Google DeepMind's Gemini 3 was judged better than OpenAI's model.
Technical Paradigm Scale And Opacity
- AI progress is framed as a shift from symbolic rule-based methods to inductive, data-driven deep learning.
- Neural networks are described as learning categories through trial-and-error parameter updates over labeled examples rather than explicit human-written rules, often yielding internal solutions that are hard for programmers to interpret.
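The trial-and-error learning described above can be sketched with a minimal toy example (an illustration of the paradigm, not DeepMind's actual methods): a perceptron-style classifier starts with zeroed parameters and nudges them whenever a prediction disagrees with a labeled example, so the learned weights end up encoding the concept without any human-written rule.

```python
def train(examples, lr=0.5, epochs=50):
    """Learn linear-classifier parameters from labeled examples by
    repeated trial-and-error updates (perceptron rule)."""
    n = len(examples[0][0])
    w = [0.0] * n   # weights start with no built-in knowledge
    b = 0.0
    for _ in range(epochs):
        for x, label in examples:
            # Trial: predict with the current parameters.
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            # Error: nudge parameters toward the labeled answer.
            err = label - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Labeled examples for logical AND; no explicit AND rule appears anywhere.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(data)

def predict(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```

After training, `predict` classifies all four examples correctly, yet the learned numbers in `w` and `b` are just a separating hyperplane: the "solution" is a set of tuned parameters rather than an inspectable rule, which is the interpretability point in miniature.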
Information Quality And Workflow Changes Due To LLMs
- Mallaby says he had extensive biography access to Hassabis (over 30 hours of one-on-one interviews) on the condition that Hassabis could veto direct quotes while Mallaby could still use the underlying information.
- Mallaby says he used large language models such as Gemini to preprocess scientists' papers and generate comparative briefings before interviews.
Unknowns
- What concrete, independently checkable evaluations support the claim that Gemini 3 was judged better than OpenAI's model in late 2025 (benchmarks, red-teaming results, enterprise adoption, or third-party rankings)?
- What were the actual internal resource allocations and organizational decisions at DeepMind regarding language models after the 2017 Transformer breakthrough (team sizes, mandates, and timelines)?
- How did the safety review and release gating processes differ across DeepMind, Anthropic, and OpenAI at the time of ChatGPT's release, and what incident rates followed?
- What specific compute and talent cost trajectories or bottlenecks drove DeepMind toward acquisition in 2014 (budgets, projected training costs, hiring constraints)?
- What measurable indicators show that AI safety concerns have receded from public discourse, and how do those indicators correlate with geopolitical/commercial pressures versus other factors?