Availability And Rollout Status
Sources: 1 • Confidence: Medium • Updated: 2026-04-12 10:16
Key takeaways
- OpenAI Codex subagents are now generally available, after several weeks in preview behind a feature flag.
- Custom Codex agents can include custom instructions and can be pinned to specific models, including gpt-5.3-codex-spark for speed.
- Custom Codex agents can be referenced by name in prompts to orchestrate multi-step workflows where different agents reproduce bugs, trace code paths, and implement fixes.
- Available information does not clearly explain the distinction between the worker and default Codex subagents.
- The subagents pattern is supported across multiple coding-agent platforms, including Codex, Claude Code, Gemini CLI, Mistral Vibe, OpenCode, Visual Studio Code, and Cursor.
Sections
Availability And Rollout Status
- OpenAI Codex subagents are now generally available, after several weeks in preview behind a feature flag.
Per Agent Configuration And Model Pinning
- Custom Codex agents can include custom instructions and can be pinned to specific models, including gpt-5.3-codex-spark for speed.
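A per-agent definition along these lines could be sketched as a config fragment. This is an illustration only: the source does not document the actual schema, so the file format, the `[agents.*]` table names, and every field below are assumptions, not the real Codex configuration surface.

```toml
# Hypothetical agent definitions -- format and field names are illustrative,
# not a documented Codex schema.
[agents.reproducer]
model = "gpt-5.3-codex-spark"   # pinned to the faster model, per the takeaway above
instructions = """
Reproduce the reported bug with a minimal failing test before any fix is attempted.
"""

[agents.fixer]
model = "gpt-5.3-codex-spark"
instructions = """
Implement a fix only after a failing reproduction test exists, then rerun the tests.
"""
```

The design point the sketch illustrates is the pairing in the takeaway: each named agent carries its own instructions and its own pinned model, so a fast model can be reserved for high-volume steps like reproduction.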
Named Subagent Orchestration For Debugging Workflows
- Custom Codex agents can be referenced by name in prompts to orchestrate multi-step workflows where different agents reproduce bugs, trace code paths, and implement fixes.
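The named-agent debugging flow described above can be illustrated with a generic orchestration sketch. Nothing here is a real Codex API: the `Agent` class, the `orchestrate` function, and the agent names are hypothetical stand-ins showing the pattern of invoking named subagents in sequence and threading each step's output into the next prompt.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    """A named subagent with a pinned model and a handler (all hypothetical)."""
    name: str
    model: str
    handle: Callable[[str], str]  # stand-in for a real model invocation

def orchestrate(agents: dict[str, Agent], steps: list[tuple[str, str]]) -> list[str]:
    """Run named agents in order, feeding each agent's output to the next as context."""
    transcript: list[str] = []
    context = ""
    for agent_name, task in steps:
        agent = agents[agent_name]
        output = agent.handle(f"{task}\n\nPrior context:\n{context}")
        transcript.append(f"[{agent.name}] {output}")
        context = output
    return transcript

# Hypothetical agents mirroring the reproduce / trace / fix workflow in the text.
agents = {
    "reproducer": Agent("reproducer", "gpt-5.3-codex-spark",
                        lambda p: "reproduced: failing test written"),
    "tracer": Agent("tracer", "gpt-5.3-codex-spark",
                    lambda p: "traced: fault isolated to parser module"),
    "fixer": Agent("fixer", "gpt-5.3-codex-spark",
                   lambda p: "fixed: patch applied, tests pass"),
}

steps = [
    ("reproducer", "Reproduce the reported crash."),
    ("tracer", "Trace the code path that triggers it."),
    ("fixer", "Implement and verify a fix."),
]
transcript = orchestrate(agents, steps)
```

In a real session the "handler" would be a prompt that references the agent by name, as the takeaway describes; the sketch only shows why naming matters, since it is the key that selects which instructions and pinned model handle each step.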
Product Semantics Ambiguity
- Available information does not clearly explain the distinction between the worker and default Codex subagents.
Cross Vendor Pattern Adoption
- The subagents pattern is supported across multiple coding-agent platforms, including Codex, Claude Code, Gemini CLI, Mistral Vibe, OpenCode, Visual Studio Code, and Cursor.
Unknowns
- What are the exact behavioral differences and intended use cases for worker subagents versus default subagents in Codex?
- What eligibility, rollout conditions, or account limitations apply to the general availability of Codex subagents (e.g., is the feature flag fully removed for all users)?
- What are the pricing, quota, or rate-limit implications of running multiple subagents in parallel within Codex?
- What are the concrete performance characteristics (latency, reliability) and tradeoffs of gpt-5.3-codex-spark relative to other pinnable models for subagents?
- How much feature parity exists across the listed subagent-supporting platforms (capabilities, orchestration primitives, agent isolation, tool access)?