CPython JIT Performance Deltas in Python 3.15 Alpha
Sources: 1 • Confidence: Medium • Updated: 2026-04-13 03:50
Key takeaways
- In Python 3.15 alpha on macOS AArch64, the JIT is about 11–12% faster than the tail-calling interpreter.
- In Python 3.15 alpha on x86_64 Linux, the JIT is about 5–6% faster than the standard interpreter.
- The CPython JIT has already met its stated (modest) performance goals over a year early on macOS AArch64 and a few months early on x86_64 Linux.
- Python 3.15’s JIT is described as being back on track.
Sections
CPython JIT Performance Deltas in Python 3.15 Alpha
- In Python 3.15 alpha on macOS AArch64, the JIT is about 11–12% faster than the tail-calling interpreter.
- In Python 3.15 alpha on x86_64 Linux, the JIT is about 5–6% faster than the standard interpreter.
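Note that neither configuration is the default today: the JIT and the tail-calling interpreter are opt-in build options (roughly, `--enable-experimental-jit` and `--with-tail-call-interp` at configure time, though flag names may shift between releases). As a quick, hedged sketch for readers who want to know which interpreter they are actually running, the private `sys._jit` probe (added in CPython 3.14; its presence and API are not guaranteed) can report JIT status:

```python
import sys

def jit_status() -> str:
    """Best-effort report of whether this CPython build has the experimental JIT."""
    # sys._jit is a private module added in CPython 3.14; on older
    # versions (or builds without it) this attribute is simply absent.
    jit = getattr(sys, "_jit", None)
    if jit is None:
        return "no JIT introspection (CPython < 3.14)"
    if not jit.is_available():
        return "JIT not compiled into this build"
    return "JIT enabled" if jit.is_enabled() else "JIT compiled in but disabled"

print(jit_status())
```

On a build with JIT support, the JIT can typically be toggled at startup via the `PYTHON_JIT=0`/`PYTHON_JIT=1` environment variable rather than rebuilding.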
CPython JIT Schedule/Status Signal
- The CPython JIT has already met its stated (modest) performance goals over a year early on macOS AArch64 and a few months early on x86_64 Linux.
- Python 3.15’s JIT is described as being back on track.
Unknowns
- What were the JIT's stated performance goals (metrics, baselines, target workloads) that are described as met early?
- Which benchmark suite(s), configurations, and runtime flags produced the 11–12% (macOS AArch64) and 5–6% (x86_64 Linux) speedups, and what is the variance across workloads?
- What does "tail-calling interpreter" mean in terms of runtime configuration, and how does it relate to the default interpreter users run today?
- What specific issue(s) caused the JIT to be off-track previously, and what concrete evidence supports the claim that it is now back on track (milestones, stabilization metrics, regression closure)?
- Do these performance deltas persist through subsequent Python 3.15 releases (alpha to beta/RC), and are there known regressions or trade-offs (startup time, memory use, compilation overhead)?