SGLang TPU support is implemented via the SGLang-JAX backend, a dedicated JAX-based inference engine maintained as a separate repository at sgl-project/sglang-jax.
System Requirements
Supported TPU Hardware
- TPU v6e: 32 GB HBM, available on Google Cloud
- TPU v7: 96 GB HBM per core, available on Google Cloud
Software Requirements
- Python: version 3.12 or higher
- JAX: latest version with TPU support
- Environment: Google Cloud TPU VM or a compatible TPU runtime; optionally, SkyPilot for simplified cloud deployment
Feature Support Matrix
SGLang-JAX provides comprehensive TPU-optimized features for production LLM serving.
Supported Features
| Feature | Support Status | Description |
|---|---|---|
| High-Throughput Continuous Batching | ✅ | Dynamic request batching for maximum TPU utilization |
| Radix Tree KV Cache | ✅ | Memory-efficient prefix sharing between requests |
| FlashAttention Backend | ✅ | TPU-optimized attention kernel for long sequences |
| Tensor Parallelism | ✅ | Distribute models across multiple TPU cores |
| Paged Attention | ✅ | Flexible KV cache management with paging |
| Speculative Decoding (EAGLE/EAGLE3) | ✅ | 20-40% throughput improvement for compatible models |
| Chunked Prefill | ✅ | Mixed prefill-decode batching |
| OpenAI-Compatible API | ✅ | Drop-in replacement for OpenAI API |
| Data Parallel Attention | 🚧 | In development — Attention computation with data parallelism |
| Quantization | 🚧 | In development — Model quantization for reduced memory usage |
| Multi-LoRA | 🚧 | In development — Serve multiple LoRA adapters simultaneously |
Attention Backend Comparison
| Backend | Paged Attention | Spec Decoding | MLA | Sliding Window |
|---|---|---|---|---|
| FlashAttention (fa) | ✅ | ✅ | ❌ | ✅ |
| Native | ❌ | ❌ | ❌ | ❌ |
The FlashAttention backend is recommended for production workloads due to its superior memory efficiency and performance.
Optimized Model List
The following models have been tested and optimized for TPU deployment:
| Model Family | Performance Status |
|---|---|
| Qwen 3 | ⭐ Recommended for production |
| Qwen 3 MoE | ⭐ Best performance |
| Qwen 2 | Needs improvement |
| Qwen 2 MoE | Needs improvement |
| Qwen 1.5 | Needs improvement |
| Llama/LLaMA | Needs improvement |
| Grok-2 | Needs improvement |
| Gemma 2 | Verified on TPU |
| Bailing MoE | Needs improvement |
Installation
- PyPI (Recommended)
- From Source
- Docker
- SkyPilot (Cloud TPU)
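A minimal PyPI install sketch for a Cloud TPU VM. The package name `sglang-jax` and the `jax[tpu]` extra are assumptions here; consult the Installation Guide for the authoritative commands.

```shell
# Install the serving engine from PyPI (package name is an assumption).
pip install sglang-jax

# Install JAX with TPU support; the libtpu release index is the
# standard JAX TPU wheel location, but verify against current JAX docs.
pip install -U "jax[tpu]" -f https://storage.googleapis.com/jax-releases/libtpu_releases.html
```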
Launch the Serving Engine
- Basic: Qwen-7B
- High-Performance: Qwen3-8B
- Speculative Decoding (EAGLE3)
- Multi-Node Distributed
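A hedged sketch of the basic launch. The entrypoint module name `sgl_jax.launch_server` is an assumption (it is not stated on this page); the flags mirror those discussed below.

```shell
# Basic serving sketch for Qwen-7B on a single TPU host.
# Entrypoint module name is an assumption; verify against the repository.
python -m sgl_jax.launch_server \
  --model-path Qwen/Qwen-7B \
  --host 0.0.0.0 \
  --port 30000 \
  --dtype bfloat16 \
  --tp-size 4
```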
Key Parameters Explained
- `JAX_COMPILATION_CACHE_DIR`: enables JIT compilation caching to accelerate server startup on subsequent runs; `/tmp/jit_cache` is the recommended location.
- `--tp-size`: tensor parallelism size; match this to your TPU core count (typically 1, 4, or 8).
- `--device=tpu`: specifies the TPU device; this is the default for sglang-jax.
- `--dtype=bfloat16`: uses bfloat16 precision, which TPUs are optimized for.
- `--mem-fraction-static`: allocates this fraction of TPU HBM for static memory; adjustable from 0.2 to 0.9.
- `--max-prefill-tokens`: maximum number of tokens processed in the prefill phase.
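The key parameters can be combined into a single launch command. A minimal sketch, assuming the entrypoint module is `sgl_jax.launch_server` (the module name is not stated on this page; the flags are):

```shell
# Cache JIT-compiled kernels so later restarts are fast.
export JAX_COMPILATION_CACHE_DIR=/tmp/jit_cache

# Launch with the key parameters from this page.
# Entrypoint module name is an assumption.
python -m sgl_jax.launch_server \
  --model-path Qwen/Qwen3-8B \
  --tp-size 4 \
  --dtype bfloat16 \
  --mem-fraction-static 0.8 \
  --max-prefill-tokens 8192 \
  --attention-backend fa
```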
Benchmarking with Requests
- Throughput Testing
- Latency Testing
- Comprehensive Benchmark Script
Basic throughput benchmark:
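Since the server exposes an OpenAI-compatible API, a rough throughput check can be as simple as timing a burst of concurrent requests against it. A sketch assuming a server is already listening on port 30000; the model name and payload are placeholders.

```shell
# Fire 32 concurrent chat-completion requests and time the whole burst.
# Assumes a running server on localhost:30000; model name is a placeholder.
time (
  for i in $(seq 1 32); do
    curl -s http://localhost:30000/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{"model":"Qwen/Qwen3-8B","messages":[{"role":"user","content":"Hello"}],"max_tokens":64}' \
      > /dev/null &
  done
  wait
)
```

Divide total generated tokens by the elapsed wall-clock time for a coarse tokens-per-second figure; use the Benchmark and Profiling Guide for rigorous measurements.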
Performance Optimization
Memory Optimization
Reduce memory usage:
- Lower `--mem-fraction-static` (from 0.8 → 0.5 → 0.3)
- Decrease `--max-prefill-tokens` (from 16384 → 8192 → 4096)
- Reduce `--max-running-requests`
- Start with conservative memory settings (`--mem-fraction-static=0.5`)
- Gradually increase until you find the optimal balance
- Increase `--page-size` for better memory locality (1 → 16 → 64 → 128)
Throughput Optimization
To maximize tokens per second:
- Use the FlashAttention backend: `--attention-backend=fa`
- Enable speculative decoding (EAGLE3) for Qwen3 models (20-40% improvement)
- Increase `--max-running-requests` to 256+
- Set `--mem-fraction-static` to 0.8+ (if memory allows)
- Use larger page sizes (64-128)
- Enable chunked prefill: `--chunked-prefill-size=2048`
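The throughput settings above can be combined into one launch command. A sketch; the entrypoint module name is an assumption, and the flags mirror the list above.

```shell
# Throughput-tuned sketch: FlashAttention, large batches, chunked prefill.
# Entrypoint module name is an assumption.
python -m sgl_jax.launch_server \
  --model-path Qwen/Qwen3-8B \
  --attention-backend fa \
  --max-running-requests 256 \
  --mem-fraction-static 0.8 \
  --page-size 64 \
  --chunked-prefill-size 2048
```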
Latency Optimization
To minimize time-to-first-token (TTFT) and inter-token latency:
- Reduce `--page-size` to 1-4
- Lower `--max-running-requests` (16-32) for smaller batches
- Reduce `--chunked-prefill-size`
- Use conservative memory settings to avoid GC pauses
TPU-Specific Optimizations
JIT Compilation Cache: Always set the `JAX_COMPILATION_CACHE_DIR` environment variable to cache compiled kernels and accelerate server startup.
Data Type Optimization: Use `--dtype=bfloat16` for TPU-native optimization. TPUs are specifically designed for bfloat16 computations.
Tensor Parallelism: Match `--tp-size` to your TPU core configuration (1, 4, or 8) for optimal model distribution.
Attention Backend: Always use `--attention-backend=fa` (FlashAttention) for production workloads.
Troubleshooting
OOM (Out of Memory) Errors
If you encounter out-of-memory errors:
- Reduce `--mem-fraction-static` from 0.8 to 0.5 or lower.
- Decrease `--max-prefill-tokens` from 8192 to 4096 or 2048.
- Lower `--max-running-requests` to reduce the concurrent batch size.
- Increase `--page-size` for better memory layout efficiency.
Slow Compilation / Long Startup
If the server takes too long to start:
- Ensure `JAX_COMPILATION_CACHE_DIR` is properly set
- Understand that the first run requires JIT compilation; this is normal
- Subsequent runs will be significantly faster with cached compilations
- Consider using `--skip-server-warmup` to defer compilation until the first request
Low Throughput
If you’re not achieving expected throughput:
- Verify `--tp-size` matches your TPU core configuration
- Check that `--attention-backend=fa` is enabled
- Increase `--max-running-requests` to enable larger batch formation
- Consider enabling speculative decoding for compatible models
- Ensure memory settings allow for sufficient batch sizes
Connection Issues
If clients cannot connect to the server:
- Ensure `--host=0.0.0.0` for external access (not just 127.0.0.1)
- Verify firewall rules allow traffic on the specified port (default: 30000)
- Check that the server process is running: `curl http://localhost:30000/health`
Advanced Features
Speculative Decoding
SGLang-JAX supports the EAGLE and EAGLE3 speculative decoding algorithms for the Qwen3 and LLaMA model families. Speculative decoding can improve throughput by 20-40% without affecting output quality. See the Speculative Decoding documentation for detailed configuration and supported model combinations.
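A hedged launch sketch for EAGLE3. The entrypoint module name and the speculative-decoding flag names follow upstream SGLang conventions and are assumptions for sglang-jax; the draft-model path is a placeholder. Check the Speculative Decoding documentation for the exact flags.

```shell
# EAGLE3 speculative decoding sketch for a Qwen3 model.
# Flag names follow upstream SGLang conventions (assumptions here);
# <eagle3-draft-model> is a placeholder for a compatible draft model.
python -m sgl_jax.launch_server \
  --model-path Qwen/Qwen3-8B \
  --speculative-algorithm EAGLE3 \
  --speculative-draft-model-path <eagle3-draft-model> \
  --speculative-num-steps 3 \
  --speculative-eagle-topk 4 \
  --speculative-num-draft-tokens 8
```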
Chunked Prefill
Enable mixed prefill-decode batching for better TPU utilization. This allows the scheduler to mix prefill operations with decode operations in the same batch, improving overall throughput.
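A minimal sketch of enabling chunked prefill at launch; the entrypoint module name is an assumption, the flag appears elsewhere on this page.

```shell
# Enable chunked prefill so prefill and decode share batches.
# Entrypoint module name is an assumption.
python -m sgl_jax.launch_server \
  --model-path Qwen/Qwen3-8B \
  --chunked-prefill-size 2048
```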
Custom Attention Backends
SGLang-JAX supports a plugin-based attention backend system, so you can implement custom attention kernels optimized for specific use cases. See the Attention Backend documentation for implementation details.
Environment Verification
Verify your TPU setup before deploying. The verification checks:
- Installed package versions
- TPU device availability and specifications
- System resources and configuration
- Compatibility of settings
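A quick hedged check from the command line, assuming JAX with TPU support is installed. `jax.devices()` and `jax.default_backend()` are standard JAX APIs; on a correctly configured TPU VM the backend should report `tpu`.

```shell
# Print the installed JAX version and the devices JAX can see.
python -c "import jax; print('jax', jax.__version__)"
python -c "import jax; print('backend:', jax.default_backend()); print(jax.devices())"
```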
Contributing
We welcome contributions to improve TPU support in SGLang-JAX! Check the Development Roadmap to see planned features and find opportunities to contribute, including:
- Performance optimizations for specific TPU generations
- Support for additional model architectures
- Documentation improvements and examples
- Bug reports and fixes
- Benchmark results and performance analysis
Repository
Visit the sglang-jax repository
Contribution Guide
Read the Contribution Guide
Slack Community
Join the SGL-JAX Slack community for discussions
Testing on TPU
For contributors who need TPU access for testing:
- Refer to the TPU Resources Guide for information on accessing TPU hardware
- Use SkyPilot with spot instances for cost-effective testing
- Follow the Benchmark and Profiling Guide for performance validation
References
SGLang-JAX Repository
Source code and issue tracker for the JAX TPU backend.
SGLang-JAX Installation Guide
Step-by-step installation instructions.
Qwen Models Quick Start
Get up and running quickly with the Qwen model family.
Benchmark and Profiling Guide
Advanced benchmarking techniques and JAX Profiler usage.
Speculative Decoding
EAGLE and EAGLE3 speculative decoding configuration.
JAX Documentation
Official JAX documentation and API reference.
Google Cloud TPU Docs
Google Cloud TPU product documentation.
SkyPilot Documentation
Simplified cloud deployment with SkyPilot.
