SGLang TPU support is implemented via the SGLang-JAX backend, a dedicated JAX-based inference engine maintained as a separate repository at sgl-project/sglang-jax.
For TPU-specific issues or feature requests, please visit the sglang-jax GitHub issues page.

System Requirements


Supported TPU Hardware

  • TPU v6e: 32 GB HBM memory; available on Google Cloud
  • TPU v7: 96 GB HBM memory per core; available on Google Cloud

Software Requirements

  • Python: version 3.12 or higher
  • JAX: latest version with TPU support
  • Environment: Google Cloud TPU VM or a compatible TPU runtime; optionally, SkyPilot for simplified cloud deployment

Feature Support Matrix

SGLang-JAX provides comprehensive TPU-optimized features for production LLM serving:

Supported Features

| Feature | Status | Description |
| --- | --- | --- |
| High-Throughput Continuous Batching | ✅ Supported | Dynamic request batching for maximum TPU utilization |
| Radix Tree KV Cache | ✅ Supported | Memory-efficient prefix sharing between requests |
| FlashAttention Backend | ✅ Supported | TPU-optimized attention kernel for long sequences |
| Tensor Parallelism | ✅ Supported | Distributes models across multiple TPU cores |
| Paged Attention | ✅ Supported | Flexible KV cache management with paging |
| Speculative Decoding (EAGLE/EAGLE3) | ✅ Supported | 20-40% throughput improvement for compatible models |
| Chunked Prefill | ✅ Supported | Mixed prefill-decode batching |
| OpenAI-Compatible API | ✅ Supported | Drop-in replacement for the OpenAI API |
| Data Parallel Attention | 🚧 In development | Attention computation with data parallelism |
| Quantization | 🚧 In development | Model quantization for reduced memory usage |
| Multi-LoRA | 🚧 In development | Serve multiple LoRA adapters simultaneously |
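The radix-tree KV cache lets requests that share a token prefix (for example, a common system prompt) reuse cached KV entries instead of recomputing them. A minimal conceptual sketch of the idea, with hypothetical names rather than the actual sgl-jax data structure:

```python
# Conceptual sketch of radix-tree prefix sharing (illustrative only; the real
# sgl-jax implementation manages KV pages and eviction, which is omitted here).

class RadixNode:
    def __init__(self):
        self.children = {}   # token id -> RadixNode
        self.cached = False  # True once KV for this position is cached

class RadixCache:
    def __init__(self):
        self.root = RadixNode()

    def match_prefix(self, tokens):
        """Return how many leading tokens already have cached KV entries."""
        node, matched = self.root, 0
        for t in tokens:
            child = node.children.get(t)
            if child is None or not child.cached:
                break
            node, matched = child, matched + 1
        return matched

    def insert(self, tokens):
        """Record KV entries for a request so later requests can reuse them."""
        node = self.root
        for t in tokens:
            node = node.children.setdefault(t, RadixNode())
            node.cached = True

cache = RadixCache()
cache.insert([1, 2, 3, 4])                 # first request fills the cache
print(cache.match_prefix([1, 2, 3, 9]))    # -> 3: second request skips 3 tokens
```

With prefix reuse, only the unmatched suffix of each new request needs prefill compute.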

Attention Backend Comparison

FlashAttention (fa) is the native attention backend, with support for paged attention, speculative decoding, MLA, and sliding-window attention.
FlashAttention backend is recommended for production workloads due to superior memory efficiency and performance.

Optimized Model List

The following models have been tested and optimized for TPU deployment:
| Model Family | Performance Status |
| --- | --- |
| Qwen 3 | ⭐ Recommended for production |
| Qwen 3 MoE | ⭐ Best performance |
| Qwen 2 | Needs improvement |
| Qwen 2 MoE | Needs improvement |
| Qwen 1.5 | Needs improvement |
| Llama/LLaMA | Needs improvement |
| Grok-2 | Needs improvement |
| Gemma 2 | Verified on TPU |
| Bailing MoE | Needs improvement |

Installation


Launch the Serving Engine

JAX_COMPILATION_CACHE_DIR=/tmp/jit_cache python3 -u -m sgl_jax.launch_server \
    --model-path Qwen/Qwen-7B-Chat \
    --trust-remote-code \
    --dist-init-addr=0.0.0.0:10011 \
    --nnodes=1 \
    --tp-size=4 \
    --device=tpu \
    --random-seed=3 \
    --node-rank=0 \
    --mem-fraction-static=0.8 \
    --max-prefill-tokens=8192 \
    --download-dir=/tmp \
    --dtype=bfloat16 \
    --skip-server-warmup \
    --host 0.0.0.0 \
    --port 30000
JAX_COMPILATION_CACHE_DIR (string)
Enables JIT compilation caching to accelerate server startup on subsequent runs. Recommended: /tmp/jit_cache

--tp-size (integer, default: 1)
Tensor parallelism size; match this to your TPU core count (typically 1, 4, or 8).

--device (string, default: tpu)
Specifies the TPU device. This is the default for sglang-jax.

--dtype (string, default: bfloat16)
Uses bfloat16 precision, which TPUs are optimized for.

--mem-fraction-static (float, default: 0.8)
Fraction of TPU HBM allocated for static memory. Adjustable from 0.2 to 0.9.

--max-prefill-tokens (integer, default: 8192)
Maximum number of tokens processed in the prefill phase.
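The effect of --mem-fraction-static is simple arithmetic over the chip's HBM. A back-of-the-envelope sketch (illustrative; the actual sgl-jax allocator is more involved):

```python
# --mem-fraction-static reserves a fixed fraction of TPU HBM for static
# allocations (model weights plus the KV cache pool); the remainder is left
# for activations and runtime overhead. Rough arithmetic, not the real allocator.

def static_hbm_gb(total_hbm_gb, mem_fraction_static):
    """GB reserved for static allocations under --mem-fraction-static."""
    return total_hbm_gb * mem_fraction_static

# TPU v6e (32 GB HBM) with the default --mem-fraction-static=0.8:
reserved = static_hbm_gb(32, 0.8)
print(reserved)       # -> 25.6 GB reserved for weights + KV cache
print(32 - reserved)  # remaining HBM for activations and overhead
```

Lowering the fraction shrinks the KV cache pool (fewer concurrent requests) but leaves more headroom against OOM during compilation and prefill.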

Benchmarking with Requests

Basic throughput benchmark:
python3 -m sgl_jax.bench_serving \
    --backend sgl-jax \
    --dataset-name random \
    --num-prompts=100 \
    --random-input=512 \
    --random-output=128 \
    --max-concurrency=8 \
    --random-range-ratio=1 \
    --warmup-requests=0
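The throughput numbers this benchmark produces follow directly from its parameters. A quick sanity-check calculation (illustrative; field names are not taken from bench_serving's output):

```python
# Rough throughput arithmetic for the benchmark above:
# 100 prompts x (512 input + 128 output) tokens, max concurrency 8.

num_prompts, input_len, output_len = 100, 512, 128

def output_throughput(total_seconds):
    """Output tokens per second."""
    return num_prompts * output_len / total_seconds

def total_throughput(total_seconds):
    """Total (input + output) tokens per second."""
    return num_prompts * (input_len + output_len) / total_seconds

# e.g. if the whole run completes in 40 s:
print(output_throughput(40))  # -> 320.0 output tokens/s
print(total_throughput(40))   # -> 1600.0 total tokens/s
```

Comparing these against the tool's reported numbers is a useful check that the run actually generated the requested token counts.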

Performance Optimization

Reduce memory usage:
  • Lower --mem-fraction-static (e.g., 0.8 → 0.5 → 0.3)
  • Decrease --max-prefill-tokens (e.g., 16384 → 8192 → 4096)
  • Reduce --max-running-requests
Handle OOM errors:
  • Start with conservative memory settings (--mem-fraction-static=0.5)
  • Gradually increase until you find the optimal balance
  • Increase --page-size for better memory locality (e.g., 1 → 16 → 64 → 128)
To maximize tokens per second:
  • Use FlashAttention backend: --attention-backend=fa
  • Enable speculative decoding (EAGLE3) for Qwen3 models (20-40% improvement)
  • Increase --max-running-requests to 256+
  • Set --mem-fraction-static to 0.8+ (if memory allows)
  • Use larger page sizes (64-128)
  • Enable chunked prefill: --chunked-prefill-size=2048
To minimize time-to-first-token (TTFT) and inter-token latency:
  • Reduce --page-size to 1-4
  • Lower --max-running-requests (16-32) for smaller batches
  • Reduce --chunked-prefill-size
  • Use conservative memory settings to avoid GC pauses
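The --page-size trade-off above can be modeled roughly: each sequence's KV cache is rounded up to whole pages, so larger pages waste more slots per sequence while improving memory locality. A simplified model (illustrative, not the sgl-jax allocator):

```python
import math

# Illustrative model of the --page-size trade-off: a sequence's KV cache
# occupies ceil(seq_len / page_size) pages, so the tail page is partially
# wasted. Larger pages waste more slots but give better locality and fewer
# page-table entries.

def wasted_slots(seq_len, page_size):
    """KV slots allocated but unused for one sequence."""
    return page_size * math.ceil(seq_len / page_size) - seq_len

for page_size in (1, 16, 64, 128):
    print(page_size, wasted_slots(1000, page_size))
# page_size=1 wastes nothing; page_size=128 wastes 24 slots for a
# 1000-token sequence
```

With hundreds of concurrent requests the per-sequence waste adds up, which is why latency-focused configs prefer small pages and throughput-focused configs tolerate larger ones.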
JIT Compilation Cache:
export JAX_COMPILATION_CACHE_DIR=/tmp/jit_cache
Always set this environment variable to cache compiled kernels and accelerate server startup.
Data Type Optimization: Use --dtype=bfloat16 for TPU-native optimization; TPUs are specifically designed for bfloat16 computation.
Tensor Parallelism: Match --tp-size to your TPU core configuration (1, 4, or 8) for optimal model distribution.
Attention Backend: Always use --attention-backend=fa (FlashAttention) for production workloads.

Troubleshooting

If you encounter out-of-memory errors:
  1. Reduce mem-fraction-static: lower --mem-fraction-static from 0.8 to 0.5 or lower.
  2. Decrease max-prefill-tokens: lower --max-prefill-tokens from 8192 to 4096 or 2048.
  3. Lower max-running-requests: reduce --max-running-requests to shrink the concurrent batch size.
  4. Increase page-size: raise --page-size for better memory layout efficiency.
If the server takes too long to start:
  • Ensure JAX_COMPILATION_CACHE_DIR is properly set
  • Expect the first run to require JIT compilation; this is normal
  • Subsequent runs will be significantly faster with cached compilations
  • Consider using --skip-server-warmup to defer compilation until the first request
If you're not achieving expected throughput:
  • Verify --tp-size matches your TPU core configuration
  • Check that --attention-backend=fa is enabled
  • Increase --max-running-requests to enable larger batch formation
  • Consider enabling speculative decoding for compatible models
  • Ensure memory settings allow for sufficient batch sizes
If clients cannot connect to the server:
  • Ensure --host=0.0.0.0 for external access (not just 127.0.0.1)
  • Verify firewall rules allow traffic on the specified port (default: 30000)
  • Check that the server process is running: curl http://localhost:30000/health

Advanced Features

SGLang-JAX supports the EAGLE and EAGLE3 speculative decoding algorithms for the Qwen3 and LLaMA model families. Speculative decoding can improve throughput by 20-40% without affecting output quality. See the Speculative Decoding documentation for detailed configuration and supported model combinations.
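The core of speculative decoding is a draft/verify loop: a cheap draft model proposes several tokens, and the target model verifies them in one batched pass, keeping the longest agreeing prefix. A simplified greedy-decoding sketch (EAGLE/EAGLE3 use a learned draft head and probabilistic acceptance; the names here are illustrative):

```python
# Conceptual sketch of the speculative-decoding verify step (greedy case).
# Not the EAGLE implementation: EAGLE drafts with a learned head and uses
# a probabilistic acceptance rule; this shows only the accept-prefix idea.

def verify(draft_tokens, target_tokens):
    """Accept the longest prefix of draft tokens the target model agrees
    with; on the first mismatch, substitute the target model's own token."""
    accepted = []
    for d, t in zip(draft_tokens, target_tokens):
        if d == t:
            accepted.append(d)
        else:
            accepted.append(t)  # target's correction replaces the mismatch
            return accepted
    return accepted

# Draft proposed 4 tokens; the target agreed with the first 2, so this
# single target pass emits 3 tokens instead of 1:
print(verify([5, 7, 9, 11], [5, 7, 8, 2]))  # -> [5, 7, 8]
```

Output quality is unchanged because every emitted token is one the target model itself would have produced; the speedup comes from emitting several of them per target-model pass.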
Enable mixed prefill-decode batching for better TPU utilization:
--chunked-prefill-size=2048 --enable-mixed-chunk
This allows the scheduler to mix prefill operations with decode operations in the same batch, improving overall throughput.
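The scheduling idea can be sketched as follows: a long prompt's prefill is split into fixed-size chunks, and each batch pairs one prefill chunk with the pending decode tokens of already-running requests (one token each). This is an illustrative model, not the actual sgl-jax scheduler:

```python
# Illustrative sketch of mixed prefill-decode batching under
# --chunked-prefill-size: the prompt is prefetched in fixed-size chunks,
# each sharing a batch with the running requests' decode tokens.

def mixed_batches(prompt_len, chunk_size, num_decode_reqs):
    """Return (prefill_tokens, decode_tokens) per scheduling step."""
    batches = []
    remaining = prompt_len
    while remaining > 0:
        chunk = min(chunk_size, remaining)
        batches.append((chunk, num_decode_reqs))
        remaining -= chunk
    return batches

# An 8192-token prompt with --chunked-prefill-size=2048 and 32 running
# decode requests is served across 4 mixed batches:
print(mixed_batches(8192, 2048, 32))
# -> [(2048, 32), (2048, 32), (2048, 32), (2048, 32)]
```

Without chunking, a long prefill would monopolize the whole batch and stall decoding for every running request, hurting inter-token latency.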
SGLang-JAX supports a plugin-based attention backend system, so you can implement custom attention kernels optimized for specific use cases. See the Attention Backend documentation for implementation details.
Verify your TPU setup before deploying:
python -c "from sgl_jax import check_env; check_env.check_env()"
This command checks:
  • Installed package versions
  • TPU device availability and specifications
  • System resources and configuration
  • Compatibility of settings

Contributing

We welcome contributions to improve TPU support in SGLang-JAX!
Check the Development Roadmap to see planned features and find opportunities to contribute new functionality.
Current contribution areas include:
  • Performance optimizations for specific TPU generations
  • Support for additional model architectures
  • Documentation improvements and examples
  • Bug reports and fixes
  • Benchmark results and performance analysis

Repository

Visit the sglang-jax repository

Contribution Guide

Read the Contribution Guide

Slack Community

Join the SGL-JAX Slack community for discussions

Testing on TPU

For contributors who need TPU access for testing:

References

  • SGLang-JAX Repository: source code and issue tracker for the JAX TPU backend.
  • SGLang-JAX Installation Guide: step-by-step installation instructions.
  • Qwen Models Quick Start: get up and running quickly with the Qwen model family.
  • Benchmark and Profiling Guide: advanced benchmarking techniques and JAX Profiler usage.
  • Speculative Decoding: EAGLE and EAGLE3 speculative decoding configuration.
  • JAX Documentation: official JAX documentation and API reference.
  • Google Cloud TPU Docs: Google Cloud TPU product documentation.
  • SkyPilot Documentation: simplified cloud deployment with SkyPilot.