SGLang provides multiple caching acceleration strategies for Diffusion Transformer (DiT) models. These strategies can significantly reduce inference time by skipping redundant computation.

Overview

SGLang supports two complementary caching approaches:
| Strategy  | Scope          | Mechanism                                            | Best For                     |
|-----------|----------------|------------------------------------------------------|------------------------------|
| Cache-DiT | Block-level    | Skips individual transformer blocks dynamically      | Advanced use, higher speedup |
| TeaCache  | Timestep-level | Skips entire denoising steps based on L1 similarity  | Simple, built-in             |

Cache-DiT

Cache-DiT provides block-level caching with advanced strategies such as DBCache and TaylorSeer, achieving up to a 1.69x speedup. See Cache-DiT for detailed configuration.

Quick Start

SGLANG_CACHE_DIT_ENABLED=true \
sglang generate --model-path Qwen/Qwen-Image \
    --prompt "A beautiful sunset over the mountains"

Key Features

  • DBCache: Dynamic block-level caching based on residual differences
  • TaylorSeer: Taylor expansion-based calibration for optimized caching
  • SCM: Step-level computation masking for additional speedup
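To make the DBCache idea concrete, here is a minimal, hypothetical sketch of block-level residual caching: when a block's input has barely changed since the last computed step (measured by relative L1 difference), the cached residual is reused instead of recomputing the block. The class and threshold names are illustrative, not SGLang's actual API.

```python
class CachedBlock:
    """Wraps a transformer block; skips it when its input barely changed."""

    def __init__(self, block, rel_threshold=0.05):
        self.block = block                # the wrapped block (any callable)
        self.rel_threshold = rel_threshold
        self.prev_input = None            # input from the last computed step
        self.cached_residual = None       # block(x) - x from the last computed step

    def __call__(self, x):
        if self.prev_input is not None and self.cached_residual is not None:
            # Relative L1 change of the input since the cached step.
            num = sum(abs(a - b) for a, b in zip(x, self.prev_input))
            den = sum(abs(b) for b in self.prev_input) or 1.0
            if num / den < self.rel_threshold:
                # Input is near-identical: reuse the cached residual.
                return [xi + r for xi, r in zip(x, self.cached_residual)]
        out = self.block(x)
        self.cached_residual = [o - xi for o, xi in zip(out, x)]
        self.prev_input = list(x)
        return out
```

In a real DiT, `x` would be a hidden-state tensor and the comparison would run on-device; the caching logic is the same.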

TeaCache

TeaCache (Timestep Embedding Aware Cache) accelerates diffusion inference by detecting when consecutive denoising steps are similar enough to skip computation entirely. See TeaCache for detailed documentation.

Quick Overview

  • Tracks L1 distance between modulated inputs across timesteps
  • When accumulated distance is below threshold, reuses cached residual
  • Supports CFG with separate positive/negative caches
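The steps above can be sketched as a small gating helper: it accumulates the relative L1 distance between modulated inputs across timesteps and only triggers a full computation once the accumulated distance crosses a threshold, resetting afterwards. This is a hypothetical illustration; the class name and threshold value are not SGLang's actual API.

```python
class TeaCacheGate:
    """Decides per timestep whether to compute or reuse the cached residual."""

    def __init__(self, threshold=0.1):
        self.threshold = threshold
        self.accum = 0.0      # accumulated relative L1 distance since last compute
        self.prev = None      # modulated input from the previous timestep

    def should_compute(self, modulated_input):
        if self.prev is None:
            self.prev = list(modulated_input)
            return True  # always compute the first step
        # Relative L1 distance between consecutive modulated inputs.
        num = sum(abs(a - b) for a, b in zip(modulated_input, self.prev))
        den = sum(abs(b) for b in self.prev) or 1.0
        self.accum += num / den
        self.prev = list(modulated_input)
        if self.accum >= self.threshold:
            self.accum = 0.0  # reset after a full computation
            return True
        return False  # below threshold: reuse the cached residual this step
```

With CFG, one gate would be kept per branch (positive and negative prompts), matching the separate caches mentioned above.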

Supported Models

  • Wan (wan2.1, wan2.2)
  • Hunyuan (HunyuanVideo)
  • Z-Image

Note: for Flux and Qwen models, TeaCache is automatically disabled when CFG is enabled.
