SGLang serves embedding models on the same efficient runtime it uses for generative models, exposing an OpenAI-compatible /v1/embeddings endpoint. This keeps latency low and resource utilization high for retrieval and semantic search workloads.
Embedding models must be launched with the --is-embedding flag. Some models may also require --trust-remote-code.

Quick start

  1. Launch the server
python3 -m sglang.launch_server \
  --model-path Qwen/Qwen3-Embedding-4B \
  --is-embedding \
  --host 0.0.0.0 \
  --port 30000
  2. Send a client request
import requests

url = "http://127.0.0.1:30000"

payload = {
    "model": "Qwen/Qwen3-Embedding-4B",
    "input": "What is the capital of France?",
    "encoding_format": "float"
}

# POST to the OpenAI-compatible /v1/embeddings endpoint.
response = requests.post(url + "/v1/embeddings", json=payload).json()
print("Embedding:", response["data"][0]["embedding"])

Multimodal embedding example

For multimodal models like GME that support both text and images:
  1. Launch the server with a multimodal model
python3 -m sglang.launch_server \
  --model-path Alibaba-NLP/gme-Qwen2-VL-2B-Instruct \
  --is-embedding \
  --chat-template gme-qwen2-vl \
  --host 0.0.0.0 \
  --port 30000
  2. Send a multimodal request
import requests

url = "http://127.0.0.1:30000"

text_input = "Represent this image in embedding space."
image_path = "https://huggingface.co/datasets/liuhaotian/llava-bench-in-the-wild/resolve/main/images/023.jpg"

# A single request can mix text and image inputs.
payload = {
    "model": "gme-qwen2-vl",
    "input": [
        {"text": text_input},
        {"image": image_path}
    ],
}

response = requests.post(url + "/v1/embeddings", json=payload).json()
print("Embeddings:", [x.get("embedding") for x in response.get("data", [])])

Matryoshka embedding example

Matryoshka Embeddings, also known as Matryoshka Representation Learning (MRL), is a training technique that makes leading prefixes of the full embedding vector usable as embeddings in their own right. This lets users trade embedding quality against storage and compute cost by truncating vectors to a smaller dimension.
  1. Launch a Matryoshka-capable model
If the model config already includes matryoshka_dimensions or is_matryoshka, no override is needed. Otherwise, pass --json-model-override-args as shown below:
python3 -m sglang.launch_server \
  --model-path Qwen/Qwen3-Embedding-0.6B \
  --is-embedding \
  --host 0.0.0.0 \
  --port 30000 \
  --json-model-override-args '{"matryoshka_dimensions": [128, 256, 512, 1024, 1536]}'
Setting "is_matryoshka": true allows truncating to any dimension. Otherwise, the server validates that the specified dimension in the request is one of matryoshka_dimensions. Omitting dimensions in a request returns the full vector.
  2. Make requests with different output dimensions
import requests

url = "http://127.0.0.1:30000"

# Request a truncated (Matryoshka) embedding by specifying a supported dimension.
payload = {
    "model": "Qwen/Qwen3-Embedding-0.6B",
    "input": "Explain diffusion models simply.",
    "dimensions": 512  # change to 128 / 1024 / omit for full size
}

response = requests.post(url + "/v1/embeddings", json=payload).json()
print("Embedding:", response["data"][0]["embedding"])

Supported models

| Model | Example HF model | Chat template | Notes |
| --- | --- | --- | --- |
| E5 (Llama/Mistral based) | intfloat/e5-mistral-7b-instruct | N/A | High-quality text embeddings based on Mistral/Llama architectures |
| GTE-Qwen2 | Alibaba-NLP/gte-Qwen2-7B-instruct | N/A | Alibaba’s text embedding model with multilingual support |
| Qwen3-Embedding | Qwen/Qwen3-Embedding-4B | N/A | Latest Qwen3-based text embedding model for semantic representation |
| BGE | BAAI/bge-large-en-v1.5 | N/A | BAAI’s text embeddings (requires --attention-backend triton or torch_native) |
| GME (Multimodal) | Alibaba-NLP/gme-Qwen2-VL-2B-Instruct | gme-qwen2-vl | Multimodal embedding for text and image cross-modal tasks |
| CLIP | openai/clip-vit-large-patch14-336 | N/A | OpenAI’s CLIP for image and text embeddings |
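As noted in the table, BGE models need a non-default attention backend. A launch command might look like the following sketch, which simply combines the flags mentioned above:
python3 -m sglang.launch_server \
  --model-path BAAI/bge-large-en-v1.5 \
  --is-embedding \
  --attention-backend triton \
  --host 0.0.0.0 \
  --port 30000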