
Performance Tuning

SIE provides several tuning parameters that affect throughput, latency, and resource usage. This guide covers the main configuration options.

Batching groups requests to maximize GPU utilization. Three parameters control batch formation:

max_batch_cost: Maximum total cost per batch. For text, cost equals the token count. Default: 16384 tokens.

# Environment variable
export SIE_MAX_BATCH_COST=32768

Higher values improve throughput at the cost of latency. GPU memory limits how high you can go.

max_batch_wait_ms: Maximum time to wait for more requests before processing a batch. Default: 10 ms.

# Environment variable
export SIE_MAX_BATCH_WAIT_MS=20

Lower values reduce latency for sparse traffic. Higher values improve batching efficiency under load.

max_batch_requests: Maximum number of requests per batch. Default: 64.

# Environment variable
export SIE_MAX_BATCH_REQUESTS=128

This is a secondary limit. Cost-based batching typically triggers first for text workloads.

For low-latency workloads, reduce max_batch_wait_ms to 5ms or less. For high-throughput batch processing, increase both max_batch_cost and max_batch_wait_ms.
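
To make the interplay between the three limits concrete, here is a minimal sketch of cost-, count-, and time-bounded batch formation. The Request type, the pending list, and the function name are illustrative, not SIE's internal scheduler.

# Illustrative sketch of batch formation; not SIE's actual scheduler.
import time
from dataclasses import dataclass

@dataclass
class Request:
    tokens: int  # for text, cost == token count

def form_batch(pending: list[Request],
               max_batch_cost: int = 16384,
               max_batch_requests: int = 64,
               max_batch_wait_ms: float = 10.0) -> list[Request]:
    """Accumulate requests until the cost, count, or wait limit is reached."""
    batch: list[Request] = []
    total_cost = 0
    deadline = time.monotonic() + max_batch_wait_ms / 1000
    while len(batch) < max_batch_requests and time.monotonic() < deadline:
        if not pending:
            time.sleep(0.001)  # in the real server this is an async wait for new requests
            continue
        nxt = pending[0]
        if batch and total_cost + nxt.tokens > max_batch_cost:
            break  # the next request would exceed the cost budget; flush what we have
        batch.append(pending.pop(0))
        total_cost += nxt.tokens
    return batch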

SIE uses reactive LRU eviction to manage GPU memory. No static VRAM budget is required.

memory_pressure_threshold_percent: When memory usage exceeds this percentage, the least-recently-used model is evicted. Default: 90%.

# Environment variable
export SIE_MEMORY_PRESSURE_THRESHOLD_PERCENT=85

Lower values keep more headroom for inference spikes. Higher values allow more models to stay loaded.

The memory manager checks pressure at two points:

  1. Before loading: If above threshold, evict LRU model first
  2. After each batch: Background check for gradual memory growth

Models are tracked by last-use time. The oldest model is evicted first.

# From memory.py - LRU tracking
def touch(self, model_name: str) -> None:
    """Mark a model as most recently used."""
    if model_name in self._models:
        self._models[model_name].touch()        # update the last-use timestamp
        self._models.move_to_end(model_name)    # move to the MRU end of the OrderedDict
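
The eviction side can be pictured as a loop over that same OrderedDict: while usage stays above the threshold, pop and unload the model at the front, which is the least recently used. This is a simplified sketch; used_percent() and unload() are stand-ins for SIE's real memory query and model teardown.

# Simplified sketch of LRU eviction under memory pressure (illustrative only)
def evict_until_below(self, threshold_percent: float = 90.0) -> None:
    """Evict least-recently-used models while memory usage exceeds the threshold."""
    while self._models and self.used_percent() > threshold_percent:
        # touch() moves active models to the end of the OrderedDict,
        # so the first key is always the least recently used.
        lru_name = next(iter(self._models))
        self.unload(lru_name)        # stand-in: frees weights and clears the device cache
        del self._models[lru_name]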

Memory tracking adapts to your hardware:

| Device | Memory Source |
| --- | --- |
| CUDA | NVML device memory query |
| MPS | PyTorch allocated memory |
| CPU | System RAM via psutil |
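
A rough sketch of what those queries look like per device follows. The library calls (pynvml, torch.mps, psutil) are real, but the wrapper function and the choice of denominator for MPS are assumptions, not SIE's code.

# Illustrative per-device memory query; not SIE's actual implementation.
import psutil
import torch

def memory_used_fraction(device: str) -> float:
    """Return used/total memory as a fraction for the given device type."""
    if device == "cuda":
        import pynvml
        pynvml.nvmlInit()
        handle = pynvml.nvmlDeviceGetHandleByIndex(0)
        info = pynvml.nvmlDeviceGetMemoryInfo(handle)
        return info.used / info.total
    if device == "mps":
        # PyTorch only reports its own allocations on MPS, not system-wide use.
        return torch.mps.current_allocated_memory() / torch.mps.recommended_max_memory()
    vm = psutil.virtual_memory()  # CPU path: system RAM
    return vm.used / vm.total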

The attention implementation affects inference speed significantly.

| Backend | Requirements | Speedup |
| --- | --- | --- |
| flash_attention_2 | Ampere+ GPU, flash-attn package | 2-4x |
| sdpa | PyTorch 2.0+ | 1.5-2x |
| eager | Any | Baseline |

# Auto-select best available (default)
export SIE_ATTENTION_BACKEND=auto
# Force specific backend
export SIE_ATTENTION_BACKEND=flash_attention_2
export SIE_ATTENTION_BACKEND=sdpa

Auto mode selects Flash Attention 2 if available, then SDPA, then eager.

Flash Attention 2 requires:

  • CUDA compute capability 8.0+ (Ampere or newer: A100, RTX 30xx, RTX 40xx)
  • The flash-attn package installed
  • FP16 or BF16 compute precision (not FP32)

If requirements are not met, the server falls back to SDPA automatically.
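
The selection logic amounts to checking those requirements in order. The sketch below mirrors that order; the function itself is illustrative, not SIE's code.

# Illustrative auto-selection of the attention backend; not SIE's actual code.
import importlib.util
import torch

def pick_attention_backend(precision: str = "float16") -> str:
    """Prefer flash_attention_2, then sdpa, then eager."""
    has_flash_attn = importlib.util.find_spec("flash_attn") is not None
    on_ampere_or_newer = (
        torch.cuda.is_available() and torch.cuda.get_device_capability(0) >= (8, 0)
    )
    if has_flash_attn and on_ampere_or_newer and precision in ("float16", "bfloat16"):
        return "flash_attention_2"
    if hasattr(torch.nn.functional, "scaled_dot_product_attention"):  # PyTorch 2.0+
        return "sdpa"
    return "eager"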

Control the precision used for model inference:

# Options: float16, bfloat16, float32
export SIE_DEFAULT_COMPUTE_PRECISION=float16

| Precision | Memory | Speed | Compatibility |
| --- | --- | --- | --- |
| float16 | Low | Fast | All CUDA GPUs |
| bfloat16 | Low | Fast | Ampere+, MPS, CPU |
| float32 | High | Slow | All devices |

BF16 offers better numerical stability than FP16 for some models. FP32 is mainly for debugging.
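
In practice this setting maps to a torch dtype at load time. The snippet below sketches that mapping with the Hugging Face transformers API; SIE's actual loading path may differ.

# Illustrative precision-to-dtype mapping at model load time; not SIE's actual code.
import torch
from transformers import AutoModel

DTYPES = {
    "float16": torch.float16,
    "bfloat16": torch.bfloat16,
    "float32": torch.float32,
}

def load_model(name: str, precision: str = "float16", backend: str = "sdpa"):
    """Load a model with the configured compute precision and attention backend."""
    return AutoModel.from_pretrained(
        name,
        torch_dtype=DTYPES[precision],
        attn_implementation=backend,  # "flash_attention_2", "sdpa", or "eager"
    )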

Tokenization and image processing run in a CPU thread pool.

# Environment variable
export SIE_PREPROCESSOR_WORKERS=8

Default: min(CPU count, 8). Increase for high request rates. Decrease on memory-constrained systems.

The thread pool is shared across all models. Both tokenization and image preprocessing use the same pool.
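
Conceptually, the server keeps one shared ThreadPoolExecutor and hands blocking preprocessing to it so the async event loop stays free. The worker-count default below mirrors the description above; the code itself is an illustrative sketch, not SIE's implementation.

# Illustrative shared preprocessing pool; not SIE's actual implementation.
import asyncio
import os
from concurrent.futures import ThreadPoolExecutor

workers = int(os.environ.get("SIE_PREPROCESSOR_WORKERS", min(os.cpu_count() or 1, 8)))
preprocess_pool = ThreadPoolExecutor(max_workers=workers)

async def tokenize_async(tokenizer, texts: list[str]):
    """Run blocking tokenization in the shared CPU pool, off the event loop."""
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(
        preprocess_pool, lambda: tokenizer(texts, padding=True, return_tensors="pt")
    )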

All tuning parameters can be set via environment variables with the SIE_ prefix:

| Variable | Default | Description |
| --- | --- | --- |
| SIE_MAX_BATCH_COST | 16384 | Max total batch cost (tokens) |
| SIE_MAX_BATCH_REQUESTS | 64 | Max requests per batch |
| SIE_MAX_BATCH_WAIT_MS | 10 | Max wait time (ms) |
| SIE_MAX_CONCURRENT_REQUESTS | 512 | Request queue size |
| SIE_MEMORY_PRESSURE_THRESHOLD_PERCENT | 90 | Eviction trigger (%) |
| SIE_PREPROCESSOR_WORKERS | 8 | CPU thread pool size |
| SIE_ATTENTION_BACKEND | auto | Attention implementation |
| SIE_DEFAULT_COMPUTE_PRECISION | float16 | Model precision |
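
One plain way to read these variables with their defaults is shown below; this is an illustrative sketch, not SIE's actual settings loader.

# Illustrative settings loader for the SIE_ variables; not SIE's actual code.
import os

def _env_int(name: str, default: int) -> int:
    return int(os.environ.get(name, default))

settings = {
    "max_batch_cost": _env_int("SIE_MAX_BATCH_COST", 16384),
    "max_batch_requests": _env_int("SIE_MAX_BATCH_REQUESTS", 64),
    "max_batch_wait_ms": _env_int("SIE_MAX_BATCH_WAIT_MS", 10),
    "max_concurrent_requests": _env_int("SIE_MAX_CONCURRENT_REQUESTS", 512),
    "memory_pressure_threshold_percent": _env_int("SIE_MEMORY_PRESSURE_THRESHOLD_PERCENT", 90),
    "preprocessor_workers": _env_int("SIE_PREPROCESSOR_WORKERS", 8),
    "attention_backend": os.environ.get("SIE_ATTENTION_BACKEND", "auto"),
    "default_compute_precision": os.environ.get("SIE_DEFAULT_COMPUTE_PRECISION", "float16"),
}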

Use the eval harness to measure the impact of tuning changes:

# Performance benchmark
mise run eval BAAI/bge-m3 -t mteb/NFCorpus --type perf -s sie
# Compare before/after
mise run eval BAAI/bge-m3 -t mteb/NFCorpus --type perf -s sie,targets

The perf eval reports throughput (items/sec), latency percentiles, and GPU utilization.

See the Evals documentation for the full benchmarking workflow.