# Monitoring & Observability
SIE provides multiple monitoring interfaces: health endpoints for container orchestration, Prometheus metrics for alerting, real-time status streaming over WebSocket, and an interactive TUI.
## Health Endpoints
SIE exposes Kubernetes-compatible health probes for liveness and readiness checks.
### Liveness Probe
```bash
curl http://localhost:8080/healthz
# Returns: ok
```

The `/healthz` endpoint returns `200 OK` if the server process is alive. Use this for Kubernetes liveness probes. A failed check triggers a container restart.
### Readiness Probe
```bash
curl http://localhost:8080/readyz
# Returns: ok
```

The `/readyz` endpoint returns `200 OK` if the server is ready to accept traffic. Use this for Kubernetes readiness probes. A failed check removes the pod from service endpoints.
Kubernetes configuration:
```yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /readyz
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 5
```
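Outside Kubernetes, the same endpoint works as a startup gate in scripts. A minimal sketch that blocks until the server is ready, using only the standard library (the URL, timeout, and helper name are assumptions, not part of SIE):

```python
import time
from urllib.request import urlopen

# Assumed local server address; adjust to your deployment.
READYZ_URL = "http://localhost:8080/readyz"


def wait_until_ready(timeout_s: float = 60.0, interval_s: float = 1.0) -> None:
    """Poll /readyz until it returns 200 or the timeout elapses."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with urlopen(READYZ_URL, timeout=2) as resp:
                if resp.status == 200:
                    return
        except OSError:
            pass  # Server not up yet (URLError/timeout); keep polling.
        time.sleep(interval_s)
    raise TimeoutError(f"{READYZ_URL} not ready after {timeout_s}s")


wait_until_ready()
```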
## Prometheus Metrics

SIE exposes Prometheus-format metrics at `/metrics`. All metrics use the `sie_` prefix.
### Metrics Reference
| Metric | Type | Labels | Description |
|---|---|---|---|
| `sie_requests_total` | Counter | model, endpoint, status | Total requests processed |
| `sie_request_duration_seconds` | Histogram | model, endpoint, phase | Request duration breakdown |
| `sie_batch_size` | Histogram | model | Items per batch |
| `sie_tokens_processed_total` | Counter | model | Total tokens processed |
| `sie_queue_depth` | Gauge | model | Current pending items in queue |
| `sie_model_loaded` | Gauge | model, device | Model load state (1 = loaded, 0 = not loaded) |
| `sie_model_memory_bytes` | Gauge | model, device | GPU memory usage per model |
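For quick inspection without a Prometheus server, the exposition text parses cleanly with the `prometheus_client` helper. A minimal sketch (the URL is an assumption for a local deployment):

```python
from urllib.request import urlopen

from prometheus_client.parser import text_string_to_metric_families

# Assumed local server address; adjust to your deployment.
METRICS_URL = "http://localhost:8080/metrics"

raw = urlopen(METRICS_URL).read().decode("utf-8")

# Print every SIE sample with its labels and current value.
for family in text_string_to_metric_families(raw):
    if not family.name.startswith("sie_"):
        continue
    for sample in family.samples:
        print(sample.name, sample.labels, sample.value)
```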
### Duration Phases
The `sie_request_duration_seconds` histogram tracks latency by phase:
| Phase | Description |
|---|---|
| `total` | End-to-end request latency |
| `queue` | Time spent waiting in the request queue |
| `tokenize` | Tokenization and preprocessing time |
| `inference` | GPU inference time |
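To see where latency accumulates, the p99 query from the Grafana section below can be run per phase against the Prometheus HTTP API. A sketch, assuming Prometheus is reachable at `localhost:9090` and already scraping SIE:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# Assumed Prometheus address; adjust to your deployment.
PROM_URL = "http://localhost:9090/api/v1/query"

for phase in ["total", "queue", "tokenize", "inference"]:
    query = (
        "histogram_quantile(0.99, sum(rate("
        f'sie_request_duration_seconds_bucket{{phase="{phase}"}}[5m]'
        ")) by (le))"
    )
    with urlopen(f"{PROM_URL}?{urlencode({'query': query})}") as resp:
        result = json.load(resp)["data"]["result"]
    # Each instant-vector element carries a [timestamp, value] pair.
    for sample in result:
        print(f"{phase}: p99 = {float(sample['value'][1]):.4f}s")
```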
### Histogram Buckets
Duration buckets (seconds): 0.001, 0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1.0, 2.5, 5.0, 10.0, 30.0
Batch size buckets: 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024
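Quantiles derived from these buckets are linearly interpolated within the matching bucket, so their precision is bounded by bucket width. A sketch of the estimation `histogram_quantile` performs, using hypothetical cumulative counts (not real SIE data):

```python
# Approximate a quantile from cumulative histogram buckets the way
# histogram_quantile does: linear interpolation inside the target bucket.
BOUNDS = [0.001, 0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1.0, 2.5, 5.0, 10.0, 30.0]


def estimate_quantile(q: float, cumulative: list[int]) -> float:
    rank = q * cumulative[-1]
    lower, prev_count = 0.0, 0
    for bound, count in zip(BOUNDS, cumulative):
        if count >= rank:
            span = count - prev_count
            frac = (rank - prev_count) / span if span else 0.0
            return lower + (bound - lower) * frac
        lower, prev_count = bound, count
    return BOUNDS[-1]  # Rank falls in the +Inf bucket; clamp to last bound.


# Hypothetical cumulative counts per bucket (must be non-decreasing).
counts = [5, 40, 120, 300, 520, 700, 850, 930, 970, 990, 997, 999, 1000]
print(f"p99 ≈ {estimate_quantile(0.99, counts):.3f}s")  # ≈ 2.500s
```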
### Scrape Configuration
```yaml
scrape_configs:
  - job_name: 'sie'
    static_configs:
      - targets: ['localhost:8080']
    metrics_path: /metrics
    scrape_interval: 15s
```
## sie-top TUI

The `sie-top` command provides a real-time terminal interface for monitoring SIE servers.
### Installation
```bash
pip install 'sie-admin[top]'
```

```bash
# Monitor local server (auto-detects mode)
sie-top

# Monitor specific server
sie-top localhost:8080

# Force worker mode (connect to individual worker)
sie-top --worker worker-0.sie.svc:8080

# Force cluster mode (connect to router)
sie-top --cluster router.example.com:8080
```

Mode is auto-detected by probing the router `/health` endpoint. Use `--worker` or `--cluster` to force a specific mode.
### Features
The TUI displays:
- Server info: Version, uptime, user, PID
- GPU table: Device name, memory usage, compute utilization, trend sparkline
- Model table: Name, state, device, memory, queue depth, QPS sparkline
- Detail panel: Selected GPU or model with 60-second history charts
Keyboard shortcuts:
| Key | Action |
|---|---|
| `j` / `Down` | Move selection down |
| `k` / `Up` | Move selection up |
| `?` | Show help |
| `q` | Quit |
## WebSocket Status
SIE streams real-time status over WebSocket at `/ws/status`. Updates are pushed every 200ms.
### Connection
```python
import asyncio
import json

import websockets


async def monitor():
    async with websockets.connect("ws://localhost:8080/ws/status") as ws:
        async for message in ws:
            status = json.loads(message)
            print(f"Loaded models: {status['loaded_models']}")
            print(f"GPU type: {status['gpu']}")


asyncio.run(monitor())
```
### Status Message Format

```json
{
  "timestamp": 1703001234.567,
  "gpu": "l4",
  "loaded_models": ["bge-m3", "e5-base-v2"],
  "server": {
    "version": "0.1.0",
    "uptime_seconds": 3600,
    "user": "sie",
    "working_dir": "/app",
    "pid": 1
  },
  "gpus": [
    {
      "device": "cuda:0",
      "name": "NVIDIA L4",
      "gpu_type": "l4",
      "utilization_pct": 45,
      "memory_used_bytes": 8589934592,
      "memory_total_bytes": 23622320128,
      "memory_threshold_pct": 85
    }
  ],
  "models": [
    {
      "name": "bge-m3",
      "state": "loaded",
      "device": "cuda:0",
      "memory_bytes": 2147483648,
      "queue_depth": 0,
      "queue_pending_items": 0,
      "config": {
        "hf_id": "BAAI/bge-m3",
        "adapter": "bge_m3",
        "inputs": ["text"],
        "outputs": ["dense", "sparse"]
      }
    }
  ],
  "counters": {},
  "histograms": {}
}
```
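The `gpus` entries carry enough information to compute memory headroom against the eviction threshold. A small sketch, assuming `status` is a dict parsed from one message (the function name is illustrative, not part of SIE):

```python
def gpu_memory_headroom(status: dict) -> None:
    """Print memory use vs. the configured threshold for each GPU."""
    for gpu in status["gpus"]:
        used_pct = 100 * gpu["memory_used_bytes"] / gpu["memory_total_bytes"]
        threshold = gpu["memory_threshold_pct"]
        flag = "OVER" if used_pct >= threshold else "ok"
        print(f"{gpu['device']} ({gpu['name']}): "
              f"{used_pct:.1f}% used, threshold {threshold}% [{flag}]")
```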
### Model States

| State | Description |
|---|---|
| `available` | Config loaded, weights not in memory |
| `loading` | Weights currently loading to GPU |
| `loaded` | Ready for inference |
| `unloading` | Weights being evicted from GPU |
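These states can be watched over the same WebSocket stream, for example to block a deploy step until a model is servable. A sketch, assuming the `websockets` client from the connection example and a local server (the helper name is hypothetical):

```python
import asyncio
import json

import websockets


async def wait_until_loaded(model: str,
                            url: str = "ws://localhost:8080/ws/status") -> None:
    """Block until the named model reports the 'loaded' state."""
    async with websockets.connect(url) as ws:
        async for message in ws:
            states = {m["name"]: m["state"]
                      for m in json.loads(message)["models"]}
            if states.get(model) == "loaded":
                return


asyncio.run(wait_until_loaded("bge-m3"))
```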
## Grafana Dashboards
SIE includes pre-built Grafana dashboards in the Helm chart at `deploy/helm/sie-cluster/files/dashboards/`. These are automatically provisioned when deploying with Grafana's sidecar.
Example queries for common panels:
### Request Rate
```promql
sum(rate(sie_requests_total{status="success"}[5m])) by (model)
```
### P99 Latency

```promql
histogram_quantile(0.99, sum(rate(sie_request_duration_seconds_bucket{phase="total"}[5m])) by (le, model))
```
### GPU Memory Usage

```promql
sum(sie_model_memory_bytes) by (device)
```
### Queue Depth

```promql
sum(sie_queue_depth) by (model)
```
### Batch Efficiency

```promql
histogram_quantile(0.5, sum(rate(sie_batch_size_bucket[5m])) by (le, model))
```
## Logging

SIE supports both human-readable and structured JSON logging.
### Log Levels
Enable verbose logging with `--verbose` or `-v`:
```bash
sie-server serve --verbose
```
### JSON Logging

Enable JSON format for Loki and log aggregation systems:
```bash
sie-server serve --json-logs
```

Or via environment variable:
```bash
export SIE_LOG_JSON=true
sie-server serve
```
### JSON Log Format

```json
{
  "timestamp": "2025-12-18T10:30:00.123Z",
  "level": "INFO",
  "logger": "sie_server.api.encode",
  "message": "Inference completed",
  "model": "bge-m3",
  "request_id": "abc123",
  "trace_id": "def456",
  "latency_ms": 45.2,
  "batch_size": 16,
  "gpu_type": "l4"
}
```
### Structured Fields

JSON logs include optional fields when available:
| Field | Description |
|---|---|
| `model` | Model name for the request |
| `request_id` | Unique request identifier |
| `trace_id` | OpenTelemetry trace ID |
| `latency_ms` | Request latency in milliseconds |
| `batch_size` | Number of items in the batch |
| `gpu_type` | Detected GPU type |
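Structured logs are straightforward to post-process without a full log stack. A sketch that surfaces slow requests from a JSON log stream (the threshold and pipeline are assumptions, not SIE features):

```python
import json
import sys

SLOW_MS = 100.0  # Assumed threshold; tune to your latency budget.

# Read JSON log lines from stdin, e.g.:
#   sie-server serve --json-logs 2>&1 | python slow_requests.py
for line in sys.stdin:
    try:
        record = json.loads(line)
    except json.JSONDecodeError:
        continue  # Skip any non-JSON lines (startup banners, etc.)
    if record.get("latency_ms", 0) > SLOW_MS:
        print(record["timestamp"], record.get("model"), record["latency_ms"])
```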
## What’s Next
Section titled “What’s Next”- CLI Reference for all server options
- API Reference for endpoint documentation