Grafana Tempo is a high-volume, cost-effective distributed tracing backend. It only requires object storage (or a local filesystem) to operate – no complex indexing or database clusters needed. Tempo accepts traces in OpenTelemetry (OTLP), Jaeger, and Zipkin formats and integrates tightly with Grafana for trace visualization and correlation with metrics and logs.
This guide covers installing Tempo on Ubuntu 24.04 and Debian 13, configuring it to receive OTLP traces, connecting Grafana Alloy as a trace pipeline, and querying traces through Grafana using TraceQL.
Prerequisites
- A server running Ubuntu 24.04 LTS or Debian 13 with at least 2GB RAM
- Root or sudo access
- Applications instrumented with OpenTelemetry SDKs or Jaeger client libraries (to generate traces)
- Grafana installed for trace visualization
1. Download and Install Tempo
Tempo provides official deb packages on GitHub releases. Download and install the latest version:
VER=$(curl -fsI https://github.com/grafana/tempo/releases/latest | grep -i '^location' | grep -o 'v[0-9.]*' | sed 's/^v//')
curl -fsSLo /tmp/tempo.deb https://github.com/grafana/tempo/releases/download/v${VER}/tempo_${VER}_linux_amd64.deb
sudo dpkg -i /tmp/tempo.deb
rm /tmp/tempo.deb
Verify the installation:
tempo --version
You should see the version confirmed:
tempo, version 2.10.3 (branch: HEAD, revision: 4aeafc237)
build user:
build date:
The deb package creates a tempo system user, a systemd service, and places the default configuration at /etc/tempo/config.yml.
2. Configure Tempo
The default Tempo configuration works well for a standalone setup. Create the data directories first:
sudo mkdir -p /var/tempo/{wal,blocks,generator/wal,generator/traces}
sudo chown -R tempo:tempo /var/tempo
Open the configuration file:
sudo vi /etc/tempo/config.yml
Replace the contents with this configuration:
stream_over_http_enabled: true

server:
  http_listen_port: 3200
  log_level: info

distributor:
  receivers:
    otlp:
      protocols:
        grpc:
          endpoint: "0.0.0.0:4317"
        http:
          endpoint: "0.0.0.0:4318"
    jaeger:
      protocols:
        thrift_http:
        grpc:

ingester:
  trace_idle_period: 10s
  max_block_bytes: 524288000
  max_block_duration: 5m

compactor:
  compaction:
    block_retention: 72h

storage:
  trace:
    backend: local
    wal:
      path: /var/tempo/wal
    local:
      path: /var/tempo/blocks

metrics_generator:
  registry:
    external_labels:
      source: tempo
      cluster: production
  storage:
    path: /var/tempo/generator/wal
  traces_storage:
    path: /var/tempo/generator/traces

overrides:
  defaults:
    metrics_generator:
      processors: [service-graphs, span-metrics, local-blocks]
      generate_native_histograms: both

analytics:
  reporting_enabled: false
Key settings explained:
- OTLP receiver on 4317/4318 – accepts traces from OpenTelemetry instrumented applications or from Grafana Alloy
- Jaeger receiver – also accepts Jaeger-format traces for backwards compatibility with existing instrumentation
- block_retention: 72h – keeps trace data for 3 days. Increase this based on your debugging needs
- metrics_generator – automatically generates RED metrics (rate, errors, duration) and service graphs from incoming traces, making them queryable in Grafana without additional instrumentation
- local backend – stores trace blocks on the local filesystem. For production at scale, switch to S3 or GCS
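For production object storage, the storage block changes roughly as follows. This is a sketch: the bucket name, region, and endpoint are placeholders, and credentials can also come from IAM roles or environment variables rather than the config file:

```yaml
storage:
  trace:
    backend: s3
    s3:
      bucket: tempo-traces                  # placeholder bucket name
      endpoint: s3.us-east-1.amazonaws.com  # placeholder endpoint for your region
      region: us-east-1
      # access_key and secret_key can be set here, but IAM roles or
      # environment credentials are preferable in production
    wal:
      path: /var/tempo/wal
```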
3. Start and Enable Tempo
Start the Tempo service:
sudo systemctl enable --now tempo
Check the service status:
sudo systemctl status tempo
The service should show as active:
● tempo.service - Tempo service
     Loaded: loaded (/etc/systemd/system/tempo.service; enabled; preset: enabled)
     Active: active (running) since Mon 2026-03-23 22:04:56 UTC
   Main PID: 3139 (tempo)
      Tasks: 8 (limit: 9489)
     Memory: 26.1M (peak: 26.5M)
     CGroup: /system.slice/tempo.service
             └─3139 /usr/bin/tempo -config.file /etc/tempo/config.yml
Verify Tempo is ready to accept traces:
curl -s http://localhost:3200/ready
A healthy Tempo returns ready.
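If you script the setup, it helps to poll the readiness endpoint instead of checking it once, since Tempo takes a moment to start. A minimal sketch (the URL assumes the default http_listen_port of 3200 from the config above):

```shell
# Poll a readiness endpoint until it reports "ready", or give up after
# a number of one-second attempts (default 30).
wait_for_ready() {
  url=$1
  tries=${2:-30}
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -fs "$url" 2>/dev/null | grep -q ready; then
      echo "Tempo is ready"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "Tempo did not become ready after ${tries} attempts" >&2
  return 1
}

# Usage: wait_for_ready http://localhost:3200/ready
```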
4. Open Firewall Ports
If UFW is active, open the necessary ports:
sudo ufw allow 3200/tcp
sudo ufw allow 4317/tcp
sudo ufw allow 4318/tcp
Port 3200 is the Tempo HTTP API (used by Grafana); ports 4317 and 4318 are the OTLP gRPC and HTTP receivers where applications and Alloy send traces.
5. Forward Traces via Grafana Alloy
If you have Grafana Alloy running, configure it to receive OTLP traces from your applications and forward them to Tempo. Add these blocks to /etc/alloy/config.alloy:
otelcol.receiver.otlp "default" {
  grpc {
    endpoint = "0.0.0.0:4320"
  }

  http {
    endpoint = "0.0.0.0:4321"
  }

  output {
    traces = [otelcol.exporter.otlp.tempo.input]
  }
}

otelcol.exporter.otlp "tempo" {
  client {
    endpoint = "localhost:4317"

    tls {
      insecure = true
    }
  }
}
This makes Alloy listen on ports 4320/4321 (different from Tempo’s direct ports 4317/4318) and forward traces to Tempo. Applications can send traces to either Alloy (for pipeline processing) or directly to Tempo.
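Optionally, a batch processor between the receiver and the exporter groups spans before export, reducing the number of outgoing requests. A sketch of the extra block (to use it, point the receiver's traces output at otelcol.processor.batch.default.input instead of the exporter):

```alloy
otelcol.processor.batch "default" {
  output {
    traces = [otelcol.exporter.otlp.tempo.input]
  }
}
```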
6. Add Tempo as a Grafana Data Source
In Grafana, go to Connections > Data sources > Add data source and select Tempo. Set the URL to http://localhost:3200 and click Save & test.
Or provision it automatically:
sudo vi /etc/grafana/provisioning/datasources/tempo.yaml
Add the following:
apiVersion: 1

datasources:
  - name: Tempo
    type: tempo
    access: proxy
    url: http://localhost:3200
    editable: true
Restart Grafana to load the data source:
sudo systemctl restart grafana-server
7. Send a Test Trace
To verify the full pipeline works, send a test trace using curl and the OTLP HTTP endpoint:
curl -X POST http://localhost:4318/v1/traces \
  -H "Content-Type: application/json" \
  -d '{
    "resourceSpans": [{
      "resource": {"attributes": [{"key": "service.name", "value": {"stringValue": "test-service"}}]},
      "scopeSpans": [{
        "spans": [{
          "traceId": "01020304050607080102040810203040",
          "spanId": "0102040810203040",
          "name": "test-span",
          "kind": 1,
          "startTimeUnixNano": "'$(date +%s)000000000'",
          "endTimeUnixNano": "'$(( $(date +%s) + 1 ))000000000'",
          "status": {"code": 1}
        }]
      }]
    }]
  }'
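If you resend this payload repeatedly, note that Tempo groups spans by trace ID, so the fixed IDs collapse into a single trace. To get a fresh trace each time, generate random IDs and substitute them into the payload (this assumes openssl is available, which it is by default on both distributions):

```shell
# OTLP trace IDs are 16 random bytes (32 hex characters);
# span IDs are 8 random bytes (16 hex characters).
TRACE_ID=$(openssl rand -hex 16)
SPAN_ID=$(openssl rand -hex 8)
echo "traceId=$TRACE_ID spanId=$SPAN_ID"
```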
Then query it in Grafana’s Explore view using TraceQL. Select the Tempo data source and search for the test trace:
{resource.service.name = "test-service"}
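TraceQL supports more than service-name lookups. A few illustrative queries (the attribute values here are examples, not from the test trace above):

- {resource.service.name = "test-service" && duration > 200ms} – spans from the service that took longer than 200ms
- {status = error} – spans that ended with an error status
- {span.http.method = "GET"} – spans carrying an http.method attribute of GET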
Tempo Port Reference
| Port | Protocol | Purpose |
|---|---|---|
| 3200 | HTTP | Tempo API (Grafana connects here) |
| 4317 | gRPC | OTLP trace receiver (gRPC) |
| 4318 | HTTP | OTLP trace receiver (HTTP/protobuf) |
| 9095 | gRPC | Internal gRPC communication |
| 14268 | HTTP | Jaeger thrift HTTP receiver |
Troubleshooting Common Issues
Tempo fails to start with port conflict on 9095
If you are running Mimir on the same host, both use gRPC port 9095 by default. Change Mimir’s gRPC port to a different value (e.g., 9097) in /etc/mimir/mimir.yaml with grpc_listen_port: 9097.
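The relevant fragment in /etc/mimir/mimir.yaml:

```yaml
server:
  grpc_listen_port: 9097
```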
Traces not appearing in Grafana
Tempo has a brief delay between ingestion and query availability (usually a few seconds). If traces still don’t appear, verify the Tempo data source URL in Grafana is correct (http://localhost:3200, not the OTLP port). Also check Tempo logs for ingestion errors with sudo journalctl -u tempo -f.
WAL directory permission errors
Make sure the tempo user owns all data directories: sudo chown -R tempo:tempo /var/tempo. The deb package creates the user but does not always set up directories beyond the default paths.
What’s Next
With Tempo running, you have the tracing component of the LGTM observability stack (Loki for logs, Grafana for visualization, Tempo for traces, Mimir for metrics). The real power comes from correlation – clicking a log line in Loki to jump to the related trace in Tempo, or clicking a trace span to see the corresponding metrics. Configure trace-to-logs and trace-to-metrics links in the Grafana Tempo data source settings to enable this workflow.
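As a starting point, trace-to-logs can also be set through provisioning rather than the UI. A sketch of the data source file with a tracesToLogsV2 block (this assumes a Loki data source already exists with UID loki; field names follow the current schema and may differ across Grafana versions):

```yaml
apiVersion: 1

datasources:
  - name: Tempo
    type: tempo
    access: proxy
    url: http://localhost:3200
    jsonData:
      tracesToLogsV2:
        datasourceUid: loki        # assumed UID of your Loki data source
        spanStartTimeShift: "-5m"  # widen the log search window around the span
        spanEndTimeShift: "5m"
        filterByTraceID: true
```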