Grafana Tempo is a distributed tracing backend that stores traces and makes them searchable through Grafana. It accepts traces via OTLP, Jaeger, and Zipkin protocols, stores them on local disk or object storage, and can generate RED (Rate, Errors, Duration) metrics from the ingested spans. This guide covers installing Tempo on Rocky Linux 10 or AlmaLinux 10 via the RPM package from GitHub releases, configuring it with OTLP and Jaeger receivers, local storage, and metrics generation.
Prerequisites
You will need:
- Rocky Linux 10 or AlmaLinux 10 with root or sudo access
- Grafana Alloy to forward traces from applications – see Install Grafana Alloy on Rocky Linux / AlmaLinux
- Grafana Mimir for storing generated metrics – see Install Grafana Mimir on Rocky Linux / AlmaLinux
- Grafana for visualizing traces
- SELinux in enforcing mode
Step 1: Download and Install Tempo RPM
Tempo provides RPM packages on its GitHub releases page. Download the latest version:
VER=$(curl -sI https://github.com/grafana/tempo/releases/latest | grep -i '^location' | grep -o 'v[0-9.]*' | sed 's/^v//')
curl -Lo /tmp/tempo.rpm https://github.com/grafana/tempo/releases/download/v${VER}/tempo_${VER}_linux_amd64.rpm
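The VER pipeline above parses the redirect header that GitHub sends for /releases/latest. As an offline illustration, here is what each stage extracts from a hypothetical header value (the version number 2.7.1 is made up for the example):

```shell
# Hypothetical Location header, as returned by the /releases/latest redirect:
loc='location: https://github.com/grafana/tempo/releases/tag/v2.7.1'

# grep -i keeps the header line, grep -o pulls out the vN.N.N tag,
# and sed strips the leading "v" to leave a bare version string:
VER=$(printf '%s\n' "$loc" | grep -i '^location' | grep -o 'v[0-9.]*' | sed 's/^v//')
echo "$VER"   # → 2.7.1
```

If the command produces an empty VER (for example, when a proxy rewrites the redirect), pin the version manually instead, e.g. VER=2.7.1.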
Install the RPM package:
sudo dnf install -y /tmp/tempo.rpm
rm -f /tmp/tempo.rpm
Verify the installed version:
tempo --version
The output confirms the installed version:
tempo, version 2.10.3 (branch: HEAD, revision: abcdef123)
build user:
build date:
go version: go1.23.8
platform: linux/amd64
Step 2: Create Storage Directories
Create the directories Tempo needs for trace storage and the WAL:
sudo mkdir -p /var/tempo/{traces,wal}
sudo chown -R tempo:tempo /var/tempo
Step 3: Configure SELinux
Tempo listens on several ports: 3200 (HTTP API), 4317 (OTLP gRPC), and 4318 (OTLP HTTP). The semanage tool is provided by the policycoreutils-python-utils package; install it first if it is missing:
sudo dnf install -y policycoreutils-python-utils
Add SELinux port contexts for each Tempo port (if a context already exists from a previous run, semanage reports it as already defined – use -m instead of -a to modify it):
sudo semanage port -a -t http_port_t -p tcp 3200
sudo semanage port -a -t http_port_t -p tcp 4317
sudo semanage port -a -t http_port_t -p tcp 4318
If you also use the Jaeger receivers, add contexts for ports 14268 and 14250 in the same way.
Set the correct SELinux context on the data directory:
sudo semanage fcontext -a -t var_lib_t "/var/tempo(/.*)?"
sudo restorecon -Rv /var/tempo
Verify the contexts are applied:
ls -dZ /var/tempo
The context should show var_lib_t:
system_u:object_r:var_lib_t:s0 /var/tempo
Step 4: Configure Tempo
Open the Tempo configuration file:
sudo vi /etc/tempo/config.yml
Replace the contents with a production configuration that includes OTLP and Jaeger receivers, local storage, and metrics generation:
stream_over_http_enabled: true

server:
  http_listen_port: 3200
  log_level: info

distributor:
  receivers:
    otlp:
      protocols:
        grpc:
          endpoint: "0.0.0.0:4317"
        http:
          endpoint: "0.0.0.0:4318"
    jaeger:
      protocols:
        thrift_http:
          endpoint: "0.0.0.0:14268"
        grpc:
          endpoint: "0.0.0.0:14250"

ingester:
  max_block_duration: 5m

compactor:
  compaction:
    block_retention: 744h

metrics_generator:
  registry:
    external_labels:
      source: tempo
      cluster: rocky-lgtm
  storage:
    path: /var/tempo/generator/wal
    remote_write:
      - url: http://localhost:9009/api/v1/push
        send_exemplars: true
  traces_storage:
    path: /var/tempo/generator/traces
  processor:
    span_metrics:
      dimensions:
        - service.name
        - http.method
        - http.status_code
    service_graphs:
      dimensions:
        - service.name

storage:
  trace:
    backend: local
    wal:
      path: /var/tempo/wal
    local:
      path: /var/tempo/traces
    block:
      bloom_filter_false_positive: 0.05
      v2_index_downsample_bytes: 1000
      v2_encoding: zstd

overrides:
  defaults:
    metrics_generator:
      processors:
        - span-metrics
        - service-graphs

analytics:
  reporting_enabled: false
Key settings in this configuration:
- OTLP receivers on ports 4317 (gRPC) and 4318 (HTTP) – the standard OpenTelemetry ports
- Jaeger receivers on ports 14268 (HTTP) and 14250 (gRPC) – for Jaeger-native instrumentation
- metrics_generator – creates RED metrics from traces and pushes them to Mimir via remote_write. This gives you request rate, error rate, and duration metrics without separate instrumentation
- block_retention: 744h – keeps trace data for 31 days
- zstd encoding – compresses trace blocks for better storage efficiency
Create the metrics generator directories:
sudo mkdir -p /var/tempo/generator/{wal,traces}
sudo chown -R tempo:tempo /var/tempo
Step 5: Configure Firewall
Open the ports for Tempo’s HTTP API and OTLP receivers. If you use the Jaeger receivers, also open 14268/tcp and 14250/tcp:
sudo firewall-cmd --permanent --add-port=3200/tcp
sudo firewall-cmd --permanent --add-port=4317/tcp
sudo firewall-cmd --permanent --add-port=4318/tcp
sudo firewall-cmd --reload
Verify all ports are open:
sudo firewall-cmd --list-ports
The output should include the Tempo ports:
3200/tcp 4317/tcp 4318/tcp
Step 6: Start and Enable Tempo
Start the Tempo service:
sudo systemctl enable --now tempo
Check the service status:
sudo systemctl status tempo
Tempo should show active (running):
● tempo.service - Grafana Tempo
Loaded: loaded (/usr/lib/systemd/system/tempo.service; enabled; preset: disabled)
Active: active (running) since Mon 2026-03-24 10:40:22 UTC; 5s ago
Main PID: 15678 (tempo)
Tasks: 10 (limit: 23102)
Memory: 85.3M
CPU: 1.567s
CGroup: /system.slice/tempo.service
└─15678 /usr/bin/tempo -config.file=/etc/tempo/config.yml
Verify Tempo is ready:
curl -s http://localhost:3200/ready
A healthy Tempo instance responds with:
ready
Step 7: Add Tempo as a Grafana Data Source
In Grafana, navigate to Connections > Data sources > Add data source and select Tempo. Set the URL to:
http://localhost:3200
Under Additional settings, enable TraceQL Search if available. Click Save & test – you should see “Successfully connected to Tempo.”
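If you manage Grafana declaratively, the same data source can be added with Grafana's provisioning mechanism instead of the UI. A minimal sketch (the file path assumes a default package-based Grafana install):

```yaml
# /etc/grafana/provisioning/datasources/tempo.yml (hypothetical path)
apiVersion: 1
datasources:
  - name: Tempo
    type: tempo
    access: proxy
    url: http://localhost:3200
```

Restart Grafana after adding the file so it picks up the new data source.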
Step 8: Send a Test Trace
Send a test trace to verify the entire pipeline works. This curl command sends a minimal OTLP trace via the HTTP endpoint:
curl -X POST http://localhost:4318/v1/traces \
  -H "Content-Type: application/json" \
  -d '{
    "resourceSpans": [{
      "resource": {
        "attributes": [{"key": "service.name", "value": {"stringValue": "test-service"}}]
      },
      "scopeSpans": [{
        "scope": {"name": "test"},
        "spans": [{
          "traceId": "01020304050607080102040810203040",
          "spanId": "0102040810203040",
          "name": "test-span",
          "kind": 1,
          "startTimeUnixNano": "1700000000000000000",
          "endTimeUnixNano": "1700000001000000000",
          "status": {"code": 1}
        }]
      }]
    }]
  }'
A successful submission returns an empty JSON response:
{}
You can search for this trace in Grafana by going to Explore, selecting the Tempo data source, and searching for the trace ID 01020304050607080102040810203040 or filtering by service.name = test-service.
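You can also query Tempo's search API directly with a TraceQL expression, without going through the Grafana UI. A sketch, assuming Tempo is reachable on localhost:

```shell
# Search for spans from the test service via Tempo's HTTP API.
# --data-urlencode handles URL-encoding of the TraceQL query string.
curl -sG http://localhost:3200/api/search \
  --data-urlencode 'q={resource.service.name="test-service"}'
```

A non-empty "traces" array in the JSON response confirms the trace was ingested and is searchable.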
Step 9: Verify Metrics Generation
If you configured the metrics_generator to push to Mimir, the RED metrics should start appearing after Tempo processes traces. Check the Mimir endpoint through Grafana by querying:
traces_spanmetrics_calls_total
This PromQL query shows the total span count by service, method, and status code. The metrics generator produces several useful metrics:
- traces_spanmetrics_calls_total – request count (Rate)
- traces_spanmetrics_latency_bucket – request duration histogram (Duration)
- traces_service_graph_request_total – service-to-service request count
- traces_service_graph_request_failed_total – service-to-service error count (Errors)
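As a sketch of how these metrics feed a RED dashboard, a per-service error-ratio query against Mimir might look like the following (the status_code label value follows Tempo's span-metrics convention; verify the label names against your own series):

```promql
# Fraction of failed requests per service over the last 5 minutes
sum(rate(traces_spanmetrics_calls_total{status_code="STATUS_CODE_ERROR"}[5m])) by (service)
  /
sum(rate(traces_spanmetrics_calls_total[5m])) by (service)
```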
Troubleshooting
Tempo fails to bind on port 4317
Port 4317 is the standard OTLP gRPC port. If Alloy is also running on the same host with an OTLP receiver, there will be a port conflict. Either disable the OTLP receiver in Alloy (since Tempo will handle trace ingestion directly) or change Alloy’s OTLP ports to 4320/4321 as shown in the Alloy article.
Check what is using the port:
sudo ss -tlnp | grep 4317
SELinux blocking Tempo from writing to /var/tempo
Check for AVC denials:
sudo ausearch -m avc -ts recent | grep tempo
If you see denials related to the data directory, reapply the SELinux contexts:
sudo restorecon -Rv /var/tempo
Metrics generator not producing RED metrics
Verify the generator WAL directory exists and is writable:
ls -laZ /var/tempo/generator/
Also check that Mimir is reachable from Tempo. The generator pushes metrics via remote_write, so Mimir must be running and accepting writes:
curl -s http://localhost:9009/ready
Check Tempo’s logs for remote_write errors:
sudo journalctl -u tempo --no-pager -n 30 | grep -i "remote_write\|error"
Traces not appearing in Grafana
Tempo has an ingestion delay – traces may take up to 30 seconds to become searchable after ingestion. If traces still do not appear after waiting, check the Tempo metrics:
curl -s http://localhost:3200/metrics | grep tempo_distributor_spans_received_total
If this counter is zero, no traces are reaching Tempo. Verify the sending application or Alloy is configured to push to the correct endpoint.
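If Alloy is the sender, a minimal trace-forwarding fragment in Alloy's configuration syntax might look like the following sketch (component labels are illustrative; the 4320 receiver port matches the conflict-avoidance advice above):

```alloy
// Receive OTLP from applications on a non-default port
// (avoids clashing with Tempo's own 4317) and forward to Tempo.
otelcol.receiver.otlp "default" {
  grpc {
    endpoint = "0.0.0.0:4320"
  }
  output {
    traces = [otelcol.exporter.otlp.tempo.input]
  }
}

otelcol.exporter.otlp "tempo" {
  client {
    endpoint = "localhost:4317"
    tls {
      insecure = true
    }
  }
}
```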
Conclusion
Tempo is now running on Rocky Linux 10 / AlmaLinux 10, accepting traces via OTLP and Jaeger protocols, storing them locally with 31-day retention, and generating RED metrics that get pushed to Mimir. This completes the tracing layer of the Grafana LGTM stack. With Alloy collecting and forwarding telemetry, Loki storing logs, Mimir storing metrics, and Tempo storing traces, you have full observability across your infrastructure.