Grafana Loki is a log aggregation system designed to be cost-effective and easy to operate. Unlike traditional log platforms that index the full text of every log line, Loki only indexes metadata labels – making it significantly cheaper to run at scale. This guide walks through installing Loki on Rocky Linux 10 or AlmaLinux 10 from the Grafana RPM repository, configuring it for production use with local storage and 31-day retention, and connecting it to Grafana Alloy for log ingestion.
Prerequisites
You will need the following before starting:
- Rocky Linux 10 or AlmaLinux 10 server with root or sudo access
- Grafana Alloy installed and configured to send logs – see Install Grafana Alloy on Rocky Linux / AlmaLinux
- Grafana instance for querying logs
- SELinux in enforcing mode (the default)
- At least 10 GB of free disk space for log storage
For the Ubuntu/Debian version, see Install Grafana Alloy on Ubuntu / Debian, which also covers the Debian-based Loki setup.
Step 1: Add the Grafana RPM Repository
If you already have the Grafana repo configured from a previous Alloy or Grafana installation, skip this step. Otherwise, add it:
sudo tee /etc/yum.repos.d/grafana.repo <<'REPO'
[grafana]
name=grafana
baseurl=https://rpm.grafana.com
repo_gpgcheck=1
enabled=1
gpgcheck=1
gpgkey=https://rpm.grafana.com/gpg.key
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt
REPO
Step 2: Install Grafana Loki
Install the Loki package:
sudo dnf install -y loki
Check the installed version:
loki --version
The output confirms the installed version:
loki, version 3.6.7 (branch: HEAD, revision: abcdef123)
  build user:
  build date:
  go version:       go1.23.8
  platform:         linux/amd64
Step 3: Create Storage Directories
Loki needs directories for chunk storage, index files, and the WAL (write-ahead log). Create them with proper ownership:
sudo mkdir -p /var/lib/loki/{chunks,index,index_cache,wal,compactor,rules}
sudo chown -R loki:loki /var/lib/loki
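The brace expansion above creates every subdirectory in a single call. A quick sketch using a throwaway temporary path (so it does not touch /var/lib/loki) shows the effect, including the index cache directory the config below references:

```shell
# Brace expansion expands to one mkdir argument per subdirectory.
demo_root=$(mktemp -d)
mkdir -p "$demo_root"/{chunks,index,index_cache,wal,compactor,rules}
ls "$demo_root"
```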
Step 4: Configure Loki for Production
The default config that ships with the RPM is minimal. Replace it with a production-ready configuration that uses local filesystem storage with 31-day retention. Open the config file:
sudo vi /etc/loki/config.yml
Add the following production configuration:
auth_enabled: false

server:
  http_listen_port: 3100
  grpc_listen_port: 9096
  log_level: info

common:
  path_prefix: /var/lib/loki
  storage:
    filesystem:
      chunks_directory: /var/lib/loki/chunks
      rules_directory: /var/lib/loki/rules
  replication_factor: 1
  ring:
    instance_addr: 127.0.0.1
    kvstore:
      store: inmemory

schema_config:
  configs:
    - from: 2024-01-01
      store: tsdb
      object_store: filesystem
      schema: v13
      index:
        prefix: index_
        period: 24h

storage_config:
  tsdb_shipper:
    active_index_directory: /var/lib/loki/index
    cache_location: /var/lib/loki/index_cache

limits_config:
  retention_period: 744h
  reject_old_samples: true
  reject_old_samples_max_age: 168h
  max_query_series: 5000
  max_query_parallelism: 2

compactor:
  working_directory: /var/lib/loki/compactor
  compaction_interval: 10m
  retention_enabled: true
  retention_delete_delay: 2h
  retention_delete_worker_count: 150
  delete_request_store: filesystem

query_range:
  results_cache:
    cache:
      embedded_cache:
        enabled: true
        max_size_mb: 100

analytics:
  reporting_enabled: false
Key settings explained:
- retention_period: 744h – keeps logs for 31 days (744 hours). Adjust based on your storage budget
- schema: v13 with tsdb – the latest and most efficient index format
- replication_factor: 1 – single-node deployment. Increase for HA setups
- reject_old_samples_max_age: 168h – rejects log entries older than 7 days at ingest time
- compactor with retention_enabled – actually enforces the retention period by deleting old data
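Loki expects retention durations in hours. When adjusting the policy, a one-liner converts a retention window in days to the value for retention_period (the variable name here is just illustrative):

```shell
# Convert a retention policy in days to the hours value Loki expects.
retention_days=31
echo "retention_period: $((retention_days * 24))h"
# prints "retention_period: 744h"
```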
Step 5: Configure SELinux
Loki listens on ports 3100 (HTTP API) and 9096 (gRPC). SELinux may block these non-standard ports. Add the port contexts:
sudo semanage port -a -t http_port_t -p tcp 3100
sudo semanage port -a -t http_port_t -p tcp 9096
If the semanage command is not found, install the policy utilities first:
sudo dnf install -y policycoreutils-python-utils
Verify the port contexts were added:
sudo semanage port -l | grep -E '3100|9096'
Both ports should appear in the output:
http_port_t tcp 3100, 9096, 80, 81, 443, 488, 8008, 8009, 8443, 9000
Also ensure the Loki data directory has the correct SELinux context:
sudo semanage fcontext -a -t var_lib_t "/var/lib/loki(/.*)?"
sudo restorecon -Rv /var/lib/loki
Step 6: Configure Firewall
Open port 3100 for the Loki HTTP API. The gRPC port (9096) is typically only needed for inter-component communication in clustered setups:
sudo firewall-cmd --permanent --add-port=3100/tcp
sudo firewall-cmd --reload
Verify the port is open:
sudo firewall-cmd --list-ports
Step 7: Start and Enable Loki
Start the Loki service:
sudo systemctl enable --now loki
Check the service status:
sudo systemctl status loki
The service should show active (running):
● loki.service - Loki service
     Loaded: loaded (/usr/lib/systemd/system/loki.service; enabled; preset: disabled)
     Active: active (running) since Mon 2026-03-24 10:20:15 UTC; 3s ago
       Docs: https://grafana.com/docs/loki/latest/
   Main PID: 13456 (loki)
      Tasks: 9 (limit: 23102)
     Memory: 62.4M
        CPU: 0.856s
     CGroup: /system.slice/loki.service
             └─13456 /usr/bin/loki -config.file=/etc/loki/config.yml
Verify Loki is ready to accept logs:
curl -s http://localhost:3100/ready
A healthy Loki instance responds with:
ready
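If you script this rollout, it can help to block until Loki reports ready before moving on to the next step. A small sketch, assuming curl is available (the function name and retry count are my own choices):

```shell
# Poll an HTTP endpoint until it returns the expected "ready" body.
# Usage: wait_for_ready <url> [attempts]
wait_for_ready() {
  url=$1
  attempts=${2:-30}
  for i in $(seq "$attempts"); do
    if [ "$(curl -s "$url")" = "ready" ]; then
      echo "ready after $i attempt(s)"
      return 0
    fi
    sleep 1
  done
  echo "gave up after $attempts attempts" >&2
  return 1
}

# Example: wait_for_ready http://localhost:3100/ready
```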
Step 8: Configure Alloy to Send Logs to Loki
If you followed the Alloy installation guide, the loki.write component is already configured to push to http://localhost:3100/loki/api/v1/push. Restart Alloy to ensure it connects:
sudo systemctl restart alloy
After a few seconds, verify logs are being ingested by querying Loki’s label endpoint:
curl -s http://localhost:3100/loki/api/v1/labels | python3 -m json.tool
You should see labels from the Alloy configuration:
{
    "status": "success",
    "data": [
        "filename",
        "instance",
        "job",
        "log_type",
        "unit"
    ]
}
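To script against this endpoint rather than eyeball the JSON, you can pull out just the label names. A sketch using python3 on a sample response (the heredoc-style variable stands in for the live curl output above):

```shell
# Extract label names from a Loki labels response.
# The sample string stands in for: curl -s http://localhost:3100/loki/api/v1/labels
response='{"status":"success","data":["filename","instance","job","log_type","unit"]}'
echo "$response" | python3 -c 'import json, sys; print("\n".join(json.load(sys.stdin)["data"]))'
```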
Step 9: Add Loki as a Grafana Data Source
In your Grafana instance, go to Connections > Data sources > Add data source and select Loki. Set the URL to:
http://localhost:3100
If Grafana runs on a different server, replace localhost with the Loki server’s IP address. Click Save & test – you should see “Data source successfully connected.”
Step 10: Query Logs with LogQL
Go to Explore in Grafana and select the Loki data source. Here are some useful LogQL queries to get started.
View all system messages:
{job="varlogs", log_type="messages"}
Filter SSH authentication events from the secure log:
{job="varlogs", log_type="secure"} |= "sshd"
View journal logs from a specific systemd unit:
{job="journal", unit="nginx.service"}
Count errors per minute across all sources:
sum(rate({job=~".+"} |= "error" [1m]))
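These queries can also be run outside Grafana, against Loki's HTTP query API; the LogQL string just needs URL encoding. A sketch of building such a request URL (the endpoint path is Loki's query_range API; the query is one from above):

```shell
# URL-encode a LogQL query for Loki's /loki/api/v1/query_range endpoint.
query='{job="varlogs"} |= "error"'
encoded=$(python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1]))' "$query")
echo "http://localhost:3100/loki/api/v1/query_range?query=$encoded"
```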
Troubleshooting
Loki fails to start with “permission denied” on /var/lib/loki
Check directory ownership and SELinux contexts:
ls -laZ /var/lib/loki/
The owner should be loki:loki and the context should be var_lib_t. Fix both if needed:
sudo chown -R loki:loki /var/lib/loki
sudo restorecon -Rv /var/lib/loki
Loki returns “too many outstanding requests”
This happens when query load exceeds Loki’s capacity. For single-node deployments, reduce query parallelism in the config:
limits_config:
  max_query_parallelism: 1
Then restart Loki.
Logs not appearing in Grafana
First check that Loki is receiving data:
curl -s http://localhost:3100/metrics | grep loki_distributor_lines_received_total
If the counter is zero, Alloy is not sending logs. Check Alloy’s logs for connection errors:
sudo journalctl -u alloy --no-pager -n 20 | grep -i error
Disk space growing too fast
Check the compactor is running and retention is being enforced:
curl -s http://localhost:3100/metrics | grep loki_compactor_apply_retention_last_successful
If this metric is not updating, verify retention_enabled: true is set in the compactor section and restart Loki.
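The metric's value is a Unix timestamp, so to see when retention last ran you can convert it to a readable time. A sketch with a sample metric line standing in for the real /metrics output (the value shown is illustrative):

```shell
# Convert a Prometheus-style timestamp metric into a readable UTC time.
metric='loki_compactor_apply_retention_last_successful_run_timestamp_seconds 1.7356896e+09'
ts=$(echo "$metric" | awk '{printf "%d\n", $2}')   # 1.7356896e+09 -> 1735689600
date -u -d "@$ts" +%Y-%m-%dT%H:%M:%SZ
# prints 2025-01-01T00:00:00Z
```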
Conclusion
Loki is now running on Rocky Linux 10 / AlmaLinux 10 with production-ready local storage and 31-day retention. Combined with Grafana Alloy for log collection, you have a lightweight log aggregation pipeline that scales well for single-server and small-cluster deployments. The LogQL query language in Grafana gives you powerful filtering, aggregation, and alerting capabilities across all your collected logs.