Redis is an in-memory data structure store used as a database, cache, message broker, and streaming engine. It handles caching layers, session storage, pub/sub messaging, job queues, rate limiting, and real-time leaderboards with sub-millisecond latency. If your application stack runs on Ubuntu 24.04 or Debian 13, this guide walks through installing Redis from the official APT repository, hardening it for production, setting up persistence, configuring it as a session and cache backend for PHP and Python frameworks, deploying Redis Sentinel for high availability, and tuning the kernel for optimal throughput.
The default Ubuntu and Debian repositories ship older Redis versions that miss important features like ACLs, threaded I/O, and native TLS support. We will use the official Redis APT repository maintained by Redis Ltd. to get the latest stable release.
Prerequisites
Before you start, confirm these are in place:
- A running Ubuntu 24.04 LTS or Debian 13 (Trixie) server with root or sudo privileges
- System packages updated to the latest versions
- A working internet connection for downloading packages
- UFW or iptables available for firewall configuration
Update your system first:
sudo apt update && sudo apt upgrade -y
What Redis Is and Why It Matters
Redis (Remote Dictionary Server) stores data structures entirely in RAM, which is why read and write operations complete in microseconds rather than the milliseconds you get from disk-backed databases. It supports strings, hashes, lists, sets, sorted sets, streams, bitmaps, and hyperloglogs natively. That flexibility makes it suitable for multiple roles in a single deployment:
- Caching layer – Store query results, rendered HTML fragments, or API responses to reduce load on your primary database.
- Session storage – Keep user sessions in Redis instead of on-disk files. This is essential when you run multiple application servers behind a load balancer, unless you pin each user to one server with sticky sessions.
- Pub/Sub messaging – Publish events to channels that multiple subscribers consume in real time. Useful for chat systems, notifications, and live dashboards.
- Job queues – Frameworks like Sidekiq (Ruby), Laravel Horizon (PHP), Celery (Python), and BullMQ (Node.js) use Redis lists and streams as their queue backend.
- Rate limiting – Track request counts per IP or API key using atomic increment operations with TTL-based expiry.
- Leaderboards and counters – Sorted sets let you maintain ranked lists that update in O(log N) time.
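The rate-limiting pattern above can be sketched in Python. A plain dict stands in for Redis here so the example runs without a server; in a real deployment the counter lives in Redis via atomic INCR plus EXPIRE so every application server shares it. The class name and numbers are illustrative, not from any library:

```python
import time

class FixedWindowLimiter:
    """Stand-in model of the Redis INCR + EXPIRE rate-limiting pattern.
    A dict replaces Redis so the sketch runs without a server."""

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.counters = {}  # key -> (count, window_start)

    def allow(self, key, now=None):
        now = time.time() if now is None else now
        count, start = self.counters.get(key, (0, now))
        if now - start >= self.window:
            # Window elapsed: in Redis this is the key expiring via EXPIRE.
            count, start = 0, now
        count += 1  # In Redis: INCR, which is atomic across clients.
        self.counters[key] = (count, start)
        return count <= self.limit

limiter = FixedWindowLimiter(limit=3, window_seconds=60)
verdicts = [limiter.allow("ip:203.0.113.7", now=t) for t in (0, 1, 2, 3, 61)]
# Three requests pass, the fourth is throttled, and the window reset
# at t=61 lets traffic through again.
```

The fixed-window approach is the simplest variant; sliding-window schemes built on sorted sets give smoother limits at the cost of more state.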
Redis is single-threaded for command execution, which eliminates locking overhead. Starting with Redis 6, I/O threading handles network reads and writes across multiple cores while command processing stays on one thread. Redis 7 improved memory efficiency further and introduced Redis Functions, which keep server-side Lua logic loaded under stable names as a more manageable alternative to ad-hoc EVAL scripts.
Step 1: Install Redis from the Official APT Repository
Install the prerequisite packages needed to add the repository over HTTPS:
sudo apt install -y curl gnupg lsb-release
Import the official Redis GPG signing key:
curl -fsSL https://packages.redis.io/gpg | sudo gpg --dearmor -o /usr/share/keyrings/redis-archive-keyring.gpg
Add the Redis repository to your sources list:
echo "deb [signed-by=/usr/share/keyrings/redis-archive-keyring.gpg] https://packages.redis.io/deb $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/redis.list
Update the package index and install Redis:
sudo apt update
sudo apt install -y redis
This installs the redis-server and redis-cli binaries along with the systemd service unit.
Step 2: Verify the Installation and Start Redis
Check the installed Redis version:
redis-server --version
You should see output showing Redis 7.x or newer. If the version reports 6.x or lower, the system pulled the package from the distro repository instead of the official one. Go back and verify the repository configuration.
Enable Redis to start on boot and start the service:
sudo systemctl enable redis-server
sudo systemctl start redis-server
Confirm it is running:
sudo systemctl status redis-server
The output should show active (running). If you see failed, check the journal with journalctl -xeu redis-server for the exact error.
Step 3: Configure redis.conf for Production
The main configuration file lives at /etc/redis/redis.conf. Before editing, create a backup:
sudo cp /etc/redis/redis.conf /etc/redis/redis.conf.bak
Open the file in your editor:
sudo nano /etc/redis/redis.conf
Bind Address
By default, Redis binds to 127.0.0.1 ::1, which means it only accepts connections from localhost. If other servers need to connect (for example, application servers in a private network), add the server’s private IP:
bind 127.0.0.1 ::1 10.0.1.50
Replace 10.0.1.50 with your server’s actual private IP. Never bind to 0.0.0.0 on a public-facing server without authentication and firewall rules in place.
Authentication
Set a strong password with the requirepass directive:
requirepass YourStr0ngP@sswordHere
Generate a random password with openssl rand -base64 32 and paste it here. Every client must authenticate with AUTH YourStr0ngP@sswordHere before running commands.
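If you prefer to generate the password from Python rather than openssl, the standard library's secrets module produces comparable randomness. A small convenience sketch:

```python
import secrets

# URL-safe random string built from 32 bytes of entropy, comparable in
# strength to `openssl rand -base64 32`.
password = secrets.token_urlsafe(32)
print(password)
```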
Memory Limit and Eviction Policy
Set the maximum memory Redis is allowed to use. On a server with 4 GB of RAM where Redis is the only significant workload, reserving 2.5 to 3 GB is reasonable:
maxmemory 3gb
maxmemory-policy allkeys-lru
The allkeys-lru policy evicts the least recently used keys when memory is full. Other useful policies:
- volatile-lru – Evict only keys that have a TTL set, using LRU ordering.
- allkeys-lfu – Evict the least frequently used keys across the entire keyspace.
- noeviction – Return errors on writes when memory is full. Use this if you need Redis as a persistent store and want writes to fail rather than lose data.
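The behavior of an allkeys-lru budget is easy to see in a small model. This OrderedDict sketch is an idealized LRU; real Redis approximates LRU by sampling a handful of keys per eviction rather than tracking exact recency order:

```python
from collections import OrderedDict

class LRUCacheSketch:
    """Idealized model of allkeys-lru: exceeding the key budget drops
    the least recently used key. Redis itself samples keys rather than
    keeping a perfect ordering, so treat this as an illustration."""

    def __init__(self, max_keys):
        self.max_keys = max_keys
        self.data = OrderedDict()

    def set(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.max_keys:
            self.data.popitem(last=False)  # evict least recently used

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # a read counts as "recently used"
        return self.data[key]

cache = LRUCacheSketch(max_keys=2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")     # touching "a" makes "b" the LRU candidate
cache.set("c", 3)  # over budget: "b" is evicted
```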
After editing, restart Redis to apply changes:
sudo systemctl restart redis-server
Step 4: Persistence – RDB Snapshots vs AOF
Redis offers two persistence mechanisms. You can use one or both depending on your durability requirements.
RDB Snapshots
RDB creates point-in-time snapshots of your dataset at configured intervals. It is the default persistence method. The relevant directives in redis.conf:
save 900 1
save 300 10
save 60 10000
dbfilename dump.rdb
dir /var/lib/redis
This means Redis writes a snapshot if at least 1 key changed in 900 seconds, 10 keys changed in 300 seconds, or 10000 keys changed in 60 seconds. (Recent Redis releases express the default as a single line, save 3600 1 300 100 60 10000, but multiple save lines still work.) RDB files are compact and ideal for backups, disaster recovery, and fast restarts. The tradeoff is that you lose all writes since the last snapshot if Redis crashes.
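The decision rule the save lines encode is simple enough to state as code. This is an illustrative model, not Redis source:

```python
# Each tuple is (seconds, min_changes); Redis snapshots when ANY rule holds.
SAVE_RULES = [(900, 1), (300, 10), (60, 10000)]

def should_snapshot(elapsed_seconds, changes, rules=SAVE_RULES):
    """True if the configured save rules would trigger a background save."""
    return any(elapsed_seconds >= s and changes >= c for s, c in rules)
```

For example, 61 seconds with 9,999 changes triggers nothing, while the 10,000th change within that window does.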
Append-Only File (AOF)
AOF logs every write operation. On restart, Redis replays the log to reconstruct the dataset. Enable it with:
appendonly yes
appendfilename "appendonly.aof"
appendfsync everysec
Note that on Redis 7 and later, appendfilename is treated as a base name: the actual AOF files (a base file plus incremental segments and a manifest) are stored in an appendonlydir subdirectory of dir.
The appendfsync options are:
- always – Fsync after every write. Safest, but slowest.
- everysec – Fsync once per second. Best balance of durability and performance. You lose at most one second of writes on a crash.
- no – Let the OS decide when to flush. Fastest, but you may lose more data.
When to Use Which
If Redis is purely a cache and you can tolerate data loss on restart, disable AOF and rely on RDB snapshots (or disable persistence entirely with save ""). If Redis holds data you cannot regenerate (queues, session data for financial transactions, counters), enable both RDB and AOF. RDB gives you compact backups, and AOF gives you write-level durability.
Step 5: Test with redis-cli
Connect to Redis locally. If you set a password, pass it with -a:
redis-cli -a YourStr0ngP@sswordHere
Run a basic health check:
127.0.0.1:6379> PING
PONG
Set and retrieve a key:
127.0.0.1:6379> SET greeting "Hello from Redis"
OK
127.0.0.1:6379> GET greeting
"Hello from Redis"
List all keys (use only on development servers, never in production with large datasets):
127.0.0.1:6379> KEYS *
In production, use SCAN instead of KEYS to avoid blocking the server.
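SCAN's cursor contract — each call returns one small batch plus a cursor to resume from, with 0 meaning the iteration is complete — can be modeled over a plain list. Real Redis walks its hash table with a reverse-binary cursor, so this index-based version only illustrates the incremental, non-blocking shape:

```python
def scan_sketch(keys, cursor, count=2):
    """Toy model of the SCAN cursor protocol: return a small batch and
    the cursor to pass back on the next call; 0 signals completion."""
    batch = keys[cursor:cursor + count]
    next_cursor = cursor + count if cursor + count < len(keys) else 0
    return next_cursor, batch

keys = ["user:1", "user:2", "user:3", "user:4", "user:5"]
cursor, seen = 0, []
while True:
    cursor, batch = scan_sketch(keys, cursor)
    seen.extend(batch)
    if cursor == 0:
        break
# Every key is visited, but no single call blocks on the full keyspace.
```

This is why SCAN is safe on large production datasets: each call does a bounded amount of work, so the event loop stays responsive between batches.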
View server statistics:
127.0.0.1:6379> INFO server
127.0.0.1:6379> INFO memory
127.0.0.1:6379> INFO stats
Watch commands in real time (useful for debugging what your application sends to Redis):
redis-cli -a YourStr0ngP@sswordHere MONITOR
Press Ctrl+C to stop monitoring. Do not leave MONITOR running in production because it adds overhead to every command.
Step 6: Firewall Configuration with UFW
If Redis listens on a network interface (not just localhost), restrict access to specific IP addresses. Allow connections from your application server at 10.0.1.100:
sudo ufw allow from 10.0.1.100 to any port 6379 proto tcp
If you have multiple application servers, add a rule for each:
sudo ufw allow from 10.0.1.101 to any port 6379 proto tcp
sudo ufw allow from 10.0.1.102 to any port 6379 proto tcp
Deny all other traffic to port 6379:
sudo ufw deny 6379/tcp
Verify the rules:
sudo ufw status numbered
Make sure the allow rules appear before the deny rule in the list. UFW processes rules in order, and the first match wins.
Step 7: Redis as a PHP Session Store
Storing PHP sessions in Redis is the standard approach when you run multiple web servers behind a load balancer. Any server can serve any request because session data lives in a shared store.
Install the PHP Redis extension:
sudo apt install -y php-redis
Open your php.ini file. The location depends on your SAPI. For PHP-FPM with PHP 8.3:
sudo nano /etc/php/8.3/fpm/php.ini
Find and update the session handler directives:
session.save_handler = redis
session.save_path = "tcp://127.0.0.1:6379?auth=YourStr0ngP@sswordHere"
If Redis runs on a remote server, replace 127.0.0.1 with its IP address. You can also specify the database number:
session.save_path = "tcp://10.0.1.50:6379?auth=YourStr0ngP@sswordHere&database=2"
Restart PHP-FPM to apply the changes:
sudo systemctl restart php8.3-fpm
Verify sessions are being stored in Redis by logging into your application and checking:
redis-cli -a YourStr0ngP@sswordHere --scan --pattern "PHPREDIS_SESSION:*"
The --scan flag iterates with SCAN under the hood, so this is safe to run even on a busy server, unlike a bare KEYS call.
Step 8: Redis as a Cache Backend for Django and Laravel
Django Configuration
Install the django-redis package:
pip install django-redis
Add the cache backend to your settings.py:
CACHES = {
"default": {
"BACKEND": "django_redis.cache.RedisCache",
"LOCATION": "redis://:YourStr0ngP@[email protected]:6379/1",
"OPTIONS": {
"CLIENT_CLASS": "django_redis.client.DefaultClient",
}
}
}
SESSION_ENGINE = "django.contrib.sessions.backends.cache"
SESSION_CACHE_ALIAS = "default"
The /1 at the end selects Redis database 1. Redis supports 16 databases (0 through 15) by default. Using separate databases for cache and sessions keeps things organized.
Laravel Configuration
Laravel supports Redis out of the box. Install the predis package or use the phpredis extension (already installed in Step 7):
composer require predis/predis
In your .env file, set:
CACHE_DRIVER=redis
SESSION_DRIVER=redis
QUEUE_CONNECTION=redis
REDIS_HOST=127.0.0.1
REDIS_PASSWORD=YourStr0ngP@sswordHere
REDIS_PORT=6379
Laravel’s config/database.php file already contains a redis section. The .env values above map directly to it. (On Laravel 11 and later, the cache variable is named CACHE_STORE rather than CACHE_DRIVER.) Clear the config cache after making changes:
php artisan config:clear
php artisan cache:clear
Step 9: Redis Sentinel for High Availability
Redis Sentinel monitors your Redis instances, handles automatic failover when a master goes down, and acts as a configuration provider so clients always know which node is the current master. A production Sentinel deployment requires at least three nodes to form a quorum.
Architecture
The setup uses three servers:
- Node 1 (10.0.1.50) – Redis master + Sentinel
- Node 2 (10.0.1.51) – Redis replica + Sentinel
- Node 3 (10.0.1.52) – Redis replica + Sentinel
Install Redis on all three nodes using the steps from Step 1.
Configure the Master (Node 1)
On Node 1, edit /etc/redis/redis.conf:
bind 127.0.0.1 10.0.1.50
requirepass YourStr0ngP@sswordHere
masterauth YourStr0ngP@sswordHere
The masterauth directive is needed so that after a failover, the old master can authenticate against the new master when it comes back as a replica.
Configure the Replicas (Nodes 2 and 3)
On both Node 2 and Node 3, edit /etc/redis/redis.conf:
bind 127.0.0.1 10.0.1.51
requirepass YourStr0ngP@sswordHere
masterauth YourStr0ngP@sswordHere
replicaof 10.0.1.50 6379
Adjust the bind address to 10.0.1.52 on Node 3. Restart Redis on all three nodes after making changes.
Configure Sentinel on All Three Nodes
Create the Sentinel configuration file on each node:
sudo nano /etc/redis/sentinel.conf
Add the following content (identical on all three nodes):
port 26379
daemonize yes
pidfile /var/run/redis/redis-sentinel.pid
logfile /var/log/redis/redis-sentinel.log
sentinel monitor mymaster 10.0.1.50 6379 2
sentinel auth-pass mymaster YourStr0ngP@sswordHere
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
sentinel parallel-syncs mymaster 1
The trailing 2 on the sentinel monitor line is the quorum. It means two out of three Sentinels must agree that the master is down before failover begins. The down-after-milliseconds value of 5000 means Sentinel considers the master unreachable if it does not respond to PING within 5 seconds.
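Two thresholds actually govern a failover: the configured quorum marks the master objectively down, but the Sentinel that leads the failover must also win votes from a strict majority of all known Sentinels. A simplified sketch of that decision, assuming the same set of Sentinels votes in both phases:

```python
def failover_can_proceed(sentinels_agreeing_down, total_sentinels, quorum):
    """Quorum triggers the objective-down state; the failover itself
    still requires a majority of all Sentinels for leader election.
    Simplified: assumes the agreeing Sentinels also vote for the leader."""
    majority = total_sentinels // 2 + 1
    return (sentinels_agreeing_down >= quorum
            and sentinels_agreeing_down >= majority)
```

With three Sentinels and a quorum of 2, two agreeing Sentinels can complete a failover; with five Sentinels and a quorum of 2, two votes mark the master down but cannot authorize the failover, since the majority there is three.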
Set the correct file permissions and start Sentinel on each node:
sudo chown redis:redis /etc/redis/sentinel.conf
sudo redis-sentinel /etc/redis/sentinel.conf
Verify the Sentinel is running and sees the master:
redis-cli -p 26379 SENTINEL master mymaster
You should see the master’s IP, port, and its current status. Check that replicas are detected:
redis-cli -p 26379 SENTINEL replicas mymaster
Step 10: Performance Tuning
Redis performs well out of the box, but kernel-level settings can make a noticeable difference under high throughput. Apply these tunings on your Redis server.
vm.overcommit_memory
Redis uses fork() to create RDB snapshots and rewrite AOF files. The child shares the parent’s memory copy-on-write, but Linux’s overcommit accounting can still refuse the fork if it cannot guarantee the full allocation. Setting overcommit_memory to 1 tells Linux to always allow the fork:
sudo sysctl vm.overcommit_memory=1
Make it persistent across reboots:
echo "vm.overcommit_memory=1" | sudo tee -a /etc/sysctl.conf
Disable Transparent Hugepages
Transparent Hugepages (THP) cause latency spikes in Redis because the kernel periodically defragments memory in the background. Redis will log a warning at startup if THP is enabled. Disable it:
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
To make this persistent, create a systemd service that runs at boot:
sudo bash -c 'cat > /etc/systemd/system/disable-thp.service << EOF
[Unit]
Description=Disable Transparent Hugepages
Before=redis-server.service
[Service]
Type=oneshot
ExecStart=/bin/sh -c "echo never > /sys/kernel/mm/transparent_hugepage/enabled"
[Install]
WantedBy=multi-user.target
EOF'
Enable the service:
sudo systemctl daemon-reload
sudo systemctl enable disable-thp
TCP Backlog
Redis defaults to a tcp-backlog of 511, but Linux may silently cap it at 128. Increase the kernel’s somaxconn and tcp_max_syn_backlog:
sudo sysctl net.core.somaxconn=65535
sudo sysctl net.ipv4.tcp_max_syn_backlog=65535
Persist these settings:
echo "net.core.somaxconn=65535" | sudo tee -a /etc/sysctl.conf
echo "net.ipv4.tcp_max_syn_backlog=65535" | sudo tee -a /etc/sysctl.conf
In redis.conf, raise tcp-backlog to match, since the effective backlog is the smaller of this value and somaxconn:
tcp-backlog 65535
Step 11: Security Hardening
Rename Dangerous Commands
Some Redis commands are dangerous in production. FLUSHALL wipes the entire dataset, FLUSHDB wipes the current database, CONFIG can change settings at runtime, and DEBUG can crash the server. Rename them to make accidental or malicious use harder:
rename-command FLUSHALL ""
rename-command FLUSHDB ""
rename-command CONFIG "REDIS_CONFIG_b7d9a2"
rename-command DEBUG ""
rename-command SHUTDOWN "REDIS_SHUTDOWN_e4f1c8"
Setting a command to an empty string disables it entirely. Setting it to a random string means only operators who know the alias can use it. Add these lines to /etc/redis/redis.conf and restart Redis.
TLS Encryption (Redis 6+)
Redis 6 and later support native TLS. This is important when Redis traffic crosses networks that are not fully trusted. Generate or obtain TLS certificates (a self-signed CA works for internal services), then configure redis.conf:
tls-port 6380
port 0
tls-cert-file /etc/redis/tls/redis.crt
tls-key-file /etc/redis/tls/redis.key
tls-ca-cert-file /etc/redis/tls/ca.crt
tls-auth-clients optional
Setting port 0 disables the non-TLS listener. The tls-port 6380 directive tells Redis to accept TLS connections on port 6380. Adjust your firewall rules accordingly.
Connect with redis-cli using TLS:
redis-cli --tls --cert /etc/redis/tls/redis.crt --key /etc/redis/tls/redis.key --cacert /etc/redis/tls/ca.crt -p 6380
ACLs (Redis 6+)
Instead of a single global password, Redis 6 introduced Access Control Lists that let you create users with specific command and key permissions:
ACL SETUSER appuser on >AppPassword123 ~app:* +get +set +del +expire +ttl
ACL SETUSER readonly on >ReadOnlyPass ~* +get +mget +info +scan
The appuser can only run GET, SET, DEL, EXPIRE, and TTL commands, and only on keys matching the app:* pattern. The readonly user can read any key but cannot write. Save the ACL configuration to a file (this requires an aclfile directive in redis.conf, such as aclfile /etc/redis/users.acl):
ACL SAVE
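The permission model above — a command allow-list combined with glob-style key patterns — can be approximated with fnmatch. This is a simplified illustration of the semantics, not how Redis evaluates ACLs internally:

```python
from fnmatch import fnmatchcase

def acl_permits(key_patterns, allowed_commands, command, key):
    """Allow a command only if it is on the user's command list AND the
    key matches one of the user's glob-style key patterns."""
    return (command.lower() in allowed_commands
            and any(fnmatchcase(key, p) for p in key_patterns))

# Mirrors the appuser definition above: ~app:* +get +set +del +expire +ttl
appuser = (["app:*"], {"get", "set", "del", "expire", "ttl"})
```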
Step 12: Monitoring Redis
INFO Command
The INFO command is the primary monitoring tool built into Redis. Key sections to watch:
redis-cli -a YourStr0ngP@sswordHere INFO memory
Look at used_memory_human (current memory usage), used_memory_peak_human (peak usage), and mem_fragmentation_ratio. A fragmentation ratio above 1.5 suggests Redis is using significantly more RSS memory than it needs. A ratio below 1.0 means Redis is swapping to disk, which is bad for performance.
redis-cli -a YourStr0ngP@sswordHere INFO stats
Watch keyspace_hits and keyspace_misses to calculate your cache hit ratio. A healthy cache should have a hit ratio above 90%. The formula is: hits / (hits + misses) * 100.
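INFO output is plain field:value lines, so computing the hit ratio takes only a few lines of parsing. The sample text below is made up for illustration:

```python
def parse_info(info_text):
    """Turn INFO's field:value lines into a dict, skipping # section headers."""
    stats = {}
    for line in info_text.splitlines():
        if ":" in line and not line.startswith("#"):
            field, value = line.split(":", 1)
            stats[field.strip()] = value.strip()
    return stats

def hit_ratio_percent(stats):
    """Apply the formula hits / (hits + misses) * 100."""
    hits = int(stats["keyspace_hits"])
    misses = int(stats["keyspace_misses"])
    total = hits + misses
    return 100.0 * hits / total if total else 0.0

sample = "# Stats\nkeyspace_hits:9200\nkeyspace_misses:800\n"
ratio = hit_ratio_percent(parse_info(sample))  # comfortably above 90%
```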
Latency Monitoring
Enable the latency monitor by setting a threshold in milliseconds. Any event that takes longer than this threshold gets recorded:
redis-cli -a YourStr0ngP@sswordHere CONFIG SET latency-monitor-threshold 5
View recent latency events:
redis-cli -a YourStr0ngP@sswordHere LATENCY LATEST
For a quick latency baseline, use the built-in latency test:
redis-cli -a YourStr0ngP@sswordHere --latency
This continuously pings the server and reports min, max, and average latency. On a local connection, you should see sub-millisecond values.
Slow Log
The slow log captures commands that exceed a time threshold. Configure it in redis.conf:
slowlog-log-slower-than 10000
slowlog-max-len 128
The threshold is in microseconds. A value of 10000 means 10 milliseconds. View the slow log:
redis-cli -a YourStr0ngP@sswordHere SLOWLOG GET 10
Each entry shows the command, its arguments, execution time, and the client that issued it. Use this to find queries that need optimization, such as KEYS * calls that should be replaced with SCAN, or SORT operations on large sets.
Step 13: Troubleshooting Common Issues
Redis Fails to Start
Check the systemd journal for the error message:
sudo journalctl -xeu redis-server
Common causes: a syntax error in redis.conf, the bind address does not exist on the system, or another process is already listening on port 6379. To find what is using the port:
sudo ss -tlnp | grep 6379
Cannot Connect from Remote Server
Walk through the checklist in order:
- Confirm bind in redis.conf includes the server’s private IP, not just 127.0.0.1.
- Confirm protected-mode is set to no, or that a password is configured (protected mode only rejects external clients when no authentication is set up).
- Verify the firewall allows the connecting IP on port 6379.
- Test connectivity with redis-cli -h 10.0.1.50 -a YourStr0ngP@sswordHere PING from the client machine.
High Memory Usage and OOM Kills
If Redis gets killed by the OOM killer, you will see messages in dmesg or /var/log/syslog. Causes and fixes:
- No maxmemory set – Redis grows without limit. Set maxmemory and an eviction policy.
- Large RDB fork – During background saves, Redis forks and the child shares memory copy-on-write. If many keys are modified while the save runs, the copied pages can push memory usage toward double the dataset size. Make sure the server has enough free memory to absorb the fork, or schedule saves during low-traffic periods.
- Memory fragmentation – Check INFO memory for mem_fragmentation_ratio. If it is above 1.5, consider restarting Redis during a maintenance window to reclaim fragmented memory. Redis 4 and later also support active defragmentation: CONFIG SET activedefrag yes.
Latency Spikes
If you see periodic latency spikes, the usual suspects are:
- RDB saves or AOF rewrites – These trigger fork(), which can pause Redis for milliseconds to seconds depending on the dataset size. Check INFO persistence for rdb_last_bgsave_time_sec and aof_last_rewrite_time_sec.
- Transparent Hugepages enabled – Follow the steps in Step 10 to disable THP.
- Slow commands – Check SLOWLOG GET for expensive operations. Commands like KEYS *, SORT on large sets, or SMEMBERS on a set with millions of elements block the event loop.
- Swapping – If mem_fragmentation_ratio is below 1.0 in INFO memory, Redis is using swap. Add more RAM or reduce the dataset size.
AOF File Corruption
If Redis refuses to start because the AOF file is corrupted (typically after a power loss), use the repair tool:
sudo redis-check-aof --fix /var/lib/redis/appendonly.aof
On Redis 7, which splits the AOF into multiple files, the files live under an appendonlydir subdirectory of /var/lib/redis; point redis-check-aof at the manifest file there instead.
This truncates the AOF at the first invalid command. You may lose a few writes, but the rest of the data will be intact.
Replication Lag
On replica nodes, check the replication offset:
redis-cli -a YourStr0ngP@sswordHere INFO replication
Look at master_repl_offset on the master and slave_repl_offset on the replica. The difference tells you how far behind the replica is. Large lag usually means the replica’s network connection is slow, the replica is overloaded with reads, or an RDB transfer is in progress. For Sentinel deployments, persistent replication lag may cause unnecessary failovers.
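On the master, INFO replication lists each replica as a slave0:, slave1:, … line of comma-separated key=value pairs, which makes lag straightforward to extract. The sample line below mirrors that format but uses made-up numbers:

```python
def parse_replica_line(line):
    """Parse one slaveN: line from the master's INFO replication output."""
    _, fields = line.split(":", 1)
    return dict(item.split("=", 1) for item in fields.split(","))

def lag_bytes(master_repl_offset, replica_fields):
    """Bytes of replication stream the replica has not yet processed."""
    return master_repl_offset - int(replica_fields["offset"])

line = "slave0:ip=10.0.1.51,port=6379,state=online,offset=439,lag=0"
replica = parse_replica_line(line)
```

A lag that stays near zero is healthy; a steadily growing byte gap is the signal to investigate network throughput or replica load.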
Summary
You now have Redis installed from the official repository on Ubuntu 24.04 or Debian 13, configured for production with authentication, memory limits, and persistence. The setup covers firewall rules to restrict network access, PHP session storage and Django/Laravel cache backends, Redis Sentinel for automatic failover across three nodes, kernel tuning for stable performance under load, security hardening with renamed commands and TLS, and monitoring with the built-in slow log and latency tools. From here, consider setting up automated backups of your RDB snapshots to off-server storage, and integrate Redis metrics into your existing monitoring stack (Prometheus with the Redis exporter, or Grafana with the Redis data source) for long-term visibility into memory, hit rates, and command throughput.