Set Up LVS Load Balancer on RHEL 10 / Rocky Linux 10

Linux Virtual Server (LVS) is a kernel-level Layer 4 load balancer built directly into the Linux kernel through the IPVS (IP Virtual Server) module. Unlike application-level load balancers that process each request in userspace, IPVS operates inside the kernel’s networking stack, which lets it handle millions of concurrent connections with minimal CPU overhead. It is the technology behind some of the busiest web platforms in the world.

Original content from computingforgeeks.com - post 39933

This guide covers setting up LVS load balancing on RHEL 10 and Rocky Linux 10 using ipvsadm – the userspace tool for managing the IPVS table. We walk through both NAT and Direct Routing (DR) modes, add real servers, configure health checks with keepalived, set up firewall rules, and verify the load balancer is distributing traffic correctly.

Prerequisites

Before starting, make sure you have the following in place:

  • At least 3 servers running RHEL 10 or Rocky Linux 10 – one load balancer (director) and two or more real servers (backends)
  • Root or sudo access on all servers
  • A Virtual IP (VIP) address that is not assigned to any server – this is the IP clients connect to
  • All servers on the same network segment (required for Direct Routing mode)
  • Working firewalld configuration on all servers

Here is the network layout used in this guide:

Server Role                 Hostname       IP Address
Director (Load Balancer)    lvs-director   10.0.1.10
Real Server 1               web1           10.0.1.11
Real Server 2               web2           10.0.1.12
Virtual IP (VIP)            -              10.0.1.100

Step 1: Install ipvsadm on the LVS Director

The ipvsadm package provides the command-line tool for managing the IPVS virtual server table in the kernel. Install it on the director server only – the real servers do not need it.

sudo dnf install -y ipvsadm

Load the IPVS kernel module and confirm it is active:

sudo modprobe ip_vs
lsmod | grep ip_vs

You should see the ip_vs module loaded along with its dependencies:

ip_vs_wrr              16384  0
ip_vs_wlc              16384  0
ip_vs_rr               16384  0
ip_vs                 200704  6 ip_vs_rr,ip_vs_wlc,ip_vs_wrr
nf_conntrack          188416  1 ip_vs
nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs
libcrc32c              16384  3 nf_conntrack,ip_vs

Make the module load automatically on boot:

echo "ip_vs" | sudo tee /etc/modules-load.d/ip_vs.conf

Enable IP forwarding on the director since it needs to route traffic between clients and real servers:

echo "net.ipv4.ip_forward = 1" | sudo tee /etc/sysctl.d/99-ipvs.conf
sudo sysctl -p /etc/sysctl.d/99-ipvs.conf

Verify the forwarding is enabled:

sysctl net.ipv4.ip_forward

The output should confirm forwarding is active:

net.ipv4.ip_forward = 1

Step 2: Understand LVS Load Balancing Modes

IPVS supports three packet-forwarding modes. Each has different network requirements and performance characteristics. Choose the right mode before configuring anything.

NAT (Network Address Translation)

In NAT mode, the director rewrites both the destination address on incoming packets and the source address on return packets. All traffic flows through the director in both directions. This is the simplest mode to set up since real servers only need the director as their default gateway – no special configuration needed on backends. The downside is that the director becomes a bottleneck since every response packet passes through it.

  • Real servers use the director as their default gateway
  • Real servers can be on a different subnet
  • Scales to around 10-20 real servers before the director becomes a bottleneck

Direct Routing (DR)

In DR mode, the director only handles incoming packets. It changes the MAC address of the incoming frame to match the selected real server and forwards it on the local network. The real server responds directly to the client, bypassing the director entirely. This makes DR mode far more scalable since return traffic (which is typically much larger than requests) never touches the director.

  • All servers must be on the same Layer 2 network segment
  • Real servers must have the VIP configured on a loopback interface with ARP suppressed
  • Scales to hundreds of real servers

IP Tunneling (TUN)

In TUN mode, the director encapsulates the incoming packet inside a new IP packet (IP-in-IP tunnel) and forwards it to the real server. The real server decapsulates the packet, processes it, and responds directly to the client. TUN mode works across different networks and subnets, making it suitable for geographically distributed setups. It requires all real servers to support IP tunneling.
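
This guide does not configure TUN mode further, but the shape of a setup mirrors the DR steps with -i (tunneling) in place of -g. A rough sketch, based on common LVS-TUN configurations rather than this guide's tested steps; adjust addresses to your environment:

```
# On the director: -i selects IP tunneling for each real server
sudo ipvsadm -A -t 10.0.1.100:80 -s rr
sudo ipvsadm -a -t 10.0.1.100:80 -r 10.0.1.11:80 -i -w 1
sudo ipvsadm -a -t 10.0.1.100:80 -r 10.0.1.12:80 -i -w 1

# On each real server: load the ipip module, bind the VIP to the tunnel
# interface, and relax reverse-path filtering so decapsulated packets
# from the director are accepted
sudo modprobe ipip
sudo ip addr add 10.0.1.100/32 dev tunl0
sudo ip link set tunl0 up
sudo sysctl -w net.ipv4.conf.tunl0.rp_filter=0
sudo sysctl -w net.ipv4.conf.all.rp_filter=0
```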

Step 3: Configure LVS with NAT Mode

NAT mode is the best starting point since it requires minimal configuration on the real servers. The director handles all address translation.

First, assign the Virtual IP to the director’s network interface. Replace eth0 with your actual interface name:

sudo ip addr add 10.0.1.100/24 dev eth0 label eth0:0

Create a virtual service for HTTP traffic on the VIP. This tells IPVS to listen for connections on 10.0.1.100 port 80 and use weighted round-robin scheduling:

sudo ipvsadm -A -t 10.0.1.100:80 -s wrr

Add the real servers to the virtual service using NAT forwarding (the -m flag). Each server gets a weight that controls how much traffic it receives relative to others:

sudo ipvsadm -a -t 10.0.1.100:80 -r 10.0.1.11:80 -m -w 3
sudo ipvsadm -a -t 10.0.1.100:80 -r 10.0.1.12:80 -m -w 2

In this example, web1 (weight 3) receives 60% of connections and web2 (weight 2) receives 40%.
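
That split follows directly from the weights; each backend's long-run share is its weight divided by the sum of all weights. A quick shell sanity check:

```shell
# Long-run share per backend = weight / sum of weights
w1=3; w2=2
total=$((w1 + w2))
echo "web1: $((100 * w1 / total))%"   # web1: 60%
echo "web2: $((100 * w2 / total))%"   # web2: 40%
```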

On each real server, set the default gateway to the director’s IP so return traffic routes back through the director for NAT translation:

sudo ip route replace default via 10.0.1.10

Verify the IPVS table on the director:

sudo ipvsadm -Ln

The output should show the virtual service and both real servers with their weights and NAT forwarding method:

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.0.1.100:80 wrr
  -> 10.0.1.11:80                 Masq    3      0          0
  -> 10.0.1.12:80                 Masq    2      0          0

Step 4: Configure LVS with Direct Routing (DR) Mode

DR mode is the recommended approach for production deployments because return traffic goes directly from the real server to the client, eliminating the director as a bottleneck. It requires more setup on the real servers but delivers much better scalability.

If you followed Step 3 for NAT mode, clear the existing IPVS table first:

sudo ipvsadm -C

Configure the Director for DR Mode

Add the VIP to the director’s interface:

sudo ip addr add 10.0.1.100/32 dev eth0 label eth0:0

Note the /32 netmask – in DR mode, the VIP is a host route, not a network address. Create the virtual service with round-robin scheduling and add both real servers using the -g flag for direct routing (gatewaying):

sudo ipvsadm -A -t 10.0.1.100:80 -s rr
sudo ipvsadm -a -t 10.0.1.100:80 -r 10.0.1.11:80 -g -w 1
sudo ipvsadm -a -t 10.0.1.100:80 -r 10.0.1.12:80 -g -w 1

Configure Real Servers for DR Mode

Each real server needs the VIP bound to its loopback interface and ARP responses suppressed so it does not answer ARP requests for the VIP (only the director should respond to those). Run these commands on every real server.

Add the VIP to the loopback interface:

sudo ip addr add 10.0.1.100/32 dev lo label lo:0

Suppress ARP responses for the VIP on the real servers. This is critical – without it, real servers will answer ARP queries for the VIP and break the load balancer:

echo "net.ipv4.conf.lo.arp_ignore = 1" | sudo tee -a /etc/sysctl.d/99-lvs-dr.conf
echo "net.ipv4.conf.lo.arp_announce = 2" | sudo tee -a /etc/sysctl.d/99-lvs-dr.conf
echo "net.ipv4.conf.all.arp_ignore = 1" | sudo tee -a /etc/sysctl.d/99-lvs-dr.conf
echo "net.ipv4.conf.all.arp_announce = 2" | sudo tee -a /etc/sysctl.d/99-lvs-dr.conf
sudo sysctl -p /etc/sysctl.d/99-lvs-dr.conf

The sysctl settings mean:

  • arp_ignore = 1 – only respond to ARP requests if the target IP is configured on the incoming interface
  • arp_announce = 2 – always use the best local address for ARP requests, preventing the VIP from leaking into ARP replies

Verify the VIP is bound on the loopback:

ip addr show lo

You should see the VIP listed under the loopback interface:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet 10.0.1.100/32 scope global lo:0
       valid_lft forever preferred_lft forever

Step 5: Add and Manage Real Servers

Managing real servers in the IPVS table is straightforward with ipvsadm. Here are the common operations you will need for day-to-day management.

Add a new real server to an existing virtual service (DR mode):

sudo ipvsadm -a -t 10.0.1.100:80 -r 10.0.1.13:80 -g -w 1

Change the weight of an existing real server to send it more or less traffic:

sudo ipvsadm -e -t 10.0.1.100:80 -r 10.0.1.11:80 -g -w 5

Remove a real server from the pool (for maintenance):

sudo ipvsadm -d -t 10.0.1.100:80 -r 10.0.1.12:80

Set a server weight to zero for graceful drain – existing connections complete but no new ones are sent:

sudo ipvsadm -e -t 10.0.1.100:80 -r 10.0.1.12:80 -g -w 0
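
Paired with the weight-0 drain, a small polling loop can wait until the server's active connections hit zero before you take it offline. A sketch, to be run on the director (it parses the ActiveConn column of ipvsadm -Ln):

```
# Poll ActiveConn for 10.0.1.12 until it reaches 0, then report
while true; do
  active=$(sudo ipvsadm -Ln | awk '/10\.0\.1\.12:80/ {print $5}')
  [ "${active:-0}" -eq 0 ] && break
  echo "still draining: $active active connections"
  sleep 5
done
echo "10.0.1.12 is drained; safe to remove"
```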

Save the current IPVS configuration so it persists across reboots:

sudo sh -c 'ipvsadm-save -n > /etc/sysconfig/ipvsadm'
sudo systemctl enable ipvsadm

The sh -c wrapper matters: with a plain sudo ipvsadm-save -n > /etc/sysconfig/ipvsadm, the redirection runs in your unprivileged shell and fails with permission denied.

Step 6: Configure Health Checks with Keepalived

Running ipvsadm alone has a major limitation – it does not check whether real servers are actually healthy. If a backend goes down, IPVS keeps sending traffic to it. Keepalived solves this by monitoring real servers and automatically removing failed ones from the IPVS pool.

Install keepalived on the director:

sudo dnf install -y keepalived

Back up the default configuration and create a new one:

sudo cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak

Open the keepalived configuration file:

sudo vi /etc/keepalived/keepalived.conf

Replace the contents with the following LVS-DR configuration. This defines a virtual server on the VIP with HTTP health checks against both real servers:

global_defs {
    router_id LVS_DIRECTOR
}

virtual_server 10.0.1.100 80 {
    delay_loop 10
    lb_algo rr
    lb_kind DR
    persistence_timeout 60
    protocol TCP

    real_server 10.0.1.11 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            retry 3
            delay_before_retry 3
        }
    }

    real_server 10.0.1.12 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            retry 3
            delay_before_retry 3
        }
    }
}

The key configuration options:

  • delay_loop 10 – check real server health every 10 seconds
  • lb_algo rr – use round-robin scheduling
  • lb_kind DR – use Direct Routing mode
  • persistence_timeout 60 – keep a client on the same real server for 60 seconds
  • HTTP_GET – check that the real server returns HTTP 200 on the root path
  • connect_timeout 3 – mark server as failed if it does not respond within 3 seconds
  • retry 3 – retry 3 times before removing a failed server
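
For backends that do not speak HTTP, or when a simple port probe is enough, keepalived also supports a plain TCP check in place of HTTP_GET. A minimal sketch of one real_server block:

```
real_server 10.0.1.11 80 {
    weight 1
    TCP_CHECK {
        connect_port 80
        connect_timeout 3
        retry 3
        delay_before_retry 3
    }
}
```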

When keepalived manages the IPVS table, you do not need to add real servers manually with ipvsadm – keepalived handles it. Clear any existing manual IPVS rules before starting keepalived:

sudo ipvsadm -C

Enable and start keepalived:

sudo systemctl enable --now keepalived

Check that keepalived is running and has populated the IPVS table:

sudo systemctl status keepalived

The service should show active (running). Then verify keepalived has added the real servers to IPVS:

sudo ipvsadm -Ln

You should see both real servers listed under the virtual service, managed automatically by keepalived:

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.0.1.100:80 rr persistent 60
  -> 10.0.1.11:80                 Route   1      0          0
  -> 10.0.1.12:80                 Route   1      0          0

If a real server fails its health check, keepalived automatically removes it from the pool. When it recovers, keepalived adds it back. Check the keepalived logs for health check activity:

sudo journalctl -u keepalived -f

Step 7: Configure Firewall Rules

The director and real servers need firewall rules to allow load-balanced traffic through. On RHEL 10 and Rocky Linux 10, firewalld is the default firewall manager.

On the director, allow incoming HTTP and HTTPS traffic to the VIP:

sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --reload

If you are using NAT mode, the director also needs masquerading enabled so it can translate return traffic:

sudo firewall-cmd --permanent --add-masquerade
sudo firewall-cmd --reload

On each real server, allow HTTP and HTTPS traffic. In NAT mode requests arrive from the director; in DR mode they arrive carrying the client’s source address, so do not restrict the rule to the director’s IP:

sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --reload

Verify the firewall rules are active on both director and real servers:

sudo firewall-cmd --list-all

The output should include http and https in the services list:

public (active)
  target: default
  services: cockpit dhcpv6-client http https ssh

Step 8: Verify Load Balancing

With everything configured, test that the load balancer is distributing traffic across real servers. Make sure you have a web server running on each real server (Nginx, Apache, or any HTTP service on port 80).

From a client machine (not the director or real servers), send multiple requests to the VIP and check which server responds:

for i in $(seq 1 10); do curl -s http://10.0.1.100/ | head -1; done

If you set up different content on each real server, you should see responses alternating between web1 and web2.

Check the connection statistics on the director to see traffic distribution:

sudo ipvsadm -Ln --stats

This shows packet and byte counters for each real server. Both servers should show increasing connection counts:

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port               Conns   InPkts  OutPkts  InBytes OutBytes
  -> RemoteAddress:Port
TCP  10.0.1.100:80                      10       60        0    5120        0
  -> 10.0.1.11:80                        5       30        0    2560        0
  -> 10.0.1.12:80                        5       30        0    2560        0
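
The per-server split can be computed from this output. A small awk sketch that uses the sample above as input; on a live director, pipe sudo ipvsadm -Ln --stats into it instead of the here-doc:

```shell
# Compute each real server's share of total connections from --stats output.
# The TCP line carries the virtual service total; "->" lines are backends.
# (Header lines are skipped because they appear before total is set.)
shares=$(awk '
  $1 == "TCP" { total = $3 }
  $1 == "->" && total > 0 { printf "%s %.0f%%\n", $2, 100 * $3 / total }
' <<'EOF'
TCP  10.0.1.100:80                      10       60        0    5120        0
  -> 10.0.1.11:80                        5       30        0    2560        0
  -> 10.0.1.12:80                        5       30        0    2560        0
EOF
)
echo "$shares"
```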

To see real-time connection rates, use the rate display:

sudo ipvsadm -Ln --rate

For a live view of active connections, watch the IPVS table:

watch sudo ipvsadm -Ln

Step 9: Configure Persistent Connections

Some applications require that a client always connects to the same real server for the duration of a session. IPVS supports persistent connections that pin a client IP to a specific backend for a configurable time window.

Enable persistence with a 300-second (5-minute) timeout on an existing virtual service:

sudo ipvsadm -E -t 10.0.1.100:80 -s rr -p 300

If you are using keepalived, set persistence_timeout in the virtual_server block instead (as shown in Step 6). Keepalived will apply it automatically.

You can also use persistence with a network mask to group clients from the same subnet:

sudo ipvsadm -E -t 10.0.1.100:80 -s rr -p 300 -M 255.255.255.0

This pins all clients from the same /24 subnet to the same real server. Useful when clients are behind a NAT gateway and appear as different IPs from the same range.
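
The persistence key with -M is simply the client address ANDed with the netmask, so every client in the /24 maps to the same template entry. An illustration using a hypothetical mask_ip helper (not part of ipvsadm):

```shell
# Apply a dotted-quad netmask to an IP, mimicking how -M derives
# the persistence key from the client address.
mask_ip() {
  local IFS=.
  read -r a b c d <<< "$1"
  read -r m1 m2 m3 m4 <<< "$2"
  echo "$((a & m1)).$((b & m2)).$((c & m3)).$((d & m4))"
}
mask_ip 10.0.1.50 255.255.255.0   # -> 10.0.1.0
mask_ip 10.0.1.77 255.255.255.0   # -> 10.0.1.0  (same entry as above)
```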

View the current persistence connections in the IPVS table:

sudo ipvsadm -Lnc

The output shows each persistent mapping with its expiration timer:

IPVS connection entries
pro expire state       source             virtual            destination
TCP 04:52  NONE        10.0.1.50:0        10.0.1.100:80      10.0.1.11:80

ipvsadm Command Reference

Here is a quick reference for the most used ipvsadm commands and flags:

Command                                     Description
ipvsadm -A -t VIP:port -s scheduler         Add a new virtual service (TCP)
ipvsadm -A -u VIP:port -s scheduler         Add a new virtual service (UDP)
ipvsadm -a -t VIP:port -r RIP:port -g       Add real server with Direct Routing
ipvsadm -a -t VIP:port -r RIP:port -m       Add real server with NAT (masquerading)
ipvsadm -a -t VIP:port -r RIP:port -i       Add real server with IP tunneling
ipvsadm -e -t VIP:port -r RIP:port -w N     Edit real server weight
ipvsadm -d -t VIP:port -r RIP:port          Delete a real server
ipvsadm -D -t VIP:port                      Delete an entire virtual service
ipvsadm -C                                  Clear all virtual services
ipvsadm -Ln                                 List current IPVS table (numeric)
ipvsadm -Ln --stats                         Show connection statistics
ipvsadm -Ln --rate                          Show connection rate
ipvsadm -Lnc                                List connection entries (including persistence templates)
ipvsadm -Z                                  Zero all counters
ipvsadm-save -n                             Save current rules (numeric format)
ipvsadm-restore                             Restore rules from stdin

IPVS Scheduling Algorithms

IPVS supports multiple scheduling algorithms. Choose based on your workload characteristics and whether your real servers have equal or different capacities.

Algorithm                    Flag   Best For
Round Robin                  rr     Equal-capacity servers, stateless workloads
Weighted Round Robin         wrr    Servers with different capacities
Least Connections            lc     Long-lived connections, uneven request sizes
Weighted Least Connections   wlc    Mixed-capacity servers with varying connection lengths
Source Hashing               sh     Session persistence based on client IP
Destination Hashing          dh     Cache server farms, consistent routing
Shortest Expected Delay      sed    Minimizing response time with weighted servers
Never Queue                  nq     Favoring idle servers over busy ones

The default choice for most web workloads is wrr (weighted round robin) if your servers have different specs, or rr (round robin) if they are identical. For applications with long-lived connections like database proxying or WebSocket services, wlc (weighted least connections) distributes load more evenly.
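
A toy simulation makes the weighted split concrete. This is proportional repetition, not IPVS's actual wrr implementation, but it shows how weights 3 and 2 divide ten requests:

```shell
# Flatten weights into a repeated server list: web1 x3, web2 x2,
# then deal out 10 requests round-robin over that list and count.
pool=(web1 web1 web1 web2 web2)
dist=$(for i in $(seq 0 9); do
  echo "${pool[i % ${#pool[@]}]}"
done | sort | uniq -c)
echo "$dist"
```

With ten requests the counts come out 6 for web1 and 4 for web2, matching the 3:2 weight ratio.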

Conclusion

You now have a working LVS load balancer on RHEL 10 or Rocky Linux 10 using the IPVS kernel module. The setup covers both NAT mode for simple deployments and DR mode for production-grade scalability, with keepalived handling health checks and automatic failover of real servers.

For production environments, add a second director running keepalived with VRRP for high availability of the load balancer itself. Monitor IPVS statistics with Prometheus or your existing monitoring stack to track connection distribution and detect capacity issues early. If you need application-layer load balancing with features like SSL termination and content-based routing, consider pairing LVS with HAProxy as a backend.
