Rancher is an open-source Kubernetes management platform that gives you a single pane of glass for deploying, managing, and monitoring multiple Kubernetes clusters. Whether your clusters run on bare metal, in the cloud, or at the edge, Rancher simplifies day-to-day operations with a powerful web UI and centralized authentication. In this guide, we will install Rancher 2.9+ using Helm on an existing K3s or RKE2 cluster, then use it to provision and manage downstream Kubernetes clusters on RHEL 10 and Ubuntu 24.04.

What Rancher Brings to the Table

Running a single Kubernetes cluster is straightforward enough. Running five or ten across different environments is where things get painful. Rancher solves that problem by centralizing cluster lifecycle management, role-based access control (RBAC), monitoring, alerting, and backup operations. It supports provisioning clusters through RKE2, K3s, EKS, AKS, GKE, and custom providers. The web UI lets you manage workloads, inspect pods, view logs, and open a kubectl shell – all without leaving the browser.

Prerequisites

Before you begin, make sure you have the following in place:

  • A running K3s or RKE2 cluster that will serve as the Rancher management cluster (a single-node setup works for testing)
  • RHEL 10 or Ubuntu 24.04 on the management node with at least 4 GB RAM and 2 CPUs
  • kubectl configured and communicating with your cluster
  • Helm 3.x installed on your workstation
  • A valid domain name pointing to your management node (or you can use a self-signed certificate for lab environments)
  • Ports 80 and 443 open on the management node firewall

Confirm your cluster is healthy before proceeding:

kubectl get nodes
kubectl cluster-info

Both commands should return without errors. If you are running K3s, the kubeconfig file is at /etc/rancher/k3s/k3s.yaml. Export it so Helm and kubectl can find it:

export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

Step 1 – Install cert-manager

Rancher relies on cert-manager to handle TLS certificate issuance and renewal. Install its CRDs first, pinning them to the same version as the chart you install below, then install the chart:

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.16.3/cert-manager.crds.yaml

helm repo add jetstack https://charts.jetstack.io
helm repo update

helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.16.3

Verify that all cert-manager pods are running:

kubectl get pods -n cert-manager

You should see three pods (cert-manager, cert-manager-cainjector, and cert-manager-webhook) all in Running state.
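Rather than polling manually, you can block until the deployments report ready. This is a convenience step, not a requirement, and assumes the default deployment names the chart creates:

```shell
# Wait up to two minutes for every cert-manager deployment to become Available
kubectl wait --for=condition=Available deployment --all \
  -n cert-manager --timeout=120s
```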

Step 2 – Install Rancher via Helm

Add the Rancher Helm repository. Use the stable channel for production workloads:

helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
helm repo update

Create the cattle-system namespace and install Rancher. Replace rancher.example.com with your actual hostname:

kubectl create namespace cattle-system

helm install rancher rancher-stable/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com \
  --set bootstrapPassword=YourSecurePasswordHere \
  --set replicas=1

For production environments, set replicas=3 and use a proper Let’s Encrypt certificate by adding --set ingress.tls.source=letsEncrypt --set letsEncrypt.email=you@example.com.
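Putting those production options together, the full install command would look something like this; the hostname, password, and email are placeholders you must replace:

```shell
# Production-style install: 3 replicas, Let's Encrypt-managed certificates
helm install rancher rancher-stable/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com \
  --set bootstrapPassword=YourSecurePasswordHere \
  --set replicas=3 \
  --set ingress.tls.source=letsEncrypt \
  --set letsEncrypt.email=you@example.com
```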

Wait for the deployment to finish rolling out:

kubectl rollout status deployment rancher -n cattle-system

Verify Rancher pods are running:

kubectl get pods -n cattle-system

Step 3 – Access the Rancher Web UI

Open your browser and navigate to https://rancher.example.com. If you used self-signed certificates, you will need to accept the browser warning. On first login, enter the bootstrap password you defined during installation, then set a permanent admin password when prompted.

Once logged in, you land on the Cluster Management dashboard. The local cluster (your management cluster) is already listed. This cluster runs Rancher itself and should not be used for application workloads in production.

If DNS is not configured yet and you want quick access for testing, add an entry to your local /etc/hosts file, substituting your management node's IP for the one shown here:

echo "192.168.1.50 rancher.example.com" | sudo tee -a /etc/hosts

Step 4 – Create a Downstream RKE2 Cluster

This is where Rancher shines. You can provision new Kubernetes clusters on remote nodes directly from the UI.

In the Rancher UI, go to Cluster Management and click Create. Select Custom to provision on existing nodes. Choose RKE2/K3s as the Kubernetes distribution. Give your cluster a name (for example, production-cluster) and select the Kubernetes version.

On the next screen, Rancher generates a registration command. Copy this command and run it on each node you want to join to the cluster. For the first node, select all three roles – etcd, control plane, and worker:

# Run on the first node (all roles)
curl -fL https://rancher.example.com/system-agent-install.sh | sudo sh -s - \
  --server https://rancher.example.com \
  --label 'cattle.io/os=linux' \
  --token YOUR_TOKEN_HERE \
  --etcd --controlplane --worker

For additional worker nodes, run the same command but only with the --worker flag. Within a few minutes, the cluster appears as Active in the Rancher dashboard.
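For reference, the worker-only variant of the registration command looks like this; the script URL and token are placeholders, so always copy the real command from your Rancher UI:

```shell
# Run on each additional node that should carry only the worker role
curl -fL https://rancher.example.com/system-agent-install.sh | sudo sh -s - \
  --server https://rancher.example.com \
  --label 'cattle.io/os=linux' \
  --token YOUR_TOKEN_HERE \
  --worker
```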

Verify the cluster from the Rancher UI or download the kubeconfig from the cluster’s dashboard page and run:

kubectl --kubeconfig ~/production-cluster.yaml get nodes

Step 5 – Managing Multiple Clusters

With Rancher, switching between clusters is a dropdown selection in the UI. You can also import existing clusters that were provisioned outside of Rancher. Go to Cluster Management, click Import Existing, and follow the instructions to apply the agent manifest on the target cluster.
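The import screen generates a one-line command similar to the sketch below; the token embedded in the URL is unique to your cluster, so copy the exact command from the UI rather than reusing this one:

```shell
# Run against the cluster you want to import (not the management cluster)
kubectl apply -f https://rancher.example.com/v3/import/YOUR_IMPORT_TOKEN.yaml
```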

Key multi-cluster management features include:

  • Centralized RBAC – Define users and roles once, propagate permissions across all clusters
  • Fleet for GitOps – Rancher includes Fleet, which deploys applications across clusters from Git repositories
  • Cluster Templates – Define standard cluster configurations and reuse them
  • Global DNS – Route traffic across clusters with integrated DNS management
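As a taste of Fleet, a minimal GitRepo resource applied to the management cluster tells Fleet which repository to deploy from; the repository URL, branch, and path here are placeholders for your own GitOps repo:

```yaml
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: example-apps
  # fleet-default targets downstream clusters by default
  namespace: fleet-default
spec:
  repo: https://github.com/example/fleet-examples
  branch: main
  paths:
    - manifests
```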

Step 6 – Deploy the Monitoring Stack

Rancher integrates Prometheus and Grafana through its Monitoring chart. To enable it, navigate to a cluster in the Rancher UI, go to Apps and then Charts. Search for Monitoring and click Install.

The default installation deploys Prometheus, Grafana, Alertmanager, and node-exporter. You can customize resource requests, retention periods, and persistent storage during installation.

After installation, access Grafana from Monitoring in the left sidebar. Rancher ships with pre-built dashboards for cluster health, node metrics, pod resource usage, and etcd performance. You can create custom dashboards or import community dashboards using their IDs from grafana.com.

Verify monitoring components are running:

kubectl get pods -n cattle-monitoring-system
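If you prefer to reach Grafana without going through the Rancher proxy, you can port-forward the service directly. The service name below matches recent rancher-monitoring chart versions but may differ in yours, so confirm it with kubectl get svc first:

```shell
# Forward local port 3000 to the Grafana service inside the cluster,
# then browse to http://localhost:3000
kubectl -n cattle-monitoring-system port-forward \
  svc/rancher-monitoring-grafana 3000:80
```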

Step 7 – Backup and Restore

The rancher-backup operator handles backup and restore of the Rancher application and its configuration. Install it from the Charts page in the Rancher UI by searching for Rancher Backups.

Once installed, create a backup by going to Rancher Backups in the left sidebar and clicking Create. You can store backups locally or in an S3-compatible bucket. For production setups, always use S3 storage:

apiVersion: resources.cattle.io/v1
kind: Backup
metadata:
  name: rancher-daily-backup
spec:
  storageLocation:
    s3:
      bucketName: rancher-backups
      endpoint: s3.amazonaws.com
      region: us-east-1
      credentialSecretName: s3-credentials
      credentialSecretNamespace: cattle-resources-system
  schedule: "0 2 * * *"
  retentionCount: 10

Apply this manifest to create scheduled daily backups:

kubectl apply -f rancher-backup.yaml
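The Backup manifest references an s3-credentials secret. The rancher-backup operator expects accessKey and secretKey fields in that secret; the values below are placeholders for your real S3 credentials:

```shell
# Create the S3 credential secret the Backup resource references
kubectl create secret generic s3-credentials \
  -n cattle-resources-system \
  --from-literal=accessKey=YOUR_ACCESS_KEY \
  --from-literal=secretKey=YOUR_SECRET_KEY
```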

To restore from a backup, create a Restore resource pointing to the backup file. This is critical for disaster recovery – test the restore process regularly.
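A Restore resource might look like the following sketch; the backup filename is a placeholder, and you should read the real name from the Backup resource's status or from the S3 bucket itself:

```yaml
apiVersion: resources.cattle.io/v1
kind: Restore
metadata:
  name: rancher-restore
spec:
  # Replace with the actual filename produced by your Backup
  backupFilename: rancher-daily-backup-2025-01-01T02-00-00Z.tar.gz
  storageLocation:
    s3:
      bucketName: rancher-backups
      endpoint: s3.amazonaws.com
      region: us-east-1
      credentialSecretName: s3-credentials
      credentialSecretNamespace: cattle-resources-system
```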

Firewall Configuration

If you are running firewalld on RHEL 10, open the required ports:

sudo firewall-cmd --permanent --add-port=443/tcp
sudo firewall-cmd --permanent --add-port=80/tcp
sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --reload

On Ubuntu 24.04 with UFW:

sudo ufw allow 443/tcp
sudo ufw allow 80/tcp
sudo ufw allow 6443/tcp
sudo ufw reload

Troubleshooting Common Issues

If Rancher pods are stuck in CrashLoopBackOff, check the logs:

kubectl logs -n cattle-system -l app=rancher --tail=100

Common causes include cert-manager not being ready, incorrect hostname configuration, or insufficient memory. On nodes with less than 4 GB RAM, Rancher may fail to start reliably.
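Pod events often point at the root cause faster than logs do, and checking the serving certificate confirms whether cert-manager finished its work. These checks are a reasonable first pass:

```shell
# Recent events in the Rancher namespace, newest last
kubectl get events -n cattle-system --sort-by=.lastTimestamp | tail -20

# Confirm cert-manager has issued the Rancher serving certificate
kubectl get certificates -n cattle-system
```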

If a downstream cluster stays in Provisioning state for too long, SSH into the node and check the system-agent logs:

sudo journalctl -u rancher-system-agent -f

Conclusion

Rancher turns multi-cluster Kubernetes management from a chore into something approachable. With a Helm-based install on K3s or RKE2, you can have the management plane running in minutes. From there, provisioning downstream clusters, setting up monitoring with Prometheus and Grafana, and configuring automated backups are all handled through a clean web interface. For teams running Kubernetes across multiple environments, Rancher eliminates the need to juggle kubeconfig files and separate tooling for each cluster.
