
Rancher Quickstart: Deploy Your First Cluster in 15 Minutes

Most Rancher installation guides start with RKE2, Helm charts, and load balancers. That’s the right approach for production, but it’s not how you should learn Rancher. The fastest path from zero to a working Rancher instance is a single Docker container on a single server. Fifteen minutes, no Kubernetes prerequisite, no external database.

Original content from computingforgeeks.com - post 165145

This guide walks through the single-node Docker deployment of Rancher, which gives you the full management UI for creating and importing Kubernetes clusters, managing RBAC, deploying applications from a catalog, and running Fleet GitOps workflows. It’s ideal for evaluation, home labs, and small teams that don’t need high availability yet. When you outgrow this setup, the HA installation on RKE2 is the next step.

Tested March 2026 | Rancher v2.14.0, Docker 29.3.1 on Ubuntu 24.04 LTS

What You Need

Rancher’s Docker deployment is lightweight compared to the full HA stack, but it still runs an embedded K3s cluster internally. Plan for at least these resources:

  • Server – Ubuntu 24.04 LTS or Rocky Linux 10, physical or virtual
  • RAM – 4 GB minimum (Rancher consumes roughly 1.6 GB after startup)
  • Disk – 20 GB free (the container image and internal state use about 7 GB)
  • Ports – 80/tcp and 443/tcp open, nothing else listening on them
  • Docker – Any recent version (we tested with Docker 29.3.1)

Root or sudo access is required. The examples below use Ubuntu 24.04, but every command works on Rocky Linux 10 with dnf instead of apt.
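Before installing anything, a quick preflight sketch can confirm that nothing is already bound to ports 80 and 443. This assumes bash and the ss tool from iproute2, both standard on Ubuntu 24.04; adapt it for your host:

```shell
# Preflight check: report whether anything is already listening on the ports
# Rancher needs. Assumes bash and ss (iproute2); adjust for your host.
port_free() {
  # succeed if no listener is bound to the given TCP port
  ! ss -ltn 2>/dev/null | awk '{print $4}' | grep -q ":$1\$"
}

for p in 80 443; do
  if port_free "$p"; then
    echo "port $p: free"
  else
    echo "port $p: in use"
  fi
done
```

If either port reports "in use", find and stop the conflicting service (often a leftover nginx or apache2) before continuing.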

Install Docker

If Docker is already installed, skip ahead. Otherwise, the convenience script from Docker handles the repository setup and package installation in one shot:

curl -fsSL https://get.docker.com | sudo sh

Once the installation finishes, add your user to the docker group so you don’t need sudo for every command:

sudo usermod -aG docker $USER
newgrp docker

Confirm Docker is running and check the version:

docker version

The output should show both client and server components. On our test system, this returned Docker 29.3.1:

Client: Docker Engine - Community
 Version:           29.3.1
 API version:       1.48
 Go version:        go1.23.8
 Built:             Wed May  7 21:14:09 2025

Server: Docker Engine - Community
 Engine:
  Version:          29.3.1
  API version:      1.48 (minimum version 1.24)
  Go version:       go1.23.8

Run the Rancher Container

This single command pulls the Rancher image and starts it with everything needed to serve the management UI on ports 80 and 443:

docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  --privileged \
  --name rancher \
  rancher/rancher:v2.14.0

Here’s what each flag does:

  • --privileged – Required because Rancher runs an embedded K3s cluster inside the container, which needs access to kernel features like cgroups and iptables
  • -p 80:80 -p 443:443 – Maps HTTP and HTTPS to the host. Rancher generates a self-signed certificate automatically
  • --restart=unless-stopped – Ensures the container comes back after a server reboot, unless you explicitly stop it
  • rancher/rancher:v2.14.0 – Pinning the version avoids surprises from automatic updates. Check the official docs for the latest stable release

The image is roughly 1 GB. On a reasonable connection, the pull takes about a minute.
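One caveat: the command above keeps all state in the container's writable layer, so a docker rm destroys it. Rancher's docs also describe a bind-mount variant that keeps the data on the host. Here is a sketch wrapped in a function, where the /opt/rancher host path is our example choice, not a Rancher default:

```shell
# Sketch: start Rancher with its state bind-mounted to the host so it
# survives container removal. Host path /opt/rancher is an example choice.
run_rancher_persistent() {
  docker run -d --restart=unless-stopped \
    -p 80:80 -p 443:443 \
    --privileged \
    -v /opt/rancher:/var/lib/rancher \
    --name rancher \
    rancher/rancher:v2.14.0
}
```

Call run_rancher_persistent instead of the plain docker run above if you want /opt/rancher on the host to hold the cluster state.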

Wait for Initialization

Rancher needs about three minutes after the container starts to fully initialize its internal Kubernetes components. Don't try to access the UI immediately; you'll get a connection-refused error or a blank page.

Watch the logs to track progress:

docker logs -f rancher

You’ll see a stream of K3s startup messages, certificate generation, and controller initialization. When the log output slows down and you see lines about handlers and controllers being registered, Rancher is ready.
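If you'd rather not eyeball log lines, you can poll Rancher's health endpoint until it answers. A sketch, assuming the /healthz endpoint served by recent Rancher releases; the -k flag skips verification of the self-signed certificate:

```shell
# Poll Rancher's health endpoint until it responds, instead of watching logs.
# -k skips TLS verification (self-signed cert); tune retries/delay as needed.
wait_for_rancher() {
  local url=$1 retries=$2 delay=$3
  local i
  for i in $(seq 1 "$retries"); do
    if curl -skf "$url" >/dev/null 2>&1; then
      echo "rancher is ready"
      return 0
    fi
    sleep "$delay"
  done
  echo "timed out waiting for rancher" >&2
  return 1
}
```

For example, wait_for_rancher https://localhost/healthz 60 5 checks every five seconds for up to five minutes.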

Press Ctrl+C to stop following the logs, then retrieve the bootstrap password that was generated during first startup:

docker logs rancher 2>&1 | grep "Bootstrap Password"

This prints a single line containing a random string. Copy it because you’ll need it in the next step:

2026/03/28 14:22:31 [INFO] Bootstrap Password: 7kxbf2mlnqr4wg9tsdjhvp5c

If the grep returns nothing, Rancher hasn’t finished initializing yet. Wait another minute and try again.
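If you want only the password string for scripting (say, to pipe into a clipboard tool), a small filter over the log line format shown above does it:

```shell
# Filter that pulls just the password token out of Rancher's bootstrap log
# line (format as shown above: "... [INFO] Bootstrap Password: <token>").
get_bootstrap_password() {
  grep "Bootstrap Password:" | awk '{print $NF}'
}
# usage: docker logs rancher 2>&1 | get_bootstrap_password
```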

Access the Rancher UI

Open a browser and navigate to your server's IP or hostname over HTTPS (https://10.0.1.50 on our test system). You'll see a certificate warning because Rancher generates a self-signed TLS certificate by default. Accept the warning and proceed.

The first screen asks for the bootstrap password. Paste the value you retrieved from the container logs and click Log in with Local User.

Rancher then prompts you to:

  1. Set a new admin password – Pick something strong. This replaces the bootstrap password permanently
  2. Confirm the server URL – Rancher auto-detects this from the address you used to reach the UI (https://10.0.1.50 in our case). If your server has a DNS name, change it here. Downstream clusters will use this URL to communicate back to Rancher, so it must be reachable from any cluster you plan to manage
  3. Accept the terms – Check the box and continue

After completing these steps, you land on the Rancher home dashboard. The “local” cluster listed there is the embedded K3s instance running inside the Docker container. That’s Rancher managing itself.

What Rancher Gives You

Even in this single-container deployment, you get the complete Rancher feature set. The difference between this and an HA setup is resilience, not functionality.

Cluster management is the core feature. From this single Rancher instance, you can provision new Kubernetes clusters on bare metal, VMs, or cloud providers (AWS, Azure, GCP). You can also import existing clusters that were built with kubeadm, K3s, or any other distribution. Every cluster appears in one unified dashboard with health status, resource usage, and alerts.

RBAC and authentication are handled centrally. Rancher integrates with Active Directory, LDAP, SAML, GitHub, and other identity providers. You define roles once and apply them across all managed clusters, which eliminates the need to manage kubeconfig files and ClusterRoleBindings on each cluster individually.

Fleet GitOps ships built into Rancher. Fleet watches Git repositories and deploys manifests, Helm charts, or Kustomize overlays to any combination of clusters. For teams adopting GitOps, this replaces the need for a separate ArgoCD or Flux installation.

The app catalog (under Apps in the navigation) provides one-click deployment of Helm charts from curated and custom repositories. Monitoring (Prometheus + Grafana), logging (Fluentd + Elasticsearch), and Istio are all available as managed apps that Rancher keeps updated.

Spend a few minutes clicking through the navigation. The Cluster Management section is where you’ll provision new clusters. Continuous Delivery is where Fleet lives. Users & Authentication is where you configure identity providers.

Provision a Downstream Cluster

The whole point of Rancher is managing Kubernetes clusters, not just running one. From the dashboard, you can create a new cluster in a few clicks.

Navigate to Cluster Management and click Create. Rancher offers several provisioning options:

  • Custom – You provide the VMs, Rancher installs K3s or RKE2 on them. You get a registration command to run on each node
  • Amazon EC2 / Azure / GCP – Rancher provisions the VMs and the cluster automatically using cloud provider APIs
  • Import Existing – For clusters already running, Rancher deploys an agent to bring them under management

For a quick test, the Custom option with a single additional VM works well. Select RKE2 or K3s as the Kubernetes distribution, give the cluster a name, and click Create. Rancher generates a curl registration command. SSH into your target VM, run that command, and within a few minutes the new cluster appears in the Rancher dashboard as Active.

If you have an existing K3s cluster, importing it is even simpler. Choose Import Existing, name the cluster, and apply the generated YAML manifest on the target cluster with kubectl.
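The generated manifest is applied with a single kubectl command. Its URL embeds a per-cluster registration token, so the one below only shows the shape; copy the real command from the Rancher UI:

```shell
# Shape of the import step; the manifest URL is a placeholder -- the real
# one (with its registration token) comes from the Rancher UI.
import_cluster() {
  kubectl apply -f "$1"
}
# usage: import_cluster "https://rancher.example.com/v3/import/<token>.yaml"
```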

Verify Resource Usage

After everything is running, check how much the Rancher container is consuming:

docker stats rancher --no-stream

On our test system with no downstream clusters registered, the numbers looked like this:

CONTAINER ID   NAME      CPU %   MEM USAGE / LIMIT     MEM %
a3f7c2e81d4b   rancher   2.14%   1.612GiB / 3.832GiB   42.06%

About 1.6 GB of RAM at idle. Each downstream cluster you manage adds some overhead to the Rancher controller, but for evaluation with a handful of clusters, 4 GB of total RAM on the Rancher host is sufficient.

Basic Container Operations

A few commands you’ll use regularly when running Rancher as a Docker container.

Stop Rancher without destroying state:

docker stop rancher

Start it again:

docker start rancher

Back up the container’s persistent data (Rancher stores everything in /var/lib/rancher inside the container). Stop the container first so the copy is consistent:

docker cp rancher:/var/lib/rancher ./rancher-backup-$(date +%F)

Upgrade to a newer Rancher version by stopping the old container, backing up data, and running a new container with the updated tag. The official docs cover the full upgrade procedure, which involves mounting the data volume into the new container.
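As a rough sketch of that procedure, following the pattern in Rancher's Docker upgrade docs (verify tags and paths against the current documentation before relying on it):

```shell
# Upgrade sketch: snapshot the old container's data into a helper container,
# archive it, then start the new version from the same volumes.
# Pattern follows Rancher's Docker upgrade docs; verify before use.
upgrade_rancher() {
  local new_tag=$1
  docker stop rancher
  # helper container that holds references to the old container's volumes
  docker create --volumes-from rancher --name rancher-data \
    rancher/rancher:v2.14.0
  # tarball backup of the state, written to the current directory
  docker run --volumes-from rancher-data -v "$PWD:/backup" --rm busybox \
    tar zcf "/backup/rancher-backup-$(date +%F).tar.gz" /var/lib/rancher
  docker rm rancher
  # start the new version on the preserved volumes
  docker run -d --restart=unless-stopped \
    -p 80:80 -p 443:443 --privileged \
    --volumes-from rancher-data \
    --name rancher "rancher/rancher:$new_tag"
}
# usage: upgrade_rancher <new-tag>
```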

When You Outgrow Docker

The single-container deployment has one critical limitation: no redundancy. If the Docker host goes down, Rancher goes down, and you lose visibility into all managed clusters. The downstream clusters themselves keep running (Rancher is a control plane, not a data plane), but you can’t manage them until Rancher comes back.

For production environments, Rancher should run on a dedicated RKE2 or K3s cluster with three control plane nodes behind a load balancer. This gives you etcd replication, automatic failover, and proper certificate management with cert-manager. The Rancher HA installation guide covers that setup in detail.

The migration path from Docker to HA is straightforward: back up the Docker installation, deploy Rancher on the new HA cluster, and restore the backup. All your cluster registrations, users, and configuration carry over.

Until you hit that point, the Docker deployment is a perfectly capable management plane. It handles cluster provisioning, monitoring, and GitOps just as well as the HA version. The only difference is what happens when the Rancher host itself fails.

