Docker Swarm is Docker’s built-in container orchestration tool that turns a group of Docker hosts into a single virtual cluster. It handles service deployment, scaling, load balancing, and rolling updates with zero additional software – everything runs through the standard Docker CLI. For teams that need container orchestration without the complexity of Kubernetes, Swarm is a practical choice.
This guide walks through setting up a Docker Swarm cluster on Rocky Linux 10 and AlmaLinux 10 with one manager node and two worker nodes. We cover Docker installation, swarm initialization, service deployment, scaling, rolling updates, overlay networking, node management, firewall configuration, and Portainer for web-based cluster management. The official Docker Swarm documentation covers the full feature set if you need a deeper reference.
Prerequisites
- 3 servers running Rocky Linux 10 or AlmaLinux 10 with at least 2GB RAM and 2 vCPUs each
- Root or sudo access on all nodes
- Network connectivity between all nodes on ports 2377/tcp, 7946/tcp+udp, and 4789/udp
- Unique hostname on each server
Our lab setup uses these IPs:
| Hostname | IP Address | Role |
|---|---|---|
| swarm-manager | 10.0.1.11 | Manager |
| swarm-worker1 | 10.0.1.12 | Worker |
| swarm-worker2 | 10.0.1.13 | Worker |
Step 1: Set Hostnames on All Nodes
Set a unique hostname on each server so nodes are easy to identify in the cluster. Run the appropriate command on each node.
On the manager node (10.0.1.11):
sudo hostnamectl set-hostname swarm-manager
On worker1 (10.0.1.12):
sudo hostnamectl set-hostname swarm-worker1
On worker2 (10.0.1.13):
sudo hostnamectl set-hostname swarm-worker2
Add all three nodes to /etc/hosts on every server so they can resolve each other by name. Open the file:
sudo vi /etc/hosts
Add these lines:
10.0.1.11 swarm-manager
10.0.1.12 swarm-worker1
10.0.1.13 swarm-worker2
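With three nodes the edit is easy to script instead of repeating it in an editor on each server. A minimal sketch that only prints the entries (our lab IPs are assumed; adjust for your network), so you can review them and then append with `sudo tee`:

```shell
# Swarm /etc/hosts entries (lab IPs; replace with your own).
# Apply on each node with:  ./print-hosts.sh | sudo tee -a /etc/hosts
hosts_entries='10.0.1.11 swarm-manager
10.0.1.12 swarm-worker1
10.0.1.13 swarm-worker2'
printf '%s\n' "$hosts_entries"
```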
Step 2: Configure Firewall for Docker Swarm
Docker Swarm requires specific ports open between all cluster nodes. These ports handle cluster management, node communication, and overlay network traffic.
| Port | Protocol | Purpose |
|---|---|---|
| 2377 | TCP | Cluster management and Raft consensus |
| 7946 | TCP + UDP | Node discovery and gossip protocol |
| 4789 | UDP | VXLAN overlay network traffic |
Run these firewall commands on all three nodes:
sudo firewall-cmd --permanent --add-port=2377/tcp
sudo firewall-cmd --permanent --add-port=7946/tcp
sudo firewall-cmd --permanent --add-port=7946/udp
sudo firewall-cmd --permanent --add-port=4789/udp
sudo firewall-cmd --reload
Verify the ports are open:
sudo firewall-cmd --list-ports
The output should show all four port entries:
2377/tcp 7946/tcp 7946/udp 4789/udp
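The four `--add-port` calls can be generated from a single list, which keeps the port set in one place if it ever changes. This sketch only echoes the commands for review; pipe the output to `sudo bash` (or drop the `echo`) to apply it on each node:

```shell
# Ports Swarm needs between nodes: management, gossip (tcp+udp), VXLAN.
swarm_ports='2377/tcp 7946/tcp 7946/udp 4789/udp'

# Emit one firewall-cmd per port plus the reload; review before running.
firewall_cmds=$(
  for p in $swarm_ports; do
    echo "firewall-cmd --permanent --add-port=$p"
  done
  echo "firewall-cmd --reload"
)
echo "$firewall_cmds"
```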
Step 3: Install Docker on All Nodes
Install Docker CE from the official Docker repository on all three servers. Rocky Linux and AlmaLinux do not ship Docker in their default repos (they provide Podman instead), so we pull Docker CE from Docker's own repository to get Swarm support. If you need a more detailed walkthrough, check our guide on installing Docker CE on Rocky Linux / AlmaLinux.
Remove any conflicting packages first:
sudo dnf remove -y docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-engine podman buildah
Add the official Docker CE repository:
sudo dnf config-manager --add-repo https://download.docker.com/linux/rhel/docker-ce.repo
Install Docker CE, the CLI tools, and containerd:
sudo dnf install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
Enable and start the Docker service:
sudo systemctl enable --now docker
Verify Docker is running:
sudo systemctl status docker
You should see the service active and running:
● docker.service - Docker Application Container Engine
     Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; preset: disabled)
     Active: active (running) since Fri 2026-03-21 10:15:32 UTC; 5s ago
   Main PID: 12345 (dockerd)
      Tasks: 8
     Memory: 98.2M
     CGroup: /system.slice/docker.service
             └─12345 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
Check the installed Docker version:
docker --version
This confirms Docker CE 29.x is installed:
Docker version 29.3.0, build a1710b6
Add your user to the docker group so you can run Docker commands without sudo (optional but convenient):
sudo usermod -aG docker $USER
newgrp docker
Repeat all Docker installation steps on the other two nodes before proceeding.
Step 4: Initialize Docker Swarm on the Manager Node
On the manager node (10.0.1.11), initialize the swarm cluster. The --advertise-addr flag tells other nodes which IP to use for cluster communication.
sudo docker swarm init --advertise-addr 10.0.1.11
Docker returns a join token that worker nodes use to connect to the cluster:
Swarm initialized: current node (abc123def456) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-0abc1234567890xyz-abcdef1234567890 10.0.1.11:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
Copy the docker swarm join command from the output – you need it for the next step. If you lose the token, retrieve it anytime from the manager:
sudo docker swarm join-token worker
Step 5: Join Worker Nodes to the Swarm Cluster
On each worker node, run the join command from the previous step. SSH into worker1 (10.0.1.12) and worker2 (10.0.1.13) and run:
sudo docker swarm join --token SWMTKN-1-0abc1234567890xyz-abcdef1234567890 10.0.1.11:2377
Each worker confirms it joined successfully:
This node joined a swarm as a worker.
Back on the manager node, verify all three nodes are in the cluster:
sudo docker node ls
All nodes should show Ready status with the manager marked as Leader:
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
abc123def456 * swarm-manager Ready Active Leader 29.3.0
ghi789jkl012 swarm-worker1 Ready Active 29.3.0
mno345pqr678 swarm-worker2 Ready Active 29.3.0
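For routine monitoring you may want a quick check that flags any node that is not Ready. A minimal sketch, shown here against a captured sample (with worker2 marked Down purely for illustration) so it runs without a live cluster; on the manager, feed it `sudo docker node ls` output instead of the variable:

```shell
# Captured `docker node ls` output; worker2 is Down here for illustration.
node_ls_sample='ID             HOSTNAME        STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
abc123def456 * swarm-manager   Ready    Active         Leader           29.3.0
ghi789jkl012   swarm-worker1   Ready    Active                          29.3.0
mno345pqr678   swarm-worker2   Down     Active                          29.3.0'

# Print every non-header line whose STATUS is not Ready.
not_ready=$(echo "$node_ls_sample" | awk 'NR > 1 && $0 !~ / Ready / {print}')
echo "$not_ready"
```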
Step 6: Deploy a Service on Docker Swarm
With the cluster ready, deploy a service. Swarm services are the primary way to run containers across the cluster. Let’s deploy an Nginx web server with 3 replicas spread across the nodes.
Run this on the manager node:
sudo docker service create --name web --replicas 3 --publish published=8080,target=80 nginx:latest
Docker pulls the Nginx image and distributes 3 containers across the cluster. Check the service status:
sudo docker service ls
The output shows the service running with all 3 replicas:
ID NAME MODE REPLICAS IMAGE PORTS
x1y2z3a4b5c6 web replicated 3/3 nginx:latest *:8080->80/tcp
To see which nodes are running the containers:
sudo docker service ps web
Each replica is placed on a different node for high availability:
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE
a1b2c3d4e5f6 web.1 nginx:latest swarm-manager Running Running 30 seconds ago
g7h8i9j0k1l2 web.2 nginx:latest swarm-worker1 Running Running 28 seconds ago
m3n4o5p6q7r8 web.3 nginx:latest swarm-worker2 Running Running 28 seconds ago
Open port 8080 in the firewall on all nodes if you want external access:
sudo firewall-cmd --permanent --add-port=8080/tcp
sudo firewall-cmd --reload
Test the service by accessing any node’s IP on port 8080. Swarm’s ingress routing mesh forwards the request to a running container regardless of which node you hit:
curl -s http://10.0.1.11:8080 | head -5
You should see the default Nginx welcome page HTML:
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
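To confirm the routing mesh works from every entry point, query each node in turn. This sketch only prints one curl per node (our lab IPs are assumed); run the printed lines and each should report status 200 regardless of which nodes actually host the replicas:

```shell
# Node IPs from the lab table; substitute your own.
nodes='10.0.1.11 10.0.1.12 10.0.1.13'

# Emit one curl per node that reports the HTTP status code.
mesh_checks=$(for ip in $nodes; do
  echo "curl -s -o /dev/null -w '$ip -> %{http_code}\n' http://$ip:8080"
done)
echo "$mesh_checks"
```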
Step 7: Scale Services Up and Down
Swarm makes scaling straightforward. To increase the Nginx service to 5 replicas:
sudo docker service scale web=5
Docker distributes the additional containers across available nodes:
web scaled to 5
overall progress: 5 out of 5 tasks
1/5: running
2/5: running
3/5: running
4/5: running
5/5: running
verify: Service converged
Verify the new replica count:
sudo docker service ls
The REPLICAS column now shows 5/5:
ID NAME MODE REPLICAS IMAGE PORTS
x1y2z3a4b5c6 web replicated 5/5 nginx:latest *:8080->80/tcp
To scale back down:
sudo docker service scale web=3
Step 8: Perform Rolling Updates
Rolling updates let you update service images without downtime. Swarm replaces containers one at a time (or in batches), keeping the service available throughout the process.
Update the Nginx service to a specific version with a 10-second delay between each container replacement:
sudo docker service update --image nginx:1.27 --update-delay 10s --update-parallelism 1 web
Watch the update progress in real time:
sudo docker service ps web
You can see the old containers shutting down and new ones starting with the updated image:
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE
s1t2u3v4w5x6 web.1 nginx:1.27 swarm-manager Running Running 15 seconds ago
a1b2c3d4e5f6 \_ web.1 nginx:latest swarm-manager Shutdown Shutdown 16 seconds ago
y7z8a9b0c1d2 web.2 nginx:1.27 swarm-worker1 Running Running 5 seconds ago
g7h8i9j0k1l2 \_ web.2 nginx:latest swarm-worker1 Shutdown Shutdown 6 seconds ago
e3f4g5h6i7j8 web.3 nginx:1.27 swarm-worker2 Running Running 25 seconds ago
m3n4o5p6q7r8 \_ web.3 nginx:latest swarm-worker2 Shutdown Shutdown 26 seconds ago
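You can also track progress numerically by counting tasks already Running on the new image. A sketch against a captured fragment of the output above (replace the variable with `sudo docker service ps web` output on a live cluster):

```shell
# Captured `docker service ps web` output during the update.
service_ps_sample='ID             NAME       IMAGE          NODE            DESIRED STATE   CURRENT STATE
s1t2u3v4w5x6   web.1      nginx:1.27     swarm-manager   Running         Running 15 seconds ago
a1b2c3d4e5f6    \_ web.1  nginx:latest   swarm-manager   Shutdown        Shutdown 16 seconds ago
y7z8a9b0c1d2   web.2      nginx:1.27     swarm-worker1   Running         Running 5 seconds ago'

# Count tasks Running on the target image.
updated=$(echo "$service_ps_sample" | grep -c 'nginx:1.27.*Running')
echo "$updated tasks on nginx:1.27"
```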
If an update goes wrong, roll back to the previous version:
sudo docker service rollback web
Step 9: Create an Overlay Network
Overlay networks allow containers on different nodes to communicate as if they were on the same local network. This is essential for multi-service applications where frontend and backend containers run on different nodes. For container networking basics, see our guide on running containers with Podman on Rocky Linux 10.
Create an overlay network on the manager node:
sudo docker network create --driver overlay --attachable app-network
The --attachable flag allows standalone containers (not just services) to connect to this network. Verify the network was created:
sudo docker network ls --filter driver=overlay
You should see both the default ingress network and your new overlay:
NETWORK ID NAME DRIVER SCOPE
a1b2c3d4e5f6 app-network overlay swarm
x7y8z9a0b1c2 ingress overlay swarm
Deploy two services on the same overlay network to demonstrate cross-node communication. First, create a Redis backend:
sudo docker service create --name redis --network app-network redis:latest
Then deploy a web app on the same network:
sudo docker service create --name webapp --network app-network --publish published=5000,target=80 --replicas 2 nginx:latest
Containers in both services can reach each other by service name. Swarm provides built-in DNS resolution – the webapp containers can connect to redis by using redis as the hostname.
Step 10: Inspect and Monitor Services
Docker provides several commands to inspect service health and configuration. View detailed information about a service:
sudo docker service inspect --pretty web
This shows the full service configuration including replicas, image, ports, and update settings:
ID:             x1y2z3a4b5c6
Name:           web
Service Mode:   Replicated
 Replicas:      3
Placement:
UpdateConfig:
 Parallelism:   1
 Delay:         10s
 On failure:    pause
ContainerSpec:
 Image:         nginx:1.27
Ports:
 PublishedPort = 8080
  Protocol = tcp
  TargetPort = 80
  PublishMode = ingress
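When a script needs a single field rather than the whole report, docker service inspect also accepts a Go template via --format, for example '{{.Spec.Mode.Replicated.Replicas}}'. As a plain-text alternative you can parse the pretty output; a sketch against a captured fragment so it runs without a live cluster:

```shell
# Fragment of `docker service inspect --pretty web` output.
inspect_sample='ID:             x1y2z3a4b5c6
Name:           web
Service Mode:   Replicated
 Replicas:      3'

# Pull the replica count from the Replicas: line.
replicas=$(echo "$inspect_sample" | awk '/Replicas:/ {print $2}')
echo "replicas=$replicas"
```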
Check the logs from all replicas of a service:
sudo docker service logs web --tail 20
This aggregates log output across all containers running the service, which is useful for debugging.
Step 11: Drain and Remove Nodes
When you need to perform maintenance on a node – OS updates, hardware work, or decommissioning – drain it first. Draining moves all running tasks to other available nodes.
Drain worker2 for maintenance:
sudo docker node update --availability drain swarm-worker2
Verify the node status changed:
sudo docker node ls
The drained node shows Drain under AVAILABILITY – Swarm will not schedule new tasks on it:
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
abc123def456 * swarm-manager Ready Active Leader 29.3.0
ghi789jkl012 swarm-worker1 Ready Active 29.3.0
mno345pqr678 swarm-worker2 Ready Drain 29.3.0
After maintenance is done, bring the node back to active:
sudo docker node update --availability active swarm-worker2
To permanently remove a worker node from the swarm, first run this on the worker node itself:
sudo docker swarm leave
Then remove it from the manager’s node list:
sudo docker node rm swarm-worker2
Step 12: Deploy Portainer for Web-Based Swarm Management
Portainer provides a web UI for managing Docker Swarm clusters – you can deploy services, view logs, manage networks, and monitor node health from a browser. Deploy it as a Swarm stack on the manager node.
Download the official Portainer CE Swarm deployment file:
curl -L https://downloads.portainer.io/ce2-21/portainer-agent-stack.yml -o portainer-agent-stack.yml
Deploy the Portainer stack:
sudo docker stack deploy -c portainer-agent-stack.yml portainer
Verify the Portainer services are running:
sudo docker stack services portainer
The Portainer server should show 1/1 replicas, while the agent runs in global mode with one task per node (3/3 in our cluster):
ID NAME MODE REPLICAS IMAGE PORTS
a1b2c3d4e5f6 portainer_agent global 3/3 portainer/agent:2.21.5
g7h8i9j0k1l2 portainer_portainer replicated 1/1 portainer/portainer-ce:2.21.5 *:9443->9443/tcp, *:9000->9000/tcp
Open port 9443 in the firewall on the manager node for the Portainer web interface:
sudo firewall-cmd --permanent --add-port=9443/tcp
sudo firewall-cmd --reload
Access the Portainer UI at https://10.0.1.11:9443. On first launch, create an admin user and password. Portainer auto-detects the Swarm environment and shows your cluster nodes, services, and containers in a single dashboard.
Step 13: Deploy a Multi-Service Stack
For production workloads, Docker Compose files define multi-service applications that Swarm deploys as a stack. Here’s an example stack with a web frontend and Redis backend.
Create a stack definition file:
sudo vi docker-stack.yml
Add the following service definitions:
version: "3.9"
services:
  web:
    image: nginx:1.27
    ports:
      - "8081:80"
    networks:
      - frontend
    deploy:
      replicas: 3
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure
  redis:
    image: redis:latest
    networks:
      - frontend
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role == manager
networks:
  frontend:
    driver: overlay
Deploy the stack:
sudo docker stack deploy -c docker-stack.yml myapp
Check the stack services:
sudo docker stack services myapp
Both services should be running with the specified replica counts:
ID NAME MODE REPLICAS IMAGE PORTS
a1b2c3d4e5f6 myapp_web replicated 3/3 nginx:1.27 *:8081->80/tcp
g7h8i9j0k1l2 myapp_redis replicated 1/1 redis:latest
To remove the stack and all its services:
sudo docker stack rm myapp
Step 14: Useful Docker Swarm Commands
Here is a reference of commonly used Docker Swarm commands for day-to-day cluster management. Managing containers effectively is a key skill – see our article on scanning Docker container images for vulnerabilities with Trivy for security best practices.
| Command | Description |
|---|---|
| docker node ls | List all nodes in the swarm |
| docker service ls | List all running services |
| docker service ps SERVICE | Show tasks (containers) for a service |
| docker service logs SERVICE | View aggregated logs for a service |
| docker service scale SERVICE=N | Scale a service to N replicas |
| docker service update --image IMAGE SERVICE | Update the image for a service |
| docker service rollback SERVICE | Roll back a service to the previous version |
| docker stack deploy -c FILE STACK | Deploy a stack from a Compose file |
| docker stack services STACK | List services in a stack |
| docker stack rm STACK | Remove a stack and its services |
| docker node update --availability drain NODE | Drain a node for maintenance |
| docker swarm join-token worker | Show the worker join token |
Conclusion
You now have a working Docker Swarm cluster on Rocky Linux 10 / AlmaLinux 10 with a manager node, two workers, deployed services, overlay networking, and Portainer for UI management. The cluster handles service scaling, rolling updates, and automatic container rescheduling when nodes go down.
For production environments, add at least two more manager nodes (3 or 5 total) for high availability – if the single manager fails, the entire cluster loses its control plane. Swarm already secures node-to-node control traffic with mutual TLS out of the box, so focus on rotating the cluster CA periodically (docker swarm ca --rotate) and enabling overlay network encryption (--opt encrypted) for sensitive application traffic. Beyond that, configure log aggregation with a centralized logging stack, and place a reverse proxy like Nginx in front of your services for SSL termination and domain-based routing.