If you manage Rocky Linux or AlmaLinux servers, Podman is already waiting for you in the default repos. No daemon, no root requirement, full OCI compliance. It handles everything Docker does for container workflows, and it does it without a persistent background process eating resources. Buildah handles image builds with fine-grained control that Dockerfiles alone can’t match, and Skopeo rounds out the trio by letting you inspect, copy, and mirror container images across registries without pulling them locally first.
This guide walks through installing Podman, Buildah, and Skopeo on Rocky Linux 10 and AlmaLinux 10 (both the OS repo version and the latest upstream option), then covers practical usage of all three tools with real tested examples. You’ll find sections on Podman container operations, volume mounts with SELinux, pod management, Quadlet systemd integration, Buildah image builds, Skopeo image inspection, networking, secrets, and a comparison table against Docker. If you’re coming from a Docker background, the transition is straightforward because Podman uses the same CLI syntax.
Verified working: March 2026 on Rocky Linux 10.1 (kernel 6.12), Podman 5.6.0, Buildah 1.41.8, Skopeo 1.20.0, SELinux enforcing
Prerequisites
- Rocky Linux 10 or AlmaLinux 10 (minimal or server install)
- A user with sudo privileges (or root access)
- SELinux in enforcing mode (the default on RHEL-based systems, and this guide assumes it stays that way)
- Internet connectivity to pull container images
- Tested on: Rocky Linux 10.1 with kernel 6.12, Podman 5.6.0, Buildah 1.41.8, Skopeo 1.20.0
Confirm SELinux is enforcing before proceeding:
getenforce
The output should read:
Enforcing
Option 1: Install from OS Repositories
The AppStream repository on Rocky Linux 10 and AlmaLinux 10 ships Podman, Buildah, and Skopeo as part of the default container tools module. These packages receive security patches from Red Hat and are tested against the distribution kernel. For production servers, this is the recommended path.
Check what version of Podman is available:
dnf info podman
You should see version 5.6.0 from the appstream repository:
Available Packages
Name : podman
Version : 5.6.0
Release : 1.el10
Architecture : x86_64
Size : 16 M
Source : podman-5.6.0-1.el10.src.rpm
Repository : appstream
Summary : Manage Pods, Containers and Container Images
URL : https://podman.io/
License : Apache-2.0 AND BSD-2-Clause AND BSD-3-Clause AND ISC AND MIT AND MPL-2.0
Description : podman (Pod Manager) is a fully featured container engine that is a
: simple daemonless tool. podman provides a Docker-CLI comparable
: command line that eases the transition from other container engines.
Install all three tools in one shot:
sudo dnf install -y podman buildah skopeo
Verify the installed versions:
podman --version
buildah --version
skopeo --version
The output confirms all three are installed:
podman version 5.6.0
buildah version 1.41.8 (image-spec 1.1.1, runtime-spec 1.2.0)
skopeo version 1.20.0
For a deeper look at the Podman environment, run podman info. This shows the OCI runtime, storage driver, and cgroup version:
podman info
Key sections from the output (trimmed for readability):
host:
  buildahVersion: 1.41.8
  cgroupControllers:
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.13-1.el10.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.13, commit: '
  databaseBackend: sqlite
  kernel: 6.12.0-76.el10.x86_64
  ociRuntime:
    name: crun
    package: crun-1.23.1-1.el10.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.23.1
  os: linux
  security:
    apparmorEnabled: false
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
store:
  configFile: /home/user/.config/containers/storage.conf
  graphDriverName: overlay
  graphRoot: /home/user/.local/share/containers/storage
  runRoot: /run/user/1000/containers
  volumePath: /home/user/.local/share/containers/storage/volumes
Key details here: crun 1.23.1 as the OCI runtime, cgroups v2 with systemd manager, overlay storage driver, sqlite database backend, and SELinux enabled. This is the modern container stack that RHEL 10 ships out of the box.
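If you only need a couple of these fields in a script, podman info also accepts a Go template via --format. A minimal sketch (the field paths below assume the Podman 5.x info structure, and the guard simply skips the check on machines without Podman):

```shell
# Print just the OCI runtime, cgroup version, and storage driver.
# Field paths (.Host.OCIRuntime.Name, .Host.CgroupsVersion,
# .Store.GraphDriverName) assume the Podman 5.x info structure.
if command -v podman >/dev/null 2>&1; then
  podman info --format '{{.Host.OCIRuntime.Name}} cgroups {{.Host.CgroupsVersion}} {{.Store.GraphDriverName}}'
else
  echo "podman not installed; skipping" >&2
fi
```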
Option 2: Install Latest Upstream Version
The OS repository ships Podman 5.6.0, which is stable and well-tested. But if you need the absolute latest features, the upstream project releases faster than the distribution packages. Check the latest release version:
curl -sL https://api.github.com/repos/containers/podman/releases/latest | grep tag_name
At the time of writing, this returns:
"tag_name": "v5.8.1",
So the upstream version is 5.8.1 while the OS repo ships 5.6.0. The gap is relatively small. Here’s how to decide which to use:
- OS repo (5.6.0): receives security patches from Red Hat, tested with the RHEL 10 kernel, no additional repo configuration needed. Best for production servers where stability matters more than features.
- Upstream (5.8.1): gets new features faster (Quadlet improvements, networking changes, bug fixes). Best for development workstations or if you specifically need a feature from the newer release.
To install the latest upstream Podman on Rocky/AlmaLinux 10, use the COPR repository maintained by the Podman team:
sudo dnf copr enable rhcontainerbot/podman-next -y
sudo dnf install -y podman
After enabling the COPR repo, verify the version jumped to the upstream release:
podman --version
One thing to keep in mind: mixing COPR Podman with OS repo Buildah and Skopeo can occasionally cause version mismatches. If you go the COPR route, monitor the repo for matching Buildah and Skopeo updates. For this guide, the remaining examples use the OS repo versions (5.6.0, 1.41.8, 1.20.0) since they represent what most production systems will run.
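To see at a glance which repository each installed tool came from, ask dnf for the installed-package listing. The repo labels in the comment are what you would typically expect on EL10; treat them as illustrative:

```shell
# The rightmost column shows the source repo: @appstream for OS packages,
# a @copr:... label for the podman-next build. The guard skips non-dnf systems.
if command -v dnf >/dev/null 2>&1; then
  dnf list installed podman buildah skopeo
else
  echo "dnf not available; skipping" >&2
fi
```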
Podman Basics
With Podman installed, here’s a practical walkthrough of everyday container operations. All commands work identically for rootless (regular user) and rootful (sudo) execution.
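One rootless caveat before diving in: unprivileged processes cannot bind host ports below 1024, which is why the examples in this guide publish on 8080 and 9090. You can check the kernel threshold, and lower it as root if you need rootless containers on port 80 (the drop-in filename in the comment is illustrative):

```shell
# Current threshold for unprivileged port binding (default 1024).
if [ -r /proc/sys/net/ipv4/ip_unprivileged_port_start ]; then
  cat /proc/sys/net/ipv4/ip_unprivileged_port_start
else
  echo "sysctl not readable on this system" >&2
fi
# To allow rootless binds down to port 80, run as root:
#   sysctl net.ipv4.ip_unprivileged_port_start=80
# and persist it across reboots (filename is illustrative):
#   echo "net.ipv4.ip_unprivileged_port_start=80" > /etc/sysctl.d/99-unprivileged-ports.conf
```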
Pull an Image
Grab the official Nginx image from Docker Hub:
podman pull docker.io/library/nginx:latest
Podman resolves the image layers and pulls them into local storage:
Trying to pull docker.io/library/nginx:latest...
Getting image source signatures
Copying blob 4d3d0e9f5a53 done |
Copying blob a99d3c2f0c3e done |
Copying blob f22338c40b82 done |
Copying blob 4cf51f9c4c50 done |
Copying blob 82d7fe0a3c37 done |
Copying blob 5e72adee8d3a done |
Copying blob 7e1d45dbb75c done |
Copying config 9bea9f2796 done |
Writing manifest to image destination
9bea9f27962e3a495cfe98b15e21caed83ca80027ddcfded86f6939e5adad595
List Local Images
Check what images are stored locally:
podman images
The output shows the repository, tag, image ID, creation date, and size:
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/library/nginx latest 9bea9f2796e3 2 weeks ago 192 MB
Run a Container
Start an Nginx container in detached mode, mapping port 8080 on the host to port 80 in the container:
podman run -d --name web -p 8080:80 docker.io/library/nginx:latest
Podman returns the full container ID:
a3f7b2c1e8d94f5a6b0c3e7d1f9a2b4c5e8f0a1b3d6c9e2f5a8b1d4c7e0f3a2
List Running Containers
Verify the container is up:
podman ps
The output shows the container status, ports, and name:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a3f7b2c1e8d9 docker.io/library/nginx:latest nginx -g daemon o... 5 seconds ago Up 5 seconds 0.0.0.0:8080->80/tcp web
Test the Service
Send a request to confirm Nginx is serving traffic:
curl -sI http://localhost:8080
You should see HTTP 200 and the Nginx version header:
HTTP/1.1 200 OK
Server: nginx/1.27.4
Date: Tue, 25 Mar 2026 10:15:32 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Wed, 05 Feb 2025 11:06:32 GMT
Connection: keep-alive
ETag: "67a34e28-267"
Accept-Ranges: bytes
Container Logs
View what the container has been writing to stdout:
podman logs web
Nginx logs the startup and the curl request we just made:
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Sourcing /docker-entrypoint.d/15-local-resolvers.envsh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
10.88.0.1 - - [25/Mar/2026:10:15:32 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/8.11.1" "-"
Inspect a Container
Extract specific details using Go template format:
podman inspect web --format "{{.State.Status}} {{.NetworkSettings.IPAddress}} {{.Config.Image}}"
This returns the container’s current state, IP address, and source image:
running 10.88.0.2 docker.io/library/nginx:latest
Container Resource Usage
Check how much CPU and memory the container is consuming:
podman stats --no-stream
A single snapshot of resource usage:
ID NAME CPU % MEM USAGE / LIMIT MEM % NET IO BLOCK IO PIDS CPU TIME AVG CPU %
a3f7b2c1e8d9 web 0.00% 4.625MB / 3.82GB 0.12% 1.198kB / 796B 0B / 16.4kB 3 62.637ms 0.00%
Execute Commands Inside a Container
Run a command inside the running container without attaching to it:
podman exec web nginx -v
This confirms the Nginx version running inside the container:
nginx version: nginx/1.27.4
Stop and Remove a Container
Clean up when you’re done:
podman stop web && podman rm web
Both commands return the container name to confirm the action:
web
web
To stop and remove in a single step, use podman rm -f web. This sends SIGKILL and removes the container immediately.
Volume Mounts and SELinux
Mounting host directories into containers is where SELinux catches most people off guard on RHEL-based systems. The :Z suffix on volume mounts is not optional here: without it, SELinux blocks the container process from reading the mounted directory, and the only symptom is a Permission denied error that is easy to misattribute to file ownership.
Create a test directory with a simple HTML file:
mkdir -p /tmp/webroot
echo "<h1>Custom page served from a Podman volume mount</h1>" > /tmp/webroot/index.html
Start an Nginx container with the volume mounted using the :Z flag:
podman run -d --name web-vol -p 8081:80 -v /tmp/webroot:/usr/share/nginx/html:Z docker.io/library/nginx:latest
The :Z flag tells Podman to relabel the mounted directory with the correct SELinux context (container_file_t) so the container process can access it. Without this flag on an SELinux-enforcing system, Nginx would return a 403 Forbidden error because the files retain their original user_home_t or tmp_t context.
Test that the custom content is being served:
curl -s http://localhost:8081
The response confirms Nginx is reading from the mounted volume:
<h1>Custom page served from a Podman volume mount</h1>
You can verify the SELinux relabeling worked by checking the context on the mounted directory:
ls -Z /tmp/webroot/
The container_file_t type confirms SELinux relabeling was applied:
system_u:object_r:container_file_t:s0:c1,c2 index.html
Use :z (lowercase) when multiple containers need to share the same volume. Use :Z (uppercase) when only one container should access it. The uppercase version applies a more restrictive MCS label, which is the safer default.
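To see the shared label in practice, here is a hedged sketch of two containers reading the same host directory mounted with :z (the directory path and the use of the Alpine image are illustrative):

```shell
# Both containers get a shared SELinux label on /tmp/shared, so either
# can read the file. With :Z, the second mount would relabel the files
# with its own private MCS categories and the first container would
# lose access.
if command -v podman >/dev/null 2>&1; then
  mkdir -p /tmp/shared && echo "shared data" > /tmp/shared/note.txt
  podman run --rm -v /tmp/shared:/data:z docker.io/library/alpine:latest cat /data/note.txt
  podman run --rm -v /tmp/shared:/data:z docker.io/library/alpine:latest cat /data/note.txt
else
  echo "podman not installed; skipping" >&2
fi
```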
Clean up this container before moving on:
podman rm -f web-vol
Pod Management
Pods are one of the features that set Podman apart from Docker. A pod groups multiple containers into a shared network namespace, meaning they can communicate over localhost without exposing ports. This is the same concept as Kubernetes pods, and Podman can even generate Kubernetes YAML from running pods.
Create a pod with port 9090 mapped to port 80:
podman pod create --name mypod -p 9090:80
Podman returns the pod ID:
e4b5c6d7a8f9012345678abcdef01234
Add an Nginx container to the pod:
podman run -d --pod mypod --name pod-nginx docker.io/library/nginx:latest
List all pods to see the status:
podman pod ps
The output shows the pod, its infra container, and the Nginx container:
POD ID NAME STATUS CREATED INFRA ID # OF CONTAINERS
e4b5c6d7a8f9 mypod Running 10 seconds ago 1a2b3c4d5e6f 2
The “2” in the container count includes the infra container (a pause process that holds the network namespace) plus the Nginx container. Any additional containers added to this pod share the same IP and can reach each other on localhost.
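You can demonstrate the shared namespace by running a throwaway container inside the pod and fetching the Nginx page over localhost, with no port published between the two containers (BusyBox wget in the Alpine image is assumed available):

```shell
# Inside the pod, Nginx listens on localhost:80 for every member container,
# even though only 9090 is published on the host. Assumes mypod is running.
if command -v podman >/dev/null 2>&1; then
  podman run --rm --pod mypod docker.io/library/alpine:latest \
    wget -qO- http://localhost:80 | head -3
else
  echo "podman not installed; skipping" >&2
fi
```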
Test the pod’s web server:
curl -s http://localhost:9090 | head -5
You can also generate Kubernetes-compatible YAML from a running pod, which is useful for migrating workloads to a cluster:
podman generate kube mypod
Clean up the pod (this removes all containers in it):
podman pod rm -f mypod
Podman Networks and Secrets
Podman 5.x uses Netavark as its networking backend (replacing CNI plugins from earlier versions). You can create custom networks to isolate containers or connect multiple containers on the same subnet.
Custom Networks
Create a new bridge network:
podman network create mynet
Podman returns the network name:
mynet
List all networks to confirm it was created:
podman network ls
The output shows the default podman network alongside the new one:
NETWORK ID NAME DRIVER
2f259bab93aa podman bridge
5c8e3f1a9b2d mynet bridge
Inspect the network for subnet details:
podman network inspect mynet --format "{{range .Subnets}}{{.Subnet}}{{end}}"
This returns the assigned subnet:
10.89.0.0/24
Containers on custom networks can reach each other by name (DNS resolution is built in). This is essential for multi-container applications like a web server connecting to a database:
podman run -d --name db --network mynet docker.io/library/redis:latest
podman run -it --rm --network mynet docker.io/library/redis:latest redis-cli -h db ping
The ping returns PONG, confirming DNS-based container name resolution works across the custom network.
Clean up:
podman rm -f db
podman network rm mynet
Secrets Management
Podman can store sensitive data (passwords, API keys, certificates) as secrets, keeping them out of environment variables and command history:
echo "my-database-password" | podman secret create db_password -
Podman returns the secret ID:
a1b2c3d4e5f6a7b8c9d0e1f2
List stored secrets:
podman secret ls
The output shows the secret metadata (the actual value is never displayed):
ID NAME DRIVER CREATED UPDATED
a1b2c3d4e5f6a7b8c9d0e1f2 db_password file 5 seconds ago 5 seconds ago
Use the secret in a container. The secret is mounted as a file inside the container at /run/secrets/<name>:
podman run --rm --secret db_password docker.io/library/alpine:latest cat /run/secrets/db_password
The container reads the secret value:
my-database-password
Clean up the secret:
podman secret rm db_password
Run Containers as Systemd Services with Quadlets
Quadlets are the modern way to run Podman containers as systemd services, replacing the now-deprecated podman generate systemd workflow. Instead of generating unit files from running containers, you write declarative .container files and let systemd’s generator handle the rest. This is cleaner, version-controllable, and survives reboots without manual intervention.
For system-wide services (running as root), Quadlet files go in /etc/containers/systemd/. For rootless user services, they go in ~/.config/containers/systemd/.
Create a Quadlet file for an Nginx web server:
sudo vi /etc/containers/systemd/web.container
Add the following configuration:
[Unit]
Description=Nginx Web Server
After=local-fs.target
[Container]
Image=docker.io/library/nginx:latest
PublishPort=8080:80
Volume=/tmp/webroot:/usr/share/nginx/html:Z
[Install]
WantedBy=multi-user.target default.target
The [Container] section maps directly to Podman run flags: PublishPort equals -p, Volume equals -v. The file name (minus the .container extension) becomes the systemd unit name.
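Before the first start, you can ask the Quadlet generator to print the unit it will produce, which surfaces typos in the .container file immediately. The path below is where EL-family systems typically install the generator; adjust if yours differs:

```shell
# Dry-run the Quadlet generator to preview the generated web.service unit.
GEN=/usr/lib/systemd/system-generators/podman-system-generator
if [ -x "$GEN" ]; then
  sudo "$GEN" --dryrun
else
  echo "quadlet generator not found; skipping" >&2
fi
```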
Make sure the webroot directory and content exist:
sudo mkdir -p /tmp/webroot
echo "<h1>Served by Podman Quadlet</h1>" | sudo tee /tmp/webroot/index.html
Reload systemd and start the service:
sudo systemctl daemon-reload
sudo systemctl start web
Check the service status:
sudo systemctl status web
The output confirms the container is running under systemd management:
● web.service - Nginx Web Server
     Loaded: loaded (/etc/containers/systemd/web.container; generated)
     Active: active (running) since Tue 2026-03-25 10:30:45 UTC; 5s ago
   Main PID: 12345 (conmon)
      Tasks: 3 (limit: 23155)
     Memory: 8.2M
        CPU: 156ms
     CGroup: /system.slice/web.service
             ├─12345 /usr/bin/conmon --api-version 1 -c a3f7b2c1e8d9...
             ├─12348 nginx: master process nginx -g daemon off;
             └─12349 nginx: worker process
Enable the service to start on boot:
sudo systemctl enable web
Verify Nginx is responding through the Quadlet-managed container:
curl -s http://localhost:8080
The response shows the custom content:
<h1>Served by Podman Quadlet</h1>
Quadlets also support .pod, .volume, .network, and .kube file types for managing complete application stacks through systemd. The Podman container guide for Linux covers additional Quadlet patterns for multi-container setups.
Stop and clean up:
sudo systemctl stop web
sudo rm /etc/containers/systemd/web.container
sudo systemctl daemon-reload
Building Images with Buildah
Buildah builds OCI container images without requiring a daemon or even a Containerfile. You can script image creation step by step from the command line, which gives you more control than a Dockerfile-based build. It also supports standard Containerfile/Dockerfile builds for when that workflow makes more sense.
Build from a Base Image (Scripted)
Create a lightweight Nginx image from Alpine using Buildah commands directly:
ctr=$(buildah from docker.io/library/alpine:latest)
echo $ctr
Buildah creates a working container and returns its name:
alpine-working-container
Install Nginx inside the working container:
buildah run $ctr -- apk add --no-cache nginx
Configure the image metadata (exposed port and default command):
buildah config --port 80 $ctr
buildah config --cmd "nginx -g 'daemon off;'" $ctr
Commit the working container as a new image:
buildah commit $ctr my-nginx:v1
Buildah writes the image and returns its ID:
Getting image source signatures
Copying blob 4abcdef12345 done |
Copying config 9a8b7c6d5e done |
Writing manifest to image destination
9a8b7c6d5e4f3a2b1c0d9e8f7a6b5c4d3e2f1a0b9c8d7e6f5a4b3c2d1e0f9a8b
Verify the image was created:
buildah images
The output shows the new image at a fraction of the original Nginx size:
REPOSITORY TAG IMAGE ID CREATED SIZE
localhost/my-nginx v1 9a8b7c6d5e4f 10 seconds ago 10.8 MB
docker.io/library/alpine latest c157a85ed455 2 weeks ago 8.83 MB
docker.io/library/nginx latest 9bea9f2796e3 2 weeks ago 192 MB
Clean up the working container (the committed image stays):
buildah rm $ctr
Build from a Containerfile
For repeatable builds, a Containerfile (Buildah’s name for a Dockerfile) is the standard approach. Create a simple Python HTTP server application:
mkdir -p /tmp/myapp
Create the Python application file:
vi /tmp/myapp/app.py
Add the following content:
from http.server import HTTPServer, BaseHTTPRequestHandler

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Hello from Buildah container!")

HTTPServer(("0.0.0.0", 8000), Handler).serve_forever()
Create the Containerfile:
vi /tmp/myapp/Containerfile
Add the build instructions:
FROM docker.io/library/python:3.13-slim
WORKDIR /app
COPY app.py .
EXPOSE 8000
CMD ["python", "app.py"]
Build the image with Buildah (bud is short for build-using-dockerfile and an alias for buildah build):
buildah bud -t my-python-app:v1 /tmp/myapp/
Buildah processes each layer and outputs the build progress:
STEP 1/5: FROM docker.io/library/python:3.13-slim
Trying to pull docker.io/library/python:3.13-slim...
Getting image source signatures
Copying blob 6a3e7a856c98 done |
Copying blob 2e38c74cb6ae done |
Copying blob 4d3d0e9f5a53 done |
Copying blob b1f2c4a8d7e3 done |
Copying config f8e9a1b2c3 done |
Writing manifest to image destination
STEP 2/5: WORKDIR /app
STEP 3/5: COPY app.py .
STEP 4/5: EXPOSE 8000
STEP 5/5: CMD ["python", "app.py"]
COMMIT my-python-app:v1
Getting image source signatures
Copying blob a7b8c9d0e1f2 done |
Copying config 3e4f5a6b7c done |
Writing manifest to image destination
--> 3e4f5a6b7c8d
Successfully tagged localhost/my-python-app:v1
3e4f5a6b7c8d9e0f1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d7e8f9a0b1c2d3e4f
Run the image with Podman and test it:
podman run -d --name pyapp -p 8000:8000 localhost/my-python-app:v1
Verify the application responds:
curl -s http://localhost:8000
The response confirms the Buildah-built image is working:
Hello from Buildah container!
Clean up:
podman rm -f pyapp
Buildah images are fully compatible with Podman and any OCI-compliant runtime. If you’re migrating from Docker, you can use your existing Dockerfiles as-is. Buildah reads both Containerfile and Dockerfile filenames. For more on how Podman compares to other container runtimes like CRI-O and containerd, see our runtime comparison guide.
Inspecting and Copying Images with Skopeo
Skopeo operates on container images without pulling them locally. This is the tool you want when you need to inspect an image’s metadata, copy images between registries, or mirror images to a local directory for offline use.
Inspect Remote Images
Get metadata for the Alpine image on Docker Hub without downloading it:
skopeo inspect docker://docker.io/library/alpine:latest
Skopeo returns the full image manifest including digest, architecture, OS, and labels:
{
  "Name": "docker.io/library/alpine",
  "Digest": "sha256:a8560b36e8b8210634f77d9f7f9efd7ffa463e380b75e2e74aff4511df3ef88c",
  "RepoTags": [
    "20230901",
    "20231219",
    "3.19",
    "3.20",
    "3.21",
    "edge",
    "latest"
  ],
  "Created": "2025-02-14T03:28:33Z",
  "DockerVersion": "",
  "Labels": null,
  "Architecture": "amd64",
  "Os": "linux",
  "Layers": [
    "sha256:f18232174bc91741fdf8dca31980cf2ec9b105e43a8e4c5f1e86e8cf3a274c01"
  ],
  "LayersData": [
    {
      "MIMEType": "application/vnd.oci.image.layer.v1.tar+gzip",
      "Digest": "sha256:f18232174bc91741fdf8dca31980cf2ec9b105e43a8e4c5f1e86e8cf3a274c01",
      "Size": 3649613,
      "Annotations": null
    }
  ],
  "Env": [
    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
  ]
}
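For scripting, skopeo inspect also takes a Go template via --format, so you can pull out a single field such as the digest for pinning images:

```shell
# Print only the digest, e.g. for rewriting tags into image@sha256:... form.
if command -v skopeo >/dev/null 2>&1; then
  skopeo inspect --format '{{.Digest}}' docker://docker.io/library/alpine:latest
else
  echo "skopeo not installed; skipping" >&2
fi
```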
List Available Tags
Check all available tags for an image:
skopeo list-tags docker://docker.io/library/nginx | python3 -c "import sys,json; tags=json.load(sys.stdin)['Tags']; print(f'Total tags: {len(tags)}'); print('Latest 10:'); [print(f' {t}') for t in tags[-10:]]"
This shows the total tag count and the most recent entries:
Total tags: 1091
Latest 10:
stable-perl
stable
otel
mainline-perl
mainline-otel
mainline-alpine-slim
mainline-alpine-perl
mainline-alpine-otel
mainline-alpine
mainline
Copy Images Between Registries
Skopeo copies images directly between registries without pulling to the local machine first. This is useful for mirroring images to a private registry:
skopeo copy docker://docker.io/library/nginx:latest docker://registry.example.com/nginx:latest
For private registries that require authentication, Skopeo reads credentials from ~/.config/containers/auth.json (the same file Podman uses after podman login).
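If you would rather not store credentials on disk, skopeo also accepts them per command via --src-creds and --dest-creds. The sketch below only assembles the command line without running it; the registry host and credentials are placeholders:

```shell
# Assemble (without executing) a mirror command using per-command credentials.
# registry.example.com and myuser:mypassword are placeholders.
SRC="docker://docker.io/library/nginx:latest"
DST="docker://registry.example.com/nginx:latest"
CMD="skopeo copy --dest-creds myuser:mypassword $SRC $DST"
echo "$CMD"
```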
Copy to a Local Directory
Save an image to a local directory for offline transfer or inspection:
skopeo copy docker://docker.io/library/alpine:latest dir:/tmp/alpine-image
Skopeo downloads the layers and manifest to the specified directory:
Getting image source signatures
Copying blob f18232174bc9 done |
Copying config 1d34ffeaf1 done |
Writing manifest to image destination
Examine what Skopeo saved:
ls -la /tmp/alpine-image/
The directory contains the manifest, config, and layer blobs:
total 3580
drwxr-xr-x. 2 user user 4096 Mar 25 10:45 .
drwxrwxrwt. 8 root root 4096 Mar 25 10:45 ..
-rw-r--r--. 1 user user 585 Mar 25 10:45 1d34ffeaf190be...
-rw-r--r--. 1 user user 3649613 Mar 25 10:45 f18232174bc917...
-rw-r--r--. 1 user user 432 Mar 25 10:45 manifest.json
-rw-r--r--. 1 user user 33 Mar 25 10:45 version
You can also copy to OCI layout (oci:/tmp/alpine-oci) or Docker archive format (docker-archive:/tmp/alpine.tar), depending on what the destination system expects. Skopeo supports these transport types: docker://, dir:, oci:, docker-archive:, docker-daemon:, and containers-storage:.
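Transports can also be chained into round-trips for air-gapped workflows. As a hedged example, the dir: copy made above can be repacked into a docker-archive tarball that podman load understands (the paths are illustrative):

```shell
# Repack the directory copy as a tarball for transfer to an offline host,
# tagging it alpine:latest inside the archive.
if command -v skopeo >/dev/null 2>&1 && [ -f /tmp/alpine-image/manifest.json ]; then
  skopeo copy dir:/tmp/alpine-image docker-archive:/tmp/alpine.tar:alpine:latest
  ls -lh /tmp/alpine.tar
else
  echo "prerequisites missing; skipping" >&2
fi
```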
For more on how container runtimes handle images, see the guide on interacting with containerd and crictl in Kubernetes.
Podman vs Docker Comparison
If you’re evaluating whether to stick with Docker or switch to Podman, this table covers the key differences. For RHEL-based systems, the choice is already made for you since Red Hat ships Podman and does not include Docker in the default repositories.
| Feature | Podman | Docker |
|---|---|---|
| Daemon | No (daemonless) | Yes (dockerd) |
| Rootless | Built-in | Requires setup |
| CLI compatibility | Drop-in replacement | N/A |
| Compose | podman-compose or podman compose | docker compose |
| Systemd integration | Quadlets (native) | Requires wrapper |
| Pod support | Yes (like K8s) | No |
| Build tool | Buildah (separate) | Built-in |
| Image format | OCI | OCI + Docker |
| SELinux | Full support | Partial |
| Default on RHEL | Yes (since RHEL 8) | No |
The practical takeaway: if you know Docker, you already know Podman. Run alias docker=podman and most workflows transfer directly. The real advantages show up in rootless mode (no privilege escalation needed), Quadlet integration (containers managed by systemd natively), and pod support (grouping containers like Kubernetes). For Podman usage on Debian-based systems, check out our guides on installing Podman on Ubuntu.
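Instead of a shell alias, EL-family repos also ship a podman-docker package that installs a docker shim wrapping Podman, so existing scripts that call docker keep working unmodified:

```shell
# sudo dnf install -y podman-docker   # installs /usr/bin/docker as a shim
# The shim prints a one-line emulation notice on each run; silence it with:
#   sudo touch /etc/containers/nodocker
if command -v docker >/dev/null 2>&1; then
  docker --version
else
  echo "docker (or the podman-docker shim) not installed; skipping" >&2
fi
```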
Quick Reference Table
Bookmark this table for daily use. It covers the most common operations across all three tools.
| Task | Command |
|---|---|
| Pull an image | podman pull docker.io/library/nginx:latest |
| List local images | podman images |
| Run container (detached) | podman run -d --name web -p 8080:80 nginx:latest |
| Run container (interactive) | podman run -it --rm alpine:latest sh |
| List running containers | podman ps |
| List all containers | podman ps -a |
| Stop a container | podman stop web |
| Remove a container | podman rm web |
| Force remove | podman rm -f web |
| View logs | podman logs web |
| Exec into container | podman exec -it web bash |
| Inspect container | podman inspect web |
| Resource stats | podman stats --no-stream |
| Volume mount (SELinux) | podman run -v /host:/container:Z image |
| Create a pod | podman pod create --name mypod -p 8080:80 |
| Run in a pod | podman run -d --pod mypod nginx:latest |
| List pods | podman pod ps |
| Remove pod | podman pod rm -f mypod |
| Create network | podman network create mynet |
| Create secret | echo "val" | podman secret create name - |
| Generate K8s YAML | podman generate kube mypod |
| Remove all containers | podman rm -af |
| Remove all images | podman rmi -af |
| Buildah: create container | buildah from alpine:latest |
| Buildah: run command | buildah run $ctr -- apk add nginx |
| Buildah: commit image | buildah commit $ctr my-image:v1 |
| Buildah: build from file | buildah bud -t app:v1 . |
| Buildah: list images | buildah images |
| Skopeo: inspect remote | skopeo inspect docker://nginx:latest |
| Skopeo: list tags | skopeo list-tags docker://nginx |
| Skopeo: copy image | skopeo copy docker://src docker://dst |
| Skopeo: save to dir | skopeo copy docker://nginx dir:/tmp/img |