Kubernetes 1.24 removed dockershim, and that forced every cluster operator to pick a container runtime that speaks CRI natively. The two real options are containerd and CRI-O. containerd still runs underneath Docker (it always has), and Docker remains the go-to tool for building images and local development. But in production Kubernetes, Docker as a runtime is out of the picture.
This guide breaks down Docker, containerd, and CRI-O with real version data, architecture details, and practical guidance on when each one makes sense. If you’re setting up a Kubernetes cluster or evaluating runtimes for an existing deployment, the comparison tables and CLI equivalents here will save you time.
Tested March 2026 | Kubernetes 1.35.1, Docker CE 29.3.0, containerd 2.1, CRI-O 1.35
How Container Runtimes Fit in the Stack
Before comparing the three, it helps to understand where each one sits. A container runtime isn’t a single binary. There are high-level runtimes (Docker, containerd, CRI-O) that manage images, networking, and lifecycle, and low-level runtimes (runc, crun) that actually create the Linux namespaces and cgroups. The Kubernetes kubelet talks to the high-level runtime through the Container Runtime Interface (CRI).
| Layer | Docker | containerd | CRI-O |
|---|---|---|---|
| Kubernetes CRI | Not CRI-native (removed in 1.24) | Built-in CRI plugin | Purpose-built for CRI |
| High-level runtime | dockerd + containerd | containerd daemon | CRI-O daemon |
| Image management | Docker image store | containerd image store | containers/image library |
| Low-level runtime (OCI) | runc (via containerd) | runc (default) | runc or crun |
| Primary interface | Docker CLI / API | CRI gRPC + ctr/nerdctl | CRI gRPC + crictl |
| Scope | Full developer platform | Core container runtime | Kubernetes-only runtime |
The key insight: Docker has always used containerd internally. When you run docker run, the Docker daemon hands off to containerd, which hands off to runc. Kubernetes cutting out Docker just means the kubelet now talks to containerd directly, skipping the Docker daemon layer entirely.
Head-to-Head Comparison
This table covers the major decision points across all three runtimes, based on their current releases as of March 2026.
| Feature | Docker CE 29.3.0 | containerd 2.1 | CRI-O 1.35 |
|---|---|---|---|
| CRI compliance | No (dockershim removed in K8s 1.24) | Yes, built-in CRI plugin | Yes, purpose-built for CRI |
| Image building | Yes (docker build, BuildKit) | No (use BuildKit or kaniko separately) | No (use Buildah or kaniko) |
| Kubernetes support | Not supported as K8s runtime since 1.24 | Default runtime in most K8s distributions | Default runtime in OpenShift |
| Standalone container use | Full support | Yes, via ctr or nerdctl | No standalone mode |
| CLI tool | docker | ctr, nerdctl, crictl | crictl only |
| Resource footprint | Highest (dockerd + containerd + runc) | Medium (containerd + runc) | Lowest (single daemon + runc/crun) |
| OCI compliance | Yes | Yes | Yes |
| Image pull performance | Standard pull, supports lazy pulling via stargz | Supports stargz and eStargz lazy pulling natively | Standard pull |
| CNI support | Uses Docker networking (bridge, overlay) | Full CNI plugin support | Full CNI plugin support |
| Security features | Seccomp, AppArmor, SELinux, rootless mode | Seccomp, AppArmor, SELinux, rootless containers | Seccomp, AppArmor, SELinux, read-only rootfs default |
| Logging | Multiple log drivers (json-file, syslog, fluentd) | CRI log format (compatible with K8s log collection) | CRI log format, journald integration |
| Update mechanism | Package manager (apt/dnf) | Package manager or bundled with K8s installers | Versioned with Kubernetes (1:1 version mapping) |
| Docker Compose support | Native (docker compose) | Via nerdctl (nerdctl compose) | Not applicable |
| Windows container support | Yes | Yes | No |
| Maintained by | Docker Inc. (Moby project) | CNCF (graduated project) | CNCF (graduated project, Red Hat driven) |
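The logging row above is worth making concrete, because it is one of the few differences that affects day-to-day node operations. Docker's json-file driver wraps each application line in a JSON object, while CRI runtimes write a plain space-delimited line. A quick sketch with hypothetical sample lines:

```bash
# Hypothetical sample lines illustrating the two on-disk log formats.
docker_line='{"log":"hello from app\n","stream":"stdout","time":"2026-03-01T12:00:00Z"}'
cri_line='2026-03-01T12:00:00.000000000Z stdout F hello from app'

# json-file driver: each line is a JSON object; extract the "log" field.
echo "$docker_line" | sed 's/.*"log":"\([^"]*\)\\n".*/\1/'

# CRI format: timestamp, stream, tag (F = full line, P = partial), then the message.
echo "$cri_line" | cut -d' ' -f4-
```

Both commands recover the same application line. This is why switching runtimes changes node-level log collection configuration, but never changes how applications write their logs.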
Docker
Docker is the tool that popularized containers. It bundles image building, container management, networking, volumes, and a developer-friendly CLI into one package. Under the hood, Docker CE 29.3.0 consists of three main components: the Docker daemon (dockerd), containerd (the actual container runtime), and runc (the OCI-compliant low-level runtime that spawns containers).
Even though Kubernetes no longer uses Docker as a runtime, every image you build with docker build is an OCI image that runs on containerd and CRI-O without modification. Docker didn’t lose its place in the container ecosystem. It lost its place as a Kubernetes runtime specifically because the kubelet needed to talk CRI, and wrapping Docker through dockershim added complexity and latency.
Docker Components
The Docker engine is actually a stack of three layers working together:
- dockerd – The Docker daemon that exposes the Docker API. Handles image builds, volumes, networks, and the docker CLI interactions
- containerd – Manages the full container lifecycle (pull, create, start, stop, delete). Docker delegates all runtime operations to containerd
- runc – Creates the actual Linux container (namespaces, cgroups, seccomp). containerd invokes runc for each container
This layered architecture means that when Kubernetes talks to containerd directly, it’s using the same runtime that Docker uses. The only difference is that the Docker daemon and its API are no longer in the path.
Docker CLI and Compose
Docker’s CLI remains the most mature and feature-rich container management tool available. Building images, running containers, managing networks, and debugging are all first-class operations.
```bash
docker build -t myapp:v1.0 .
docker run -d --name myapp -p 8080:80 myapp:v1.0
docker logs -f myapp
docker exec -it myapp /bin/sh
```
Docker Compose (now integrated as docker compose rather than the old docker-compose binary) handles multi-container applications with a single YAML file. For local development environments that need a database, cache, and application server running together, Compose is hard to beat.
```bash
docker compose up -d
docker compose logs -f
docker compose down -v
```
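For reference, a minimal compose.yaml for the database-plus-app pattern described above. The service names, images, and ports here are illustrative, not prescriptive:

```yaml
services:
  app:
    build: .
    ports:
      - "8080:80"
    depends_on:
      - db
  db:
    image: postgres:17
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - db-data:/var/lib/postgresql/data

volumes:
  db-data:
```

Note that `docker compose down -v` removes the named `db-data` volume along with the containers, so reserve the `-v` flag for when you actually want a clean slate.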
Where Docker Fits Today
Docker is a development tool. Build images locally, test with Compose, push to a registry, and deploy to Kubernetes where containerd or CRI-O takes over. Trying to run Docker as a Kubernetes runtime in 2026 means using an unsupported configuration. The images are the same either way, so there’s no compatibility concern.
containerd
containerd started as Docker’s internal runtime and became a standalone CNCF graduated project. It handles image transfer, container execution, storage, and networking at the system level. Version 2.1 is the current release, and it ships as the default runtime in Kubernetes distributions including kubeadm, Minikube (tested with Kubernetes 1.35.1), k3s, and most managed Kubernetes services (EKS, GKE, AKS).
The CRI plugin is compiled directly into the containerd binary. No separate process, no dockershim-style adapter. The kubelet connects to containerd’s Unix socket, and containers start through runc. This is the simplest path from Kubernetes to a running container.
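The CRI side of containerd is configured in /etc/containerd/config.toml. A minimal sketch using the version 2 config format is below; the exact plugin path differs between containerd releases (containerd 2.x introduced a version 3 format with renamed plugin sections), so compare against the output of `containerd config default` on your version:

```toml
version = 2

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"

  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    # Must match the kubelet's cgroup driver; systemd is the usual choice
    # on systemd-based distributions.
    SystemdCgroup = true
```

A mismatched cgroup driver between the kubelet and containerd is one of the most common causes of pods failing to start after a runtime migration, so this one setting deserves a double check.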
CLI Tools: ctr, nerdctl, and crictl
containerd has three CLI options, each serving a different purpose. For a detailed walkthrough of ctr and crictl usage in a Kubernetes context, see the containerd runtime interaction guide.
ctr is the low-level containerd client. It’s useful for debugging but not designed for daily use:
```bash
ctr images pull docker.io/library/nginx:latest
ctr run --rm docker.io/library/nginx:latest nginx-test
```
nerdctl provides a Docker-compatible CLI on top of containerd. If you want Docker-like commands without the Docker daemon, nerdctl is the answer:
```bash
nerdctl run -d --name web -p 8080:80 nginx:latest
nerdctl build -t myapp:v1.0 .
nerdctl compose up -d
```
The syntax is nearly identical to Docker. nerdctl supports image building (via BuildKit), compose files, and volume management.
crictl is the CRI-specific debugging tool for Kubernetes nodes. It talks to the CRI socket and shows what the kubelet sees:
```bash
crictl ps
crictl images
crictl logs CONTAINER_ID
crictl pods
```
On a production Kubernetes node, crictl is the right tool for troubleshooting. It works with both containerd and CRI-O.
Why containerd Is the Default
Most Kubernetes distributions default to containerd because it strikes the right balance: lightweight enough for production, feature-rich enough for edge cases (lazy image pulling, snapshotter plugins, runtime class support), and battle-tested through years of running inside Docker. It also supports standalone container use, which matters for CI/CD runners and non-Kubernetes workloads on the same node.
CRI-O
CRI-O exists for one reason: to be a Kubernetes container runtime and nothing else. It implements the CRI specification, pulls OCI images, and runs OCI containers. That’s it. No image building, no standalone container support, no compose files. This singular focus is both its strength and its limitation.
Red Hat created CRI-O, and it’s the default runtime in OpenShift. Version 1.35 matches Kubernetes 1.35, because CRI-O follows Kubernetes version numbering exactly. When Kubernetes 1.36 ships, CRI-O 1.36 follows. This tight coupling means you never have to wonder about compatibility.
Design Philosophy
CRI-O’s codebase is smaller than containerd’s because it doesn’t implement features Kubernetes doesn’t need: no snapshotter plugins, no content store API, no general-purpose client API. It delegates to well-known libraries: containers/image for pulling, containers/storage for the image store, and runc or crun for execution.
The crun runtime (written in C) is the default on OpenShift and uses slightly less memory than runc (written in Go). For large-scale clusters where thousands of containers start and stop per minute, this adds up.
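Selecting crun as the default low-level runtime is a small drop-in configuration change. A sketch of the relevant section in /etc/crio/crio.conf (or a file under crio.conf.d/); the binary path shown is an assumption and may differ by distribution:

```toml
[crio.runtime]
default_runtime = "crun"

[crio.runtime.runtimes.crun]
runtime_path = "/usr/bin/crun"
runtime_type = "oci"
```

After restarting CRI-O, new pods use crun while existing containers keep running under the runtime that started them.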
CRI-O on Kubernetes
Operating CRI-O in Kubernetes is straightforward. The kubelet configuration points to the CRI-O socket, and crictl is used for node-level debugging:
```bash
crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps
crictl --runtime-endpoint unix:///var/run/crio/crio.sock images
```
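Rather than passing --runtime-endpoint on every invocation, the endpoint can be set once in /etc/crictl.yaml:

```yaml
runtime-endpoint: unix:///var/run/crio/crio.sock
image-endpoint: unix:///var/run/crio/crio.sock
timeout: 10
```

With this file in place, plain `crictl ps` works on the node, and the same file (pointed at the containerd socket instead) serves containerd-based nodes.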
You cannot use CRI-O to run standalone containers or build images. For image building in a CRI-O environment, use Buildah or kaniko (for in-cluster builds). This separation of concerns is intentional.
When CRI-O Makes Sense
If you’re running OpenShift, CRI-O is the only supported runtime. For vanilla Kubernetes, CRI-O is worth considering when you want the smallest possible runtime surface area, especially in security-sensitive environments. It doesn’t expose APIs that Kubernetes doesn’t use, which reduces the attack surface.
When to Use Each Runtime
The choice depends on your use case, not on which runtime is “better” in abstract terms.
Local Development and CI/CD
Use Docker. The CLI, BuildKit integration, Compose, and the massive ecosystem of Docker-based tooling make it the practical choice for developer workstations and CI pipelines. Every major CI platform (GitHub Actions, GitLab CI, Jenkins) has native Docker support. The images you build with Docker run identically on containerd and CRI-O in production.
Production Kubernetes
Use containerd as the default choice. It’s what kubeadm, k3s, Minikube, and most managed Kubernetes services (EKS, GKE, AKS) use out of the box. The ecosystem support, documentation, and community experience are the broadest. Unless you have a specific reason to choose otherwise, containerd is the safe pick.
Use CRI-O if you’re running OpenShift (it’s required) or if you want the minimal runtime footprint for security-hardened clusters. CRI-O’s 1:1 version mapping with Kubernetes simplifies upgrade planning.
Edge and IoT Deployments
containerd is the better fit for edge. k3s uses it by default, it supports standalone containers (useful when not everything runs in Kubernetes), and the nerdctl CLI provides Docker-like convenience without the Docker daemon overhead. CRI-O’s Kubernetes-only design doesn’t serve mixed workloads well at the edge.
Migrating from Docker to containerd in Kubernetes
If you’re still running a pre-1.24 cluster with Docker as the runtime (or using cri-dockerd as a bridge), migrating to containerd is straightforward. The official Kubernetes runtime documentation covers the full process. Here’s what actually changes:
Your container images don’t change. OCI images are runtime-agnostic. An image built with docker build runs on containerd without modification.
The kubelet configuration changes. Instead of pointing to the Docker socket, it points to the containerd socket at /run/containerd/containerd.sock.
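In recent Kubernetes versions this endpoint lives in the KubeletConfiguration file (older versions passed the same value via the --container-runtime-endpoint flag). A minimal sketch of the relevant fragment:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
```

For CRI-O the value would instead point at unix:///var/run/crio/crio.sock; nothing else in the kubelet configuration needs to change for the runtime swap.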
Node-level debugging changes. Replace docker ps and docker logs with crictl ps and crictl logs. Your kubectl commands stay exactly the same because kubectl talks to the API server, not the runtime.
The migration is done node by node: drain the node, stop Docker, configure containerd, restart the kubelet, and uncordon. Workloads are rescheduled automatically. In practice, most teams complete the migration in a maintenance window without application downtime.
Quick Reference: CLI Equivalents
This table maps common container operations across Docker, nerdctl (containerd), and crictl (CRI-O and containerd on Kubernetes nodes).
| Operation | Docker | nerdctl (containerd) | crictl (CRI-O / containerd) |
|---|---|---|---|
| Run a container | docker run -d nginx | nerdctl run -d nginx | N/A (kubelet manages pods) |
| List containers | docker ps | nerdctl ps | crictl ps |
| List images | docker images | nerdctl images | crictl images |
| Pull an image | docker pull nginx | nerdctl pull nginx | crictl pull nginx |
| View logs | docker logs CONTAINER | nerdctl logs CONTAINER | crictl logs CONTAINER |
| Exec into container | docker exec -it CONTAINER sh | nerdctl exec -it CONTAINER sh | crictl exec -it CONTAINER sh |
| Stop a container | docker stop CONTAINER | nerdctl stop CONTAINER | crictl stop CONTAINER |
| Remove a container | docker rm CONTAINER | nerdctl rm CONTAINER | crictl rm CONTAINER |
| Remove an image | docker rmi IMAGE | nerdctl rmi IMAGE | crictl rmi IMAGE |
| Build an image | docker build -t app . | nerdctl build -t app . | Not supported |
| Compose up | docker compose up -d | nerdctl compose up -d | Not supported |
| List pods | N/A | N/A | crictl pods |
| Inspect container | docker inspect CONTAINER | nerdctl inspect CONTAINER | crictl inspect CONTAINER |
The nerdctl CLI is the closest 1:1 replacement for Docker commands. If your muscle memory is all docker run and docker build, switching to nerdctl requires almost no relearning. crictl is intentionally limited because it’s a debugging tool for Kubernetes nodes, not a general-purpose container manager.