Docker has become the standard for packaging and deploying applications in isolated containers. Combined with Docker Compose, you get a straightforward way to define and manage multi-container applications using a single YAML file. This guide walks you through installing Docker CE and Docker Compose v2 on Ubuntu 24.04 (Noble Numbat) and Debian 13 (Trixie), then covers real-world usage from basic commands through production-ready configurations.
We will install Docker from the official Docker repository – not the Snap package, not the distro default docker.io package. The official repo gives you the latest stable releases, faster updates, and the full Docker Engine feature set including Compose v2 as a plugin.
Prerequisites
- Ubuntu 24.04 or Debian 13 server/desktop with sudo access
- A non-root user account
- Stable internet connection
- At least 2 GB of free disk space
Step 1 – Remove Old Docker Packages
Before installing Docker CE from the official repository, remove any conflicting packages that may have been installed from distro defaults. This prevents version conflicts and ensures a clean installation.
sudo apt remove -y docker docker-engine docker.io containerd runc docker-compose docker-doc podman-docker
If none of these packages are installed, apt will report that there is nothing to remove. That is perfectly fine – move on to the next step.
Step 2 – Install Docker CE from the Official Docker Repository
Install the prerequisite packages that allow apt to fetch packages over HTTPS and manage GPG keys.
sudo apt update && sudo apt install -y ca-certificates curl gnupg lsb-release
Add the official Docker GPG key to your system keyring.
sudo install -m 0755 -d /etc/apt/keyrings
For Ubuntu 24.04, run the following commands to add the Docker repository.
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "${VERSION_CODENAME}") stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
For Debian 13, use the Debian-specific URL instead.
sudo curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian $(. /etc/os-release && echo "${VERSION_CODENAME}") stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
Now update the package index and install Docker CE along with the Compose plugin and related tools.
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
This installs Docker Engine, the CLI client, containerd runtime, BuildKit for image builds, and the Compose v2 plugin. You now have docker compose available as a subcommand – not the old standalone docker-compose binary.
Step 3 – Post-Install Configuration
By default, Docker commands require sudo. Add your user to the docker group so you can run Docker commands without elevated privileges.
sudo usermod -aG docker $USER
Apply the new group membership by logging out and back in, or run the following to activate it in your current session.
newgrp docker
Enable Docker to start automatically at boot.
sudo systemctl enable --now docker
Verify that Docker is working properly by running the hello-world container.
docker run hello-world
You should see output confirming that Docker pulled the image and ran the container. Also confirm the installed versions.
docker version
docker compose version
The docker compose version command should return something like Docker Compose version v2.x.x. If you see this, both Docker and Compose v2 are installed and ready.
Step 4 – Docker Basics – Essential Commands
Before diving into Compose, you should be comfortable with the core Docker CLI commands. These are the building blocks you will use daily. For a deeper introduction to container concepts, see our guide on understanding Docker container technology.
Pull an image from Docker Hub.
docker pull nginx:alpine
Run a container in detached mode with port mapping.
docker run -d --name webserver -p 8080:80 nginx:alpine
List running containers to see what is active.
docker ps
View container logs – add -f to follow in real time.
docker logs -f webserver
Execute a command inside a running container – useful for debugging.
docker exec -it webserver /bin/sh
Inspect a container to see its full configuration, network settings, and mount points.
docker inspect webserver
Stop and remove the container when done.
docker stop webserver && docker rm webserver
Use docker ps -a to list all containers including stopped ones. This helps you find containers that are consuming disk space even after they have exited.
Step 5 – Write a Docker Compose File – WordPress with MariaDB and Redis
Docker Compose lets you define a full application stack in a single docker-compose.yml file. Here is a production-style example that runs WordPress with a MariaDB database and Redis object cache. This is a common stack for high-performance WordPress hosting. If you are interested in other self-hosted setups, check our guide on running WordPress with Docker Compose.
Create a project directory and the compose file.
mkdir -p ~/wordpress-stack && cd ~/wordpress-stack
Create the .env file first – this keeps sensitive values out of the compose file itself.
cat <<'EOF' > .env
MYSQL_ROOT_PASSWORD=strongrootpass2024
MYSQL_DATABASE=wordpress
MYSQL_USER=wpuser
MYSQL_PASSWORD=wpuserpass2024
WORDPRESS_DB_HOST=mariadb:3306
WORDPRESS_DB_USER=wpuser
WORDPRESS_DB_PASSWORD=wpuserpass2024
WORDPRESS_DB_NAME=wordpress
EOF
Now create the docker-compose.yml file.
cat <<'EOF' > docker-compose.yml
services:
  mariadb:
    image: mariadb:11
    container_name: wp-mariadb
    restart: unless-stopped
    env_file:
      - .env
    volumes:
      - db_data:/var/lib/mysql
    networks:
      - wp-network
    healthcheck:
      test: ["CMD", "healthcheck.sh", "--connect", "--innodb_initialized"]
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 30s

  redis:
    image: redis:7-alpine
    container_name: wp-redis
    restart: unless-stopped
    command: redis-server --maxmemory 128mb --maxmemory-policy allkeys-lru
    volumes:
      - redis_data:/data
    networks:
      - wp-network
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 15s
      timeout: 5s
      retries: 3

  wordpress:
    image: wordpress:6-php8.3-apache
    container_name: wp-app
    restart: unless-stopped
    depends_on:
      mariadb:
        condition: service_healthy
      redis:
        condition: service_healthy
    env_file:
      - .env
    environment:
      WORDPRESS_CONFIG_EXTRA: |
        define('WP_REDIS_HOST', 'redis');
        define('WP_REDIS_PORT', 6379);
    ports:
      - "8080:80"
    volumes:
      - wp_data:/var/www/html
      - ./uploads.ini:/usr/local/etc/php/conf.d/uploads.ini:ro
    networks:
      - wp-network

volumes:
  db_data:
  redis_data:
  wp_data:

networks:
  wp-network:
    driver: bridge
EOF
Optionally create a PHP config override to increase upload limits.
cat <<'EOF' > uploads.ini
upload_max_filesize = 64M
post_max_size = 64M
memory_limit = 256M
max_execution_time = 300
EOF
Step 6 – Managing Your Stack with Docker Compose Commands
With your compose file ready, use these commands to control the stack. All commands run from the directory containing your docker-compose.yml.
Start all services in detached mode.
docker compose up -d
Docker will pull images if they are not already cached, create the named volumes and network, then start containers in dependency order. The depends_on with health check conditions means WordPress waits until MariaDB and Redis pass their checks before starting.
Check the status of all services in the stack.
docker compose ps
View logs from all services, or target a specific one.
docker compose logs -f wordpress
Execute a command inside a running Compose service. Note that the MariaDB 11 image ships its client as mariadb rather than the old mysql name.
docker compose exec mariadb mariadb -u wpuser -pwpuserpass2024 wordpress
Stop and tear down the entire stack. Add -v to also remove named volumes (careful – this deletes your data).
docker compose down
Rebuild and restart after making changes to a Dockerfile or build context.
docker compose up -d --build
Step 7 – Understanding Volumes and Networks
Volumes and networks are two of the most important concepts in Docker. Getting them right is the difference between a throwaway dev environment and a reliable production deployment.
Named Volumes vs Bind Mounts
Named volumes are managed by Docker and stored under /var/lib/docker/volumes/. They are the preferred method for persisting data in production because Docker handles the lifecycle and permissions. In the compose file above, db_data, redis_data, and wp_data are all named volumes.
docker volume ls
docker volume inspect wordpress-stack_db_data
Bind mounts map a specific host path into the container. They are useful when you need direct access to the files from the host – for example, mounting config files or a local development directory. In the compose file, the uploads.ini line is a bind mount.
volumes:
  - ./my-app-code:/var/www/html   # bind mount - host path on the left
  - app_data:/var/www/html        # named volume - just a name on the left
Use named volumes for databases and persistent state. Use bind mounts for configuration files and development source code.
Docker Networks
Docker Compose automatically creates a bridge network for each project. Services within the same network can reach each other by container name – that is why WordPress connects to mariadb:3306 using the service name as the hostname.
You can define custom networks when you need to isolate groups of services. For example, a frontend network that only the web server and reverse proxy share, and a backend network for the database.
networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge
    internal: true   # no external access - database stays isolated
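As a sketch, here is how services might attach to those two networks (the service names and images are hypothetical, not part of the WordPress stack above):

```yaml
services:
  proxy:
    image: nginx:alpine        # reverse proxy, reachable from outside
    networks:
      - frontend
  app:
    image: myapp:latest        # placeholder image name
    networks:
      - frontend
      - backend
  db:
    image: mariadb:11
    networks:
      - backend                # reachable only from services on backend

networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge
    internal: true             # containers here get no outbound access
```

With this layout, proxy can reach app but has no route to db, while app can talk to both.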
List and inspect networks with these commands.
docker network ls
docker network inspect wordpress-stack_wp-network
Step 8 – Environment Variables and the .env File
Hardcoding passwords and configuration values in your compose file is a bad practice. Docker Compose natively supports .env files, which it reads automatically from the project directory.
In the WordPress example above, the env_file directive injects all variables from .env into the container. You can also reference variables directly in the compose file using ${VARIABLE_NAME} syntax.
services:
  app:
    image: myapp:${APP_VERSION:-latest}
    ports:
      - "${HOST_PORT:-3000}:3000"
The :- syntax provides a default value if the variable is not set. This is useful for making your compose file work in multiple environments without modification.
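Compose's ${VAR:-default} interpolation follows the same rules as shell parameter expansion, so you can preview the behavior in any shell before running Compose (APP_VERSION here is just an illustrative variable):

```shell
# Variable unset: the default after :- is substituted
unset APP_VERSION
echo "image: myapp:${APP_VERSION:-latest}"   # prints image: myapp:latest

# Variable set: its value wins and the default is ignored
APP_VERSION=2.4
echo "image: myapp:${APP_VERSION:-latest}"   # prints image: myapp:2.4
```

Running docker compose config in the project directory shows the fully interpolated file, which is a quick way to confirm which values Compose will actually use.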
Always add .env to your .gitignore to prevent committing secrets to version control. Provide a .env.example file with placeholder values so other team members know which variables to set.
echo ".env" >> .gitignore
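A starter .env.example for the WordPress stack could look like the following. The variable names match the .env file created earlier; the values are deliberately placeholders:

```shell
# Committed template - real secrets go in .env, which is gitignored
cat <<'EOF' > .env.example
MYSQL_ROOT_PASSWORD=change-me
MYSQL_DATABASE=wordpress
MYSQL_USER=wpuser
MYSQL_PASSWORD=change-me
WORDPRESS_DB_HOST=mariadb:3306
WORDPRESS_DB_USER=wpuser
WORDPRESS_DB_PASSWORD=change-me
WORDPRESS_DB_NAME=wordpress
EOF
```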
Step 9 – Multi-Stage Dockerfile Builds
Multi-stage builds let you compile code in one stage and copy only the final artifact into a minimal runtime image. This dramatically reduces image size and removes build dependencies from your production container. For more on optimizing container images, see our article on Docker build best practices.
Here is a practical example for a Go application.
cat <<'EOF' > Dockerfile
# Stage 1 - Build
FROM golang:1.22-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -ldflags="-s -w" -o /app/server ./cmd/server
# Stage 2 - Runtime
FROM alpine:3.20 AS runtime
RUN apk --no-cache add ca-certificates tzdata
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
WORKDIR /app
COPY --from=builder /app/server .
USER appuser
EXPOSE 8080
HEALTHCHECK --interval=30s --timeout=5s --retries=3 CMD wget -qO- http://localhost:8080/health || exit 1
ENTRYPOINT ["./server"]
EOF
The build stage pulls in the full Go toolchain and compiles the binary; the runtime stage then starts from a minimal Alpine image and copies in only the compiled binary. The final image is typically under 20 MB instead of hundreds of megabytes.
Integrate this with Compose by using the build directive.
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile
      target: runtime   # optionally target a specific stage
    ports:
      - "8080:8080"
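Keeping the build context small also speeds up builds and keeps stray files out of image layers. A minimal .dockerignore sketch, with typical entries you would adjust for your own project:

```shell
# Files excluded from the build context sent to the Docker daemon
cat <<'EOF' > .dockerignore
.git
.env
*.md
vendor/
node_modules/
EOF
```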
Step 10 – Production Hardening Tips
Running Docker in production requires more than just docker compose up. Here are the key settings every DevOps engineer should configure.
Restart Policies
Always set a restart policy so containers recover from crashes and system reboots. The unless-stopped policy restarts the container unless you explicitly stop it.
services:
  app:
    restart: unless-stopped   # options: no, always, on-failure, unless-stopped
Health Checks
Health checks tell Docker whether your application is actually functioning, not just that the process is running. Compose can use health check status in depends_on conditions to control startup order – as shown in the WordPress example.
healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
  interval: 30s
  timeout: 10s
  retries: 3
  start_period: 40s
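Minimal images often lack curl entirely. A sketch of the same check using CMD-SHELL with BusyBox wget instead (the /health endpoint is a placeholder for whatever your app exposes):

```yaml
healthcheck:
  # CMD-SHELL runs the test through a shell inside the container
  test: ["CMD-SHELL", "wget -qO- http://localhost:8080/health || exit 1"]
  interval: 30s
  timeout: 5s
  retries: 3
```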
Resource Limits
Prevent a single container from consuming all host resources. Set memory and CPU limits in the deploy section.
services:
  app:
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: 512M
        reservations:
          cpus: "0.25"
          memory: 128M
Logging Driver
Docker’s default JSON file logging driver can fill up disk space quickly on busy services. Set log rotation or switch to a different driver.
services:
  app:
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
For centralized logging, consider using the syslog, fluentd, or gelf driver to ship logs to your logging infrastructure.
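As a sketch, shipping a service's logs to a remote syslog endpoint might look like this (the address is a placeholder for your own log host):

```yaml
services:
  app:
    logging:
      driver: syslog
      options:
        syslog-address: "udp://logs.example.com:514"
        tag: "{{.Name}}"   # tag each entry with the container name
```

Note that with non-json-file drivers, docker logs is generally unavailable unless the driver supports dual logging, so check your driver's documentation before switching.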
Step 11 – Docker System Cleanup
Docker accumulates unused images, stopped containers, and dangling volumes over time. Regular cleanup prevents disk space issues – a common pain point on long-running servers.
Remove all stopped containers, unused networks, dangling images, and build cache in one command.
docker system prune -f
Include unused volumes in the cleanup. On current Docker releases this removes only anonymous volumes, but be careful: it still deletes data from any volume that is not attached to a container.
docker system prune -af --volumes
Check disk usage to see exactly where space is being consumed.
docker system df
Remove specific resources when you need finer control.
docker image prune -a -f --filter "until=168h"
That command removes every image that is not used by any container and was created more than 7 days (168 hours) ago. For automated cleanup on production servers, add a cron job.
echo "0 3 * * 0 root docker system prune -af --filter 'until=168h'" | sudo tee /etc/cron.d/docker-cleanup
Step 12 – Troubleshooting Common Docker Issues
Even experienced engineers hit these problems. Here are the most common issues and their fixes.
Permission Denied When Running Docker Commands
If you see permission denied while trying to connect to the Docker daemon socket, your user is not in the docker group or the group change has not taken effect.
sudo usermod -aG docker $USER && newgrp docker
If the issue persists after a fresh login, check that the Docker socket has the correct permissions.
ls -la /var/run/docker.sock
The socket should be owned by root:docker. If the group is wrong, fix it with the following command.
sudo chown root:docker /var/run/docker.sock
Port Conflicts
If Docker fails to bind a port because something else is already listening, identify the conflicting process.
sudo ss -tlnp | grep :8080
Either stop the conflicting service or change the host port mapping in your compose file. You can also bind to a specific IP to avoid conflicts on multi-homed hosts.
ports:
  - "127.0.0.1:8080:80"   # bind only to localhost
DNS Resolution Failures Inside Containers
Containers sometimes fail to resolve external hostnames if the host DNS configuration uses a loopback address (common with systemd-resolved on Ubuntu). Force Docker to use a public DNS server. Note that the command below overwrites /etc/docker/daemon.json, so if the file already exists, add the dns key to it instead.
sudo mkdir -p /etc/docker
echo '{"dns": ["1.1.1.1", "8.8.8.8"]}' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker
Test DNS resolution from inside a container to confirm the fix. If you are running a local DNS server, point Docker to that instead.
docker run --rm alpine nslookup google.com
Disk Space Exhaustion
Docker stores everything under /var/lib/docker by default. If your root partition fills up, Docker will stop working. Check usage with the following command.
docker system df -v
For immediate relief, run the cleanup commands from Step 11. For a long-term fix, either move the Docker data directory to a larger partition or configure log rotation as shown in the production tips.
To move the Docker data directory, stop Docker, copy the data, and update the configuration. If daemon.json already contains other keys (such as the dns setting above), merge data-root into the existing file instead of overwriting it.
sudo systemctl stop docker
sudo rsync -aP /var/lib/docker/ /mnt/data/docker/
echo '{"data-root": "/mnt/data/docker"}' | sudo tee /etc/docker/daemon.json
sudo systemctl start docker
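Because daemon.json holds all daemon settings in a single JSON object, a host that applied both the DNS fix and the data-root move would keep the two keys together. For example:

```json
{
  "dns": ["1.1.1.1", "8.8.8.8"],
  "data-root": "/mnt/data/docker"
}
```

Run sudo systemctl restart docker after any daemon.json change, and validate the JSON first, since a syntax error will prevent the daemon from starting.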
Summary
You now have Docker CE and Docker Compose v2 installed on your Ubuntu 24.04 or Debian 13 system, with a solid understanding of how to use them in practice. We covered the official installation from Docker’s repository, essential CLI commands, a complete WordPress stack with MariaDB and Redis, volume and network management, environment variable handling, multi-stage builds, production hardening, cleanup procedures, and common troubleshooting scenarios.
Docker Compose v2 is a significant improvement over the legacy standalone binary. It integrates directly with the Docker CLI, supports advanced features like health check-based dependency ordering, and receives updates alongside Docker Engine. Make sure your team has moved away from the old docker-compose command and is using docker compose (with a space) going forward.
Related Guides
- How to Install Docker on Ubuntu – Complete Guide
- Deploy Containers with Podman on Debian
- Install Kubernetes Cluster on Ubuntu Using kubeadm
- How to Set Up a Private Docker Registry
- Monitor Docker Containers with Prometheus and Grafana