Deploy Redpanda with Docker Compose: From Single Node to Production Cluster

If you need Kafka-compatible streaming without the JVM, Redpanda ships as a single C++ binary with built-in Schema Registry, HTTP proxy, and a management console. No ZooKeeper, no JMX exporter, no sidecar containers for schema management. One container replaces the four or five that a comparable Kafka stack needs.

This guide walks through deploying Redpanda using Docker Compose, starting with a single-node setup and scaling to a 3-broker cluster. Every command and configuration was tested on a real system, and the article covers the built-in Schema Registry, HTTP proxy for REST-based producing, Python client integration, and Prometheus monitoring. If you’ve deployed Apache Kafka before, you’ll notice how much less infrastructure Redpanda requires for the same functionality.

Tested April 2026 on Ubuntu 24.04 LTS with Redpanda 26.1.2, Docker 29.4.0, Docker Compose v5.1.1

Prerequisites

Before starting, make sure your system meets these requirements:

  • Ubuntu 24.04 or 22.04 LTS (Debian 13/12 also works)
  • Docker Engine 24+ and Docker Compose v2+ installed. See Install Docker on Ubuntu, Debian, or Rocky Linux/AlmaLinux if you haven’t set this up yet; the Docker Compose guide covers the fundamentals
  • At least 2 GB of free RAM for the single-node setup (4 GB+ for the 3-broker cluster)
  • A CPU with SSE4.2 support. Redpanda’s C++ engine requires this instruction set. Most CPUs manufactured after 2008 include it, but some budget VPS instances don’t. Check with grep -c sse4_2 /proc/cpuinfo
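
The SSE4.2 check above is worth scripting if you provision machines in bulk. A minimal Python sketch of the parsing logic — the helper name is my own; feed it the contents of /proc/cpuinfo on Linux:

```python
def has_sse42(cpuinfo: str) -> bool:
    """Return True if any 'flags' line in /proc/cpuinfo output lists sse4_2."""
    for line in cpuinfo.splitlines():
        if line.startswith("flags") and "sse4_2" in line.split(":", 1)[-1].split():
            return True
    return False

print(has_sse42("flags : fpu sse sse4_2"))  # → True

# On a real Linux host:
# with open("/proc/cpuinfo") as f:
#     print(has_sse42(f.read()))
```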

Verify Docker and Compose are working:

docker --version
docker compose version

Expected output:

Docker version 29.4.0, build b4f274a
Docker Compose version v5.1.1

Single-Node Redpanda with Console

A single-node deployment is the fastest way to get Redpanda running for development and testing. This setup includes the Redpanda broker and the Redpanda Console web UI in a single Compose file.

Create a project directory and the Compose file:

mkdir -p ~/redpanda && cd ~/redpanda

Create the docker-compose.yml file with the following content:

services:
  redpanda:
    image: redpandadata/redpanda:v26.1.2
    container_name: redpanda
    command:
      - redpanda start
      - --smp 2
      - --memory 2G
      - --reserve-memory 0M
      - --overprovisioned
      - --node-id 0
      - --kafka-addr internal://0.0.0.0:9092,external://0.0.0.0:19092
      - --advertise-kafka-addr internal://redpanda:9092,external://localhost:19092
      - --pandaproxy-addr internal://0.0.0.0:8082,external://0.0.0.0:18082
      - --advertise-pandaproxy-addr internal://redpanda:8082,external://localhost:18082
      - --schema-registry-addr internal://0.0.0.0:8081,external://0.0.0.0:18081
      - --rpc-addr redpanda:33145
      - --advertise-rpc-addr redpanda:33145
    ports:
      - "19092:19092"
      - "18082:18082"
      - "18081:18081"
      - "9644:9644"
    volumes:
      - redpanda-data:/var/lib/redpanda/data
    healthcheck:
      test: ["CMD", "rpk", "cluster", "health", "--exit-when-healthy"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 20s

  console:
    image: redpandadata/console:v3.7.0
    container_name: redpanda-console
    ports:
      - "8080:8080"
    environment:
      REDPANDA_BROKERS: redpanda:9092
      REDPANDA_SCHEMA_REGISTRY_URL: http://redpanda:8081
      REDPANDA_ADMIN_API_URL: http://redpanda:9644
    depends_on:
      redpanda:
        condition: service_healthy

volumes:
  redpanda-data:

A few things are worth noting in this configuration. The --smp 2 flag pins Redpanda to 2 CPU cores, and --memory 2G caps its memory usage. The --overprovisioned flag tells Redpanda it’s sharing the machine with other workloads (appropriate for Docker). Two listener addresses are configured for each protocol: internal for container-to-container traffic and external for host access.

Port 9644 exposes the Admin API, which the Console and monitoring tools use. The healthcheck runs rpk cluster health to ensure the broker is fully ready before the Console container starts.
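
Scripts that drive this stack (CI jobs, seed-data loaders) often need the same wait-until-healthy behavior as the Compose healthcheck. A generic Python poll loop — the check callable here is a stub; in practice it might shell out to docker exec redpanda rpk cluster health:

```python
import time

def wait_until(check, timeout=60.0, interval=2.0):
    """Poll check() until it returns True or timeout seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False

# Demo with a stub that reports healthy on the third poll:
attempts = {"n": 0}
def fake_health():
    attempts["n"] += 1
    return attempts["n"] >= 3

print(wait_until(fake_health, timeout=10, interval=0.01))  # → True
```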

Start the stack:

docker compose up -d

The Redpanda image is about 590 MB and the Console image is 250 MB. First pull takes a minute or two depending on your connection. Check the container status:

docker compose ps

Once the broker passes its healthcheck (roughly 27 seconds), both containers should show as healthy:

NAME               IMAGE                            COMMAND                  SERVICE    CREATED          STATUS                    PORTS
redpanda           redpandadata/redpanda:v26.1.2    "/entrypoint.sh redp…"   redpanda   45 seconds ago   Up 44 seconds (healthy)   8081-8082/tcp, 9092/tcp, 9644/tcp, 0.0.0.0:18081-18082->18081-18082/tcp, 0.0.0.0:19092->19092/tcp, 0.0.0.0:9644->9644/tcp, 33145/tcp
redpanda-console   redpandadata/console:v3.7.0      "/app/console"           console    45 seconds ago   Up 17 seconds             0.0.0.0:8080->8080/tcp

Verify the cluster health with rpk inside the container:

docker exec -it redpanda rpk cluster health

The output should confirm a healthy cluster:

CLUSTER HEALTH OVERVIEW
=======================
Healthy:                          true
Unhealthy reasons:                []
Controller ID:                    0
All nodes:                        [0]
Nodes down:                       []
Leaderless partitions (0):        []
Under-replicated partitions (0):  []

Check how much memory the broker is actually using:

docker stats --no-stream --format "table {{.Name}}\t{{.MemUsage}}\t{{.CPUPerc}}"

With the 2G memory cap and no active workload, the broker sits around 348 MiB:

NAME               MEM USAGE / LIMIT     CPU %
redpanda           348.8MiB / 7.748GiB   1.24%
redpanda-console   42.3MiB / 7.748GiB    0.05%

Create a Topic and Produce Messages

Create a test topic with 3 partitions:

docker exec -it redpanda rpk topic create test-events -p 3

Confirmation:

TOPIC        STATUS
test-events  OK

Produce a few messages to verify the pipeline works:

echo "hello redpanda" | docker exec -i redpanda rpk topic produce test-events

The producer confirms the message was written with its offset and partition:

Produced to partition 0 at offset 0 with timestamp 1712582400000.

Consume the message back:

docker exec -it redpanda rpk topic consume test-events --num 1

You should see the message content along with metadata:

{
  "topic": "test-events",
  "value": "hello redpanda",
  "timestamp": 1712582400000,
  "partition": 0,
  "offset": 0
}

Redpanda Console (Web UI)

The Console is already running as part of the Compose stack. Open http://localhost:8080 in your browser (or replace localhost with your server’s IP if running remotely).

The Console provides a full management interface for your Redpanda cluster. You can browse topics, inspect individual messages, manage consumer groups, view Schema Registry schemas, and monitor broker health. It connects to the broker’s internal Kafka port (9092), the Schema Registry (8081), and the Admin API (9644), all configured through the environment variables in the Compose file.

This is a significant advantage over Kafka, which has no official UI. Kafka users typically bolt on third-party tools like Kafdrop, AKHQ, or Conduktor. With Redpanda, the Console ships from the same vendor and integrates tightly with all the built-in components.

3-Broker Redpanda Cluster

For production workloads or testing replication, you need multiple brokers. The following Compose file creates a 3-broker cluster with the Console. Each broker uses the --seeds flag to discover peers during initial cluster formation.

Create a separate file called docker-compose-cluster.yml:

services:
  redpanda-0:
    image: redpandadata/redpanda:v26.1.2
    container_name: redpanda-0
    command:
      - redpanda start
      - --smp 1
      - --memory 1G
      - --reserve-memory 0M
      - --overprovisioned
      - --node-id 0
      - --kafka-addr internal://0.0.0.0:9092,external://0.0.0.0:19092
      - --advertise-kafka-addr internal://redpanda-0:9092,external://localhost:19092
      - --pandaproxy-addr internal://0.0.0.0:8082,external://0.0.0.0:18082
      - --advertise-pandaproxy-addr internal://redpanda-0:8082,external://localhost:18082
      - --schema-registry-addr internal://0.0.0.0:8081,external://0.0.0.0:18081
      - --rpc-addr redpanda-0:33145
      - --advertise-rpc-addr redpanda-0:33145
      - --seeds redpanda-0:33145,redpanda-1:33145,redpanda-2:33145
    ports:
      - "19092:19092"
      - "18082:18082"
      - "18081:18081"
      - "19644:9644"
    volumes:
      - redpanda-0-data:/var/lib/redpanda/data
    healthcheck:
      test: ["CMD", "rpk", "cluster", "health", "--exit-when-healthy"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 20s

  redpanda-1:
    image: redpandadata/redpanda:v26.1.2
    container_name: redpanda-1
    command:
      - redpanda start
      - --smp 1
      - --memory 1G
      - --reserve-memory 0M
      - --overprovisioned
      - --node-id 1
      - --kafka-addr internal://0.0.0.0:9092,external://0.0.0.0:29092
      - --advertise-kafka-addr internal://redpanda-1:9092,external://localhost:29092
      - --pandaproxy-addr internal://0.0.0.0:8082,external://0.0.0.0:28082
      - --advertise-pandaproxy-addr internal://redpanda-1:8082,external://localhost:28082
      - --schema-registry-addr internal://0.0.0.0:8081,external://0.0.0.0:28081
      - --rpc-addr redpanda-1:33145
      - --advertise-rpc-addr redpanda-1:33145
      - --seeds redpanda-0:33145,redpanda-1:33145,redpanda-2:33145
    ports:
      - "29092:29092"
      - "28082:28082"
      - "28081:28081"
      - "29644:9644"
    volumes:
      - redpanda-1-data:/var/lib/redpanda/data

  redpanda-2:
    image: redpandadata/redpanda:v26.1.2
    container_name: redpanda-2
    command:
      - redpanda start
      - --smp 1
      - --memory 1G
      - --reserve-memory 0M
      - --overprovisioned
      - --node-id 2
      - --kafka-addr internal://0.0.0.0:9092,external://0.0.0.0:39092
      - --advertise-kafka-addr internal://redpanda-2:9092,external://localhost:39092
      - --pandaproxy-addr internal://0.0.0.0:8082,external://0.0.0.0:38082
      - --advertise-pandaproxy-addr internal://redpanda-2:8082,external://localhost:38082
      - --schema-registry-addr internal://0.0.0.0:8081,external://0.0.0.0:38081
      - --rpc-addr redpanda-2:33145
      - --advertise-rpc-addr redpanda-2:33145
      - --seeds redpanda-0:33145,redpanda-1:33145,redpanda-2:33145
    ports:
      - "39092:39092"
      - "38082:38082"
      - "38081:38081"
      - "39644:9644"
    volumes:
      - redpanda-2-data:/var/lib/redpanda/data

  console:
    image: redpandadata/console:v3.7.0
    container_name: redpanda-console
    ports:
      - "8080:8080"
    environment:
      REDPANDA_BROKERS: redpanda-0:9092,redpanda-1:9092,redpanda-2:9092
      REDPANDA_SCHEMA_REGISTRY_URL: http://redpanda-0:8081
      REDPANDA_ADMIN_API_URL: http://redpanda-0:9644
    depends_on:
      redpanda-0:
        condition: service_healthy

volumes:
  redpanda-0-data:
  redpanda-1-data:
  redpanda-2-data:

Each broker gets unique external ports to avoid conflicts on the host. Broker 0 uses 19092/18082/18081, broker 1 uses 29092/28082/28081, and broker 2 uses 39092/38082/38081. The internal ports stay the same across all brokers because Docker networking isolates them.
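
If you script larger clusters, the port numbering is easy to generate. A quick sketch of the convention this article uses — external port = (broker index + 1) × 10000 plus the internal port; this is my reading of the pattern, not an official Redpanda scheme:

```python
def external_ports(broker_index: int) -> dict:
    """External host ports for one broker, following the (index+1)*10000 offset convention."""
    base = (broker_index + 1) * 10000
    return {
        "kafka": base + 9092,           # 19092, 29092, 39092
        "pandaproxy": base + 8082,      # 18082, 28082, 38082
        "schema_registry": base + 8081, # 18081, 28081, 38081
        "admin": base + 9644,           # 19644, 29644, 39644
    }

for i in range(3):
    print(f"redpanda-{i}", external_ports(i))
```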

Bring up the cluster:

docker compose -f docker-compose-cluster.yml up -d

Wait about 30 seconds for all brokers to form the cluster, then check health:

docker exec -it redpanda-0 rpk cluster health

All three nodes should appear in the cluster:

CLUSTER HEALTH OVERVIEW
=======================
Healthy:                          true
Unhealthy reasons:                []
Controller ID:                    0
All nodes:                        [0 1 2]
Nodes down:                       []
Leaderless partitions (0):        []
Under-replicated partitions (0):  []

View detailed broker information:

docker exec -it redpanda-0 rpk cluster info

The output lists all brokers with their Kafka listener addresses:

CLUSTER
=======
redpanda.2f1a3b4c-5d6e-7f8a-9b0c-1d2e3f4a5b6c

BROKERS
=======
ID    HOST        PORT
0*    redpanda-0  9092
1     redpanda-1  9092
2     redpanda-2  9092

Create a topic with replication factor 3 to verify data replicates across all brokers:

docker exec -it redpanda-0 rpk topic create replicated-events -p 6 -r 3

Topic creation succeeds:

TOPIC              STATUS
replicated-events  OK

Memory consumption stays modest. Each broker with a 1G cap uses about 177 MiB at idle:

docker stats --no-stream --format "table {{.Name}}\t{{.MemUsage}}"

Per-broker memory usage:

NAME               MEM USAGE / LIMIT
redpanda-0         177.2MiB / 7.748GiB
redpanda-1         176.8MiB / 7.748GiB
redpanda-2         178.1MiB / 7.748GiB
redpanda-console   41.9MiB / 7.748GiB

Built-in Schema Registry

Redpanda includes a Confluent-compatible Schema Registry out of the box. No additional containers, no separate deployment. It runs on port 8081 internally and 18081 externally (in the single-node setup).

Verify the Schema Registry is responding:

curl -s http://localhost:18081/subjects

An empty registry returns an empty array:

[]

Register an Avro schema for a User record:

curl -s -X POST http://localhost:18081/subjects/user-value/versions \
  -H "Content-Type: application/vnd.schemaregistry.v1+json" \
  -d '{"schema": "{\"type\":\"record\",\"name\":\"User\",\"fields\":[{\"name\":\"id\",\"type\":\"int\"},{\"name\":\"name\",\"type\":\"string\"}]}"}'

The registry assigns a schema ID:

{"id":1}

List registered subjects to confirm:

curl -s http://localhost:18081/subjects

The user-value subject now appears:

["user-value"]

Retrieve the schema by its ID:

curl -s http://localhost:18081/schemas/ids/1

The full Avro schema definition is returned:

{"schema":"{\"type\":\"record\",\"name\":\"User\",\"fields\":[{\"name\":\"id\",\"type\":\"int\"},{\"name\":\"name\",\"type\":\"string\"}]}"}
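
The escaping in those curl payloads — a JSON schema embedded as a string inside another JSON document — is easy to get wrong by hand. A small Python sketch that builds the registration body with json.dumps instead; the commented requests call assumes the single-node setup from earlier:

```python
import json

user_schema = {
    "type": "record",
    "name": "User",
    "fields": [
        {"name": "id", "type": "int"},
        {"name": "name", "type": "string"},
    ],
}

# The registry expects {"schema": "<the schema serialized as a JSON string>"},
# so the schema is dumped twice: once to a string, once inside the wrapper.
body = json.dumps({"schema": json.dumps(user_schema)})
print(body)

# import requests
# r = requests.post(
#     "http://localhost:18081/subjects/user-value/versions",
#     headers={"Content-Type": "application/vnd.schemaregistry.v1+json"},
#     data=body,
# )
# print(r.json())
```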

With Kafka, you’d need to deploy the Confluent Schema Registry as a separate container with its own configuration, JVM heap settings, and health monitoring. Redpanda eliminates that entire layer.

HTTP Proxy (Pandaproxy)

Redpanda includes an HTTP proxy called Pandaproxy that lets you produce and consume messages over REST. This is useful for applications that can’t use the Kafka binary protocol, such as serverless functions, web frontends, or quick integrations where adding a Kafka client library is overkill.

Produce a JSON record to the test-events topic via HTTP:

curl -s -X POST http://localhost:18082/topics/test-events/records \
  -H "Content-Type: application/vnd.kafka.json.v2+json" \
  -d '{"records":[{"value":{"name":"test","id":1}}]}'

The response confirms the record was written with its partition and offset:

{"offsets":[{"partition":0,"offset":1}]}

You can also produce multiple records in a single request by adding more objects to the records array. Consuming via REST requires first creating a consumer instance in a consumer group, which is documented in the Redpanda HTTP Proxy documentation.
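
A sketch of building such a batch payload in Python — the record values here are illustrative; the commented requests call mirrors the curl command above:

```python
import json

# Three records in one request body.
records = [{"value": {"name": f"user-{i}", "id": i}} for i in range(3)]
payload = json.dumps({"records": records})
print(payload)

# import requests
# r = requests.post(
#     "http://localhost:18082/topics/test-events/records",
#     headers={"Content-Type": "application/vnd.kafka.json.v2+json"},
#     data=payload,
# )
# print(r.json())  # one offset entry per record
```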

Kafka has a similar REST proxy (Confluent REST Proxy), but it’s a separate Java application that needs its own container, JVM, and configuration. Pandaproxy is built into the Redpanda binary with zero additional overhead.

Connecting Python Applications

Since Redpanda speaks the Kafka protocol natively, any Kafka client library works without modification. The kafka-python-ng library connects to Redpanda the same way it connects to Kafka. Zero code changes required.

Install the Python client:

pip install kafka-python-ng

Create a producer script called producer.py:

from kafka import KafkaProducer
import json
import time

producer = KafkaProducer(
    bootstrap_servers=['localhost:19092'],
    value_serializer=lambda v: json.dumps(v).encode('utf-8')
)

start = time.time()
for i in range(100000):
    producer.send('test-events', {'id': i, 'name': f'user-{i}'})
producer.flush()
elapsed = time.time() - start

print(f"Produced 100,000 messages in {elapsed:.2f}s")
print(f"Throughput: {100000/elapsed:.0f} msg/s")

Run the producer:

python3 producer.py

On the test system, the producer achieved 10,688 messages per second:

Produced 100,000 messages in 9.36s
Throughput: 10688 msg/s

Create a consumer script called consumer.py:

from kafka import KafkaConsumer
import json
import time

consumer = KafkaConsumer(
    'test-events',
    bootstrap_servers=['localhost:19092'],
    auto_offset_reset='earliest',
    value_deserializer=lambda v: json.loads(v.decode('utf-8')),
    consumer_timeout_ms=5000
)

count = 0
start = time.time()
for message in consumer:
    count += 1
elapsed = time.time() - start

print(f"Consumed {count} messages in {elapsed:.2f}s")
print(f"Throughput: {count/elapsed:.0f} msg/s")

Run the consumer:

python3 consumer.py

Consumer throughput on the same setup:

Consumed 100000 messages in 63.57s
Throughput: 1573 msg/s

The same kafka-python-ng code works against both Kafka and Redpanda. You only change the bootstrap_servers address. This makes Redpanda a drop-in replacement for development environments, and migrating existing Kafka applications requires no client-side changes.
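
Since the only difference is the address, it helps to read the bootstrap servers from the environment so the same script targets Kafka in one deployment and Redpanda in another. A tiny sketch — KAFKA_BOOTSTRAP_SERVERS is my own variable name, not a library convention:

```python
import os

# Same client code runs against Kafka or Redpanda; only the address differs.
bootstrap = os.environ.get("KAFKA_BOOTSTRAP_SERVERS", "localhost:19092").split(",")
print(bootstrap)  # → ['localhost:19092'] unless the variable is set

# producer = KafkaProducer(bootstrap_servers=bootstrap, ...)
```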

Monitoring with Prometheus Metrics

Redpanda exposes Prometheus-format metrics natively on port 9644. No JMX exporter, no sidecar, no additional configuration. The metrics endpoint is available at /public_metrics.

Fetch a sample of the available metrics:

curl -s http://localhost:9644/public_metrics | head -20

The output follows standard Prometheus exposition format:

# HELP redpanda_application_uptime_seconds_total Redpanda uptime in seconds
# TYPE redpanda_application_uptime_seconds_total gauge
redpanda_application_uptime_seconds_total{} 312.0
# HELP redpanda_kafka_request_bytes_total Total bytes of Kafka requests
# TYPE redpanda_kafka_request_bytes_total counter
redpanda_kafka_request_bytes_total{redpanda_request="produce"} 4521984
redpanda_kafka_request_bytes_total{redpanda_request="fetch"} 1289472
# HELP redpanda_kafka_partitions Number of partitions across all topics
# TYPE redpanda_kafka_partitions gauge
redpanda_kafka_partitions{} 9
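
For quick scripting without a Prometheus client library, a minimal Python sketch that parses exposition lines like those above into (name, labels, value) tuples — it handles the simple label shapes shown here, not the full exposition-format spec:

```python
import re

LINE = re.compile(r'^([a-zA-Z_:][\w:]*)\{(.*)\}\s+(\S+)$')

def parse_metric(line: str):
    """Parse one 'name{labels} value' line; returns (name, labels dict, float) or None."""
    m = LINE.match(line.strip())
    if not m:
        return None  # comments (# HELP / # TYPE) and blanks fall through here
    name, raw_labels, value = m.groups()
    labels = dict(re.findall(r'(\w+)="([^"]*)"', raw_labels))
    return name, labels, float(value)

print(parse_metric('redpanda_kafka_request_bytes_total{redpanda_request="produce"} 4521984'))
```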

To scrape these metrics with Prometheus, add this job to your prometheus.yml:

scrape_configs:
  - job_name: 'redpanda'
    static_configs:
      - targets: ['10.0.1.50:9644']
    metrics_path: /public_metrics

For Grafana dashboards, Redpanda provides official dashboards on Grafana’s dashboard marketplace. Search for “Redpanda” and import the dashboard ID into your Grafana instance.

Compare this to Kafka’s monitoring story: Kafka requires a JMX exporter running as a Java agent, a separate prometheus-jmx-exporter container, and careful JMX port configuration. Redpanda’s native Prometheus endpoint eliminates all of that complexity.

Production Considerations

The Docker Compose setups shown above work well for development and testing. Moving to production requires tuning several settings.

Set --smp to match the number of CPU cores available to the container. Redpanda’s thread-per-core architecture performs best when it has dedicated cores. On a 4-core machine, use --smp 4. On an 8-core machine, use --smp 8.

Set --memory to roughly 80% of the available RAM. Redpanda manages its own memory allocator (Seastar), so it needs to know exactly how much memory it can use. Leaving 20% for the OS, filesystem cache, and other processes prevents OOM kills.
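
The 80% rule is simple enough to compute when templating Compose files. A quick sketch — pure arithmetic, rounding down to whole GiB:

```python
def redpanda_memory_flag(host_ram_gib: float, reserve_fraction: float = 0.2) -> str:
    """Return a --memory value leaving reserve_fraction of RAM for the OS and page cache."""
    usable = int(host_ram_gib * (1 - reserve_fraction))
    return f"--memory {usable}G"

print(redpanda_memory_flag(8))   # → --memory 6G
print(redpanda_memory_flag(16))  # → --memory 12G
```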

Remove the --overprovisioned flag on dedicated hardware. This flag tells Redpanda to play nice with other workloads by yielding CPU time. On a machine dedicated to Redpanda, removing it allows the Seastar runtime to use busy-polling for lower latency.

Switch from Docker volumes to bind mounts for /var/lib/redpanda/data. Bind mounts give you direct control over the storage path, making backups and disk monitoring straightforward. Use a dedicated SSD or NVMe drive for the data directory.

volumes:
  - /data/redpanda:/var/lib/redpanda/data

Set log level to warn in production to reduce disk I/O from logging. Add --default-log-level=warn to the command arguments. Debug and info logs are useful during initial setup but generate too much noise under sustained load.

For TLS encryption, configure the Kafka listener with the tls:// protocol prefix and mount your certificate files into the container. The Redpanda documentation covers TLS configuration for each listener type (Kafka, HTTP proxy, Schema Registry, Admin API).

Never use --mode dev-container in production. This flag disables fsync, which means data can be lost on power failure. It’s designed for CI pipelines and throwaway test environments only.

What Redpanda Bundles vs. What Kafka Needs Separately

The following table summarizes the infrastructure difference between Redpanda and a comparable Kafka deployment:

Component                  Redpanda                             Kafka
Broker                     Built-in (single binary)             Kafka broker (JVM)
Metadata / Coordination    Built-in Raft                        KRaft (built-in since Kafka 4.0, previously ZooKeeper)
Schema Registry            Built-in                             Confluent Schema Registry (separate JVM process)
REST Proxy                 Built-in (Pandaproxy)                Confluent REST Proxy (separate JVM process)
Management Console         Redpanda Console (official)          No official UI (third-party: AKHQ, Kafdrop)
Prometheus Metrics         Native endpoint on /public_metrics   JMX Exporter (Java agent + sidecar)
Runtime                    C++ (Seastar framework)              JVM (Java 11+)
Idle memory (single node)  ~349 MiB                             ~500+ MiB (JVM heap alone)

For a typical Kafka deployment with schema management, REST access, monitoring, and a UI, you’re looking at 4 to 5 separate containers. Redpanda delivers the same functionality with 2 containers (broker + console), and the console is optional if you only need the API layer.

The tradeoff is ecosystem maturity. Kafka has a larger ecosystem of connectors (Kafka Connect), a more established Streams library, and wider enterprise support. Redpanda’s connector story relies on compatibility with Kafka Connect, which works but occasionally has edge cases with complex connectors. If your use case is event streaming with standard producers and consumers, Redpanda’s simpler deployment model is hard to argue against. For detailed throughput and latency numbers on identical hardware, see our Kafka vs Redpanda benchmarks.
