Nginx vs Apache vs Caddy vs HAProxy: Performance Benchmark (2026)

Picking a web server or reverse proxy used to be a two-horse race between Nginx and Apache. Caddy showed up with automatic HTTPS and zero-config TLS, and HAProxy remains the standard for high-availability load balancing. The question is no longer which one is “best” but which one fits your workload.

We installed all four on the same Rocky Linux 10 VM (4 cores, 4 GB RAM), served the same static HTML page, and benchmarked each with hey from a separate client on the same virtual network. Same hardware, same OS, same test file, same benchmark parameters. The numbers below come from that test.

Tested March 2026 on Rocky Linux 10.1 (kernel 6.12) with Nginx 1.26.3, Apache 2.4.63 (event MPM), Caddy 2.11.2, HAProxy 3.0.5 LTS

Quick Decision Table

If you already know your use case, start here:

| Use Case | Best Pick | Why |
|---|---|---|
| Static content, reverse proxy, general web serving | Nginx | Highest RPS (60K), lowest idle memory (22 MB), battle-tested |
| .htaccess, PHP-FPM, shared hosting | Apache | Per-directory config, mod_php, widest CMS compatibility |
| Automatic HTTPS, simple config, small teams | Caddy | Zero-config TLS via Let's Encrypt, one-file Caddyfile, HTTP/3 built in |
| TCP/HTTP load balancing, HA, connection pooling | HAProxy | Near-native HTTPS throughput while proxying (51K RPS), health checks, stick tables, rate limiting |

Test Environment

The server and client VMs ran on the same virtualization host, connected via a virtual bridge (sub-millisecond network latency):

  • Server VM: Rocky Linux 10.1, 4 vCPUs (KVM), 4 GB RAM, kernel 6.12.0
  • Client VM: Rocky Linux 10.1, 2 vCPUs, 2 GB RAM
  • Benchmark tool: hey (Go HTTP load generator), 100 concurrent connections, 30-second timed runs (over a million requests per run at these rates)
  • Test file: 164-byte static HTML page
  • SSL: Self-signed RSA 2048-bit certificate, TLS 1.2/1.3
  • Each server was configured with 4 workers/threads to match the CPU count. Only one server ran at a time to avoid resource contention
  • HAProxy was benchmarked as a reverse proxy (proxying to an Nginx backend on port 8000), which is its real-world use case. The other three served files directly
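
The self-signed certificate referenced above can be generated with openssl. A minimal sketch; the CN (the server VM's IP) and file names are our choices, adjust to your layout:

```shell
# Generate the self-signed RSA 2048-bit certificate used for the HTTPS runs.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=192.168.1.115" \
  -keyout bench.key -out bench.crt

# HAProxy expects certificate and key concatenated into a single PEM file.
cat bench.crt bench.key > bench.pem
```

Copy bench.crt and bench.key into /etc/pki/tls/certs and /etc/pki/tls/private respectively to match the paths used in the configs below.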

Benchmark Results

Raw numbers from 30-second benchmark runs at 100 concurrent connections:

| Metric | Nginx 1.26.3 | Apache 2.4.63 | Caddy 2.11.2 | HAProxy 3.0.5* |
|---|---|---|---|---|
| HTTP requests/sec | 60,382 | 46,908 | 51,826 | 57,164 |
| HTTPS requests/sec | 53,162 | 34,434 | 47,437 | 51,298 |
| HTTP p50 latency | 1.5 ms | 1.1 ms | 1.6 ms | 1.5 ms |
| HTTP p99 latency | 4.7 ms | 16.0 ms | 6.9 ms | 5.0 ms |
| HTTPS p50 latency | 1.6 ms | 1.3 ms | 1.8 ms | 1.7 ms |
| HTTPS p99 latency | 5.5 ms | 24.7 ms | 7.4 ms | 5.4 ms |
| Idle memory | 22.3 MB | 51.2 MB | 53.1 MB | 29.3 MB |
| Memory under load | 50.5 MB | 218.8 MB | 63.6 MB | 34.6 MB |
| Failed requests | 0 | 0 | 0 | 0 |

*HAProxy was tested as a reverse proxy with an Nginx backend, which is its intended use case. Direct file serving numbers are not applicable since HAProxy is not a web server.

What the Numbers Tell Us

Nginx leads on throughput. At 60,382 HTTP RPS and 53,162 HTTPS RPS, Nginx handled more requests than any other server in our tests. It also used the least idle memory at 22.3 MB.

HAProxy excels at HTTPS with minimal overhead. Despite proxying every request through an Nginx backend, HAProxy’s HTTPS performance (51,298 RPS) was within 4% of Nginx serving directly. Its memory under load (34.6 MB) was the lowest of any server, which matters when you are proxying thousands of concurrent connections.

Caddy trades some throughput for simplicity. At 51,826 HTTP RPS and 47,437 HTTPS RPS, Caddy performs well while requiring far less configuration. Its HTTPS overhead is smaller than Nginx's or Apache's because Caddy uses Go's native TLS stack rather than OpenSSL (and it handles HTTP/2 multiplexing natively).

Apache’s p99 latency is its weakness. While Apache’s median latency was actually the lowest (1.1 ms p50), its tail latency spiked to 16 ms at p99 for HTTP and 24.7 ms for HTTPS. Most requests are fast, but under load some requests wait significantly longer. Apache also used 218 MB under load (4x more than Nginx) because every in-flight request occupies a worker thread in the event MPM's pool.

Nginx

Nginx uses an event-driven, non-blocking architecture where a small number of worker processes handle thousands of concurrent connections. This design is why it dominates in throughput benchmarks and uses minimal memory. Nginx is the default reverse proxy for most production deployments, from WordPress sites to Kubernetes ingress controllers.

Install on Rocky Linux 10:

sudo dnf install -y nginx

Confirmed version:

nginx -v

Confirmed as the latest stable branch:

nginx version: nginx/1.26.3

Benchmark configuration (4 worker processes, one per CPU core):

worker_processes 4;

events {
    worker_connections 4096;
}

http {
    server {
        listen 80;
        listen 443 ssl;
        ssl_certificate /etc/pki/tls/certs/bench.crt;
        ssl_certificate_key /etc/pki/tls/private/bench.key;
        ssl_protocols TLSv1.2 TLSv1.3;
        root /var/www/html;
    }
}

| Strength | Limitation |
|---|---|
| Highest throughput in our tests (60K RPS) | Config syntax is verbose compared to Caddy |
| Lowest idle memory (22 MB) | No built-in automatic HTTPS |
| Mature ecosystem, extensive documentation | Dynamic modules require recompilation or NGINX Plus |
| HTTP/2 support, WebSocket proxying | HTTP/3 support is still experimental and requires a build with the QUIC module |

Apache HTTP Server

Apache has been around since 1995 and powers a significant share of the web. The event MPM (multi-processing module) replaced the older prefork model and handles concurrency much better, but Apache still uses more memory per connection than Nginx because each in-flight request occupies a worker thread.

Install on Rocky Linux 10:

sudo dnf install -y httpd mod_ssl

Version and MPM confirmation:

httpd -v

Rocky Linux 10 ships Apache 2.4.63:

Server version: Apache/2.4.63 (Rocky Linux)
Server built:   Dec 10 2025 00:00:00

Confirm the event MPM is active (the performant multi-threaded model):

httpd -V | grep MPM

This should show event, not prefork:

Server MPM:     event
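
The article does not list the Apache benchmark config, so here is a hedged sketch of event MPM tuning sized to the 4-vCPU VM. The values and the file location (e.g. /etc/httpd/conf.d/mpm-tuning.conf) are our assumptions, not the authors' exact settings:

```apache
# Sketch: size the event MPM for a 4-core machine (values are assumptions).
<IfModule mpm_event_module>
    ServerLimit              4
    StartServers             4
    ThreadsPerChild          64
    MaxRequestWorkers        256
    MaxConnectionsPerChild   0
</IfModule>
```

MaxRequestWorkers is ServerLimit x ThreadsPerChild (4 x 64 = 256), which caps concurrent in-flight requests and, with it, Apache's memory under load.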

| Strength | Limitation |
|---|---|
| .htaccess per-directory config (shared hosting) | Highest memory under load (218 MB) |
| mod_php for legacy PHP applications | Worst p99 latency (16 ms HTTP, 24.7 ms HTTPS) |
| Widest module ecosystem | Lower throughput than Nginx, Caddy, or HAProxy |
| Mature, well-documented, everywhere | Config files are verbose and scattered across multiple directories |

Caddy

Caddy is written in Go and its main selling point is automatic HTTPS. Point it at a domain, and it obtains and renews Let’s Encrypt certificates without any configuration. The Caddyfile format is the most readable config of any server in this comparison. Caddy also supports HTTP/3 (QUIC) out of the box.

Install on Rocky Linux 10 from the official COPR repository:

sudo dnf install -y 'dnf-command(copr)'
sudo dnf copr enable -y @caddy/caddy
sudo dnf install -y caddy

Confirmed version:

caddy version

Caddy 2.11.2 is the latest stable release:

v2.11.2

Benchmark Caddyfile (entire configuration):

:80 {
    root * /var/www/html
    file_server
}

:443 {
    tls /etc/pki/tls/certs/bench.crt /etc/pki/tls/private/bench.key
    root * /var/www/html
    file_server
}

That is the entire config. Compare this to the Nginx and Apache configurations above. For a deeper look at Caddy, see our Caddy installation guide.
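
For production use, the config gets even shorter. A sketch of automatic HTTPS, assuming a hypothetical domain (example.com) with DNS already pointing at the server:

```
example.com {
    root * /var/www/html
    file_server
}
```

With a site address instead of a bare port, Caddy obtains and renews a Let's Encrypt certificate, redirects HTTP to HTTPS, and enables HTTP/2 and HTTP/3 without further configuration.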

| Strength | Limitation |
|---|---|
| Automatic HTTPS (zero-config Let's Encrypt) | Higher idle memory than Nginx (53 MB vs 22 MB) |
| HTTP/3 (QUIC) built in | Smaller plugin ecosystem than Nginx or Apache |
| Simplest configuration of any server | Fewer tuning knobs for advanced users |
| Good HTTPS performance (only 8.5% drop from HTTP) | Written in Go (garbage collection can cause rare latency spikes) |

HAProxy

HAProxy is a TCP/HTTP load balancer, not a web server. It does not serve files from disk. Its job is to sit in front of your application servers, terminate SSL, balance traffic, perform health checks, and manage connections. We benchmarked it as a reverse proxy (fronting an Nginx backend) because that is how it runs in production.

Install on Rocky Linux 10:

sudo dnf install -y haproxy

Confirmed version:

haproxy -v

HAProxy 3.0.5 is the current LTS branch (supported through Q2 2029):

HAProxy version 3.0.5-8e879a5 2024/09/19 - https://haproxy.org/
Status: long-term supported branch - will stop receiving fixes around Q2 2029.

Benchmark configuration with 4 threads:

global
    maxconn 10000
    nbthread 4
    ssl-default-bind-options ssl-min-ver TLSv1.2

defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s

frontend http_front
    bind *:80
    default_backend webservers

frontend https_front
    bind *:443 ssl crt /etc/pki/tls/certs/bench.pem
    default_backend webservers

backend webservers
    server local 127.0.0.1:8000

Even though HAProxy proxied every request through an Nginx backend (adding network overhead and processing), it still delivered 51,298 HTTPS RPS with the lowest memory footprint under load (34.6 MB). For HAProxy installation and configuration details, see our dedicated guide.
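
The benchmark backend above is a single local server; in production HAProxy typically fronts a pool with active health checks. A hedged sketch of what that backend section might look like (server names, addresses, and the /healthz endpoint are hypothetical):

```
backend webservers
    balance roundrobin
    option httpchk GET /healthz
    server web1 192.168.1.121:8000 check inter 2s fall 3 rise 2
    server web2 192.168.1.122:8000 check inter 2s fall 3 rise 2
```

The check parameters probe each backend every 2 seconds, mark it down after 3 failures, and bring it back after 2 successes, so failed nodes are removed from rotation automatically.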

| Strength | Limitation |
|---|---|
| Best HTTPS p99 latency (5.4 ms) | Not a web server (cannot serve static files) |
| Lowest memory under load (34.6 MB) | Configuration syntax has a learning curve |
| Health checks, stick tables, rate limiting built in | No dynamic content processing (no CGI, no PHP) |
| Connection pooling, graceful reloads | Needs a backend server for any content serving |

Configuration Complexity

How much effort does it take to serve a static site with HTTPS? Here is the minimum configuration for each server, measured by lines of config needed:

| Server | Lines for Static + HTTPS | Automatic HTTPS | Config Reload |
|---|---|---|---|
| Caddy | 6 | Yes (built in) | caddy reload |
| Nginx | 15 | No (certbot needed) | nginx -s reload |
| HAProxy | 18 | No (certbot + PEM concat) | systemctl reload haproxy |
| Apache | 20 | No (certbot + mod_ssl) | apachectl graceful |

Caddy’s automatic HTTPS is genuinely useful. On a fresh server, caddy run --config Caddyfile with just a domain name in the config will obtain a Let’s Encrypt certificate, configure HTTPS, set up HTTP-to-HTTPS redirect, and enable HTTP/2 and HTTP/3 with zero additional steps. For the other three servers, you need to install certbot, obtain the certificate, configure the server to use it, and set up a renewal hook.

SSL/TLS Termination Performance

SSL termination is one of the most CPU-intensive operations a reverse proxy performs. The overhead percentage shows how much throughput drops when switching from HTTP to HTTPS:

| Server | HTTP RPS | HTTPS RPS | SSL Overhead |
|---|---|---|---|
| Caddy | 51,826 | 47,437 | 8.5% |
| HAProxy | 57,164 | 51,298 | 10.3% |
| Nginx | 60,382 | 53,162 | 12.0% |
| Apache | 46,908 | 34,434 | 26.6% |

Caddy has the smallest SSL overhead at 8.5%, likely because Go’s TLS stack handles HTTP/2 multiplexing natively without the OpenSSL layer. Apache takes the biggest hit at 26.6% because mod_ssl adds significant per-connection overhead in the event MPM’s thread model.
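
The overhead column is simply (HTTP RPS - HTTPS RPS) / HTTP RPS. Reproducing Nginx's figure from the raw numbers:

```shell
# SSL overhead = (HTTP RPS - HTTPS RPS) / HTTP RPS, using Nginx's measured values
awk 'BEGIN { printf "%.1f%%\n", (60382 - 53162) / 60382 * 100 }'
# prints 12.0%
```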

When to Use Each Server

Choose Nginx When

  • You need maximum throughput for static content or reverse proxy
  • Memory efficiency matters (containers, small VMs)
  • You are running Kubernetes (most ingress controllers use Nginx)
  • You want the largest community and ecosystem of any web server

Choose Apache When

  • Your application relies on .htaccess files (WordPress shared hosting, CMS platforms)
  • You need mod_php or mod_rewrite compatibility
  • Your team already knows Apache and performance is not your bottleneck

Choose Caddy When

  • You want automatic HTTPS with zero configuration
  • HTTP/3 support is a requirement
  • Your team is small and config simplicity saves real time
  • You are running API services or Go applications

Choose HAProxy When

  • You need TCP or HTTP load balancing across multiple backends
  • Health checks and automatic failover are critical
  • You want the lowest memory footprint under high concurrency
  • You need advanced traffic management: rate limiting, stick tables, ACLs

Feature Comparison

Beyond raw performance, these servers differ significantly in features:

| Feature | Nginx | Apache | Caddy | HAProxy |
|---|---|---|---|---|
| Serve static files | Yes | Yes | Yes | No |
| Reverse proxy | Yes | Yes (mod_proxy) | Yes | Yes |
| Load balancing | Basic (round-robin, least_conn) | mod_proxy_balancer | Basic | Advanced (10+ algorithms) |
| Health checks | Passive only (active checks need NGINX Plus) | mod_proxy_hcheck | Basic | Advanced (L4 + L7) |
| Automatic HTTPS | No | No | Yes | No |
| HTTP/2 | Yes | Yes | Yes | Yes |
| HTTP/3 (QUIC) | Experimental (QUIC module) | No | Yes | Experimental |
| .htaccess support | No | Yes | No | No |
| Rate limiting | Built in | mod_ratelimit | Plugin | Built in (stick tables) |
| WebSocket | Yes | mod_proxy_wstunnel | Yes | Yes |
| Config hot reload | Yes (graceful) | Yes (graceful) | Yes (API) | Yes (seamless) |
| Written in | C | C | Go | C |

Benchmark Methodology

Transparency matters for benchmarks. Here is exactly how the tests were run so you can reproduce them.

The benchmark tool hey was installed on the client VM:

go install github.com/rakyll/hey@latest

Each test ran for 30 seconds with 100 concurrent connections:

# HTTP benchmark
~/go/bin/hey -z 30s -c 100 -q 0 http://192.168.1.115/

# HTTPS benchmark
~/go/bin/hey -z 30s -c 100 -q 0 https://192.168.1.115/
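
hey prints a plain-text summary per run. A hedged sketch of extracting the headline numbers from a saved run (the results file name is our assumption; it matches hey's "Requests/sec:" line and the "99% in ... secs" line of the latency distribution):

```shell
# Extract requests/sec and p99 latency from a saved hey summary,
# e.g.: hey -z 30s -c 100 http://192.168.1.115/ > results.txt
awk '/Requests\/sec:/ { print "RPS:", $2 }
     /99% in/         { print "p99:", $3, "secs" }' results.txt
```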

Memory was captured during peak load using ps on the server:

ps aux | grep '[n]ginx' | awk '{sum+=$6} END {printf "%.1f MB\n", sum/1024}'

Between each server test, the previous server was stopped completely and the system was given 10 seconds to settle. No two servers ran simultaneously. The HAProxy documentation provides additional benchmarking guidance for production tuning.
