Picking a web server or reverse proxy used to be a two-horse race between Nginx and Apache. Caddy showed up with automatic HTTPS and zero-config TLS, and HAProxy remains the standard for high-availability load balancing. The question is no longer which one is “best” but which one fits your workload.
We installed all four on the same Rocky Linux 10 VM (4 cores, 4 GB RAM), served the same static HTML page, and benchmarked each with hey from a separate client on the same virtual network. Same hardware, same OS, same test file, same benchmark parameters. The numbers below come from that test.
Tested March 2026 on Rocky Linux 10.1 (kernel 6.12) with Nginx 1.26.3, Apache 2.4.63 (event MPM), Caddy 2.11.2, and HAProxy 3.0.5 LTS.
Quick Decision Table
If you already know your use case, start here:
| Use Case | Best Pick | Why |
|---|---|---|
| Static content, reverse proxy, general web serving | Nginx | Highest RPS (60K), lowest idle memory (22 MB), battle-tested |
| .htaccess, PHP-FPM, shared hosting | Apache | Per-directory config, mod_php, widest CMS compatibility |
| Automatic HTTPS, simple config, small teams | Caddy | Zero-config TLS via Let’s Encrypt, one-file Caddyfile, HTTP/3 built in |
| TCP/HTTP load balancing, HA, connection pooling | HAProxy | Best HTTPS throughput (51K RPS), health checks, stick tables, rate limiting |
Test Environment
Both VMs ran on the same virtualization host and communicated over a virtual bridge (sub-millisecond network latency):
- Server VM: Rocky Linux 10.1, 4 vCPUs (KVM), 4 GB RAM, kernel 6.12.0
- Client VM: Rocky Linux 10.1, 2 vCPUs, 2 GB RAM
- Benchmark tool: hey (Go HTTP load generator), 100 concurrent connections, 30-second runs, targeting 1 million requests
- Test file: 164-byte static HTML page
- SSL: Self-signed RSA 2048-bit certificate, TLS 1.2/1.3
- Each server was configured with 4 workers/threads to match the CPU count. Only one server ran at a time to avoid resource contention
- HAProxy was benchmarked as a reverse proxy (proxying to an Nginx backend on port 8000), which is its real-world use case. The other three served files directly
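The exact certificate commands were not published; this sketch produces an equivalent self-signed RSA 2048-bit certificate (bench.local is a placeholder CN, and the paths here are relative, while the server configs below reference /etc/pki/tls/...):

```shell
# Generate a self-signed RSA 2048-bit certificate matching the test setup
# (bench.local is a placeholder; adjust output paths to /etc/pki/tls/... as needed)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout bench.key -out bench.crt -subj "/CN=bench.local"

# HAProxy expects the certificate and key concatenated into a single PEM file
cat bench.crt bench.key > bench.pem
```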
Benchmark Results
Raw numbers from 30-second benchmark runs at 100 concurrent connections:
| Metric | Nginx 1.26.3 | Apache 2.4.63 | Caddy 2.11.2 | HAProxy 3.0.5* |
|---|---|---|---|---|
| HTTP requests/sec | 60,382 | 46,908 | 51,826 | 57,164 |
| HTTPS requests/sec | 53,162 | 34,434 | 47,437 | 51,298 |
| HTTP p50 latency | 1.5 ms | 1.1 ms | 1.6 ms | 1.5 ms |
| HTTP p99 latency | 4.7 ms | 16.0 ms | 6.9 ms | 5.0 ms |
| HTTPS p50 latency | 1.6 ms | 1.3 ms | 1.8 ms | 1.7 ms |
| HTTPS p99 latency | 5.5 ms | 24.7 ms | 7.4 ms | 5.4 ms |
| Idle memory | 22.3 MB | 51.2 MB | 53.1 MB | 29.3 MB |
| Memory under load | 50.5 MB | 218.8 MB | 63.6 MB | 34.6 MB |
| Failed requests | 0 | 0 | 0 | 0 |
*HAProxy was tested as a reverse proxy with an Nginx backend, which is its intended use case. Direct file serving numbers are not applicable since HAProxy is not a web server.
What the Numbers Tell Us
Nginx leads on throughput. At 60,382 HTTP RPS and 53,162 HTTPS RPS, Nginx handled more requests than any other server in our tests. It also used the least idle memory at 22.3 MB.
HAProxy excels at HTTPS with minimal overhead. Despite proxying every request through an Nginx backend, HAProxy’s HTTPS performance (51,298 RPS) was within 4% of Nginx serving directly. Its memory under load (34.6 MB) was the lowest of any server, which matters when you are proxying thousands of concurrent connections.
Caddy trades some throughput for simplicity. At 51,826 HTTP RPS and 47,437 HTTPS RPS, Caddy performs well while requiring far less configuration. Its HTTPS overhead is smaller than Nginx’s or Apache’s, likely because Caddy terminates TLS with Go’s native crypto/tls stack rather than an OpenSSL layer.
Apache’s p99 latency is its weakness. While Apache’s median latency was actually the lowest (1.1 ms p50), its tail latency spiked to 16.0 ms at p99 for HTTP and 24.7 ms for HTTPS. Most requests are fast, but under load some requests wait significantly longer. Apache also used 218 MB under load (4x more than Nginx) because each in-flight request occupies its own thread in the event MPM worker pool.
Nginx
Nginx uses an event-driven, non-blocking architecture where a small number of worker processes handle thousands of concurrent connections. This design is why it dominates in throughput benchmarks and uses minimal memory. Nginx is the default reverse proxy for most production deployments, from WordPress sites to Kubernetes ingress controllers.
Install on Rocky Linux 10:
```shell
sudo dnf install -y nginx
```
Confirm the installed version:

```shell
nginx -v
```

The output confirms the latest stable branch:

```
nginx version: nginx/1.26.3
```
Benchmark configuration (4 worker processes, one per CPU core):

```nginx
worker_processes 4;

events {
    worker_connections 4096;
}

http {
    server {
        listen 80;
        listen 443 ssl;
        ssl_certificate     /etc/pki/tls/certs/bench.crt;
        ssl_certificate_key /etc/pki/tls/private/bench.key;
        ssl_protocols       TLSv1.2 TLSv1.3;
        root /var/www/html;
    }
}
```
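The config above only serves static files. For the reverse-proxy role highlighted earlier, a minimal addition inside the server block might look like this sketch (the backend address is illustrative, not part of the benchmark):

```nginx
location / {
    proxy_pass http://127.0.0.1:8000;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```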
| Strength | Limitation |
|---|---|
| Highest throughput in our tests (60K RPS) | Config syntax is verbose compared to Caddy |
| Lowest idle memory (22 MB) | No built-in automatic HTTPS |
| Mature ecosystem, extensive documentation | Dynamic modules require recompilation or NGINX Plus |
| HTTP/2 support, WebSocket proxying | HTTP/3 is experimental in mainline and often not compiled into distro packages |
Apache HTTP Server
Apache has been around since 1995 and powers a significant share of the web. The event MPM (multi-processing module) replaced the older prefork model and handles concurrency much better, but Apache still uses more memory than Nginx under load because each in-flight request is handled by a dedicated thread.
Install on Rocky Linux 10:
```shell
sudo dnf install -y httpd mod_ssl
```
Version and MPM confirmation:
```shell
httpd -v
```

Rocky Linux 10 ships Apache 2.4.63:

```
Server version: Apache/2.4.63 (Rocky Linux)
Server built:   Dec 10 2025 00:00:00
```
Confirm the event MPM is active (the performant multi-threaded model):
```shell
httpd -V | grep MPM
```

This should show event, not prefork:

```
Server MPM: event
```
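If it shows prefork instead, the MPM is selected in /etc/httpd/conf.modules.d/00-mpm.conf (the standard location in Rocky’s httpd packaging); leave only the event line uncommented, then restart httpd:

```apache
LoadModule mpm_event_module modules/mod_mpm_event.so
#LoadModule mpm_prefork_module modules/mod_mpm_prefork.so
```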
| Strength | Limitation |
|---|---|
| .htaccess per-directory config (shared hosting) | Highest memory under load (218 MB) |
| mod_php for legacy PHP applications | Worst p99 latency (16.0 ms HTTP, 24.7 ms HTTPS) |
| Widest module ecosystem | Lower throughput than Nginx, Caddy, or HAProxy |
| Mature, well-documented, everywhere | Config files are verbose and scattered across multiple directories |
Caddy
Caddy is written in Go and its main selling point is automatic HTTPS. Point it at a domain, and it obtains and renews Let’s Encrypt certificates without any configuration. The Caddyfile format is the most readable config of any server in this comparison. Caddy also supports HTTP/3 (QUIC) out of the box.
Install on Rocky Linux 10 from the official COPR repository:
```shell
sudo dnf install -y 'dnf-command(copr)'
sudo dnf copr enable -y @caddy/caddy
sudo dnf install -y caddy
```
Confirm the installed version:

```shell
caddy version
```

Caddy 2.11.2 is the latest stable release:

```
v2.11.2
```
Benchmark Caddyfile (entire configuration):
```caddyfile
:80 {
    root * /var/www/html
    file_server
}

:443 {
    tls /etc/pki/tls/certs/bench.crt /etc/pki/tls/private/bench.key
    root * /var/www/html
    file_server
}
```
That is the entire config. Compare this to the Nginx and Apache configurations above. For a deeper look at Caddy, see our Caddy installation guide.
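The benchmark deliberately pins a self-signed certificate, which bypasses Caddy’s headline feature. With a real domain, automatic HTTPS needs nothing more than the site address (example.com is a placeholder):

```caddyfile
example.com {
    root * /var/www/html
    file_server
}
```

Caddy then obtains and renews the Let’s Encrypt certificate and redirects HTTP to HTTPS on its own.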
| Strength | Limitation |
|---|---|
| Automatic HTTPS (zero-config Let’s Encrypt) | Higher idle memory than Nginx (53 MB vs 22 MB) |
| HTTP/3 (QUIC) built in | Smaller plugin ecosystem than Nginx or Apache |
| Simplest configuration of any server | Fewer tuning knobs for advanced users |
| Good HTTPS performance (only 8% drop from HTTP) | Written in Go (garbage collection can cause rare latency spikes) |
HAProxy
HAProxy is a TCP/HTTP load balancer, not a web server. It does not serve files from disk. Its job is to sit in front of your application servers, terminate SSL, balance traffic, perform health checks, and manage connections. We benchmarked it as a reverse proxy (fronting an Nginx backend) because that is how it runs in production.
Install on Rocky Linux 10:
```shell
sudo dnf install -y haproxy
```
Confirm the installed version:

```shell
haproxy -v
```

HAProxy 3.0.5 is the current LTS branch (supported through Q2 2029):

```
HAProxy version 3.0.5-8e879a5 2024/09/19 - https://haproxy.org/
Status: long-term supported branch - will stop receiving fixes around Q2 2029.
```
Benchmark configuration with 4 threads:
```haproxy
global
    maxconn 10000
    nbthread 4
    ssl-default-bind-options ssl-min-ver TLSv1.2

defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s

frontend http_front
    bind *:80
    default_backend webservers

frontend https_front
    bind *:443 ssl crt /etc/pki/tls/certs/bench.pem
    default_backend webservers

backend webservers
    server local 127.0.0.1:8000
```
Even though HAProxy proxied every request through an Nginx backend (adding network overhead and processing), it still delivered 51,298 HTTPS RPS with the lowest memory footprint under load (34.6 MB). For HAProxy installation and configuration details, see our dedicated guide.
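The benchmark config uses a single unchecked backend. A sketch of the health checks and failover mentioned above, with a hypothetical second backend server:

```haproxy
backend webservers
    balance roundrobin
    option httpchk GET /
    server web1 127.0.0.1:8000 check
    server web2 127.0.0.1:8001 check backup
```

With `check` enabled, HAProxy probes each backend and routes around failures; the `backup` server only receives traffic when web1 is down.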
| Strength | Limitation |
|---|---|
| Best HTTPS p99 latency (5.4 ms) | Not a web server (cannot serve static files) |
| Lowest memory under load (34.6 MB) | Configuration syntax has a learning curve |
| Health checks, stick tables, rate limiting built in | No dynamic content processing (no CGI, no PHP) |
| Connection pooling, graceful reloads | Needs a backend server for any content serving |
Configuration Complexity
How much effort does it take to serve a static site with HTTPS? Here is the minimum configuration for each server, measured by lines of config needed:
| Server | Lines for Static + HTTPS | Automatic HTTPS | Config Reload |
|---|---|---|---|
| Caddy | 6 | Yes (built in) | caddy reload |
| Nginx | 15 | No (certbot needed) | nginx -s reload |
| HAProxy | 18 | No (certbot + PEM concat) | systemctl reload haproxy |
| Apache | 20 | No (certbot + mod_ssl) | apachectl graceful |
Caddy’s automatic HTTPS is genuinely useful. On a fresh server, caddy run --config Caddyfile with just a domain name in the config will obtain a Let’s Encrypt certificate, configure HTTPS, set up HTTP-to-HTTPS redirect, and enable HTTP/2 and HTTP/3 with zero additional steps. For the other three servers, you need to install certbot, obtain the certificate, configure the server to use it, and set up a renewal hook.
SSL/TLS Termination Performance
SSL termination is one of the most CPU-intensive operations a reverse proxy performs. The overhead percentage shows how much throughput drops when switching from HTTP to HTTPS:
| Server | HTTP RPS | HTTPS RPS | SSL Overhead |
|---|---|---|---|
| Caddy | 51,826 | 47,437 | 8.5% |
| HAProxy | 57,164 | 51,298 | 10.3% |
| Nginx | 60,382 | 53,162 | 12.0% |
| Apache | 46,908 | 34,434 | 26.6% |
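The overhead column is simply (HTTP − HTTPS) / HTTP; recomputing it from the raw RPS numbers in the table:

```shell
awk 'BEGIN {
  # name:http_rps:https_rps, taken from the results table
  split("Caddy:51826:47437 HAProxy:57164:51298 Nginx:60382:53162 Apache:46908:34434", row, " ");
  for (i = 1; i <= 4; i++) {
    split(row[i], f, ":");
    printf "%-8s %.1f%%\n", f[1], 100 * (f[2] - f[3]) / f[2];
  }
}'
```

This reproduces the 8.5 / 10.3 / 12.0 / 26.6 percent column above.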
Caddy has the smallest SSL overhead at 8.5%, likely because Go’s crypto/tls stack is integrated directly into its HTTP server rather than bolted on through an OpenSSL layer. Apache takes the biggest hit at 26.6% because mod_ssl adds significant per-connection overhead in the event MPM’s thread model.
When to Use Each Server
Choose Nginx When
- You need maximum throughput for static content or reverse proxy
- Memory efficiency matters (containers, small VMs)
- You are running Kubernetes (most ingress controllers use Nginx)
- You want the largest community and ecosystem of any web server
Choose Apache When
- Your application relies on .htaccess files (WordPress shared hosting, CMS platforms)
- You need mod_php or mod_rewrite compatibility
- Your team already knows Apache and performance is not your bottleneck
Choose Caddy When
- You want automatic HTTPS with zero configuration
- HTTP/3 support is a requirement
- Your team is small and config simplicity saves real time
- You are running API services or Go applications
Choose HAProxy When
- You need TCP or HTTP load balancing across multiple backends
- Health checks and automatic failover are critical
- You want the lowest memory footprint under high concurrency
- You need advanced traffic management: rate limiting, stick tables, ACLs
Feature Comparison
Beyond raw performance, these servers differ significantly in features:
| Feature | Nginx | Apache | Caddy | HAProxy |
|---|---|---|---|---|
| Serve static files | Yes | Yes | Yes | No |
| Reverse proxy | Yes | Yes (mod_proxy) | Yes | Yes |
| Load balancing | Basic (round-robin, least_conn) | mod_proxy_balancer | Basic | Advanced (10+ algorithms) |
| Health checks | Passive only (active checks need NGINX Plus) | mod_proxy_hcheck | Basic | Advanced (L4 + L7) |
| Automatic HTTPS | No | No | Yes | No |
| HTTP/2 | Yes | Yes | Yes | Yes |
| HTTP/3 (QUIC) | Experimental (mainline 1.25+) | No | Yes | Experimental |
| .htaccess support | No | Yes | No | No |
| Rate limiting | Built in | mod_ratelimit | Plugin | Built in (stick tables) |
| WebSocket | Yes | mod_proxy_wstunnel | Yes | Yes |
| Config hot reload | Yes (graceful) | Yes (graceful) | Yes (API) | Yes (seamless) |
| Written in | C | C | Go | C |
Benchmark Methodology
Transparency matters for benchmarks. Here is exactly how the tests were run so you can reproduce them.
The benchmark tool hey was installed on the client VM:
```shell
go install github.com/rakyll/hey@latest
```
Each test ran for 30 seconds with 100 concurrent connections:
```shell
# HTTP benchmark
~/go/bin/hey -z 30s -c 100 -q 0 http://192.168.1.115/

# HTTPS benchmark
~/go/bin/hey -z 30s -c 100 -q 0 https://192.168.1.115/
```
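hey prints a human-readable summary; assuming its standard report format, the headline figure can be pulled out with awk (the sample line below mimics hey’s “Requests/sec” summary field):

```shell
# Simulate a saved hey report and extract the requests/sec figure
printf 'Summary:\n  Requests/sec:\t60382.4105\n' > hey-report.txt
awk '/Requests\/sec/ {print $2}' hey-report.txt
```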
Memory was captured during peak load using ps on the server:
```shell
ps aux | grep '[n]ginx' | awk '{sum+=$6} END {printf "%.1f MB\n", sum/1024}'
```
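The same measurement can be looped over all four daemons (process-name patterns assumed; matching on awk’s command field avoids the `[n]ginx` grep self-match trick):

```shell
# Sum resident memory (RSS, column 6 of ps aux) per server process family;
# servers that are not running report 0.0 MB
for proc in nginx httpd caddy haproxy; do
  ps aux | awk -v p="$proc" \
    '$11 ~ p {sum += $6} END {printf "%-8s %.1f MB\n", p, sum/1024}'
done
```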
Between each server test, the previous server was stopped completely and the system was given 10 seconds to settle. No two servers ran simultaneously. The HAProxy documentation provides additional benchmarking guidance for production tuning.