Install and Configure Nginx on Ubuntu 24.04 / Debian 13 as Web Server and Reverse Proxy

Nginx sits behind a significant portion of the internet’s busiest websites. It handles static content with minimal resource overhead, terminates TLS, load balances across application backends, and reverse proxies traffic to upstream services. This guide walks through every step of deploying Nginx on Ubuntu 24.04 (Noble Numbat) or Debian 13 (Trixie), from installing the latest stable release directly from the official Nginx repository through advanced production hardening.

We will cover virtual hosts, PHP-FPM integration, reverse proxy and WebSocket support, upstream load balancing, Let’s Encrypt TLS automation, HTTP/2 and HTTP/3 (QUIC), security headers, compression, rate limiting, logging, performance tuning, and common troubleshooting scenarios.

Prerequisites

  • A fresh Ubuntu 24.04 or Debian 13 server with root or sudo access.
  • A registered domain name pointing to the server’s public IP (required for Let’s Encrypt).
  • Ports 80, 443 TCP and 443 UDP open in your firewall.

Step 1: Install Nginx from the Official Nginx Repository

The default distribution packages often lag behind. Installing from the official nginx.org repository gets you the latest stable build with current security patches and features like HTTP/3 support.

Start by installing the prerequisite packages needed to add the repository securely.

For Ubuntu 24.04:

sudo apt install curl gnupg2 ca-certificates lsb-release ubuntu-keyring

For Debian 13:

sudo apt install curl gnupg2 ca-certificates lsb-release debian-archive-keyring

Import the official Nginx signing key so apt can verify package integrity.

curl https://nginx.org/keys/nginx_signing.key | gpg --dearmor \
    | sudo tee /usr/share/keyrings/nginx-archive-keyring.gpg >/dev/null

Verify that the downloaded key has the correct fingerprint.

gpg --dry-run --quiet --no-keyring --import --import-options import-show \
    /usr/share/keyrings/nginx-archive-keyring.gpg

The output should contain the fingerprint 573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62. If it does not match, remove the file and investigate before continuing.

Add the stable repository. Replace ubuntu with debian if you are running Debian 13.

echo "deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] \
https://nginx.org/packages/ubuntu $(lsb_release -cs) nginx" \
    | sudo tee /etc/apt/sources.list.d/nginx.list

Pin the official repository so that packages from nginx.org always take priority over distribution packages.

echo -e "Package: *\nPin: origin nginx.org\nPin: release o=nginx\nPin-Priority: 900\n" \
    | sudo tee /etc/apt/preferences.d/99nginx

Now install Nginx.

sudo apt update && sudo apt install nginx -y

Step 2: Enable, Start, and Verify Nginx

Enable the service so it starts automatically on boot, then start it and confirm the process is running.

sudo systemctl enable nginx
sudo systemctl start nginx
sudo systemctl status nginx

You should see active (running) in the output. Verify the installed version to confirm the official repo build.

nginx -v

Open a browser and navigate to http://your_server_ip. The default Nginx welcome page confirms the installation is working.

Step 3: Nginx Directory Structure

Understanding the file layout saves time when you need to debug or extend your configuration later. Here is what lives under /etc/nginx/.

  • /etc/nginx/nginx.conf: Main configuration file. Sets global directives (worker processes, events, http block).
  • /etc/nginx/conf.d/: Drop-in directory for server blocks. Files ending in .conf are auto-included.
  • /etc/nginx/sites-available/: Stores all virtual host config files (available but not necessarily active). Present on some setups.
  • /etc/nginx/sites-enabled/: Symlinks to files in sites-available. Only linked configs are loaded.
  • /etc/nginx/mime.types: Maps file extensions to MIME types.
  • /etc/nginx/snippets/: Reusable config fragments you can include from server blocks.
  • /var/log/nginx/: Default location for access and error logs.
  • /usr/share/nginx/html/: Default document root.

The official Nginx repo packages use /etc/nginx/conf.d/ as the primary drop-in directory. If you prefer the sites-available / sites-enabled pattern, create both directories and add an include directive in nginx.conf.

sudo mkdir -p /etc/nginx/sites-available /etc/nginx/sites-enabled

Then add this line inside the http {} block in /etc/nginx/nginx.conf.

include /etc/nginx/sites-enabled/*;
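With that in place, enabling a site is just a matter of symlinking its config into sites-enabled, and disabling it is removing the symlink. A minimal sketch of the mechanics, using a scratch directory so it can run without root (on a real server the prefix is /etc/nginx and the commands need sudo):

```shell
# Demonstrate the sites-available/sites-enabled pattern in a scratch prefix;
# on a real server, replace "$prefix" with /etc/nginx and prepend sudo.
prefix=$(mktemp -d)
mkdir -p "$prefix/sites-available" "$prefix/sites-enabled"

# A site config lives in sites-available...
printf 'server {\n    listen 80;\n    server_name example.com;\n}\n' \
    > "$prefix/sites-available/example.com.conf"

# ...and is enabled by symlinking it into sites-enabled.
ln -s "$prefix/sites-available/example.com.conf" "$prefix/sites-enabled/"

# Disabling the site later means removing the symlink only;
# the original file in sites-available stays untouched.
ls -l "$prefix/sites-enabled/"
```

On a live server the equivalent commands are sudo ln -s /etc/nginx/sites-available/example.com.conf /etc/nginx/sites-enabled/ to enable and sudo rm /etc/nginx/sites-enabled/example.com.conf to disable, each followed by sudo nginx -t && sudo systemctl reload nginx.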

Step 4: Create a Virtual Host for a Static Site

Create a document root and drop in a simple index page.

sudo mkdir -p /var/www/example.com/html
echo "<h1>example.com is live</h1>" | sudo tee /var/www/example.com/html/index.html
sudo chown -R www-data:www-data /var/www/example.com

Note that the official nginx.org packages run their worker processes as the nginx user rather than www-data. With the world-readable permissions used here the distinction does not matter for serving static files, but adjust ownership if your application needs write access to the document root.

Create the server block configuration.

sudo tee /etc/nginx/conf.d/example.com.conf > /dev/null <<'EOF'
server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;

    root /var/www/example.com/html;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }

    access_log /var/log/nginx/example.com.access.log;
    error_log  /var/log/nginx/example.com.error.log;
}
EOF

Test the configuration syntax and reload.

sudo nginx -t && sudo systemctl reload nginx

Step 5: PHP-FPM Integration

Install PHP-FPM. Ubuntu 24.04 ships PHP 8.3; Debian 13 ships a newer PHP series, so adjust the version number in the commands below to match what your release provides.

sudo apt install php8.3-fpm php8.3-common php8.3-mysql php8.3-xml php8.3-curl -y

PHP-FPM listens on a Unix socket by default. Confirm the socket path.

ls /run/php/php8.3-fpm.sock

Add a location block for PHP processing inside your server block. This passes .php requests to PHP-FPM through the FastCGI protocol.

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/run/php/php8.3-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_index index.php;
    }
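A hardening addition worth considering here is a try_files check inside the PHP location, so that requests for scripts that do not exist on disk return 404 instead of being handed to PHP-FPM (this also blunts old path-info tricks such as /uploads/image.png/fake.php). A sketch of the same block with the check added:

```nginx
    location ~ \.php$ {
        # Refuse to pass requests for nonexistent scripts to PHP-FPM
        try_files $uri =404;

        include fastcgi_params;
        fastcgi_pass unix:/run/php/php8.3-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_index index.php;
    }
```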

Update the index directive to include index.php so Nginx serves PHP index files automatically.

    index index.php index.html;

Test and reload.

sudo nginx -t && sudo systemctl reload nginx

Create a quick test file to verify PHP processing is working.

echo "<?php phpinfo();" | sudo tee /var/www/example.com/html/info.php

Visit http://example.com/info.php and confirm the PHP info page renders. Remove this file once you have confirmed it works.

Step 6: Reverse Proxy Configuration

Nginx excels as a reverse proxy sitting in front of application servers like Node.js, Python (Gunicorn/Uvicorn), or Java. The following config forwards all traffic to a backend running on port 3000 while preserving original client information through proxy headers.

server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

WebSocket support requires two additional headers so the HTTP connection can be upgraded. Add these directives inside the relevant location block.

    location /ws/ {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_read_timeout 86400s;
    }

The proxy_read_timeout is bumped to 24 hours because WebSocket connections are long-lived and the default 60-second timeout will close idle sockets prematurely.
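If the same location must serve both plain HTTP and WebSocket traffic, a common pattern is a map block in the http {} context that sets the Connection header only when the client actually requests an upgrade:

```nginx
# In the http {} block: derive the Connection header value from the request.
# Clients sending an Upgrade header get "upgrade"; everyone else gets "close".
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# In the server/location block: the headers now adapt automatically.
location / {
    proxy_pass http://127.0.0.1:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_set_header Host $host;
}
```

This lets one location handle both traffic types without maintaining separate /ws/ routes.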

Step 7: Load Balancing with Upstream Blocks

The upstream directive groups backend servers. Nginx supports several balancing algorithms out of the box.

upstream app_backends {
    # Round-robin is the default; requests rotate through each server
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
    server 10.0.0.13:8080 backup;  # only used when others are down
}

server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://app_backends;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

To change the algorithm, add one directive at the top of the upstream block.

  • least_conn sends new requests to the backend with the fewest active connections. Good when request processing times vary.
  • ip_hash hashes the client IP to ensure the same client always hits the same backend. Useful for applications that store session data locally.

upstream app_backends {
    least_conn;
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
}

# Or for sticky sessions:
upstream app_backends {
    ip_hash;
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
}

You can also assign weights to distribute traffic unevenly. A server with weight=3 receives three times the traffic of a server with weight=1.
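As a sketch, a pool where one machine has triple the capacity of the others, combined with the per-server health parameters max_fails and fail_timeout, might look like this (addresses are illustrative):

```nginx
upstream app_backends {
    least_conn;
    server 10.0.0.11:8080 weight=3;                       # receives roughly 3x the traffic
    server 10.0.0.12:8080;                                # weight defaults to 1
    server 10.0.0.13:8080 max_fails=3 fail_timeout=30s;   # marked down for 30s after 3 failures
}
```

max_fails and fail_timeout default to 1 and 10 seconds respectively; raising them makes Nginx more tolerant of transient backend hiccups.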

Step 8: SSL/TLS with Let’s Encrypt

Install Certbot and its Nginx plugin.

sudo apt install certbot python3-certbot-nginx -y

Obtain a certificate. Certbot will automatically modify your server block to enable HTTPS and set up a redirect from HTTP.

sudo certbot --nginx -d example.com -d www.example.com

Follow the interactive prompts. Certbot creates the certificate files under /etc/letsencrypt/live/example.com/ and injects the ssl_certificate and ssl_certificate_key directives into your config.

Certbot installs a systemd timer for automatic renewal. Verify that the timer is active.

sudo systemctl list-timers | grep certbot

Test the renewal process without making changes.

sudo certbot renew --dry-run

If you prefer manual SSL configuration without Certbot modifying files, generate the certificate with the certonly flag and configure the server block yourself.

sudo certbot certonly --nginx -d example.com -d www.example.com

Then reference the certificate in your server block.

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name example.com www.example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_protocols       TLSv1.2 TLSv1.3;
    ssl_ciphers         HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;
    ssl_session_cache   shared:SSL:10m;
    ssl_session_timeout 10m;

    root /var/www/example.com/html;
    index index.html;
}
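Since certonly leaves your configuration untouched, you also need to add the HTTP-to-HTTPS redirect yourself. A typical companion server block:

```nginx
server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;

    # Permanently redirect all plain-HTTP traffic to HTTPS
    return 301 https://$host$request_uri;
}
```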

Step 9: HTTP/2 and HTTP/3 (QUIC) Configuration

HTTP/2 multiplexes streams over a single TCP connection, reducing latency. HTTP/3 goes further by using QUIC (UDP) to eliminate head-of-line blocking entirely. Nginx 1.25.0 and later include built-in HTTP/3 support.

To enable both protocols, update the listen directives in your SSL server block.

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    http2 on;

    listen 443 quic reuseport;
    listen [::]:443 quic reuseport;

    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_protocols       TLSv1.3;

    # Tell browsers that HTTP/3 is available
    add_header Alt-Svc 'h3=":443"; ma=86400' always;

    root /var/www/example.com/html;
    index index.html;
}

Key points to keep in mind:

  • HTTP/3 requires TLSv1.3. Older protocols will not work with QUIC.
  • The reuseport parameter should only appear on one server block per port across all your configs. If you have multiple virtual hosts on port 443, only the first (or default) server should include reuseport.
  • QUIC uses UDP on port 443. Open this in your firewall: sudo ufw allow 443/udp.
  • The Alt-Svc header tells browsers that HTTP/3 is available. Without it, clients will not attempt a QUIC connection.

Step 10: Security Headers

Security headers protect your visitors from common web attacks. Add these inside your server block or in a shared snippet file that you include from each virtual host.

# Enforce HTTPS for one year, include subdomains
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;

# Prevent your site from being embedded in iframes on other domains
add_header X-Frame-Options "SAMEORIGIN" always;

# Block MIME-type sniffing
add_header X-Content-Type-Options "nosniff" always;

# Basic Content Security Policy - adjust to match your site's needs
add_header Content-Security-Policy "default-src 'self'; script-src 'self'; style-src 'self' 'unsafe-inline'; img-src 'self' data:; font-src 'self';" always;

# Control referrer information sent with requests
add_header Referrer-Policy "strict-origin-when-cross-origin" always;

# Restrict browser features
add_header Permissions-Policy "geolocation=(), microphone=(), camera=()" always;

After adding these headers, use a tool like curl -I https://example.com or securityheaders.com to verify they are being sent correctly. Adjust the Content-Security-Policy to fit your application. Overly restrictive policies will break functionality; overly permissive policies reduce protection.
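To avoid repeating these directives in every virtual host, you can save them to a snippet file (for example /etc/nginx/snippets/security-headers.conf, a path of your choosing) and include it where needed:

```nginx
# /etc/nginx/snippets/security-headers.conf holds the add_header lines above.
# Each virtual host then pulls them in with a single directive:
server {
    listen 443 ssl;
    server_name example.com;

    include snippets/security-headers.conf;   # resolved relative to /etc/nginx/

    # ...rest of the server block...
}
```

One caveat: add_header directives are inherited from an outer block only if the inner block defines none of its own, so include the snippet at every level where you also add other headers.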

Step 11: Gzip and Brotli Compression

Compression reduces bandwidth and speeds up page loads. Gzip support is built into Nginx. Add these directives inside the http {} block in /etc/nginx/nginx.conf.

gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 5;
gzip_min_length 256;
gzip_types
    text/plain
    text/css
    text/javascript
    application/javascript
    application/json
    application/xml
    application/rss+xml
    image/svg+xml
    font/woff2;

Brotli typically achieves better compression ratios than gzip but is not compiled into the official nginx.org packages. Ubuntu and Debian ship dynamic module packages, which you can try installing:

sudo apt install libnginx-mod-http-brotli-filter libnginx-mod-http-brotli-static -y

Be aware that these packages are built against the distribution's own nginx build; with the nginx.org packages they may not be binary-compatible, in which case build the ngx_brotli module from source against your installed version. Once the module is loaded (via load_module directives at the top of nginx.conf), add the Brotli directives alongside gzip.

brotli on;
brotli_comp_level 6;
brotli_types
    text/plain
    text/css
    text/javascript
    application/javascript
    application/json
    application/xml
    image/svg+xml
    font/woff2;

Both gzip and Brotli can be enabled simultaneously. Nginx will serve Brotli to clients that support it and fall back to gzip for others.

Step 12: Rate Limiting and Connection Limits

Rate limiting protects your server from brute-force attacks and abusive traffic. Define a shared memory zone in the http {} block, then apply it in specific locations.

# Define zones in the http {} block
limit_req_zone $binary_remote_addr zone=general:10m rate=10r/s;
limit_req_zone $binary_remote_addr zone=login:10m rate=1r/s;
limit_conn_zone $binary_remote_addr zone=addr:10m;

Apply the limits inside your server or location blocks.

server {
    listen 80;
    server_name example.com;

    # Allow burst of 20 requests, then enforce 10r/s
    limit_req zone=general burst=20 nodelay;

    # Limit each IP to 50 simultaneous connections
    limit_conn addr 50;

    location /login {
        # Stricter rate limit on login endpoints
        limit_req zone=login burst=5 nodelay;
        proxy_pass http://127.0.0.1:3000;
    }
}

The burst parameter queues excess requests rather than rejecting them immediately. Adding nodelay processes burst requests without delay but still counts them against the rate. When the burst queue is full, Nginx returns a 503 status.

You can customize the error response for rate-limited requests.

limit_req_status 429;
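Building on that, here is a sketch that returns a small JSON body along with the 429 status via a named location (the response body and location name are illustrative):

```nginx
limit_req_status 429;
limit_conn_status 429;

# Route 429 errors to a named location instead of the default error page
error_page 429 = @rate_limited;

location @rate_limited {
    default_type application/json;
    return 429 '{"error": "too many requests, slow down"}';
}
```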

Step 13: Access Logs and Error Logs

Nginx writes two log files by default: an access log recording every request and an error log capturing warnings and failures. You can customize the log format and path per virtual host.

# Custom log format in the http {} block
log_format main '$remote_addr - $remote_user [$time_local] '
                '"$request" $status $body_bytes_sent '
                '"$http_referer" "$http_user_agent" '
                '$request_time $upstream_response_time';

Apply the custom format to a specific virtual host.

    access_log /var/log/nginx/example.com.access.log main;
    error_log  /var/log/nginx/example.com.error.log warn;

Error log levels from least to most verbose: emerg, alert, crit, error, warn, notice, info, debug. For production, warn or error is a good balance between visibility and noise.
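If your logs feed a structured pipeline (Loki, Elasticsearch, and the like), a JSON log format is a common alternative; the escape=json parameter (available since nginx 1.11.8) handles quoting of variable values. A sketch:

```nginx
log_format json_combined escape=json
    '{'
        '"time":"$time_iso8601",'
        '"remote_addr":"$remote_addr",'
        '"request":"$request",'
        '"status":$status,'
        '"body_bytes_sent":$body_bytes_sent,'
        '"request_time":$request_time,'
        '"upstream_response_time":"$upstream_response_time",'
        '"http_referer":"$http_referer",'
        '"http_user_agent":"$http_user_agent"'
    '}';

access_log /var/log/nginx/example.com.access.log json_combined;
```

Note that $upstream_response_time is quoted as a string because it can be "-" or a comma-separated list when a request touches multiple upstreams.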

To disable access logging for specific paths (health checks, for example), use this pattern.

    location = /health {
        access_log off;
        return 200 "OK\n";
    }

Set up log rotation using logrotate (already configured by the Nginx package) or verify the existing config.

cat /etc/logrotate.d/nginx

Step 14: Performance Tuning

The default Nginx configuration works for light workloads. Under heavier traffic, tuning these parameters in /etc/nginx/nginx.conf makes a measurable difference.

# Set to 'auto' to match the number of CPU cores
worker_processes auto;

# Maximum open files per worker - raise this on busy servers
worker_rlimit_nofile 65535;

events {
    # Each worker can handle this many simultaneous connections
    worker_connections 4096;

    # Accept multiple connections at once
    multi_accept on;

    # Use epoll on Linux for better performance
    use epoll;
}

http {
    # Keep connections alive to avoid TCP handshake overhead
    keepalive_timeout 65;
    keepalive_requests 1000;

    # Buffer sizes for proxied responses
    proxy_buffer_size 128k;
    proxy_buffers 4 256k;
    proxy_busy_buffers_size 256k;

    # Client body and header limits
    client_max_body_size 50m;
    client_body_buffer_size 128k;
    client_header_buffer_size 1k;
    large_client_header_buffers 4 16k;

    # File serving optimizations
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;

    # Cache open file descriptors
    open_file_cache max=10000 inactive=30s;
    open_file_cache_valid 60s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;
}

A few notes on these settings:

  • worker_processes auto is almost always the right choice. It sets one worker per CPU core.
  • worker_connections multiplied by worker_processes gives you the theoretical maximum concurrent connections. A server with 4 cores and 4096 connections per worker can handle roughly 16,384 simultaneous connections.
  • sendfile offloads file transfers to the kernel, bypassing userspace. Combined with tcp_nopush, it batches data into full TCP packets before sending.
  • Increase client_max_body_size if your application accepts file uploads larger than the default 1 MB.
  • For reverse proxy setups, tune proxy_buffers to match the typical response size from your backend. Undersized buffers cause Nginx to spool responses to disk.

For upstream keepalive (maintaining persistent connections to your backends), add the keepalive directive inside the upstream block.

upstream app_backends {
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
    keepalive 32;
}

server {
    location / {
        proxy_pass http://app_backends;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}

Setting Connection "" clears the header so that the upstream connection remains persistent rather than closing after each request.

Step 15: Troubleshooting Common Issues

Even well-planned deployments run into problems. Here are the issues that come up most often and how to resolve them.

Configuration Syntax Errors

Always test before reloading. This single command has saved countless outages.

sudo nginx -t

The output will tell you exactly which file and line number contains the error.

502 Bad Gateway

This means Nginx reached the backend but got an invalid response (or no response). Common causes:

  • The backend application is not running. Check with systemctl status your-app.
  • The socket or port in proxy_pass or fastcgi_pass does not match the backend’s listen address.
  • PHP-FPM is down. Restart it: sudo systemctl restart php8.3-fpm.
  • SELinux or AppArmor is blocking the connection (less common on Ubuntu, but worth checking on hardened systems).

Check the Nginx error log for the specific upstream error.

sudo tail -50 /var/log/nginx/error.log

504 Gateway Timeout

The backend is reachable but took too long to respond. Either the application is slow (database queries, external API calls) or the timeout values are too aggressive.

Increase the proxy timeouts for that location.

    proxy_connect_timeout 300s;
    proxy_send_timeout 300s;
    proxy_read_timeout 300s;

For PHP-FPM, also increase the equivalent FastCGI timeouts.

    fastcgi_connect_timeout 300s;
    fastcgi_send_timeout 300s;
    fastcgi_read_timeout 300s;

Permission Denied Errors

If the error log shows permission denied when accessing files, check that the Nginx worker process user (usually www-data or nginx) has read access to the document root and all parent directories.

sudo namei -l /var/www/example.com/html/index.html

This command shows the permission chain for every directory in the path. Look for any directory where the Nginx user lacks read and execute permissions. Fix ownership if needed.

sudo chown -R www-data:www-data /var/www/example.com
sudo chmod -R 755 /var/www/example.com

Port Already in Use

If Nginx fails to start with bind() to 0.0.0.0:80 failed (98: Address already in use), another process is occupying the port.

sudo ss -tlnp | grep ':80'

Stop or reconfigure the conflicting service (often Apache) before starting Nginx.

Checking the Full Running Configuration

To dump the complete, merged configuration that Nginx is using (including all includes), run this.

sudo nginx -T

This is extremely useful for verifying that your includes, snippets, and drop-in configs are being loaded in the correct order.

Summary

This guide covered a production-grade Nginx setup from the ground up on Ubuntu 24.04 and Debian 13. You installed the latest stable build from the official Nginx repository, configured virtual hosts, integrated PHP-FPM, set up reverse proxying with WebSocket support, balanced traffic across multiple backends, automated TLS certificates with Let’s Encrypt, enabled HTTP/2 and HTTP/3, hardened the server with security headers and rate limiting, tuned compression and performance, and built a troubleshooting playbook for the most frequent failure modes.

The single most important habit to build: run nginx -t before every reload. It takes a fraction of a second and catches problems before they affect live traffic.

For ongoing maintenance, keep the Nginx packages updated through the official repository, monitor your error logs for upstream issues, and revisit your TLS configuration periodically as best practices evolve.
