PostgreSQL 17 brings meaningful improvements to logical replication, query performance, and memory management. This guide walks through a production-grade installation on Ubuntu 24.04 (Noble Numbat) or Debian 13 (Trixie) using the official PostgreSQL Global Development Group (PGDG) APT repository. You will finish with a running, secured, and backed-up PostgreSQL 17 instance ready for real workloads.
Every command in this guide has been tested on a clean Ubuntu 24.04 LTS server. The same steps apply to Debian 13 unless noted otherwise.
Prerequisites
- Ubuntu 24.04 LTS or Debian 13 (amd64 or arm64)
- A user account with sudo privileges
- At least 1 GB RAM (2 GB+ recommended for production)
- Stable internet connection to pull packages from the PGDG repository
Step 1: Install PostgreSQL 17 from the Official PGDG APT Repository
The default Ubuntu and Debian repositories often ship older PostgreSQL versions. To get PostgreSQL 17, use the repository maintained by the PostgreSQL project itself. This guarantees timely security patches and minor-version updates.
Start by installing the prerequisite packages and importing the PGDG repository signing key.
sudo apt install curl ca-certificates -y
sudo install -d /usr/share/postgresql-common/pgdg
sudo curl -o /usr/share/postgresql-common/pgdg/apt.postgresql.org.asc --fail https://www.postgresql.org/media/keys/ACCC4CF8.asc
Now create the repository configuration file. The command below reads your distribution codename automatically.
. /etc/os-release
sudo sh -c "echo 'deb [signed-by=/usr/share/postgresql-common/pgdg/apt.postgresql.org.asc] https://apt.postgresql.org/pub/repos/apt ${VERSION_CODENAME}-pgdg main' > /etc/apt/sources.list.d/pgdg.list"
Update the package index and install PostgreSQL 17.
sudo apt update
sudo apt install postgresql-17 postgresql-client-17 -y
The postgresql-17 package pulls in the server, and postgresql-client-17 provides the psql command-line client and related utilities such as pg_dump.
Step 2: Verify the Installation and Enable the Service
Confirm that the correct version is installed.
psql --version
Expected output:
psql (PostgreSQL) 17.x
The installer creates a systemd service automatically. Make sure it is enabled at boot and currently running.
sudo systemctl enable postgresql
sudo systemctl start postgresql
sudo systemctl status postgresql
You should see active (exited) or active (running) in the status output. If the service fails to start, jump to the Troubleshooting section at the end of this guide.
Step 3: Access the PostgreSQL Shell
PostgreSQL creates a system user called postgres during installation. This account owns the database cluster and is the default superuser. Switch to it and open the interactive terminal.
sudo -i -u postgres
psql
You are now inside the psql shell connected to the default postgres database. Type \q to exit when you are done.
Step 4: Create Databases, Users, and Grant Privileges
In any production setup the first task after installation is creating dedicated databases and roles. Never let applications connect as the postgres superuser.
Create a new database user with an encrypted password.
sudo -i -u postgres
createuser --pwprompt appuser
Create a database owned by that user.
createdb -O appuser appdb
If you prefer doing everything from inside psql, the SQL equivalents are shown below.
CREATE USER appuser WITH ENCRYPTED PASSWORD 'StrongPassword!2025';
CREATE DATABASE appdb OWNER appuser;
GRANT ALL PRIVILEGES ON DATABASE appdb TO appuser;
To grant read-only access for a reporting role, you would do the following after connecting to appdb.
\c appdb
CREATE USER readonly_user WITH ENCRYPTED PASSWORD 'R3adOnly!';
GRANT CONNECT ON DATABASE appdb TO readonly_user;
GRANT USAGE ON SCHEMA public TO readonly_user;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO readonly_user;
The ALTER DEFAULT PRIVILEGES line ensures that tables created in the future also inherit the SELECT grant. Note that it only covers objects created by the role that ran the command; if appuser will create the tables, add FOR ROLE appuser after ALTER DEFAULT PRIVILEGES.
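To confirm the grants took effect, you can ask PostgreSQL directly with its built-in privilege-inspection functions. The query below is a sketch to run while connected to appdb; some_table is a placeholder for one of your own table names.

```sql
-- Each column returns a boolean; all three should be true.
SELECT has_database_privilege('readonly_user', 'appdb', 'CONNECT')   AS can_connect,
       has_schema_privilege('readonly_user', 'public', 'USAGE')      AS can_use_schema,
       has_table_privilege('readonly_user', 'some_table', 'SELECT')  AS can_select;
```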
Step 5: Configure pg_hba.conf for Local and Remote Access
The pg_hba.conf file controls who can connect and how they authenticate. On Ubuntu 24.04 with PostgreSQL 17 from PGDG, this file lives at:
/etc/postgresql/17/main/pg_hba.conf
Open it with your preferred editor.
sudo nano /etc/postgresql/17/main/pg_hba.conf
Below is a sensible baseline. It keeps peer authentication for local socket connections (so sudo -i -u postgres psql keeps working) and enforces scram-sha-256 for everything else.
# TYPE DATABASE USER ADDRESS METHOD
# Local socket connections
local all postgres peer
local all all scram-sha-256
# IPv4 local connections
host all all 127.0.0.1/32 scram-sha-256
# IPv4 remote connections (adjust the subnet to match your network)
host all all 10.0.0.0/8 scram-sha-256
host all all 192.168.0.0/16 scram-sha-256
# IPv6 local connections
host all all ::1/128 scram-sha-256
Important: avoid using md5 for new deployments. PostgreSQL 17 defaults to scram-sha-256 which is significantly stronger. If you are migrating from an older cluster, set password_encryption = 'scram-sha-256' in postgresql.conf and reset all passwords before switching the HBA method.
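If you are unsure which roles still carry old md5 hashes before tightening pg_hba.conf, one way to check is to inspect pg_authid from a superuser session. Stored SCRAM hashes start with SCRAM-SHA-256$, while legacy hashes start with md5, so a simple pattern match finds the stragglers.

```sql
-- Roles whose stored password is still an md5 hash and needs resetting.
SELECT rolname
FROM pg_authid
WHERE rolpassword LIKE 'md5%';
```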
After editing, reload the configuration.
sudo systemctl reload postgresql
Step 6: Tune postgresql.conf
The main server configuration file is located at:
/etc/postgresql/17/main/postgresql.conf
Open it and adjust the following parameters. The values below assume a server with 4 GB RAM and moderate concurrent load. Scale them to fit your hardware.
listen_addresses = '*'
port = 5432
max_connections = 200
# Memory
shared_buffers = 1GB
work_mem = 16MB
maintenance_work_mem = 256MB
effective_cache_size = 3GB
# WAL
wal_level = replica
max_wal_size = 2GB
min_wal_size = 512MB
# Logging
logging_collector = on
log_directory = 'log'
log_filename = 'postgresql-%Y-%m-%d.log'
log_min_duration_statement = 500
log_line_prefix = '%t [%p] %u@%d '
A few notes on these settings:
- listen_addresses = '*' tells PostgreSQL to accept TCP connections on all network interfaces. Pair this with your pg_hba.conf rules and firewall to control actual access.
- shared_buffers is typically set to 25% of total RAM. Going beyond 8 GB rarely helps on Linux because the OS page cache handles the rest.
- work_mem applies per sort or hash operation per query. A value between 8 MB and 64 MB works for most OLTP workloads. Be careful: a query with five sort nodes will allocate up to 5 x work_mem.
- log_min_duration_statement = 500 logs any query that takes longer than 500 ms. This is the simplest way to catch slow queries without installing extra extensions.
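The rules of thumb above are easy to script. The snippet below is a hypothetical helper (tune_for_ram is not a real tool) that derives starting values from total RAM in MB, using the assumptions stated in the comments: 25% for shared_buffers, 75% for effective_cache_size, and RAM/16 capped at 2 GB for maintenance_work_mem.

```shell
#!/bin/sh
# Sketch: suggest starting postgresql.conf memory values from total RAM (MB).
# Assumed rules of thumb, not official defaults:
#   shared_buffers       = 25% of RAM
#   effective_cache_size = 75% of RAM
#   maintenance_work_mem = RAM / 16, capped at 2048 MB
tune_for_ram() {
    ram_mb=$1
    shared_buffers=$((ram_mb / 4))
    effective_cache_size=$((ram_mb * 3 / 4))
    maintenance_work_mem=$((ram_mb / 16))
    if [ "$maintenance_work_mem" -gt 2048 ]; then
        maintenance_work_mem=2048
    fi
    echo "shared_buffers = ${shared_buffers}MB"
    echo "effective_cache_size = ${effective_cache_size}MB"
    echo "maintenance_work_mem = ${maintenance_work_mem}MB"
}

# Read total RAM from the running system and print suggestions.
tune_for_ram "$(awk '/MemTotal/ {print int($2 / 1024)}' /proc/meminfo)"
```

Treat the output as a starting point; workload shape (OLTP vs. analytics) matters more than any formula.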
Restart PostgreSQL to apply changes that require a restart (like shared_buffers and listen_addresses).
sudo systemctl restart postgresql
Step 7: Open the Firewall with UFW
If UFW is active on the server, you need to allow inbound traffic on port 5432. Limit it to your application subnet rather than opening it to the world.
sudo ufw allow from 10.0.0.0/8 to any port 5432 proto tcp
sudo ufw allow from 192.168.0.0/16 to any port 5432 proto tcp
sudo ufw reload
sudo ufw status verbose
If your application servers live on the public internet (a pattern you should avoid when possible), you can open the port globally with sudo ufw allow 5432/tcp. Just make sure your pg_hba.conf rules are tight.
Step 8: Test a Remote Connection
From a different machine that has the postgresql-client package installed, try connecting over the network.
psql -h 10.0.0.5 -U appuser -d appdb
Replace 10.0.0.5 with the actual IP address of your PostgreSQL server. You will be prompted for the password you set in Step 4. A successful connection drops you into the appdb=> prompt.
If the connection hangs or is refused, verify three things in order: (1) listen_addresses in postgresql.conf, (2) a matching host line in pg_hba.conf, and (3) the UFW rule for port 5432.
Step 9: Backup with pg_dump and pg_dumpall
Every DBA who has been in the field long enough has a war story about missing backups. Do not skip this section.
Single database backup (custom format, compressed):
pg_dump -U postgres -Fc appdb -f /var/backups/postgresql/appdb_$(date +%Y%m%d_%H%M%S).dump
The -Fc flag produces a custom-format archive that supports parallel restore, selective table restore, and compression out of the box. This is the format you should use for day-to-day backups.
Plain SQL dump (useful for migrations or version upgrades):
pg_dump -U postgres appdb > /var/backups/postgresql/appdb_$(date +%Y%m%d).sql
Full cluster backup (all databases and global objects like roles):
pg_dumpall -U postgres > /var/backups/postgresql/full_cluster_$(date +%Y%m%d).sql
To restore a custom-format dump:
pg_restore -U postgres -d appdb -Fc /var/backups/postgresql/appdb_20250115_120000.dump
Make sure the backup directory exists and has the right ownership.
sudo mkdir -p /var/backups/postgresql
sudo chown postgres:postgres /var/backups/postgresql
Step 10: Automate Backups with Cron
Manual backups are fine during initial testing but they will not save you at 3 AM on a Saturday. Set up a cron job under the postgres user.
sudo crontab -u postgres -e
Add the following lines. This runs a custom-format dump every night at 2:00 AM and removes backups older than 14 days.
# Daily PostgreSQL backup at 02:00
0 2 * * * /usr/bin/pg_dump -U postgres -Fc appdb -f /var/backups/postgresql/appdb_$(date +\%Y\%m\%d_\%H\%M\%S).dump 2>> /var/log/postgresql/backup.log
# Weekly full cluster backup on Sundays at 03:00
0 3 * * 0 /usr/bin/pg_dumpall -U postgres -f /var/backups/postgresql/full_cluster_$(date +\%Y\%m\%d).sql 2>> /var/log/postgresql/backup.log
# Purge backups older than 14 days at 04:00
0 4 * * * find /var/backups/postgresql -type f -mtime +14 -delete
Always test your restores. A backup you have never restored is not a backup; it is a hope.
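A minimal restore drill, assuming the dump file from Step 9 exists, is to load it into a throwaway database and eyeball the result. The database name appdb_restore_test is illustrative; run these as the postgres user against a live server.

```
createdb appdb_restore_test
pg_restore -d appdb_restore_test /var/backups/postgresql/appdb_20250115_120000.dump
psql -d appdb_restore_test -c '\dt'   # confirm the tables came back
dropdb appdb_restore_test
```

For a real drill, also run a few application-level queries (row counts, recent records) before dropping the test database.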
Step 11: Point-in-Time Recovery with WAL Archiving
Logical dumps are good for many scenarios, but they only give you a snapshot at the moment the dump ran. Point-in-Time Recovery (PITR) lets you restore the database to any second between your base backup and the most recent WAL segment. This is what separates hobby setups from production ones.
Enable WAL Archiving
Edit postgresql.conf and set the following parameters.
wal_level = replica
archive_mode = on
archive_command = 'cp %p /var/lib/postgresql/17/wal_archive/%f'
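A plain cp will silently overwrite an existing archived segment, which can mask configuration mistakes. The PostgreSQL documentation's example guards against this with test ! -f, and the same pattern applies here:

```
archive_command = 'test ! -f /var/lib/postgresql/17/wal_archive/%f && cp %p /var/lib/postgresql/17/wal_archive/%f'
```

With this variant, archiving fails loudly (and PostgreSQL retries) instead of destroying a segment that was already archived.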
Create the archive directory.
sudo mkdir -p /var/lib/postgresql/17/wal_archive
sudo chown postgres:postgres /var/lib/postgresql/17/wal_archive
Restart PostgreSQL to activate archive mode.
sudo systemctl restart postgresql
Take a Base Backup
Use pg_basebackup to create a physical copy of the data directory.
sudo -u postgres pg_basebackup -D /var/backups/postgresql/base_backup -Ft -z -P
The flags mean: -Ft tar format, -z gzip compression, -P show progress.
Perform a Recovery
If disaster strikes, the high-level process is:
- Stop PostgreSQL.
- Remove (or move) the current data directory.
- Extract the base backup into the data directory.
- Create a recovery.signal file in the data directory.
- Set restore_command in postgresql.conf (or in a postgresql.auto.conf override).
- Optionally set recovery_target_time to the exact moment you want to recover to.
- Start PostgreSQL. It will replay archived WAL segments until the target is reached.
Example restore_command and target time you would add before starting recovery:
restore_command = 'cp /var/lib/postgresql/17/wal_archive/%f %p'
recovery_target_time = '2025-06-15 14:30:00'
After PostgreSQL finishes replaying WAL files, it will pause in recovery mode. Promote it to a read-write primary with:
sudo pg_ctlcluster 17 main promote
(pg_ctlcluster is the Debian/Ubuntu wrapper; on other systems, run pg_ctl promote -D /var/lib/postgresql/17/main as the postgres user, using the full path to pg_ctl if it is not on your PATH.)
For production environments, consider using pgBackRest or barman to manage base backups and WAL archiving. These tools handle retention policies, parallel compression, and remote storage (S3, GCS) far better than shell scripts.
Step 12: Connection Pooling with pgBouncer
PostgreSQL forks a new process for every client connection. At a few hundred concurrent connections this is fine. Past that threshold, or with short-lived connections from web frameworks, you start burning CPU on process creation and context switching. pgBouncer fixes this by maintaining a pool of server-side connections and multiplexing client connections through them.
Install pgBouncer from the standard repository.
sudo apt install pgbouncer -y
Edit the main configuration file.
sudo nano /etc/pgbouncer/pgbouncer.ini
Set the following in the relevant sections.
[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb
[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
default_pool_size = 50
max_client_conn = 500
log_connections = 1
log_disconnections = 1
The auth_file needs to contain the username and password hash. You can extract it from PostgreSQL directly.
sudo -u postgres psql -t -A -c "SELECT '\"' || usename || '\" \"' || passwd || '\"' FROM pg_shadow WHERE usename = 'appuser';" | sudo tee /etc/pgbouncer/userlist.txt > /dev/null
(The sudo tee is needed because a plain > redirect runs as your own user, who typically cannot write to /etc/pgbouncer.)
The file will look something like this:
"appuser" "SCRAM-SHA-256$4096:..."
Start and enable pgBouncer.
sudo systemctl enable pgbouncer
sudo systemctl start pgbouncer
Clients now connect to port 6432 instead of 5432. From an application server:
psql -h 10.0.0.5 -p 6432 -U appuser -d appdb
A few pooling mode notes:
- session mode assigns one server connection per client session. Safest, but you save almost nothing on connection reuse.
- transaction mode (recommended) returns the server connection to the pool after each transaction. Works with most applications.
- statement mode returns connections after every statement. This breaks multi-statement transactions and is rarely the right choice.
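You can watch the pool behave in real time through pgBouncer's virtual admin database, which is itself named pgbouncer. Connecting to it requires a user listed under admin_users or stats_users in pgbouncer.ini (an extra setting not shown in the configuration above).

```
psql -h 127.0.0.1 -p 6432 -U appuser pgbouncer

SHOW POOLS;    -- active and waiting clients, server connections per pool
SHOW STATS;    -- query counts, bytes, and average times per database
SHOW CLIENTS;  -- every connected client and its current state
```

SHOW POOLS is the quickest way to verify that transaction mode is actually reusing server connections under load.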
Step 13: Useful psql Commands
The psql client is deceptively powerful. Knowing a handful of meta-commands will save you hours compared to typing out information_schema queries.
List all databases:
\l
Shows every database on the cluster along with its owner, encoding, and access privileges.
List tables in the current database:
\dt
Add a schema qualifier if you work with multiple schemas: \dt myschema.*
List all roles/users:
\du
Describe a table with extended info (size, description):
\d+ tablename
Toggle query timing:
\timing
After enabling this, every query result will include the execution time in milliseconds. Toggle it off by running \timing again.
Other commands worth memorizing:
- \di lists indexes
- \df lists functions
- \sv function_name shows a function's source code
- \x toggles expanded (vertical) display, useful for wide rows
- \watch 5 re-runs the previous query every 5 seconds
- \copy imports/exports CSV data without needing superuser access
Step 14: Troubleshooting Common Issues
Connection Refused
This means nothing is listening on the expected address/port combination. Work through the checklist:
- Confirm PostgreSQL is running: sudo systemctl status postgresql
- Check listen_addresses in postgresql.conf. If set to localhost, the server will not accept connections from remote hosts.
- Verify the port: ss -tlnp | grep 5432
- Verify the firewall: sudo ufw status
Authentication Failed
The error FATAL: password authentication failed for user "appuser" usually means one of the following:
- Wrong password. Reset it with ALTER USER appuser WITH PASSWORD 'NewPassword'; from a superuser session.
- The pg_hba.conf line for the client's address uses a different auth method than the client expects. If you recently changed from md5 to scram-sha-256, every user password must be reset because the stored hash format differs.
- The client is matching a different pg_hba.conf rule than you expect. PostgreSQL evaluates rules top to bottom and uses the first match. Add log_connections = on and log_hostname = on temporarily to see exactly which rule is matching.
Slow Queries
Start with the low-hanging fruit:
- Check for missing indexes. Run EXPLAIN (ANALYZE, BUFFERS) SELECT ...; on the slow query. If you see sequential scans on large tables, an index is probably missing.
- Look at pg_stat_user_tables. Tables with a high seq_scan count and zero idx_scan count are prime candidates for new indexes.
- Check for bloat. Tables that see heavy UPDATE or DELETE traffic accumulate dead tuples. Run VACUUM ANALYZE tablename; or check if autovacuum is keeping up by querying pg_stat_user_tables.n_dead_tup.
- Enable pg_stat_statements. This extension tracks execution statistics for every query. Add it to shared_preload_libraries in postgresql.conf, restart, then run CREATE EXTENSION pg_stat_statements; in your database.
SELECT query, calls, mean_exec_time, total_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
This single query tells you which statements are consuming the most cumulative time on your server. It is the first thing any DBA should look at when investigating performance complaints.
Out of Shared Memory
If PostgreSQL refuses to start after increasing shared_buffers, the kernel’s shared memory limits may be too low. On modern Ubuntu and Debian systems this is rarely an issue because systemd sets generous defaults. If you do hit it, check the current limits:
sysctl kernel.shmmax
sysctl kernel.shmall
Increase them in /etc/sysctl.conf if needed, then run sudo sysctl -p to apply the changes.
Summary
You now have a PostgreSQL 17 server installed from the official PGDG repository on Ubuntu 24.04 or Debian 13 with:
- Secure authentication using scram-sha-256
- Tuned memory and connection settings
- Firewall rules limiting access to trusted subnets
- Automated daily backups with retention cleanup
- WAL archiving ready for point-in-time recovery
- pgBouncer handling connection pooling in transaction mode
From here, the natural next steps are setting up streaming replication for high availability, deploying a monitoring stack (Prometheus with postgres_exporter or pgWatch2), and testing your disaster recovery runbook end to end. A database that has never been restored from backup is a database waiting to teach you a painful lesson.