Backup MariaDB and MySQL to Amazon S3 on Ubuntu 24.04

Shipping MySQL or MariaDB backups to S3 gets them off the database host and into durable, geographically redundant storage for pennies per month. This guide stands up a simple but production-shaped backup pipeline on Ubuntu 24.04 LTS: install MariaDB, take a consistent logical dump, compress it, sync it to an S3 bucket with the AWS CLI v2, and wrap the whole thing in a shell script you can run from a systemd timer.

Tested April 2026 on Ubuntu 24.04.4 LTS with MariaDB 10.11.14 and AWS CLI v2 2.34.29

Step 1: Install MariaDB and tools

Install MariaDB and the helpers we need for the backup pipeline:

sudo apt update
sudo apt install -y mariadb-server mariadb-client pigz curl unzip

pigz is a parallel gzip implementation that spreads compression across all available cores, dramatically cutting compression time on anything with more than one CPU. unzip is needed for the AWS CLI v2 installer.

Start MariaDB and confirm it’s running:

sudo systemctl enable --now mariadb
systemctl is-active mariadb
mariadb --version

The output shows the service is active and reports the MariaDB 10.11 LTS version shipping in noble:

active
mariadb  Ver 15.1 Distrib 10.11.14-MariaDB, for debian-linux-gnu (x86_64) using  EditLine wrapper

Step 2: Install AWS CLI v2

Ubuntu 24.04 (noble) dropped the awscli package because the v1 Python tool is deprecated. Install the current v2 from Amazon’s official bundle instead:

cd /tmp
curl -sSL "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o awscliv2.zip
unzip -q awscliv2.zip
sudo ./aws/install

Verify the install and note the version. The v2 binary lands at /usr/local/bin/aws:

aws --version

A healthy install reports the v2 major version and the Python runtime it bundles:

aws-cli/2.34.29 Python/3.14.3 Linux/6.8.0-101-generic exe/x86_64.ubuntu.24

Step 3: Create an S3 bucket and an IAM user

Create the bucket on the AWS side before configuring the backup script. In the console (or with your existing Terraform/CloudFormation) create a bucket named something like company-db-backups-prod with versioning on and server-side encryption set to AES-256 or KMS.

Create an IAM user with programmatic access and attach a minimal policy that only allows Put and List on the target bucket. The policy JSON should look like:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:PutObjectAcl"],
      "Resource": "arn:aws:s3:::company-db-backups-prod/*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::company-db-backups-prod"
    }
  ]
}

No s3:DeleteObject, no s3:GetObject. The backup host can only create new objects, so even a compromised database server can’t read or delete historical backups — and with bucket versioning on, an overwriting PutObject only hides the previous version rather than destroying it.
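Before pasting the policy into IAM, it’s worth a quick local syntax check — a stray comma fails the console save with an unhelpful error. A small sketch using Python’s built-in json.tool (the file path is arbitrary):

```shell
# Sanity-check the policy JSON before pasting it into the IAM console
cat > /tmp/backup-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:PutObjectAcl"],
      "Resource": "arn:aws:s3:::company-db-backups-prod/*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::company-db-backups-prod"
    }
  ]
}
EOF
# json.tool exits non-zero on malformed JSON, so this echoes only on success
python3 -m json.tool /tmp/backup-policy.json >/dev/null && echo "policy JSON is valid"
```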

Step 4: Store AWS credentials for the backup user

Create a dedicated system user for the backup job so the credentials aren’t in root’s shell history or any interactive user’s dotfiles:

sudo useradd -r -m -s /bin/bash backup
sudo -u backup mkdir -p ~backup/.aws
sudo -u backup tee ~backup/.aws/credentials >/dev/null <<'EOF'
[default]
aws_access_key_id = AKIAEXAMPLEKEYID
aws_secret_access_key = exampleSecretKeyDoNotUseForReal
region = eu-west-1
EOF
sudo chmod 600 ~backup/.aws/credentials

Replace the key ID and secret with the ones AWS gave you when you created the IAM user. Test that the credentials work by listing the bucket as the backup user:

sudo -u backup aws s3 ls s3://company-db-backups-prod/

Step 5: Create some test data

Before writing the backup script, seed a database so the dump has content to capture:

sudo mariadb -e "CREATE DATABASE testdb;
USE testdb;
CREATE TABLE users(id INT AUTO_INCREMENT PRIMARY KEY, name VARCHAR(100));
INSERT INTO users(name) VALUES ('alice'),('bob');
SELECT * FROM users;"

The SELECT returns the two seed rows, confirming the test database is in place:

id	name
1	alice
2	bob

Step 6: Dump the database

mariadb-dump (the modern name for mysqldump) produces a SQL file that fully recreates the schema and data of the specified databases. Use --single-transaction for InnoDB tables so the dump is internally consistent without locking writes:

sudo mariadb-dump --single-transaction --databases testdb > /tmp/testdb.sql
ls -lh /tmp/testdb.sql
head -5 /tmp/testdb.sql

You should see a small SQL file with a MariaDB dump header at the top:

-rw-rw-r-- 1 ubuntu ubuntu 2.2K Apr 12 00:11 /tmp/testdb.sql
/*M!999999\- enable the sandbox mode */
-- MariaDB dump 10.19  Distrib 10.11.14-MariaDB, for debian-linux-gnu (x86_64)
--
-- Host: localhost    Database: testdb
-- ------------------------------------------------------

Compress the dump. pigz gives you the same output format as gzip but uses every available core:

pigz /tmp/testdb.sql
ls -lh /tmp/testdb.sql.gz

The compressed file is a fraction of the original thanks to the highly-repetitive SQL structure:

-rw-rw-r-- 1 ubuntu ubuntu 851 Apr 12 00:11 /tmp/testdb.sql.gz
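Before relying on compressed dumps, confirm they decompress cleanly — pigz writes standard gzip, so either tool can test the archive, and zcat is also the restore path. A self-contained round-trip sketch using a throwaway sample file (the real dump file name will differ):

```shell
# Round-trip check: compress a sample dump, verify integrity, read it back
printf 'CREATE TABLE t (id INT);\n' > /tmp/sample.sql
gzip -c /tmp/sample.sql > /tmp/sample.sql.gz   # pigz writes the same gzip format
gzip -t /tmp/sample.sql.gz && echo "archive OK"
zcat /tmp/sample.sql.gz                        # a real restore is: zcat dump.sql.gz | mariadb
```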

Step 7: Put it together in a backup script

Combine everything into a single shell script that a systemd timer can run unattended:

sudo tee /usr/local/bin/mysql-backup-to-s3.sh >/dev/null <<'EOF'
#!/bin/bash
set -euo pipefail

BUCKET="company-db-backups-prod"
HOSTNAME=$(hostname -s)
TS=$(date +%Y-%m-%d_%H%M%S)
WORKDIR="/var/backups/mysql"
FILE="${WORKDIR}/${HOSTNAME}-all-${TS}.sql.gz"

mkdir -p "${WORKDIR}"

# Dump all databases as a single compressed file.
# --master-data=2 records the binlog position as a comment in the dump;
# drop it if binary logging (log_bin) is not enabled on this server.
mariadb-dump \
  --all-databases \
  --single-transaction \
  --quick \
  --triggers \
  --routines \
  --events \
  --master-data=2 \
  | pigz > "${FILE}"

# Upload to S3; the bucket's default server-side encryption applies
aws s3 cp "${FILE}" "s3://${BUCKET}/${HOSTNAME}/${FILE##*/}" \
  --storage-class STANDARD_IA \
  --no-progress

# Keep the last 3 local copies and delete the rest
ls -1t "${WORKDIR}"/${HOSTNAME}-all-*.sql.gz | tail -n +4 | xargs -r rm --

echo "Backup completed: s3://${BUCKET}/${HOSTNAME}/${FILE##*/}"
EOF
sudo chmod 750 /usr/local/bin/mysql-backup-to-s3.sh
sudo chown root:backup /usr/local/bin/mysql-backup-to-s3.sh
sudo install -d -o backup -g backup -m 750 /var/backups/mysql

The script dumps all databases and pipes straight into pigz, so an uncompressed dump never touches disk, uploads the result to the Infrequent Access storage class (roughly half the storage price of Standard, in exchange for a per-GB retrieval fee and a 30-day minimum charge), and keeps the last 3 local copies for quick restores without reaching into S3. The install -d line pre-creates the working directory owned by the backup user, since /var/backups itself is root-owned and the script’s mkdir would otherwise fail.
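One gap to close before the job runs as the backup user: a default MariaDB install only authenticates root via unix_socket, so mariadb-dump will fail for backup without its own database credentials. One approach (the user name and privilege list here are an assumption — adjust to your policy) is to create a dedicated MariaDB account, e.g. CREATE USER 'backup'@'localhost' IDENTIFIED BY '...'; GRANT SELECT, SHOW VIEW, TRIGGER, EVENT, LOCK TABLES, RELOAD, REPLICATION CLIENT ON *.* TO 'backup'@'localhost'; and then store the password in a client options file that mariadb-dump reads automatically:

```ini
# ~backup/.my.cnf — chown backup:backup, chmod 600
[client]
user = backup
password = use-a-strong-generated-password
```

With that file in place, the script needs no credential flags and no secrets appear in the process list.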

Step 8: Schedule it with a systemd timer

systemd timers are more reliable than cron for this job: they log to the journal, can catch up on runs missed while the host was down, and failed runs are visible in systemctl status. Create a service unit and a timer unit:

sudo tee /etc/systemd/system/mysql-backup.service >/dev/null <<'EOF'
[Unit]
Description=MySQL/MariaDB backup to S3
After=mariadb.service

[Service]
Type=oneshot
User=backup
Group=backup
ExecStart=/usr/local/bin/mysql-backup-to-s3.sh
EOF

sudo tee /etc/systemd/system/mysql-backup.timer >/dev/null <<'EOF'
[Unit]
Description=Run MySQL/MariaDB backup to S3 daily at 02:30

[Timer]
OnCalendar=*-*-* 02:30:00
Persistent=true
RandomizedDelaySec=10m

[Install]
WantedBy=timers.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now mysql-backup.timer

Persistent=true makes the timer catch up if the host was off at 02:30, and RandomizedDelaySec=10m adds random jitter so 100 servers on the same schedule don’t all slam the API at the same instant.

Verify the timer is armed and the next run is scheduled:

systemctl list-timers mysql-backup.timer --no-pager

Step 9: Lifecycle rules on the bucket

Storage costs add up fast if you keep every nightly dump forever. Configure a lifecycle rule on the bucket (in the AWS console or via Terraform) that:

  • Transitions objects to Glacier Instant Retrieval after 30 days
  • Transitions to Glacier Deep Archive after 90 days
  • Expires (deletes) after 365 days

With versioning on, you still need a “noncurrent version expiration” rule to actually reclaim storage from deleted or replaced objects. Without that rule, deletes are soft and the bucket grows forever.
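The rules above can be expressed as a lifecycle configuration document and applied with aws s3api put-bucket-lifecycle-configuration. A sketch, under the assumption that every object in the bucket should age out on the same schedule and that 30 days is an acceptable window for reclaiming noncurrent versions:

```json
{
  "Rules": [
    {
      "ID": "age-out-db-backups",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Transitions": [
        {"Days": 30, "StorageClass": "GLACIER_IR"},
        {"Days": 90, "StorageClass": "DEEP_ARCHIVE"}
      ],
      "Expiration": {"Days": 365},
      "NoncurrentVersionExpiration": {"NoncurrentDays": 30}
    }
  ]
}
```

Apply it with aws s3api put-bucket-lifecycle-configuration --bucket company-db-backups-prod --lifecycle-configuration file://lifecycle.json (run as a user with bucket-admin rights, not the restricted backup user).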

Wrap up

A tested, scheduled, encrypted, versioned S3 backup is the gold standard for database durability. For related reading, see our MariaDB replication guide (hot standby in addition to backups), our Prometheus MySQL exporter reference (watch backup job duration as a metric), and our systemctl reference (troubleshooting the timer if a backup run stops firing). And don’t forget to actually restore a backup once in a while: an untested backup is not a backup.
