
Immutable Backups on Linux: Ransomware-Proof Strategy with MinIO, Restic, and Borg

Ransomware doesn’t just encrypt your production data. Sophisticated attacks target backup repositories too, because attackers know that destroying backups is the only way to guarantee a payout. If someone with root access can delete your backups, those backups were never really safe. Immutable backups solve this: once written, backup data physically cannot be modified or deleted until a retention window expires, regardless of who has access to the backup client.


This guide builds and tests three open-source approaches to immutable backups on Linux: MinIO with S3 Object Lock (compliance mode), Restic writing to that locked storage, and Borg in append-only mode over SSH. We simulate a full ransomware attack against each approach and verify that recovery works. If you already run LVM snapshots for consistent backups, the strategies here add a tamper-proof layer on top of that foundation.

Verified working April 2026 on Ubuntu 24.04.4 LTS, Rocky Linux 10.1, and Debian 13.4 with MinIO RELEASE.2025-09-07, Restic 0.17.3, and Borg 1.4.0/1.4.3

What Makes a Backup Immutable?

True immutability means that no actor, not even root on the backup client, can modify or delete stored data before a defined retention period expires. Three mechanisms achieve this in open-source tooling:

  • S3 Object Lock (compliance mode) locks objects at the storage layer. Not even the MinIO administrator can remove locked objects before the retention window closes.
  • Append-only repositories (Restic rest-server, Borg) accept new data but reject delete and modify operations from backup clients.
  • Credential separation ensures that the production server’s credentials cannot reach the backup server. Even if an attacker gains root on the client, they lack the credentials to tamper with the backup infrastructure.

Each mechanism has different tradeoffs. The table below compares all three approaches tested in this guide:

| Feature | MinIO Object Lock | Restic rest-server (append-only) | Borg (append-only) |
|---|---|---|---|
| Immutability type | S3 compliance mode, time-locked | Server refuses delete requests | Server keeps deleted data in transaction log |
| Root on client can delete? | No | No | Appears to delete, but data retained |
| Root on server can delete? | No (compliance mode) | Yes | Yes |
| Retention enforcement | Automatic (configurable lock period) | Manual | Manual |
| Encryption | Client-side (Restic) | Client-side (Restic) | Client-side (Borg) |
| Network protocol | S3 (HTTP/HTTPS) | REST (HTTP/HTTPS) | SSH |
| Best for | Cloud/S3 targets, compliance | Simple append-only setups | SSH-based environments |

The strongest protection comes from MinIO Object Lock in compliance mode because even the storage administrator cannot bypass the retention lock. Borg and Restic append-only modes protect against client compromise but not against an attacker who gains root on the backup server itself. For maximum resilience, combine S3 Object Lock with network isolation (covered in the hardening section at the end).

Prerequisites

  • Backup server (Ubuntu 24.04.4 LTS): 10.0.1.50, runs MinIO, 50 GB+ disk for backup storage
  • Production server 1 (Rocky Linux 10.1): 10.0.1.51, Restic client backing up to MinIO
  • Production server 2 (Debian 13.4): 10.0.1.52, Borg client backing up over SSH
  • Root or sudo access on all three servers
  • Tested versions: MinIO RELEASE.2025-09-07, Restic 0.17.3, BorgBackup 1.4.0 (Debian 13), BorgBackup 1.4.3 (Rocky 10 via EPEL)

Set Up MinIO with S3 Object Lock

MinIO is an S3-compatible object storage server that supports Object Lock in compliance mode. This is the only approach in this guide where even the storage server administrator cannot delete locked objects. We install MinIO on Ubuntu 24.04 (10.0.1.50) as a single-node deployment.

Install MinIO

Download the MinIO server binary and place it in /usr/local/bin:

wget https://dl.min.io/server/minio/release/linux-amd64/minio
sudo chmod +x minio
sudo mv minio /usr/local/bin/

Confirm the binary is in place:

minio --version

You should see the release tag:

minio version RELEASE.2025-09-07T16-46-52Z (commit=9de0e1be7ba44a89ab0a99690e0fb5fd4db8fc85)
Runtime: go1.23.8 linux/amd64
License: GNU AGPLv3 - https://www.gnu.org/licenses/agpl-3.0.html
Copyright: 2015-2025 MinIO, Inc.

Create a dedicated system user and data directory for MinIO:

sudo useradd -r -s /sbin/nologin minio-user
sudo mkdir -p /data/minio
sudo chown minio-user:minio-user /data/minio

Configure the Environment File

MinIO reads its configuration from an environment file. Create /etc/default/minio:

sudo vi /etc/default/minio

Add the following configuration (replace the credentials with strong values in production):

MINIO_ROOT_USER=minioadmin
MINIO_ROOT_PASSWORD=MinioSecure2026!
MINIO_VOLUMES="/data/minio"
MINIO_OPTS="--console-address :9001"

The credentials above are examples for this guide. Production deployments should use randomly generated credentials stored in a secrets manager.
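One way to produce strong values is to generate them locally with openssl before filling in the file. A quick sketch (the naming scheme here is an example, not part of the guide's configuration):

```shell
# Generate random credentials locally before writing /etc/default/minio
MINIO_ROOT_USER="backup-admin-$(openssl rand -hex 4)"   # random 8-hex-char suffix
MINIO_ROOT_PASSWORD="$(openssl rand -base64 24)"        # 32-character random password
echo "MINIO_ROOT_USER=${MINIO_ROOT_USER}"
echo "MINIO_ROOT_PASSWORD=${MINIO_ROOT_PASSWORD}"
```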

Create the Systemd Service

Create the service unit file:

sudo vi /etc/systemd/system/minio.service

Add this unit definition:

[Unit]
Description=MinIO Object Storage
Documentation=https://docs.min.io
Wants=network-online.target
After=network-online.target

[Service]
User=minio-user
Group=minio-user
EnvironmentFile=/etc/default/minio
ExecStart=/usr/local/bin/minio server $MINIO_VOLUMES $MINIO_OPTS
Restart=always
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Start and enable MinIO:

sudo systemctl daemon-reload
sudo systemctl enable --now minio

Check that the service is active:

sudo systemctl status minio

The output should show active (running):

● minio.service - MinIO Object Storage
     Loaded: loaded (/etc/systemd/system/minio.service; enabled; preset: enabled)
     Active: active (running) since Wed 2026-04-01 08:12:34 UTC; 5s ago
       Docs: https://docs.min.io
   Main PID: 4821 (minio)
      Tasks: 8 (limit: 4573)
     Memory: 112.4M
        CPU: 1.234s
     CGroup: /system.slice/minio.service
             └─4821 /usr/local/bin/minio server /data/minio --console-address :9001

Open the firewall for both the API port (9000) and the web console (9001):

sudo ufw allow 9000/tcp
sudo ufw allow 9001/tcp
sudo ufw reload

Install the MinIO Client and Create an Object-Locked Bucket

The mc command-line tool manages MinIO buckets and policies. Install it:

wget https://dl.min.io/client/mc/release/linux-amd64/mc
sudo chmod +x mc
sudo mv mc /usr/local/bin/

Configure the alias for your local MinIO instance:

mc alias set myminio http://10.0.1.50:9000 minioadmin MinioSecure2026!

The alias is stored in ~/.mc/config.json. Now create a bucket with object locking enabled:

mc mb --with-lock myminio/immutable-backups

The --with-lock flag enables S3 Object Lock on this bucket. Locking can only be enabled at bucket creation, so without the flag you could not apply retention policies later. The command confirms:

Bucket created successfully `myminio/immutable-backups`.

Set a default retention policy of 30 days in COMPLIANCE mode. This means every object written to this bucket is automatically locked for 30 days:

mc retention set --default COMPLIANCE 30d myminio/immutable-backups

The confirmation:

Object locking 'COMPLIANCE' is configured for 30DAYS.

COMPLIANCE mode is critical here. Unlike GOVERNANCE mode, compliance retention cannot be bypassed by any user, including the MinIO root account. Objects are physically undeletable until the 30-day window expires.
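You can confirm the bucket's default lock at any time with mc (mc retention info is a standard subcommand; the alias and bucket name are the ones created above):

```shell
# Show the default object-lock configuration applied to the bucket
mc retention info --default myminio/immutable-backups
```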

Restic Backup to Object-Locked MinIO

With MinIO configured for immutable storage, we now set up Restic on the production server (Rocky Linux 10.1, 10.0.1.51) to back up data to the locked bucket. Restic handles client-side encryption, deduplication, and snapshot management while MinIO enforces the immutability guarantee at the storage layer.

Install Restic on Rocky Linux 10

Grab the latest Restic binary from GitHub releases:

VER=$(curl -sL https://api.github.com/repos/restic/restic/releases/latest | grep tag_name | head -1 | sed 's/.*"v\([^"]*\)".*/\1/')
echo $VER

At the time of testing, this returned 0.17.3. Download and install:

wget https://github.com/restic/restic/releases/download/v${VER}/restic_${VER}_linux_amd64.bz2
bzip2 -d restic_${VER}_linux_amd64.bz2
sudo mv restic_${VER}_linux_amd64 /usr/local/bin/restic
sudo chmod +x /usr/local/bin/restic

Verify the installation:

restic version

The output confirms the version:

restic 0.17.3 compiled with go1.23.8 on linux/amd64

Initialize the Restic Repository on MinIO

Export the S3 credentials and initialize the repository. Restic uses the standard AWS environment variables for S3 access:

export AWS_ACCESS_KEY_ID=minioadmin
export AWS_SECRET_ACCESS_KEY=MinioSecure2026!
restic -r s3:http://10.0.1.50:9000/immutable-backups init

Restic creates the repository structure inside the MinIO bucket:

created restic repository 0d74c96d36 at s3:http://10.0.1.50:9000/immutable-backups

Please note that knowledge of your password is required to access
the repository. Losing your password means that your data is
irrecoverably lost.

Store the repository password securely. Without it, the encrypted backup data is useless.
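One hedged approach: keep the password in a root-only file and pass it with --password-file, a standard Restic option. The file path is an example; the placeholder password matches the one used in the automation section later:

```shell
# Keep the Restic repository password in a root-only file (example path)
sudo install -m 600 /dev/null /root/.restic-password
echo 'your-restic-repo-password' | sudo tee /root/.restic-password > /dev/null

# --password-file avoids interactive prompts and keeps the password out of shell history:
# restic -r s3:http://10.0.1.50:9000/immutable-backups --password-file /root/.restic-password snapshots
```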

Create Test Data and Run the First Backup

Create some test data to back up:

sudo mkdir -p /srv/important-data
for i in $(seq 1 50); do
  echo "Critical business document $i - $(date '+%Y-%m-%d %H:%M:%S') - $(openssl rand -hex 32)" | sudo tee /srv/important-data/document_${i}.txt > /dev/null
done

This generates 50 files with unique content, timestamps, and random hex strings. Back them up to MinIO:

restic -r s3:http://10.0.1.50:9000/immutable-backups backup /srv/important-data

Restic encrypts, deduplicates, and uploads the data:

repository 0d74c96d opened (version 2, compression level auto)
no parent snapshot found, will read all files

Files:           51 new,     0 changed,     0 unmodified
Dirs:             2 new,     0 changed,     0 unmodified
Added to the repository: 8.122 KiB (4.523 KiB stored)

processed 51 files, 8.088 KiB in 0:01
snapshot 243a6199 saved

Save the checksums of the original files so we can verify integrity after restore later:

sha256sum /srv/important-data/*.txt | sudo tee /root/original-checksums.txt

Prove the Backup Is Immutable

Here is the critical test. Assume an attacker has gained root on the production server (10.0.1.51) and tries to destroy the backups. First, try deleting snapshots through Restic:

restic -r s3:http://10.0.1.50:9000/immutable-backups forget 243a6199 --prune

Restic creates delete markers in S3, but the actual objects remain WORM-protected. Verify by trying to remove an object directly with the MinIO client (installed on the backup server):

mc rm "myminio/immutable-backups/config" --force

MinIO rejects the deletion:

mc: <ERROR> Failed to remove `myminio/immutable-backups/config`. Object is WORM protected and cannot be overwritten

That error confirms compliance mode is working. The object is locked for the full 30-day retention period. No user, no API call, no administrative action can delete it.
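If you want to see what is actually stored, list object versions on the backup server. A lock-enabled bucket is versioned, and mc can show every retained version (--versions is a standard mc ls flag):

```shell
# List all retained versions of the Restic repository's config object
mc ls --versions myminio/immutable-backups/config
```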

Borg Append-Only Mode

Borg takes a different approach to immutability. Instead of storage-layer locks, it uses SSH access restrictions and an append-only repository mode. The backup client can create new archives but cannot delete or modify existing ones. This is simpler to set up than MinIO, which makes it a good fit for environments that already use SSH-based workflows.

Install BorgBackup

On the backup server (Ubuntu 24.04, 10.0.1.50), install Borg from the system repository:

sudo apt update
sudo apt install -y borgbackup

Check the installed version:

borg --version

Ubuntu 24.04 ships Borg 1.4.0:

borg 1.4.0

On the Debian 13 client (10.0.1.52), install the same package:

sudo apt update
sudo apt install -y borgbackup

Debian 13 also provides Borg 1.4.0. On Rocky Linux 10.1, Borg is available through EPEL:

sudo dnf install -y epel-release
sudo dnf install -y borgbackup

Rocky 10 EPEL ships version 1.4.3.

Configure SSH Key with Append-Only Restriction

Create a dedicated backup user on the backup server:

sudo useradd -m -s /bin/bash borgbackup
sudo mkdir -p /backup/borg-repo
sudo chown borgbackup:borgbackup /backup/borg-repo

On the Debian 13 client (10.0.1.52), generate an SSH key pair for backups:

sudo ssh-keygen -t ed25519 -f /root/.ssh/borg_backup_key -N "" -C "borg-backup-client"

Copy the public key to the backup server, but with a critical restriction. On the backup server (10.0.1.50), edit the authorized_keys file for the borgbackup user:

sudo mkdir -p /home/borgbackup/.ssh
sudo vi /home/borgbackup/.ssh/authorized_keys

Add the public key with the command and restrict options that force append-only mode:

command="borg serve --restrict-to-path /backup/borg-repo --append-only",restrict ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAI... borg-backup-client

The --append-only flag is the key. It tells the Borg server process to accept new archives and new data chunks but reject any request to delete or modify existing data. The restrict option disables port forwarding, agent forwarding, PTY allocation, and other SSH features the backup client does not need. The --restrict-to-path option confines access to a single directory.

Set the correct permissions:

sudo chown -R borgbackup:borgbackup /home/borgbackup/.ssh
sudo chmod 700 /home/borgbackup/.ssh
sudo chmod 600 /home/borgbackup/.ssh/authorized_keys
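Before the client can reach the repository, it needs to present the dedicated key. BORG_RSH is Borg's standard variable for overriding the remote shell command, so one minimal approach on the Debian client is:

```shell
# Tell Borg to use the dedicated backup key for all SSH connections
export BORG_RSH='ssh -i /root/.ssh/borg_backup_key -o IdentitiesOnly=yes'
```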

Initialize the Repository and Create a Backup

From the Debian 13 client (10.0.1.52), initialize the Borg repository on the backup server:

export BORG_PASSPHRASE='BorgSecure2026!'
borg init --encryption=repokey-blake2 ssh://[email protected]/backup/borg-repo

Borg initializes the encrypted repository:

By default repositories initialized with this version will produce security
errors if written to with an older version (often, those are not
security errors).

IMPORTANT: you will need both the passphrase AND the key file to access this repo.
If you used a repokey mode, the key is stored in the repo, but you should back it up separately.
Use "borg key export" to export the key.

Create test data on the Debian client, similar to what we did for the Restic test:

sudo mkdir -p /srv/important-data
for i in $(seq 1 50); do
  echo "Critical Debian document $i - $(date '+%Y-%m-%d %H:%M:%S') - $(openssl rand -hex 32)" | sudo tee /srv/important-data/document_${i}.txt > /dev/null
done
sha256sum /srv/important-data/*.txt | sudo tee /root/original-checksums-borg.txt

Run the first backup:

borg create ssh://[email protected]/backup/borg-repo::backup-{now} /srv/important-data

Verify the archive was created:

borg list ssh://[email protected]/backup/borg-repo

You should see the archive listed with its timestamp:

backup-2026-04-01T08:45:12    Wed, 2026-04-01 08:45:12 [a3b1c4d5e6...]

Test Append-Only Protection

Try to delete the archive from the client:

borg delete ssh://[email protected]/backup/borg-repo::backup-2026-04-01T08:45:12

The delete appears to succeed, which catches most people off guard the first time they test this: the client believes the archive is gone. But on the backup server, the data is still there. Borg’s append-only mode records the delete as a transaction but does not actually remove the underlying data segments.

Verify on the backup server (10.0.1.50) by checking the repository directly:

sudo ls -la /backup/borg-repo/data/

All data segments remain intact. Recovering the “deleted” archives means rolling the repository back to a transaction that predates the delete. In append-only mode, Borg logs every transaction to a transactions file in the repository root; find the last known-good transaction, then remove the segment files numbered higher than it (the exact procedure is described in the Borg documentation on append-only mode):

sudo -u borgbackup cat /backup/borg-repo/transactions

One warning: do not run borg compact on an append-only repository before rolling back. Compacting honors the pending deletes and permanently frees the space.

The key difference from MinIO: an attacker who gains root on the backup server could bypass append-only mode by editing the repository directly. This is why append-only works best when the backup server is network-isolated and hardened separately from production.
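Because the server is the weak point for append-only mode, it is worth verifying the repository from the server side periodically. A sketch using borg check, a standard Borg command, run as the repository owner:

```shell
# Server-side integrity check of the Borg repository's segment files
sudo -u borgbackup borg check --repository-only /backup/borg-repo
```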

Simulate a Ransomware Attack

This is the real test. We simulate a ransomware attack on the Rocky Linux production server (10.0.1.51), attempt to destroy the backups, then prove full recovery.

Encrypt the Production Data

Ransomware typically encrypts files in place and deletes the originals. Simulate this on the production server:

cd /srv/important-data
for f in *.txt; do
  openssl enc -aes-256-cbc -salt -pbkdf2 -in "$f" -out "${f}.encrypted" -pass pass:ransom_key_2026
  rm -f "$f"
done

Verify the damage. List the directory contents:

ls /srv/important-data/

All 50 original .txt files are gone, replaced by encrypted versions:

document_1.txt.encrypted   document_18.txt.encrypted  document_26.txt.encrypted  document_34.txt.encrypted  document_42.txt.encrypted  document_50.txt.encrypted
document_10.txt.encrypted  document_19.txt.encrypted  document_27.txt.encrypted  document_35.txt.encrypted  document_43.txt.encrypted
document_11.txt.encrypted  document_2.txt.encrypted   document_28.txt.encrypted  document_36.txt.encrypted  document_44.txt.encrypted
document_12.txt.encrypted  document_20.txt.encrypted  document_29.txt.encrypted  document_37.txt.encrypted  document_45.txt.encrypted
...

Check what the encrypted content looks like:

xxd /srv/important-data/document_1.txt.encrypted | head -3

The output is encrypted binary, completely unreadable:

00000000: 5361 6c74 6564 5f5f 37f2 a1c3 e847 9b12  Salted__7....G..
00000010: 8a3d 2f41 b8c7 6e55 d103 f428 9c3a 7714  .=/A..nU...(.:w.
00000020: 7b42 89e6 1d3c af90 23d1 7c56 08b3 44a9  {B...<..#.|V..D.

Attempt to Destroy the Backups

A sophisticated attacker will try to destroy the backups before demanding ransom. From the compromised production server, attempt to wipe the Restic repository on MinIO:

export AWS_ACCESS_KEY_ID=minioadmin
export AWS_SECRET_ACCESS_KEY=MinioSecure2026!
restic -r s3:http://10.0.1.50:9000/immutable-backups forget latest --prune

Restic processes the command and thinks it succeeded. But the underlying S3 objects are still WORM-locked. Try listing the snapshots:

restic -r s3:http://10.0.1.50:9000/immutable-backups snapshots

The snapshot data remains accessible because MinIO's compliance lock prevents actual object deletion:

repository 0d74c96d opened (version 2, compression level auto)
ID        Time                 Host        Tags        Paths                    Size
------------------------------------------------------------------------------------------
243a6199  2026-04-01 08:30:15  rocky10                 /srv/important-data      8.088 KiB
------------------------------------------------------------------------------------------
1 snapshots

The backups survived the attack.

Recover from Ransomware

With backups confirmed intact, recovery is straightforward. Wipe the encrypted mess first:

rm -rf /srv/important-data/*

Restore from Restic (MinIO)

Restore the latest snapshot to a temporary directory first, then move it into place:

export AWS_ACCESS_KEY_ID=minioadmin
export AWS_SECRET_ACCESS_KEY=MinioSecure2026!
restic -r s3:http://10.0.1.50:9000/immutable-backups restore latest --target /tmp/restic-restore

Restic restores all files from the encrypted, deduplicated snapshot:

repository 0d74c96d opened (version 2, compression level auto)
restoring snapshot 243a6199 of [/srv/important-data] at 2026-04-01 08:30:15.123456789 +0000 UTC by root@rocky10 to /tmp/restic-restore
Summary: Restored 53 Files/Dirs (8.088 KiB) in 0:00

Copy the restored files back to the production directory:

cp -a /tmp/restic-restore/srv/important-data/* /srv/important-data/

Verify file integrity by comparing checksums against the originals we saved earlier:

sha256sum /srv/important-data/*.txt > /tmp/restored-checksums.txt
diff /root/original-checksums.txt /tmp/restored-checksums.txt

No output from diff means every file matches byte for byte. All 50 documents are recovered with identical content:

ls /srv/important-data/*.txt | wc -l

The count confirms full recovery:

50

Restore from Borg

On the Debian 13 client (10.0.1.52), the same recovery process works through Borg. Even though the client tried to delete the archive earlier, the append-only server retained all data:

export BORG_PASSPHRASE='BorgSecure2026!'
cd /tmp
borg extract ssh://[email protected]/backup/borg-repo::backup-2026-04-01T08:45:12

Verify the restored files:

sha256sum /tmp/srv/important-data/*.txt > /tmp/borg-restored-checksums.txt
diff /root/original-checksums-borg.txt /tmp/borg-restored-checksums.txt

Again, no diff output. Complete recovery.

Automate Backups with Systemd Timers

Running backups manually defeats the purpose. Set up automated Restic backups on the Rocky Linux server using a systemd timer instead of cron (systemd timers survive reboots and log to the journal).

Create the credential file (readable only by root):

sudo vi /etc/restic-env

Add the environment variables:

AWS_ACCESS_KEY_ID=minioadmin
AWS_SECRET_ACCESS_KEY=MinioSecure2026!
RESTIC_REPOSITORY=s3:http://10.0.1.50:9000/immutable-backups
RESTIC_PASSWORD=your-restic-repo-password

Lock down the permissions:

sudo chmod 600 /etc/restic-env

Create the backup service unit:

sudo vi /etc/systemd/system/restic-backup.service

Add the service definition:

[Unit]
Description=Restic backup to MinIO (immutable)
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
EnvironmentFile=/etc/restic-env
ExecStart=/usr/local/bin/restic backup /srv/important-data /etc /var/lib
ExecStartPost=/usr/local/bin/restic check
Nice=19
IOSchedulingClass=idle

Create the timer that triggers the backup daily at 2:00 AM:

sudo vi /etc/systemd/system/restic-backup.timer

Add the timer definition:

[Unit]
Description=Daily Restic backup

[Timer]
OnCalendar=*-*-* 02:00:00
RandomizedDelaySec=900
Persistent=true

[Install]
WantedBy=timers.target

The Persistent=true setting ensures that if the server was powered off at 2 AM, the backup runs as soon as the system boots. Enable and start the timer:

sudo systemctl daemon-reload
sudo systemctl enable --now restic-backup.timer

Confirm the timer is active:

systemctl list-timers restic-backup.timer

The output shows when the next backup will run:

NEXT                         LEFT          LAST PASSED UNIT                 ACTIVATES
Wed 2026-04-02 02:00:00 UTC  17h left      n/a  n/a    restic-backup.timer  restic-backup.service

Check backup logs anytime with journalctl -u restic-backup.service.
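The Borg client on Debian can be automated the same way. A minimal service sketch, assuming the passphrase is stored root-only in /root/.borg-passphrase (an example path) and paired with a timer like the one above; BORG_PASSCOMMAND and BORG_RSH are standard Borg environment variables:

```ini
[Unit]
Description=Borg backup to append-only repository
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
Environment=BORG_RSH=ssh -i /root/.ssh/borg_backup_key
Environment=BORG_PASSCOMMAND=cat /root/.borg-passphrase
ExecStart=/usr/bin/borg create ssh://[email protected]/backup/borg-repo::backup-{now} /srv/important-data
Nice=19
```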

Production Hardening

The setups above prove that immutable backups work. But in a real production environment, several additional measures close the remaining gaps.

Network Isolation

Place the backup server on a separate VLAN or subnet that production servers can only reach on specific ports (9000 for MinIO, 22 for Borg SSH). Block all other traffic. If an attacker compromises a production server, they should not be able to SSH into the backup server or access its management interface. The MinIO web console (port 9001) should only be accessible from a management workstation, never from production hosts.

Credential Separation

The MinIO credentials on the production server should have write-only access to the backup bucket. Create a dedicated MinIO policy that allows s3:PutObject and s3:GetObject but omits s3:DeleteObject (any action not granted is denied). Even without the compliance lock, this prevents the backup client from issuing delete operations:

mc admin policy create myminio backup-writer /tmp/backup-policy.json

Where /tmp/backup-policy.json contains:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:ListBucket"],
      "Resource": ["arn:aws:s3:::immutable-backups/*", "arn:aws:s3:::immutable-backups"]
    }
  ]
}

Create a dedicated service account and attach this policy to it. Never use the MinIO root credentials on production servers.
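The remaining steps are sketched below with mc admin user add and mc admin policy attach, both standard mc admin subcommands; the account name and password are examples:

```shell
# Create a dedicated service account and attach the write-only policy to it
mc admin user add myminio backup-svc 'ExampleSvcPassword2026!'
mc admin policy attach myminio backup-writer --user backup-svc
```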

Monitoring and Alerting

Immutable backups only help if backups are actually running. Set up monitoring to alert when backups stop arriving. A simple approach: create a script that checks the age of the latest Restic snapshot and sends an alert if it exceeds 26 hours (giving a 2-hour buffer beyond the daily schedule). Integrate this with your existing monitoring stack, whether that is Nagios, Zabbix, or Prometheus.

. /etc/restic-env
LATEST=$(restic snapshots --json | python3 -c "import sys, json; print(json.load(sys.stdin)[-1]['time'][:19])")
AGE_HOURS=$(( ($(date +%s) - $(date -d "$LATEST" +%s)) / 3600 ))
if [ "$AGE_HOURS" -gt 26 ]; then
  echo "CRITICAL: Last backup is ${AGE_HOURS} hours old" | mail -s "Backup Alert" [email protected]
fi

Regular Restore Testing

Schedule monthly restore drills. A backup that cannot be restored is not a backup. Automate a test restore to a temporary directory, verify checksums, and log the result. The worst time to discover your restore process is broken is during an actual incident.
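A sketch of such a drill for the Restic/MinIO target, assuming the /etc/restic-env file from the automation section and the checksum manifest saved earlier. It compares hash sets rather than running sha256sum -c, because the manifest stores absolute /srv paths:

```shell
#!/usr/bin/env bash
# Monthly restore drill: restore the latest snapshot, verify content hashes, log the result.
set -euo pipefail
. /etc/restic-env

TARGET=$(mktemp -d /tmp/restore-drill.XXXXXX)
restic restore latest --target "$TARGET"
restic check

# Compare the set of file hashes in the restore against the saved manifest
restored=$(cd "$TARGET/srv/important-data" && sha256sum -- *.txt | awk '{print $1}' | sort)
original=$(awk '{print $1}' /root/original-checksums.txt | sort)

if [ "$restored" = "$original" ]; then
  logger -t restore-drill "OK: restore verified against manifest"
else
  logger -t restore-drill "FAILED: restored data does not match manifest"
  exit 1
fi
rm -rf "$TARGET"
```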

Use Multiple Targets

No single approach is foolproof. For critical data, combine at least two of the three methods covered here. A realistic production setup might use Restic to MinIO (with compliance lock) as the primary target and Borg over SSH (append-only) to a physically separate server as the secondary. If one approach has an undiscovered vulnerability, the other still protects your data. For an additional layer, consider a separate backup solution like UrBackup for file-level backups alongside the snapshot-based approaches covered here.

SELinux Considerations (RHEL/Rocky/Alma)

On Rocky Linux and other RHEL derivatives with SELinux enforcing, the Restic binary in /usr/local/bin runs under the default unconfined_t context, which works without additional policy changes. If you move MinIO data to a non-standard path, set the correct SELinux context:

sudo semanage fcontext -a -t minio_var_lib_t "/data/minio(/.*)?"
sudo restorecon -Rv /data/minio

Check for AVC denials after the initial setup:

sudo ausearch -m avc -ts recent

If any denials appear, create a custom policy module rather than disabling SELinux.

Frequently Asked Questions

Can ransomware delete MinIO Object Lock data?

No. In COMPLIANCE mode, no API call, no administrative action, and no root access on any machine can delete or modify an object before its retention period expires. This is enforced at the storage layer. Even wiping the MinIO configuration does not remove the locked objects from disk, because the lock metadata is stored alongside the object data.

What is the difference between GOVERNANCE and COMPLIANCE mode in MinIO?

GOVERNANCE mode allows users with the s3:BypassGovernanceRetention permission to delete locked objects. COMPLIANCE mode has no bypass mechanism at all. For ransomware protection, always use COMPLIANCE mode. GOVERNANCE mode is designed for workflows where an administrator might need to override retention, which defeats the purpose of ransomware-proof storage.

Does Borg append-only mode truly prevent data deletion?

From the client's perspective, yes. The Borg server process rejects delete operations when --append-only is set in the authorized_keys command restriction. However, an attacker with root access on the backup server itself could bypass this by modifying the repository files directly. This is why network isolation and separate credentials for the backup server are essential when using Borg append-only mode.

How much disk space does immutable storage require?

With a 30-day compliance lock, you need enough space to hold 30 days of daily backups without pruning. Restic's deduplication helps significantly here because only changed data consumes new space. For a typical 100 GB dataset with 5% daily change rate, expect roughly 250 GB of storage for 30 days of retention (100 GB base plus 30 x 5 GB incremental). Monitor your MinIO bucket size with mc du myminio/immutable-backups and plan capacity accordingly.
