Apptainer (formerly Singularity) is the standard container runtime for high-performance computing. Unlike Docker, it was designed from the ground up for shared, multi-tenant HPC clusters where users do not have root access. It runs containers as the invoking user, supports native MPI integration, and stores images as single, portable SIF (Singularity Image Format) files that you can copy around like any other file.

This guide covers installing Apptainer on Rocky Linux 10, AlmaLinux 10, and Ubuntu 24.04. We will also walk through building from source, core usage patterns, GPU passthrough, MPI workloads, and common troubleshooting steps.

What Is Apptainer?

Apptainer is a Linux Foundation project that provides OS-level virtualization tailored for scientific computing and HPC environments. It was originally developed at Lawrence Berkeley National Laboratory under the name Singularity, then moved to the Linux Foundation in late 2021 and rebranded as Apptainer.

Key characteristics that set Apptainer apart:

  • Rootless by design. Containers execute with the privileges of the calling user. There is no persistent daemon running as root.
  • Single-file images. A SIF file is an immutable, cryptographically verifiable, squashfs-based archive. You can scp it to another host and run it immediately.
  • Native MPI support. Apptainer’s hybrid MPI model lets the host MPI launcher talk directly to the MPI library inside the container, which is essential for tightly coupled parallel jobs on InfiniBand fabrics.
  • Docker and OCI compatibility. You can pull images from Docker Hub, GitHub Container Registry, or any OCI-compliant registry and convert them to SIF on the fly.
  • Reproducibility. Definition files (similar to Dockerfiles) let you version-control entire software stacks and rebuild them deterministically.

Apptainer vs Docker: When to Use Each

Docker and Apptainer solve different problems. Understanding the security model and operational differences will save you time choosing the right tool.

  • Target environment. Apptainer: HPC clusters and shared multi-user systems. Docker: cloud services, CI/CD, microservices.
  • Privilege model. Apptainer: runs as the calling user, no root daemon. Docker: requires a root daemon (rootless mode is available but not the default).
  • Image format. Apptainer: single SIF file. Docker: layered OCI image stored in a local registry.
  • Networking. Apptainer: shares the host network by default. Docker: isolated network namespaces by default.
  • MPI integration. Apptainer: native hybrid model for InfiniBand/PMI. Docker: possible but awkward; requires host network mode.
  • Orchestration. Apptainer: job schedulers (Slurm, PBS, LSF). Docker: Kubernetes, Docker Compose, Swarm.
  • Filesystem. Apptainer: bind-mounts host paths by default ($HOME, /tmp, etc.). Docker: isolated filesystem by default.

Use Apptainer when you are running batch jobs on shared clusters, need reproducible scientific workflows, want portable single-file images, or must integrate with an MPI fabric. Use Docker when you are building microservices, need network isolation between containers, or are deploying on Kubernetes.

Install Apptainer on Rocky Linux 10 / AlmaLinux 10

On RHEL-compatible distributions, Apptainer is available through the EPEL (Extra Packages for Enterprise Linux) repository. The steps below apply equally to Rocky Linux 10 and AlmaLinux 10.

Step 1: Enable EPEL Repository

Install the EPEL release package if it is not already present on the system.

sudo dnf install -y epel-release

Step 2: Install Apptainer

With EPEL enabled, install Apptainer directly through dnf.

sudo dnf install -y apptainer

Step 3: Verify the Installation

Confirm that the binary is in your PATH and check the version.

apptainer --version

You should see output similar to apptainer version 1.3.x. If you need a newer release than what EPEL provides, see the “Build from Source” section below.

Install Apptainer on Ubuntu 24.04

Apptainer provides official packages for Ubuntu through its PPA and also distributes .deb packages alongside each GitHub release.

Option A: Install from the Official PPA

The Apptainer PPA is the most convenient way to stay current on Ubuntu. Add the repository and install.

sudo add-apt-repository -y ppa:apptainer/ppa
sudo apt update
sudo apt install -y apptainer

Option B: Install from a GitHub Release Package

If you prefer to pin a specific version or cannot use PPAs, download the .deb package from the Apptainer GitHub releases page. Replace the version number below with the release you want.

export APPTAINER_VERSION=1.3.6
wget https://github.com/apptainer/apptainer/releases/download/v${APPTAINER_VERSION}/apptainer_${APPTAINER_VERSION}_amd64.deb
sudo apt install -y ./apptainer_${APPTAINER_VERSION}_amd64.deb

Verify with apptainer --version after installation completes.

Build Apptainer from Source

Building from source gives you access to the latest features and patches before distribution packages catch up. This method works on any Linux distribution.

Install Build Dependencies

On Rocky Linux 10 / AlmaLinux 10:

sudo dnf groupinstall -y "Development Tools"
sudo dnf install -y libseccomp-devel squashfs-tools cryptsetup wget git golang

On Ubuntu 24.04:

sudo apt install -y build-essential libseccomp-dev pkg-config squashfs-tools cryptsetup-bin uidmap wget git golang

Apptainer requires Go 1.21 or later. If your distribution ships an older version, install a newer Go toolchain from the official Go downloads page before proceeding.
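A quick way to check whether the Go toolchain on your PATH meets that minimum is sketched below; the go_ok helper name is our own, and the 1.21 floor is taken from the requirement above.

```shell
# go_ok VERSION — succeeds if VERSION is at least 1.21.
# sort -V gives version-aware ordering, so the minimum version sorts
# first exactly when the requirement is met.
go_ok() {
    required=1.21
    [ "$(printf '%s\n%s\n' "$required" "$1" | sort -V | head -n1)" = "$required" ]
}

go_ok "$(go version 2>/dev/null | grep -oE '[0-9]+\.[0-9]+(\.[0-9]+)?' | head -n1)" \
    && echo "Go is new enough" \
    || echo "Go is missing or older than 1.21"
```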

Clone and Build

Clone the Apptainer repository, check out the latest release tag, and build.

git clone https://github.com/apptainer/apptainer.git
cd apptainer
git checkout v1.3.6

Run the configure script and build.

./mconfig
cd builddir
make
sudo make install

The default install prefix is /usr/local. You can change this by passing --prefix=/your/path to ./mconfig. After installation, confirm the version.

apptainer --version

Basic Usage: Pulling Container Images

Apptainer can pull images from Docker Hub, Sylabs Cloud Library, and any OCI-compliant registry. Pulled images are converted to SIF format automatically.

Pull from Docker Hub

Use the docker:// URI scheme to pull images from Docker Hub or other OCI registries.

apptainer pull docker://ubuntu:24.04

This creates a file called ubuntu_24.04.sif in the current directory. You can also pull from GitHub Container Registry or any private registry by specifying the full path.

apptainer pull docker://ghcr.io/your-org/your-image:latest

Pull from Sylabs Cloud Library

The Sylabs Cloud Library hosts images built specifically for Apptainer/Singularity. Use the library:// URI scheme.

apptainer pull library://library/default/ubuntu:24.04

Running Containers

Apptainer provides three primary commands for interacting with containers: run, exec, and shell. Each serves a different purpose.

apptainer run

Executes the default runscript defined in the container image. This is equivalent to running the image’s %runscript section or its OCI entrypoint.

apptainer run ubuntu_24.04.sif

apptainer exec

Runs a specific command inside the container. This is the most common pattern for batch jobs and scripts.

apptainer exec ubuntu_24.04.sif cat /etc/os-release

You can chain this into Slurm job scripts or any shell pipeline just as you would a normal binary.
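As a sketch, a minimal Slurm batch script wrapping the same command might look like this (the job name, resource values, and image path are placeholders to adapt to your site):

```
#!/bin/bash
#SBATCH --job-name=sif-demo
#SBATCH --ntasks=1
#SBATCH --time=00:10:00
#SBATCH --output=sif-demo-%j.out

apptainer exec ubuntu_24.04.sif cat /etc/os-release
```

Submit it with sbatch as usual; for a single-task job like this, Slurm needs no special container support.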

apptainer shell

Drops you into an interactive shell inside the container for debugging or exploratory work.

apptainer shell ubuntu_24.04.sif

You will land in a shell where the container’s filesystem is your root, but your home directory and other default bind paths from the host are still accessible.

Building Custom SIF Images

Definition files are Apptainer’s equivalent of Dockerfiles. They describe how to build an image from a base, what packages to install, and what to run at startup.

Example Definition File

Create a file called myapp.def with the following content:

Bootstrap: docker
From: ubuntu:24.04

%post
    apt-get update && apt-get install -y python3 python3-pip
    pip3 install numpy scipy

%environment
    export LC_ALL=C

%runscript
    echo "Running my scientific application"
    python3 "$@"

%labels
    Author YourName
    Version 1.0

Build the Image

Build the SIF file from the definition file. The --fakeroot flag allows unprivileged users to build images without actual root access.

apptainer build --fakeroot myapp.sif myapp.def

If your system administrator has configured setuid installations, you can also build with sudo apptainer build. On personal workstations, --fakeroot is usually the better choice because it avoids running the entire build as root.

You can also bootstrap from other sources. For example, to start from the Sylabs Cloud Library:

Bootstrap: library
From: ubuntu:24.04

Or from a local SIF file:

Bootstrap: localimage
From: /path/to/base.sif

Bind Mounts and Accessing Host Files

Apptainer bind-mounts your home directory, /tmp, /proc, /sys, and /dev into the container by default. This means your data files are typically available without any extra configuration.

To mount additional host directories, use the --bind (or -B) flag.

apptainer exec --bind /scratch/data:/data myapp.sif python3 /data/process.py

This maps the host path /scratch/data to /data inside the container. You can specify multiple bind mounts by separating them with commas.

apptainer exec --bind /scratch/data:/data,/project/shared:/shared myapp.sif bash

For read-only mounts, append :ro to the bind specification.

apptainer exec --bind /reference/genomes:/genomes:ro myapp.sif samtools index /genomes/hg38.fa

You can also set default bind paths system-wide in /etc/apptainer/apptainer.conf by editing the bind path directives. This is useful for cluster administrators who want /scratch or /data available inside every container without requiring users to remember bind flags.
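For example, an administrator could append entries like the following to /etc/apptainer/apptainer.conf; the /scratch and /data paths are site-specific examples.

```
# Site-wide default binds: each "bind path" line is mounted into every container
bind path = /scratch
bind path = /data
```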

GPU Support with the --nv Flag

Running GPU-accelerated workloads inside Apptainer containers is straightforward. The --nv flag exposes the host’s NVIDIA driver libraries and device files to the container.

Prerequisites

The host must have a working NVIDIA driver installation. You do not need to install the CUDA toolkit on the host; only the driver is required. The CUDA runtime and libraries live inside the container image.

Running a GPU Container

Pull a CUDA-enabled image and run it with GPU access.

apptainer pull docker://nvcr.io/nvidia/cuda:12.4.1-runtime-ubuntu24.04

Execute a command inside the container with GPU passthrough.

apptainer exec --nv cuda_12.4.1-runtime-ubuntu24.04.sif nvidia-smi

You should see the same GPU information as running nvidia-smi on the host. For AMD GPUs, use the --rocm flag instead, which performs the equivalent passthrough for ROCm devices.

GPU in Definition Files

When building images that use CUDA, base your definition file on an NVIDIA CUDA image from NGC (NVIDIA GPU Cloud). The --nv flag is a runtime option, not a build option, so GPU access is configured at the point you run the container, not when you build it.
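A minimal sketch of such a definition file, assuming the NGC CUDA runtime image pulled earlier as the base (the nvidia-smi call in %runscript only succeeds when the container is launched with --nv):

```
Bootstrap: docker
From: nvcr.io/nvidia/cuda:12.4.1-runtime-ubuntu24.04

%post
    apt-get update && apt-get install -y python3

%runscript
    nvidia-smi
    python3 "$@"
```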

MPI Integration for HPC Workloads

MPI is the backbone of distributed computing on HPC clusters. Apptainer supports two MPI execution models: the hybrid model and the bind model.

Hybrid Model

In the hybrid model, you install a compatible version of MPI inside the container. The host’s MPI launcher (mpirun or srun) starts the processes, and Apptainer’s process management passes control to the MPI library inside the container. This requires that the MPI version inside the container is ABI-compatible with the host MPI.

mpirun -np 4 apptainer exec mpi_app.sif /opt/app/my_simulation

With Slurm, the equivalent is:

srun --ntasks=64 apptainer exec mpi_app.sif /opt/app/my_simulation

Bind Model

The bind model avoids installing MPI inside the container entirely. Instead, you bind-mount the host’s MPI libraries into the container at runtime. This reduces image size and eliminates ABI compatibility concerns, but ties the image to the specific cluster’s MPI installation.

mpirun -np 4 apptainer exec --bind /opt/openmpi:/opt/openmpi mpi_app.sif /opt/app/my_simulation

Best Practices for MPI Containers

  • Match the major MPI version between host and container (for example, OpenMPI 4.x on both).
  • If your cluster uses InfiniBand, make sure the container has the appropriate user-space drivers (libibverbs, librdmacm) or bind-mount them from the host.
  • Test with a simple MPI hello-world before deploying production workloads.
  • Work with your cluster administrators to determine which model (hybrid or bind) is recommended for your site.
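The hello-world test from the list above can be scripted as follows. This is a sketch: mpi_app.sif stands in for your image, and the compile/launch steps are guarded so they are skipped on machines without an MPI toolchain.

```shell
# Write a minimal MPI hello-world to a source file
cat > hello_mpi.c <<'EOF'
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
EOF

# Compile and launch only if an MPI toolchain is present on the host
if command -v mpicc >/dev/null && command -v mpirun >/dev/null; then
    mpicc hello_mpi.c -o hello_mpi
    mpirun -np 4 apptainer exec mpi_app.sif ./hello_mpi
fi
```

If every rank prints its line, the host launcher and container MPI are talking to each other; only then move on to your real application.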

Troubleshooting

Below are solutions to the issues that come up most often when working with Apptainer.

FAKEROOT: “fakeroot is not enabled or not installed”

This happens when the fakeroot feature is not configured. On RHEL-based systems, make sure the fuse-overlayfs and fakeroot packages are installed.

sudo dnf install -y fuse-overlayfs fakeroot

On Ubuntu:

sudo apt install -y fuse-overlayfs uidmap

Also verify that your user has subuid and subgid mappings configured in /etc/subuid and /etc/subgid. Each user who needs fakeroot should have an entry like:

username:100000:65536
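A small sketch for checking whether a user already has such a mapping; the file argument is parameterized so you can dry-run it against a temporary file instead of /etc/subuid.

```shell
# has_subid FILE USER — succeed if USER has a subordinate ID range in FILE
has_subid() {
    grep -q "^$2:" "$1"
}

# Dry run against a temporary file shaped like /etc/subuid
tmp=$(mktemp)
echo "alice:100000:65536" > "$tmp"
has_subid "$tmp" alice && echo "alice is mapped"
has_subid "$tmp" bob || echo "bob is not mapped"
rm -f "$tmp"
```

On a real system you would call has_subid /etc/subuid "$USER"; on newer shadow-utils releases, root can also add ranges with usermod --add-subuids, though availability varies by distribution.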

“No space left on device” During Build

Apptainer uses /tmp as its working directory during builds. If /tmp is on a small partition or is a tmpfs with limited space, large builds will fail. Set the APPTAINER_TMPDIR environment variable to a location with more space.

export APPTAINER_TMPDIR=/scratch/tmp
mkdir -p $APPTAINER_TMPDIR
apptainer build --fakeroot myapp.sif myapp.def
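Before launching a large build, it can help to confirm how much space the chosen tmpdir actually has. A sketch (the avail_kb helper is our own, and the 10 GB threshold is an arbitrary example):

```shell
# avail_kb DIR — print the available space on DIR's filesystem, in KiB
avail_kb() {
    df -Pk "$1" | awk 'NR==2 {print $4}'
}

tmpdir="${APPTAINER_TMPDIR:-/tmp}"
free=$(avail_kb "$tmpdir")
if [ "$free" -lt $((10 * 1024 * 1024)) ]; then
    echo "warning: only $((free / 1024)) MB free in $tmpdir"
else
    echo "$tmpdir has $((free / 1024)) MB free"
fi
```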

Cache Filling Up Disk

Apptainer caches pulled images and OCI blobs under ~/.apptainer/cache. Over time this can consume significant space. Clear it periodically.

apptainer cache clean

To relocate the cache directory, set APPTAINER_CACHEDIR in your shell profile.

export APPTAINER_CACHEDIR=/scratch/$USER/.apptainer_cache

Permission Denied on Bind Mounts

If you get permission errors when accessing bound directories inside the container, check that the target path exists inside the container. If it does not, Apptainer will try to create it, but this fails on read-only (SIF) images. Create the directory in your definition file’s %post section, or use the --writable-tmpfs flag to add a temporary writable overlay.

apptainer exec --writable-tmpfs --bind /scratch/data:/data myapp.sif ls /data

GPU Not Detected Inside Container

If nvidia-smi fails inside the container, confirm these points:

  • The --nv flag is included in your apptainer command.
  • The NVIDIA driver is loaded on the host (lsmod | grep nvidia should show modules).
  • The container’s CUDA version is compatible with the host driver version. Check NVIDIA’s CUDA compatibility matrix to verify.
  • On some systems, the NVIDIA device files are not in the default locations. Check if /dev/nvidia* devices exist on the host.
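The host-side checks above can be bundled into a small sketch. The modules file and device glob are parameterized (gpu_visible is our own helper name) so the logic can be exercised even on a machine without a GPU.

```shell
# gpu_visible MODULES_FILE DEVICE_GLOB — succeed if an nvidia kernel module
# is listed and at least one matching device node exists.
# $2 is deliberately unquoted inside the function so the glob expands.
gpu_visible() {
    grep -qw nvidia "$1" && ls $2 >/dev/null 2>&1
}

if gpu_visible /proc/modules '/dev/nvidia*'; then
    echo "host GPU stack looks healthy; re-check that --nv was passed"
else
    echo "driver module or device nodes missing; fix the host first"
fi
```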

MPI Jobs Hang or Fail at Startup

MPI issues are almost always caused by version mismatches between the host and container MPI libraries, or missing InfiniBand/fabric drivers inside the container. Start debugging by running a simple MPI hello-world program. If it works, the issue is in your application. If it hangs, check the following:

  • Verify the MPI versions match (run mpirun --version on the host and inside the container).
  • Bind-mount the host’s InfiniBand libraries if the container does not include them.
  • Set APPTAINER_BIND=/etc/libibverbs.d if your cluster uses InfiniBand verb configuration files.
  • Check that PMI or PMIx is correctly configured if you are using Slurm’s srun.

Networking Issues When Pulling Images

If pulls from Docker Hub or other registries fail, check your proxy settings. Apptainer respects the standard HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables.

export HTTPS_PROXY=http://proxy.example.com:3128
apptainer pull docker://ubuntu:24.04

For certificate errors, you may need to add your organization’s CA certificate to the system trust store, or set APPTAINER_DOCKER_INSECURE=true as a temporary workaround (not recommended for production).

Conclusion

Apptainer provides a secure, portable, and HPC-native container runtime that fits naturally into multi-user cluster environments. Whether you install from EPEL on Rocky Linux 10 or AlmaLinux 10, use the PPA on Ubuntu 24.04, or build from source to track the latest release, the workflow is the same: pull or build a SIF image, then run it with the same user privileges you already have. Add --nv for GPU workloads, integrate with your site’s MPI, and you have a reproducible, portable software stack that works across clusters without modification.
