Mount S3 Buckets as File Systems with Amazon S3 Files on EC2

AWS finally shipped a proper way to mount S3 buckets as file systems. Amazon S3 Files, which went GA on April 7, 2026, lets you mount any S3 bucket as an NFS share on EC2 instances. It’s built on EFS technology under the hood, so you get sub-millisecond latency for active data and full two-way sync between the mounted file system and the underlying S3 bucket. No more s3fs-fuse workarounds or staging data to local disk before processing. For a full overview of the service including pricing and architecture, see our Amazon S3 Files complete guide.

Tested April 2026 on Amazon Linux 2023 with AWS CLI 2.34.26, amazon-efs-utils 3.0.0, S3 Files GA

The service creates a high-performance cache layer on top of your S3 bucket. Writes land in the cache first with sub-millisecond latency, then sync back to S3 within about 60 seconds. Reads of existing S3 objects stream directly from S3 at native GET throughput. This guide walks through creating an S3 file system, mounting it on EC2, testing performance, and handling the IAM setup that trips most people up. If you need the AWS CLI installed first, handle that before continuing.

Prerequisites

Before starting, confirm the following are in place:

  • AWS account with IAM permissions to create S3 buckets, IAM roles, and S3 file systems
  • EC2 instance running Amazon Linux 2023 (Ubuntu/Debian also supported) in a VPC
  • AWS CLI v2.34 or newer. Older versions lack the aws s3files subcommand entirely
  • S3 bucket with versioning enabled and server-side encryption (SSE-S3 or SSE-KMS). Both are mandatory for S3 Files
  • Security group allowing TCP port 2049 (NFS) between the EC2 instance and the mount target
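
The CLI version floor is easy to verify before you start. Here is a small preflight sketch; the `version_ge` helper is our own (it relies on GNU `sort -V`), and only the 2.34 floor comes from the list above:

```shell
# version_ge A B: succeeds if dotted version A is >= B (uses GNU sort -V)
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}

# Example preflight (run on the instance):
# cli_version="$(aws --version 2>&1 | sed 's|^aws-cli/||; s| .*||')"
# version_ge "$cli_version" 2.34 \
#   || echo "AWS CLI $cli_version is too old for aws s3files" >&2
```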

Step 1: Create an S3 Bucket with Required Settings

S3 Files requires both versioning and server-side encryption on the bucket. You cannot create a file system against a bucket that has either one disabled. Create the bucket and enable both:

aws s3 mb s3://my-s3files-bucket --region us-east-1

Enable versioning:

aws s3api put-bucket-versioning --bucket my-s3files-bucket --versioning-configuration Status=Enabled

Then configure default encryption with AES-256 and bucket keys:

aws s3api put-bucket-encryption --bucket my-s3files-bucket --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"},"BucketKeyEnabled":true}]}'

Verify versioning is active:

aws s3api get-bucket-versioning --bucket my-s3files-bucket

The output should confirm Enabled:

{
    "Status": "Enabled"
}
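
Since both settings are hard requirements, it is worth failing fast if either is missing. A small sketch; the `check_enabled` helper is our own wrapper, not part of the AWS CLI:

```shell
# check_enabled LABEL PATTERN: reads JSON on stdin and passes only if
# PATTERN appears, so a missing bucket setting fails loudly before Step 5.
check_enabled() {
  if grep -q "$2"; then
    echo "$1: OK"
  else
    echo "$1: not configured" >&2
    return 1
  fi
}

# On the instance, pipe the real responses through it:
# aws s3api get-bucket-versioning --bucket my-s3files-bucket \
#   | check_enabled versioning '"Status": "Enabled"'
# aws s3api get-bucket-encryption --bucket my-s3files-bucket \
#   | check_enabled encryption '"SSEAlgorithm"'
```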

Step 2: Create IAM Roles

S3 Files needs two IAM roles. The first is an access role that the S3 Files service assumes to interact with your bucket. The second is the EC2 instance role that lets your instance create and mount file systems. Getting the permissions wrong here is where most people get stuck.

S3 Files Access Role

This role is assumed by elasticfilesystem.amazonaws.com. Create the trust policy first:

echo '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "elasticfilesystem.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}' | tee /tmp/s3files-trust-policy.json

Create the role:

aws iam create-role --role-name S3FilesAccessRole --assume-role-policy-document file:///tmp/s3files-trust-policy.json

Now attach the permission policy. The role needs S3 access to the bucket plus EventBridge permissions for change detection:

echo '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:ListBucketVersions",
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:GetBucketNotification",
        "s3:PutBucketNotification"
      ],
      "Resource": [
        "arn:aws:s3:::my-s3files-bucket",
        "arn:aws:s3:::my-s3files-bucket/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": "events:*",
      "Resource": "*"
    }
  ]
}' | tee /tmp/s3files-access-policy.json

Attach it as an inline policy:

aws iam put-role-policy --role-name S3FilesAccessRole --policy-name S3FilesBucketAccess --policy-document file:///tmp/s3files-access-policy.json

EC2 Instance Role

The instance role needs the managed AmazonS3FilesClientFullAccess policy plus permissions to pass the access role and manage network interfaces for mount targets. If your EC2 instance already has an instance profile, add these permissions to the existing role.

Attach the managed policy:

aws iam attach-role-policy --role-name MyEC2Role --policy-arn arn:aws:iam::aws:policy/AmazonS3FilesClientFullAccess

Create the inline policy for PassRole and EC2 network permissions. Replace the role ARN on the PassRole resource line with your S3FilesAccessRole ARN (change the account ID 123456789012 to yours):

echo '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:aws:iam::123456789012:role/S3FilesAccessRole"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3files:*",
        "elasticfilesystem:*"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeSubnets",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeNetworkInterfaces",
        "ec2:CreateNetworkInterface",
        "ec2:DeleteNetworkInterface"
      ],
      "Resource": "*"
    }
  ]
}' | tee /tmp/ec2-s3files-policy.json

Apply it to your EC2 role:

aws iam put-role-policy --role-name MyEC2Role --policy-name S3FilesEC2Access --policy-document file:///tmp/ec2-s3files-policy.json

Step 3: Configure Security Groups

Mount targets create elastic network interfaces in your VPC. The security group attached to the mount target needs to accept NFS traffic (TCP 2049) from your EC2 instances. If your EC2 instance and mount target share the same security group, authorize ingress from itself:

aws ec2 authorize-security-group-ingress --group-id sg-example --protocol tcp --port 2049 --source-group sg-example

If they use different security groups, replace --source-group with the EC2 instance’s security group ID. The mount will hang silently if port 2049 is blocked, so double-check this before proceeding.

Step 4: Install amazon-efs-utils

The mount.s3files helper is bundled with amazon-efs-utils. On Amazon Linux 2023, it’s available directly from the default repos:

sudo yum install -y amazon-efs-utils

On Ubuntu or Debian, use the AWS installer script:

curl https://amazon-efs-utils.aws.com/efs-utils-installer.sh | sudo sh -s -- --install

Confirm the mount helper is available:

mount.s3files --version

You should see version 3.0.0 or later:

3.0.0

Step 5: Create the S3 File System

This is where S3 Files maps a file system onto your bucket. The file system ID you get back is what you’ll use for mounting:

aws s3files create-file-system \
  --region us-east-1 \
  --bucket arn:aws:s3:::my-s3files-bucket \
  --role-arn arn:aws:iam::123456789012:role/S3FilesAccessRole

The response includes the file system ID and initial status:

{
    "fileSystemId": "fs-example",
    "bucket": "arn:aws:s3:::my-s3files-bucket",
    "status": "creating",
    "roleArn": "arn:aws:iam::123456789012:role/S3FilesAccessRole"
}

The file system takes 2 to 5 minutes to become available. Poll the status:

aws s3files get-file-system --file-system-id fs-example --region us-east-1 --query "status" --output text

Wait until the output shows available before continuing to the next step.
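
Rather than polling by hand, you can wrap the status check in a small wait loop. This is a sketch: `wait_for_status` is our own helper, and the 5-second interval and 60-attempt cap (about 5 minutes) are arbitrary choices:

```shell
# wait_for_status WANT CMD...: runs CMD until its output equals WANT,
# retrying every 5 seconds, up to 60 attempts.
wait_for_status() {
  want="$1"; shift
  tries=0
  while [ "$tries" -lt 60 ]; do
    status="$("$@" 2>/dev/null)"
    if [ "$status" = "$want" ]; then
      echo "reached $want"
      return 0
    fi
    tries=$((tries + 1))
    sleep 5
  done
  echo "timed out waiting for $want" >&2
  return 1
}

# Usage:
# wait_for_status available aws s3files get-file-system \
#   --file-system-id fs-example --region us-east-1 \
#   --query status --output text
```

The same loop works for the mount target in Step 6 by swapping in the list-mount-targets status query.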

Step 6: Create the Mount Target

A mount target is a network endpoint in your VPC. It must be in the same Availability Zone as your EC2 instance. Replace the file system ID, subnet ID, and security group ID with your own values. You can only create one mount target per AZ per file system.

aws s3files create-mount-target \
  --file-system-id fs-example \
  --subnet-id subnet-example \
  --security-groups sg-example \
  --region us-east-1

The output includes the mount target ID and its private IP:

{
    "mountTargetId": "mt-0a1b2c3d4e5f67890",
    "fileSystemId": "fs-example",
    "subnetId": "subnet-example",
    "ipv4Address": "10.0.1.142",
    "status": "creating"
}

This takes roughly 5 minutes. The mount target creates an ENI in your subnet, which is why the EC2 role needs ec2:CreateNetworkInterface permission. Wait for the status to reach available before mounting. Poll the status with:

aws s3files list-mount-targets --file-system-id fs-example --region us-east-1 --query "mountTargets[0].status" --output text

Keep checking until it returns available.

Step 7: Mount the File System

Create the mount point and mount using the S3 Files type:

sudo mkdir -p /mnt/s3files

Mount the file system. Replace fs-example with your file system ID from Step 5:

sudo mount -t s3files fs-example:/ /mnt/s3files

Verify with df:

df -h /mnt/s3files

The output shows the virtual 8 exabyte capacity:

Filesystem     Size  Used Avail Use% Mounted on
127.0.0.1:/    8.0E     0  8.0E   0% /mnt/s3files

That 8.0E is just the virtual size. Your actual storage lives in S3 and you pay S3 rates for it. The 127.0.0.1 source is expected because the NFS client connects through the local stunnel TLS proxy.

Check the mount details with findmnt:

findmnt -T /mnt/s3files

This confirms NFS 4.2 with TLS encryption, 1 MB read/write buffers, and hard mount semantics:

TARGET       SOURCE      FSTYPE OPTIONS
/mnt/s3files 127.0.0.1:/ nfs4   rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,hard,proto=tcp,port=20563,timeo=600

Step 8: Test Read and Write Operations

First, upload a test file to S3 directly so we can verify the two-way sync:

echo "Hello from S3!" | aws s3 cp - s3://my-s3files-bucket/test-from-s3.txt

Read it from the mounted file system:

cat /mnt/s3files/test-from-s3.txt

The file content from S3 appears on the mount:

Hello from S3!

Now write from EC2 through the mount:

echo "Written from EC2 via S3 Files" | sudo tee /mnt/s3files/hello-from-ec2.txt

Create directories and nested files just like a local file system:

sudo mkdir -p /mnt/s3files/app-data
echo "config_key=test_value" | sudo tee /mnt/s3files/app-data/config.txt

The sync to S3 takes about 60 to 70 seconds. After waiting, verify the files show up in S3:

aws s3 ls s3://my-s3files-bucket/ --recursive

All files written from the mount point appear in the bucket:

2026-04-07 14:23:01         15 test-from-s3.txt
2026-04-07 14:25:33         31 hello-from-ec2.txt
2026-04-07 14:26:12         22 app-data/config.txt

Directories map to S3 prefixes. The app-data/ directory becomes a prefix in the bucket, which is exactly how S3 organizes objects.
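
The mapping is mechanical: strip the mount point and what remains is the object key. A trivial sketch, assuming the /mnt/s3files mount point used throughout this guide:

```shell
# mount_path_to_key PATH: converts an absolute path under the mount
# point into the S3 object key it syncs to.
mount_path_to_key() {
  printf '%s\n' "${1#/mnt/s3files/}"
}
```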

Step 9: Persistent Mount via fstab

To survive reboots, add the file system to /etc/fstab. Replace the file system ID below with your own (from the create-file-system output). Two flags matter here: _netdev tells the system to wait for networking before mounting, and nofail prevents a boot failure if the mount target is unreachable.

FS_ID="your-file-system-id"
echo "$FS_ID:/ /mnt/s3files s3files _netdev,nofail 0 0" | sudo tee -a /etc/fstab

Test the fstab entry by unmounting and remounting:

sudo umount /mnt/s3files && sudo mount -a && df -h /mnt/s3files

You should see the same 8.0E mount restored from fstab. If it fails, check your file system ID and confirm the mount target is still in the available state.

Performance Observations

We ran basic throughput tests using dd and timed file operations. The numbers below are from an m5.xlarge instance in us-east-1:

Test                         Result
100 MB sequential write      341 MB/s
100 MB first read (cold)     289 MB/s
100 MB cached read           4.7 GB/s
10 MB write                  104 MB/s
Sync to S3 bucket            ~65 seconds
5 concurrent file writes     0.041 seconds
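
If you want to reproduce a rough version of the write numbers, the tests were simple dd runs. A sketch; the `bench_write` helper, scratch filename, and sizes are our own choices, and conv=fsync makes dd wait for the data to reach the cache layer before reporting throughput:

```shell
# bench_write DIR MB: writes MB megabytes of zeros with dd, prints
# dd's throughput summary line, then removes the scratch file.
bench_write() {
  dir="$1"; mb="$2"
  dd if=/dev/zero of="$dir/bench.dat" bs=1M count="$mb" conv=fsync 2>&1 | tail -n 1
  rm -f "$dir/bench.dat"
}

# On the mount:
# bench_write /mnt/s3files 100
```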

The cached read speed stands out. Files that were recently written or read serve from the local cache layer at memory-like speeds. Cold reads of files over 1 MB that already exist in S3 stream directly from S3 at native GET throughput, which means no S3 Files storage charges for those reads. Smaller files and recently written data always serve from the high-performance cache.

The ~65 second sync delay is the main thing to plan around. If your application writes a file and another service needs to read it from S3 immediately, you’ll need to account for that lag. For workloads where the file system is the primary interface (not the S3 API), this is invisible.
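
One way to handle the lag is to block until the object is visible through the S3 API. A sketch: the `wait_for_object` helper and the 120-second ceiling are our own choices, sized to comfortably cover the ~65-second sync window:

```shell
# wait_for_object TIMEOUT CMD...: retries CMD every 5 seconds until it
# succeeds or TIMEOUT seconds pass; prints the elapsed time on success.
wait_for_object() {
  timeout="$1"; shift
  start="$(date +%s)"
  while :; do
    if "$@" >/dev/null 2>&1; then
      echo "visible after $(( $(date +%s) - start ))s"
      return 0
    fi
    if [ "$(( $(date +%s) - start ))" -ge "$timeout" ]; then
      echo "timed out after ${timeout}s" >&2
      return 1
    fi
    sleep 5
  done
}

# Usage, after writing through the mount:
# wait_for_object 120 aws s3api head-object \
#   --bucket my-s3files-bucket --key hello-from-ec2.txt
```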

Troubleshooting

Error: “is not authorized to perform: s3:GetBucketNotification”

This shows up during create-file-system when the S3FilesAccessRole is missing the bucket notification permissions. S3 Files sets up event notifications on the bucket to detect changes made through the S3 API. Add both s3:GetBucketNotification and s3:PutBucketNotification to the role’s policy on the bucket ARN. These are frequently left out of custom IAM policies because they’re not standard S3 data permissions.

Error: “is not authorized to perform: events:ListRules”

The S3FilesAccessRole needs broad EventBridge permissions. S3 Files uses EventBridge rules to trigger synchronization when objects change in the bucket. The simplest fix is granting events:* on * to the access role. If your security team requires scoped permissions, you’ll need at minimum events:ListRules, events:PutRule, events:PutTargets, events:DeleteRule, and events:RemoveTargets.

Error: “User is not authorized to perform that action” on CreateMountTarget

Mount targets create elastic network interfaces in your VPC. The EC2 instance role (or whichever principal is calling the API) needs ec2:CreateNetworkInterface, ec2:DescribeSubnets, and ec2:DescribeSecurityGroups. These are VPC-level permissions that aren’t covered by the AmazonS3FilesClientFullAccess managed policy.

Clean Up

To avoid ongoing charges, tear down the resources in reverse order. Unmount first, then delete the mount target, then the file system. The S3 bucket remains untouched.

sudo umount /mnt/s3files

Delete the mount target:

aws s3files delete-mount-target --mount-target-id mt-0a1b2c3d4e5f67890 --region us-east-1

Wait for the mount target deletion to complete (a few minutes), then delete the file system:

aws s3files delete-file-system --file-system-id fs-example --region us-east-1

Clean up the IAM roles if they’re no longer needed:

aws iam delete-role-policy --role-name S3FilesAccessRole --policy-name S3FilesBucketAccess
aws iam delete-role --role-name S3FilesAccessRole

Remove the fstab entry to prevent mount errors on next boot:

sudo sed -i '/s3files/d' /etc/fstab

S3 Files changes the equation for workloads that need file system semantics on top of S3 storage. Because it speaks NFS 4.2, any application that works with EFS works here, but your data lives in a standard S3 bucket accessible through both the file system and the S3 API. For a comparison with traditional EFS mounts, see the EFS mount guide. If you need a high-performance parallel file system instead, Amazon FSx for Lustre remains the better fit for HPC workloads. For containerized setups using S3 Files mounts, the Docker Compose guide covers the fundamentals.
