
Amazon S3 Files vs Mountpoint for S3 vs s3fs-fuse: Which S3 Mount Solution to Use

There are now three distinct ways to mount an S3 bucket as a file system on Linux: S3 Files (AWS’s new managed NFS layer), Mountpoint for S3 (AWS’s FUSE client), and s3fs-fuse (the community FUSE tool that’s been around for years). Each one makes fundamentally different tradeoffs between POSIX compliance, caching, write support, and cost.

Original content from computingforgeeks.com - post 165403

We benchmarked S3 Files and Mountpoint for S3 head-to-head on the same EC2 instance against the same bucket. The results were not close. s3fs-fuse is covered from its documented characteristics instead, because it refused to install on Amazon Linux 2023, which itself tells you something about where the project stands. If you’re deciding between these three options, the data here should make the choice straightforward. For the full S3 Files walkthrough including setup and pricing, see the complete S3 Files guide. If you just need the EC2 mount steps, that’s covered separately.

Tested April 2026 on Amazon Linux 2023, t3.large EC2. S3 Files GA, Mountpoint for S3 1.22.2, benchmarked against the same S3 bucket.

Architecture Comparison

Before looking at benchmarks, it helps to understand what each solution actually is under the hood. S3 Files adds a managed NFS 4.2 layer backed by high-performance EFS storage on top of your S3 bucket. Mountpoint translates FUSE calls directly to S3 API calls with no caching layer. s3fs-fuse does the same thing but with an older codebase and optional local cache.

| Feature | S3 Files | Mountpoint for S3 | s3fs-fuse |
|---|---|---|---|
| Mount type | NFS 4.2 (managed) | FUSE | FUSE |
| Caching | EFS-backed high-perf storage | Metadata cache only | Optional local cache |
| Write support | Full (create, overwrite, append) | Sequential/append only | Full |
| Rename support | Yes | No (ENOTSUP) | Yes (copy+delete) |
| Delete support | Yes | Yes (since mid-2024) | Yes |
| POSIX permissions | Full (UID/GID/mode) | Limited | Partial |
| File locking | Advisory locks (NFS) | No | No |
| Consistency | Read-after-write | Eventual | Eventual |
| Encryption | TLS in transit, KMS at rest | TLS in transit | TLS optional |
| IAM auth | Automatic | Automatic | Manual config |
| Max throughput | TB/s aggregate | GB/s | ~100 MB/s |
| AWS managed | Yes | Yes (client only) | No (community) |
| Cost | Storage + access charges | Free (S3 costs only) | Free (S3 costs only) |

The biggest architectural difference is caching. S3 Files keeps a hot copy of recently accessed data on EFS-backed storage, which means repeated reads skip S3 entirely. Mountpoint and s3fs-fuse hit S3 on every read. That single difference explains most of the benchmark gaps below.

Benchmark Setup

Both mounts pointed at the same S3 bucket in us-east-1. The EC2 instance was a t3.large running Amazon Linux 2023, placed in the same AZ as the S3 Files mount target. Mountpoint for S3 was version 1.22.2, installed from the official RPM. All tests used dd, shell loops, and standard coreutils to simulate realistic file operations.

S3 Files was mounted via NFS at `/mnt/s3files`, Mountpoint at `/mnt/s3mp`. Tests ran back-to-back on the same instance with no other workload. Caches were dropped between cold read tests using `sync && echo 3 > /proc/sys/vm/drop_caches`.
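The dd-based methodology can be reconstructed roughly as follows. This is a sketch, not the exact harness; `TARGET` and the file name are illustrative, standing in for `/mnt/s3files` or `/mnt/s3mp`:

```shell
# Rough reconstruction of the benchmark commands; TARGET stands in for
# /mnt/s3files or /mnt/s3mp (any writable directory works for a dry run)
TARGET=${TARGET:-$(mktemp -d)}

# 100MB sequential write, fsync'd so the timing includes the flush to storage
dd if=/dev/zero of="$TARGET/bigfile" bs=1M count=100 conv=fsync 2>/dev/null

# Cold read; on the real instance the page cache was dropped first with:
#   sync && echo 3 > /proc/sys/vm/drop_caches
time dd if="$TARGET/bigfile" of=/dev/null bs=1M 2>/dev/null
```

Running the same script against each mount back-to-back keeps the comparison apples-to-apples.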

Benchmark Results

These numbers are from real tests, not vendor claims. The write failures on Mountpoint were genuine I/O errors, not configuration issues.

| Test | S3 Files | Mountpoint | Notes |
|---|---|---|---|
| 100MB sequential write | 273 MB/s (0.38s) | FAILED (I/O error) | Mountpoint doesn’t support overwrite writes |
| 10MB sequential write | 91 MB/s (0.12s) | FAILED (I/O error) | Same limitation |
| 100MB read (cold) | 0.266s | 0.266s | Identical, both stream from S3 |
| 100MB read (cached) | 0.053s (1.9 GB/s) | 0.261s | S3 Files 5x faster (EFS cache) |
| 1000 small file writes | 58.9s | ALL FAILED | Mountpoint can’t create files via tee/echo |
| 1000 small file reads | 4.3s | 87.1s | S3 Files 20x faster |
| Directory listing (1000+ files) | 0.039s | 0.163s | S3 Files 4x faster |
| File rename | Works (instant) | Not supported | Mountpoint returns ENOTSUP |
| Resource usage (RSS) | ~45 MB | ~31 MB | Mountpoint slightly lighter |

Write Performance

Mountpoint failed every write test. Not because of misconfiguration, but by design. It only supports sequential writes through the open-write-close pattern for new file creation. Any attempt to overwrite an existing file, write via tee or shell redirection, or perform random writes results in an I/O error. S3 Files handled all write patterns without issue, hitting 273 MB/s on the 100MB sequential test.

This is the single biggest differentiator. If your workload writes files (and most do), Mountpoint is not an option.
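A sketch of the write patterns behind those results. The file names and temp-dir default are ours; on a regular local filesystem all three operations succeed, which is exactly the behavior gap:

```shell
# Illustrative write patterns (swap the temp dir for /mnt/s3mp to see
# the Mountpoint behavior described above)
MNT=${MNT:-$(mktemp -d)}

# Works on Mountpoint: one sequential open-write-close stream to a NEW key
dd if=/dev/zero of="$MNT/fresh-object" bs=1M count=10 2>/dev/null && echo "create: ok"

# Failed in our tests with an I/O error: rewriting an existing key in place
dd if=/dev/zero of="$MNT/fresh-object" bs=1M count=1 conv=notrunc 2>/dev/null \
  && echo "overwrite: ok" || echo "overwrite: failed"

# Also failed: plain shell redirection onto an existing key
( echo "data" > "$MNT/fresh-object" ) 2>/dev/null \
  && echo "redirect: ok" || echo "redirect: failed"
```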

Read Performance: Cold vs Cached

Cold reads were identical at 0.266 seconds for a 100MB file. Both solutions streamed directly from S3 at that point, so the numbers should match. The gap appears on the second read. S3 Files served the cached copy at 1.9 GB/s (0.053s), while Mountpoint fetched from S3 again at an essentially unchanged 0.261s. That 5x improvement comes entirely from the EFS-backed cache layer.

For workloads that repeatedly access the same files (config files, libraries, templates, ML model weights), this caching behavior is transformative. For purely sequential, read-once workloads like log processing, the cache provides no benefit.

Small File Operations

The small file test exposed the largest gap: 4.3 seconds vs 87.1 seconds for reading 1000 small files. Every small file read on Mountpoint requires a separate S3 GET API call, each with its own latency overhead. S3 Files caches file metadata and content, so subsequent reads are served locally. The 20x difference makes Mountpoint impractical for any workload with thousands of small files, which describes most application deployments.

Directory listing followed the same pattern. S3 Files returned 1000+ entries in 39 milliseconds. Mountpoint took 163 milliseconds because it had to translate the listing into S3 ListObjectsV2 calls.
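The small-file loop looks roughly like this. `DIR` and the payload format are illustrative, not the exact harness:

```shell
# Small-file loop, roughly as run in the benchmark; point DIR at
# /mnt/s3files or /mnt/s3mp to reproduce
DIR=${DIR:-$(mktemp -d)}

for i in $(seq 1 1000); do echo "payload-$i" > "$DIR/f$i"; done

# On Mountpoint each read below became its own S3 GET - the 87.1s result
time for i in $(seq 1 1000); do cat "$DIR/f$i" > /dev/null; done

# Listing timing (translated to ListObjectsV2 calls on Mountpoint)
time ls "$DIR" > /dev/null
```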

Where s3fs-fuse Fits

We attempted to install s3fs-fuse on Amazon Linux 2023 and it failed due to missing dependencies in the AL2023 repos. The project’s build process assumes older library versions. This is a meaningful data point: if the tool doesn’t build cleanly on a current AWS AMI, maintenance is falling behind.

Based on documented characteristics and years of production use across the industry, s3fs-fuse supports full writes and renames (implemented as copy-plus-delete on S3). It has no built-in high-performance caching layer, so every operation hits S3 directly. Latency runs 10 to 100 milliseconds per operation depending on file size and S3 region proximity. Throughput tops out around 100 MB/s in practice.

The rename implementation deserves a callout. When you mv a 1GB file on s3fs, it copies the entire 1GB to a new key and deletes the old one. On S3 Files, a rename is a metadata operation that completes instantly. On Mountpoint, rename isn’t supported at all.

s3fs-fuse is still the only option that runs on non-AWS infrastructure. If you need to mount S3 (or S3-compatible storage like MinIO) from an on-premises server, it’s your only FUSE choice. Just don’t expect the performance characteristics of either AWS-managed solution.

When to Use Each Solution

Choose S3 Files When

  • Full read/write POSIX access is required (applications that create, modify, rename, and delete files)
  • Latency-sensitive workloads with hot working sets that benefit from caching
  • Applications that need file locking, such as SQLite databases or lock-file-based coordination
  • Shared access from multiple EC2 instances, Lambda functions, or EKS pods simultaneously
  • You need read-after-write consistency guarantees at the file system level

The tradeoff is cost. S3 Files charges for high-performance cache storage and data access on top of standard S3 rates. For workloads that justify the performance, the cost is reasonable. For cold storage you rarely touch, it’s overkill. See the S3 Files pricing breakdown for exact numbers.

Choose Mountpoint for S3 When

  • Read-only or append-only workloads where you never overwrite existing files
  • Cost is the primary concern and you want zero additional charges beyond S3 storage and requests
  • Simple data ingestion pipelines that write new files sequentially and never modify them
  • You need the lightest possible footprint (31 MB RSS vs 45 MB for the S3 Files NFS client)
  • Analytics workloads that scan large files once without needing repeat access

Mountpoint is a solid tool when used within its constraints. The problem is that most applications expect a writable file system, and Mountpoint’s write support is limited enough that common operations like echo "data" > file fail. Test your exact workload before committing.
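One way to do that testing is a small pre-flight probe. This is a hypothetical helper, not part of any of these tools; run it against the actual mount before depending on it:

```shell
# Hypothetical pre-flight probe: reports which basic operations a given
# mount point actually supports
probe_mount() {
  local f="$1/probe.$$"
  ( echo one > "$f" )    2>/dev/null && echo "create:  ok" || echo "create:  FAILED"
  ( echo two >> "$f" )   2>/dev/null && echo "append:  ok" || echo "append:  FAILED"
  ( echo three > "$f" )  2>/dev/null && echo "rewrite: ok" || echo "rewrite: FAILED"
  mv "$f" "$f.renamed"   2>/dev/null && echo "rename:  ok" || echo "rename:  FAILED"
  rm -f "$f" "$f.renamed"
}

probe_mount "${PROBE_DIR:-/tmp}"   # e.g. probe_mount /mnt/s3mp
```

On a Mountpoint mount you would expect the rewrite and rename checks to fail; on S3 Files or a local disk, all four pass.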

Choose s3fs-fuse When

  • You need writes and renames on a platform that doesn’t support S3 Files (on-premises, non-AWS clouds)
  • Mounting S3-compatible storage like MinIO, Ceph, or Wasabi
  • Quick development and testing environments where performance is not critical
  • Legacy systems that can’t run newer tooling

Avoid s3fs-fuse for production workloads on AWS. Both S3 Files and Mountpoint are better maintained, better integrated with IAM, and significantly faster. The Mountpoint GitHub repo gets regular updates, and S3 Files is a fully managed service backed by AWS support.

Cost Comparison

Cost is where the decision gets nuanced. Mountpoint and s3fs-fuse add zero cost on top of S3. S3 Files adds two cost dimensions: high-performance cache storage (per-GB for active data) and data access charges (per-GB for reads and writes through the cache). The exact rates depend on your region and usage patterns.

| Cost Component | S3 Files | Mountpoint | s3fs-fuse |
|---|---|---|---|
| Software | Free | Free | Free |
| S3 storage | Standard rates | Standard rates | Standard rates |
| S3 requests | Standard rates for large reads | Standard rates | Standard rates |
| High-perf cache | Per-GB (active data only) | N/A | N/A |
| Data access | Per-GB read/write to cache | N/A | N/A |
| Infrastructure | Mount target (managed) | None | None |

For read-heavy workloads with a hot working set (say 50GB of frequently accessed files out of a 10TB bucket), S3 Files cache charges apply only to that 50GB. You’re not paying cache costs for the full 10TB. For write-heavy workloads that create new files and never read them again, S3 Files adds cost without proportional benefit. Mountpoint would be cheaper if the write pattern fits its constraints.

One cost factor people miss: s3fs-fuse generates significantly more S3 API requests than either AWS solution. Every metadata check, every ls, every stat call becomes an API request. On buckets with millions of objects, the S3 request charges from s3fs can exceed what S3 Files would have cost.
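A back-of-the-envelope sketch of that effect, using assumed us-east-1 S3 Standard request rates (roughly $0.0004 per 1,000 GET/HEAD and $0.005 per 1,000 LIST; verify against current pricing):

```shell
# Rough request cost for one full metadata scan through s3fs.
# ASSUMED us-east-1 S3 Standard rates (check current pricing):
#   GET/HEAD $0.0004 per 1,000 requests, LIST $0.005 per 1,000
objects=1000000
awk -v n="$objects" 'BEGIN {
  head = n / 1000 * 0.0004           # roughly one HEAD per stat()
  list = (n / 1000) / 1000 * 0.005   # LIST pages return up to 1,000 keys
  printf "HEAD: $%.2f  LIST: $%.3f  per scan: $%.3f\n", head, list, head + list
}'
```

At these assumed rates a single scan is cheap, but a monitoring job that re-stats a million-object bucket every hour adds up to hundreds of dollars a month in HEAD requests alone.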

Practical Considerations

Consistency Model

S3 Files provides read-after-write consistency through its NFS layer. Write a file, read it back immediately, and you get the latest version. Mountpoint and s3fs-fuse rely on S3’s eventual consistency for listing operations (though S3 itself now offers strong read-after-write consistency for individual objects). The practical impact: if you write a file via S3 Files and immediately ls the directory, the file appears. On Mountpoint, there can be a brief delay before the listing reflects the new object.

Multi-Client Access

S3 Files supports concurrent access from multiple EC2 instances, Lambda functions, and EKS pods through a shared NFS mount target. This works like any NFS share, with advisory locking for coordination. Mountpoint mounts are per-instance with no cross-instance coordination. If two instances write to the same key through Mountpoint, last-write-wins with no conflict detection.

For shared file systems across a fleet, S3 Files is the clear choice. If you’ve been using EFS or FSx for shared access to S3 data, S3 Files gives you that capability without a separate file system service.

IAM and Security

Both AWS solutions (S3 Files and Mountpoint) use IAM roles automatically. Attach a role to your EC2 instance and the mount inherits those permissions. No access keys to manage or rotate. s3fs-fuse requires manual credential configuration, either through environment variables, a credentials file, or an IAM role (with extra setup). Managing credentials with the AWS CLI configured on the instance simplifies this, but it’s still more moving parts.
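For reference, the manual s3fs setup looks something like this (`passwd_file` and `iam_role` are documented s3fs-fuse mount options; the bucket name, mount path, and keys are placeholders):

```shell
# Manual s3fs credential setup; replace the placeholder keys, or better,
# prefer the instance-role option on EC2
echo "YOUR_ACCESS_KEY_ID:YOUR_SECRET_ACCESS_KEY" > "$HOME/.passwd-s3fs"
chmod 600 "$HOME/.passwd-s3fs"
s3fs your-bucket /mnt/s3fs -o passwd_file="$HOME/.passwd-s3fs"

# On EC2, skip static keys entirely and use the attached instance role:
s3fs your-bucket /mnt/s3fs -o iam_role=auto
```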

S3 Files adds TLS encryption in transit (NFS over TLS) and supports KMS encryption at rest. Mountpoint uses TLS for S3 API calls. s3fs-fuse supports TLS but doesn’t enforce it by default.

Quick Decision Matrix

If you want the short version:

| Your Requirement | Best Option | Why |
|---|---|---|
| Read/write files normally | S3 Files | Only option with full POSIX write support and caching |
| Read-only access, lowest cost | Mountpoint | Zero additional cost, solid read performance |
| Non-AWS infrastructure | s3fs-fuse | Only option that runs outside AWS |
| Shared access across instances | S3 Files | NFS mount target with advisory locking |
| ML training data pipelines | S3 Files (hot data) or Mountpoint (cold scans) | Depends on access pattern |
| Log ingestion (write-only) | Mountpoint | Sequential writes work, no cache cost needed |
| Application server (WordPress, etc.) | S3 Files | Apps need renames, overwrites, and locks |
| S3-compatible storage (MinIO) | s3fs-fuse | Only FUSE tool that supports non-AWS S3 |

Migration Path

If you’re currently using s3fs-fuse on EC2, migrating to either AWS solution is straightforward. Both mount to a local path and present files the same way to applications. The main work is updating your mount commands and IAM configuration.

Moving from Mountpoint to S3 Files requires enabling the S3 Files feature on your bucket and creating a mount target in your VPC. The bucket data stays in place. You can run both mounts simultaneously during migration to validate behavior before cutting over. For comparison, the Mountpoint mount is a single command:

mount-s3 your-bucket /mnt/s3mp --region us-east-1

Compare that to the S3 Files NFS mount:

sudo mount -t nfs4 -o nfsvers=4.2,rsize=1048576,wsize=1048576,timeo=600 mount-target-ip:/ /mnt/s3files

Both commands give you a directory of your S3 objects. The difference is everything that happens after: write support, caching, consistency, and performance under load. For the step-by-step EC2 setup, see the S3 Files EC2 mount guide.
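To persist the S3 Files mount across reboots, a standard NFS fstab entry works. This is an illustrative sketch: substitute your mount target address, and check the Mountpoint documentation for its own boot-time mounting options:

```shell
# Illustrative /etc/fstab entry for the S3 Files NFS mount; _netdev
# delays mounting until the network is up
# mount-target-ip:/  /mnt/s3files  nfs4  nfsvers=4.2,rsize=1048576,wsize=1048576,timeo=600,_netdev  0 0
```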

The Bottom Line

S3 Files is the only solution that behaves like an actual file system. Mountpoint is a thin translation layer that works well for constrained read-heavy workloads. s3fs-fuse fills a niche for non-AWS environments but shouldn’t be your first choice on EC2.

The benchmark data makes the performance case clear: 5x faster cached reads, 20x faster small file operations, and functional write support versus complete write failures. The cost question depends on your workload. For hot data with frequent access, S3 Files caching pays for itself in reduced S3 API calls alone. For cold data you scan once, Mountpoint’s zero overhead is the right tradeoff.

Start with your access pattern. If you write files, the choice is already made.
