
Install and Use sshfs on Rocky 10 / Debian 13 / Ubuntu 24.04

sshfs mounts a directory from a remote server as if it were a local filesystem, tunnelled entirely over SSH. No extra daemon, no extra port, no NFS or Samba share. If you can SSH into the box, you can mount its filesystem. For ad-hoc work where spinning up a full file-sharing stack would be overkill, sshfs is the right tool, and it’s a nice middle ground between one-shot rsync and a permanent NFS mount.

Original content from computingforgeeks.com - post 1550

The FUSE-based sshfs binary is packaged for every major distro. This guide installs it on Rocky Linux 10, Debian 13, and Ubuntu 24.04, walks through the first mount, shows the resilient flags you’ll want in real life, and finishes with an /etc/fstab entry that mounts on boot.

Tested April 2026 on Rocky Linux 10.1 (sshfs 3.7.3, FUSE 3.16.2) and Debian 13 trixie (sshfs 3.7.3, FUSE 3.17.2)

Step 1: Install sshfs

On Debian 13 and Ubuntu 24.04 the package is simply sshfs and lives in the default repositories:

sudo apt update
sudo apt install -y sshfs

On Rocky Linux 10, AlmaLinux 10, and RHEL 10, the sshfs package lives in EPEL as fuse-sshfs. Enable EPEL first if you haven’t already:

sudo dnf install -y epel-release
sudo /usr/bin/crb enable
sudo dnf install -y fuse-sshfs

Verify the install on either family with:

sshfs --version

One command reports three versions — the fusermount3 helper, sshfs itself, and the FUSE library — along with the FUSE kernel interface version:

fusermount3 version: 3.16.2
SSHFS version 3.7.3
FUSE library version 3.16.2
using FUSE kernel interface version 7.38

No kernel module needs to be loaded manually. FUSE is built into every modern Linux kernel, and the userspace fusermount3 helper takes care of the mount/unmount syscalls.
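If you want to confirm the kernel side is ready before attempting the first mount, two quick checks cover it (both paths are standard on any modern distro):

```shell
# fuse appears in the kernel's filesystem list when support is built in or loaded
grep -w fuse /proc/filesystems

# /dev/fuse is the character device that userspace FUSE filesystems talk through
ls -l /dev/fuse
```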

Step 2: Make sure the SSH side works first

sshfs is literally SSH underneath. If a plain ssh user@host from the client to the server needs a password, sshfs will too, and it won’t prompt as cleanly as the SSH client does. Set up key-based auth now so the rest of this guide is smooth (our SSH keys on Debian guide covers the full workflow if you need a refresher). From the client:

[ -f ~/.ssh/id_ed25519 ] || ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519
ssh-copy-id [email protected]
ssh [email protected] 'echo connected as $(whoami) on $(hostname)'

The last line should return something like connected as debian on cfg-debian13-lab without prompting for a password. If you still see a password prompt, re-check the remote ~/.ssh/authorized_keys file and its mode (600), and the owning directory’s mode (700).
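If the modes are wrong, a single remote command fixes both (using the example host and user from above):

```shell
# sshd silently ignores authorized_keys when these modes are too permissive
ssh [email protected] 'chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys'
```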

Step 3: Mount a remote directory

Pick a local mountpoint. Unlike NFS, sshfs works best with a directory inside your home folder, because FUSE mounts are owned by the user who ran the mount, not by root:

mkdir -p ~/remote-files
sshfs [email protected]:/tmp/remote-share ~/remote-files

No output means success. Check mount to confirm the FUSE mount is active and owned by your UID:

mount | grep sshfs

The line confirms the filesystem type as fuse.sshfs and the mount is owned by your user:

[email protected]:/tmp/remote-share on /home/rocky/remote-files type fuse.sshfs (rw,nosuid,nodev,relatime,user_id=1000,group_id=1000)
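For scripting, mountpoint -q (part of util-linux) gives a clean exit status instead of grepping mount output; a small sketch using the mountpoint from above:

```shell
# exit status 0 means the directory is an active mountpoint
if mountpoint -q ~/remote-files; then
  echo "sshfs mount is active"
else
  echo "not mounted"
fi
```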

List the contents and cat a file to make sure reads work:

ls -la ~/remote-files/
cat ~/remote-files/hello.txt

Both the directory listing and the file contents come straight from the remote host:

total 12
drwxrwxr-x. 1 rocky rocky   60 Apr 11 03:45 .
drwx------. 5 rocky rocky 4096 Apr 11 03:45 ..
-rw-rw-r--. 1 rocky rocky   20 Apr 11 03:45 hello.txt
hello-from-debian13

Writes work exactly the same way. Create a file through the mount and SSH to the server to prove it landed on disk:

echo 'hello-from-rocky10-client' > ~/remote-files/client-wrote-this.txt
ssh [email protected] 'ls -la /tmp/remote-share/; cat /tmp/remote-share/client-wrote-this.txt'

The file created through the mount is visible on the server side too:

total 8
drwxrwxr-x  2 debian debian  80 Apr 11 00:45 .
drwxrwxrwt 12 root   root   320 Apr 11 00:45 ..
-rw-rw-r--  1 debian debian  20 Apr 11 00:45 hello.txt
-rw-r--r--  1 debian debian  26 Apr 11 00:45 client-wrote-this.txt
hello-from-rocky10-client

The file is owned by the remote user (debian) on the server side even though the client user (rocky) created it. This is because sshfs is really just writing through an SSH session owned by the remote user.

Step 4: Check free space and inode usage

df on an sshfs mount reports the remote filesystem’s free space, not the local disk’s, which is what you actually want to know:

df -hT ~/remote-files

The filesystem type shows as fuse.sshfs and the size/used columns reflect the remote disk:

Filesystem                            Type        Size  Used Avail Use% Mounted on
[email protected]:/tmp/remote-share    fuse.sshfs  2.0G  547M  1.4G  28% /home/rocky/remote-files

Step 5: Unmount cleanly

Because FUSE mounts are owned by the user who made them, regular umount will fail with a permission denied error. Use fusermount3 instead (or fusermount on older FUSE installs):

fusermount3 -u ~/remote-files

If a process is still holding the mount open, the command will complain about the mount being busy. Find the culprit with lsof +f -- ~/remote-files and either let it exit or force-unmount with fusermount3 -uz.

Step 6: Options worth using in production

The defaults are fine for a quick test but real-world network conditions will drop your sshfs connection if nothing hints at keeping it alive. These options make sshfs reconnect automatically and handle brief outages gracefully:

sshfs -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3,compression=yes \
  [email protected]:/tmp/remote-share ~/remote-files

What each flag does:

  • reconnect: automatically re-establish the SSH channel if the connection drops instead of leaving a dead mount.
  • ServerAliveInterval=15: send a keepalive every 15 seconds so firewalls and NAT boxes don’t reap the connection as idle.
  • ServerAliveCountMax=3: declare the session dead after three missed keepalives.
  • compression=yes: enable SSH-level gzip. Worth it on slow links, skip it on LAN where CPU is the bottleneck.

Other useful options: allow_other to let other local users read the mount (requires user_allow_other in /etc/fuse.conf), default_permissions to enforce local permission checks, and IdentityFile=~/.ssh/id_ed25519 if you keep multiple keys and want to force a specific one.
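One way to keep the growing option list manageable is to assemble it in a shell variable; a sketch reusing the example host from above (the key path is an assumption — substitute your own):

```shell
# resilience flags from above plus the sharing/permission extras
SSHFS_OPTS="reconnect,ServerAliveInterval=15,ServerAliveCountMax=3"
SSHFS_OPTS="$SSHFS_OPTS,allow_other,default_permissions"
SSHFS_OPTS="$SSHFS_OPTS,IdentityFile=$HOME/.ssh/id_ed25519"

# allow_other still requires user_allow_other in /etc/fuse.conf
sshfs -o "$SSHFS_OPTS" [email protected]:/tmp/remote-share ~/remote-files
```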

Step 7: Mount at boot via /etc/fstab

For a mount you always want available, add a line to /etc/fstab. The trick is that root runs fstab mounts at boot, which means the SSH key needs to live somewhere root can read it, and the mount has to use an unattended options set:

sudo mkdir -p /root/.ssh
sudo cp /home/rocky/.ssh/id_ed25519 /root/.ssh/sshfs_key
sudo chmod 600 /root/.ssh/sshfs_key

Then add this to /etc/fstab (all on one line):

[email protected]:/tmp/remote-share  /mnt/remote-files  fuse.sshfs  _netdev,allow_other,IdentityFile=/root/.ssh/sshfs_key,reconnect,ServerAliveInterval=15,ServerAliveCountMax=3,StrictHostKeyChecking=accept-new  0  0

The _netdev flag tells systemd-fstab-generator that this mount depends on the network, so it waits for networking to come up before attempting it. Without it, the mount fires too early at boot and fails. Create the mountpoint, enable user_allow_other so the mount is visible to non-root users, and trigger the mount:

sudo mkdir -p /mnt/remote-files
echo 'user_allow_other' | sudo tee -a /etc/fuse.conf
sudo mount /mnt/remote-files
mount | grep sshfs
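If you would rather not block boot on the network at all, systemd can mount on first access instead. A variant fstab line (same fields as above) using systemd's automount options — treat it as a sketch to adapt; noauto skips the boot-time mount, x-systemd.automount mounts on first access, and x-systemd.idle-timeout unmounts after 60 idle seconds:

```
[email protected]:/tmp/remote-share  /mnt/remote-files  fuse.sshfs  noauto,x-systemd.automount,x-systemd.idle-timeout=60,_netdev,allow_other,IdentityFile=/root/.ssh/sshfs_key,reconnect,ServerAliveInterval=15,ServerAliveCountMax=3,StrictHostKeyChecking=accept-new  0  0
```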

Troubleshooting

Error: “read: Connection reset by peer”

The SSH transport handshake failed. In most cases this means key authentication isn’t in place, which is exactly what step 2 exists to prevent. Run the plain ssh command first and confirm it connects without a password prompt. Once bare SSH works, sshfs will work.
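When bare SSH works but the mount still fails, running sshfs in the foreground with debugging turned up usually surfaces the real error; a sketch with the example host (sshfs_debug is an sshfs option, and loglevel=debug is passed through to the underlying ssh):

```shell
# -f stays in the foreground so the debug output lands in your terminal
sshfs -f -o sshfs_debug,loglevel=debug \
  [email protected]:/tmp/remote-share ~/remote-files
```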

Error: “fusermount3: entry not found in /etc/mtab”

You’re trying to unmount a directory that was never mounted. This is common after a failed mount attempt. It’s a harmless complaint, not an error state.

Writes succeed but files appear as owned by a different user

Not a bug. The remote side owns whatever the remote user owns (the user you SSH in as). If you need the local user’s UID preserved on the remote end, use the idmap=user option, or mount as the same username on both ends.
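If you want listings on the client to show your local user rather than the remote one, remounting with idmap=user maps the remote user's uid and gid to yours (example host and mountpoint from above):

```shell
# unmount, then remount with uid/gid translation for the connecting user
fusermount3 -u ~/remote-files
sshfs -o idmap=user [email protected]:/tmp/remote-share ~/remote-files
```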

Alternatives worth knowing

sshfs is great for single-user, one-off access. For multi-client, high-throughput file sharing, it’s not the right tool. If you’re sharing a directory from a Linux NAS to several clients, NFS is faster and doesn’t tunnel every read through SSH overhead. See our guides on NFS on Rocky Linux 10, the NFS client side, and Samba on Ubuntu 24.04 for mixed Windows and Linux workloads. For modern rsync-based backups driven by systemd timers, see our production rsync backup reference.
