Tailscale is a WireGuard-based mesh VPN that makes it dead simple to connect your machines into a private network – no port forwarding, no complex firewall rules, no VPN gateway to maintain. It uses the WireGuard protocol under the hood but handles all the key exchange, NAT traversal, and peer discovery automatically. You install it, authenticate, and your machines can talk to each other over encrypted tunnels. That is it.

This guide walks through installing and configuring Tailscale VPN on RHEL 10, Rocky Linux 10, and AlmaLinux 10. We will cover everything from basic installation to advanced features like exit nodes, subnet routing, MagicDNS, and Tailscale SSH.

What Tailscale Actually Does

Traditional VPNs route all traffic through a central gateway – a single point of failure and a bandwidth bottleneck. Tailscale takes a different approach. It creates a mesh network where every node can communicate directly with every other node using WireGuard tunnels. The coordination server (hosted by Tailscale or self-hosted via Headscale) only handles authentication and key distribution. Actual traffic flows peer-to-peer.

Key things to know:

  • Built on WireGuard – fast, modern, audited crypto
  • Zero-config networking – no manual key management
  • NAT traversal works out of the box (DERP relay servers as fallback)
  • Identity-based access control tied to your SSO provider
  • Works across Linux, macOS, Windows, iOS, Android
  • Free tier supports up to 100 devices

Prerequisites

Before you start, make sure you have the following in place:

  • A running instance of RHEL 10, Rocky Linux 10, or AlmaLinux 10 (minimal or server install)
  • Root or sudo access to the system
  • Internet connectivity (Tailscale needs to reach its coordination server)
  • A Tailscale account – sign up at login.tailscale.com using Google, Microsoft, GitHub, or any OIDC provider
  • System packages up to date: sudo dnf update -y
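As a quick sanity check before proceeding, a short script along these lines (an illustrative sketch, not part of the official setup) can confirm the basics without changing anything:

```shell
# Pre-flight sanity check before installing Tailscale (illustrative sketch).

# Confirm the distribution and version.
. /etc/os-release
echo "Distro: $NAME $VERSION_ID"

# Confirm dnf is available and that the repo host is reachable over HTTPS.
command -v dnf >/dev/null && echo "dnf: found" || echo "dnf: not found"
curl -fsI https://pkgs.tailscale.com >/dev/null 2>&1 \
  && echo "pkgs.tailscale.com: reachable" \
  || echo "pkgs.tailscale.com: unreachable (check proxy/firewall)"
```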

Step 1 – Add the Official Tailscale Repository

Tailscale provides an official RPM repository for RHEL-based distributions. Add it using their install script or manually configure the repo.

Option A: Using the official install script (recommended)

curl -fsSL https://tailscale.com/install.sh | sh

This script detects your distribution, adds the correct repository, and installs the Tailscale package automatically.

Option B: Manual repository setup

If you prefer to add the repo manually:

sudo dnf config-manager addrepo --from-repofile=https://pkgs.tailscale.com/stable/rhel/10/tailscale.repo

Verify the repository was added:

sudo dnf repolist | grep tailscale

You should see tailscale-stable in the output.

Step 2 – Install Tailscale

Install the Tailscale package using dnf:

sudo dnf install -y tailscale

Verify the installation:

tailscale version

You should see output showing the Tailscale client and daemon versions.

Step 3 – Enable and Start the Tailscale Daemon

The Tailscale daemon (tailscaled) needs to be running before you can connect to your tailnet. Enable it so it starts on boot:

sudo systemctl enable --now tailscaled

Verify the service is running:

sudo systemctl status tailscaled

The output should show active (running). If it fails to start, check the journal:

sudo journalctl -u tailscaled --no-pager -n 50

Step 4 – Authenticate and Connect

Now bring the node up and authenticate it with your Tailscale account:

sudo tailscale up

This command prints an authentication URL. Open that URL in your browser, sign in with your identity provider, and authorize the machine. Once authenticated, the node joins your tailnet.

For headless servers where you cannot open a browser, you can use an auth key instead:

sudo tailscale up --authkey=tskey-auth-XXXXX

Generate auth keys from the Tailscale admin console under Settings > Keys. You can create reusable keys for automated deployments.
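Auth keys can also be minted programmatically via the Tailscale API, which is handy for CI pipelines. The sketch below only prints the request it would send – the tailnet name and API key are placeholders, and the leading echo makes it a dry run; remove it to actually call the API:

```shell
# Sketch: mint a reusable, pre-authorized auth key via the Tailscale API.
# TAILNET and TS_API_KEY are placeholders - create an API access token in
# the admin console first. The leading 'echo' makes this a dry run.
TAILNET="example.com"
TS_API_KEY="tskey-api-XXXXX"

echo curl -s -u "${TS_API_KEY}:" \
  "https://api.tailscale.com/api/v2/tailnet/${TAILNET}/keys" \
  --data '{"capabilities":{"devices":{"create":{"reusable":true,"preauthorized":true,"tags":["tag:server"]}}},"expirySeconds":86400}'
```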

Step 5 – Verify Connectivity

Check that your node is connected and see other machines on your tailnet:

tailscale status

Sample output:

100.64.0.1    rhel10-server   alice@example.com  linux   -
100.64.0.2    dev-laptop      alice@example.com  macOS   -
100.64.0.3    cloud-vm        alice@example.com  linux   -

Check your Tailscale IP address:

tailscale ip -4

Test connectivity to another node on your tailnet:

ping -c 4 100.64.0.2

Check the connection details to see if you have a direct or relayed path:

tailscale ping 100.64.0.2

If you see pong from ... via DERP, the connection is being relayed. Direct connections show pong from ... via [IP:port].

Step 6 – Configure as an Exit Node

An exit node routes all internet traffic from other tailnet devices through this machine. This is useful when you want to tunnel traffic through a specific location – for example, routing through a cloud VM in a particular region.

First, enable IP forwarding on the machine that will act as the exit node:

echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
sudo sysctl -p /etc/sysctl.d/99-tailscale.conf

Advertise this machine as an exit node:

sudo tailscale set --advertise-exit-node

Now go to the Tailscale admin console and approve the exit node. Click the three dots next to the machine, select Edit route settings, and enable Use as exit node.

From a client device, use the exit node:

sudo tailscale set --exit-node=rhel10-server

Verify your public IP has changed:

curl -s https://ifconfig.me

To stop using the exit node:

sudo tailscale set --exit-node=

Step 7 – Configure as a Subnet Router

Subnet routing lets devices on your tailnet access machines on a local network without installing Tailscale on every device. This is how you give your tailnet access to printers, NAS boxes, IoT devices, or entire on-prem networks.

Enable IP forwarding (if not already done):

echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
sudo sysctl -p /etc/sysctl.d/99-tailscale.conf

Advertise your local subnet (replace with your actual LAN CIDR):

sudo tailscale set --advertise-routes=192.168.1.0/24

For multiple subnets:

sudo tailscale set --advertise-routes=192.168.1.0/24,10.0.0.0/24

Approve the routes in the Tailscale admin console under the machine’s route settings.

Linux clients do not use advertised routes automatically. On each Linux client that should reach the subnet, opt in first:

sudo tailscale set --accept-routes

Then, from the client, verify you can reach hosts on the advertised subnet:

ping -c 4 192.168.1.1

Step 8 – Enable MagicDNS

MagicDNS lets you reach machines by hostname instead of IP address. Instead of ping 100.64.0.2, you can just do ping dev-laptop.

MagicDNS is enabled from the Tailscale admin console:

  1. Go to the DNS settings page
  2. Enable MagicDNS
  3. Optionally add a custom search domain (your tailnet name, e.g., tail1234.ts.net)

Once enabled, test it from your node:

ping -c 4 dev-laptop
# or with the full domain
ping -c 4 dev-laptop.tail1234.ts.net

Verify DNS resolution:

tailscale status
dig dev-laptop.tail1234.ts.net

You can also configure global nameservers and split DNS in the admin console to route specific domains to internal DNS servers.

Step 9 – SSH via Tailscale

Tailscale SSH replaces traditional SSH key management. It uses your Tailscale identity for authentication – no SSH keys to distribute, no authorized_keys files to maintain. Connections are encrypted end-to-end through the WireGuard tunnel.

Enable Tailscale SSH on the server:

sudo tailscale set --ssh

From another machine on your tailnet, connect using:

tailscale ssh user@rhel10-server

Or use a regular SSH client – Tailscale SSH intercepts the connection:

ssh user@rhel10-server

Tailscale SSH requires ACL policies to define who can SSH into which machines. A basic policy looks like this (configured in the admin console under Access Controls):

{
  "ssh": [
    {
      "action": "check",
      "src": ["autogroup:member"],
      "dst": ["autogroup:self"],
      "users": ["autogroup:nonroot", "root"]
    }
  ]
}

The "action": "check" option prompts for re-authentication periodically. Use "accept" to allow connections without re-auth.
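The re-authentication interval can also be set explicitly with a checkPeriod attribute. A sketch (the 8h value is just an example; Tailscale's documented default is 12 hours when the attribute is omitted):

```json
{
  "ssh": [
    {
      "action": "check",
      "checkPeriod": "8h",
      "src": ["autogroup:member"],
      "dst": ["autogroup:self"],
      "users": ["autogroup:nonroot"]
    }
  ]
}
```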

Verify Tailscale SSH is working:

tailscale status
# Look for the SSH indicator on the machine

Step 10 – ACL Policies Overview

Tailscale Access Control Lists (ACLs) define which machines and users can communicate. By default, all nodes on your tailnet can reach each other. In production, you want to lock this down.

ACLs are managed in the admin console under Access Controls. Here is a practical example:

{
  "acls": [
    // Allow devs to reach dev servers on any port
    {
      "action": "accept",
      "src": ["group:devs"],
      "dst": ["tag:dev-server:*"]
    },
    // Allow monitoring to reach all servers on specific ports
    {
      "action": "accept",
      "src": ["tag:monitoring"],
      "dst": ["tag:server:80,443,9090,9100"]
    },
    // Allow all users to use exit nodes
    {
      "action": "accept",
      "src": ["autogroup:member"],
      "dst": ["autogroup:internet:*"]
    }
  ],
  "tagOwners": {
    "tag:dev-server": ["group:devs"],
    "tag:server": ["group:ops"],
    "tag:monitoring": ["group:ops"]
  },
  "groups": {
    "group:devs": ["alice@example.com", "bob@example.com"],
    "group:ops": ["carol@example.com"]
  }
}

Key concepts:

  • Groups – collections of users (e.g., group:devs)
  • Tags – labels applied to machines (e.g., tag:server)
  • Autogroups – built-in groups like autogroup:member (all users) and autogroup:internet (exit node traffic)
  • Default deny – if there is no matching ACL rule, traffic is blocked

Apply tags when bringing a node up:

sudo tailscale up --advertise-tags=tag:server

Firewall Considerations

Tailscale works with most firewall configurations out of the box, but there are a few things to know.

Tailscale primarily uses UDP port 41641 for direct WireGuard connections. It also needs to reach the coordination server and DERP relay servers over HTTPS (TCP 443). In most cases, outbound internet access is enough and no inbound firewall rules are needed – Tailscale handles NAT traversal.

If you are running firewalld (the default on RHEL 10), note that tailscaled manages its own netfilter rules to permit traffic on its interface (tailscale0), so it usually coexists with firewalld without extra work. It is still worth checking which zone the interface landed in:

sudo firewall-cmd --get-active-zones
sudo firewall-cmd --zone=trusted --list-interfaces

If tailscale0 is not in the trusted zone, add it:

sudo firewall-cmd --zone=trusted --add-interface=tailscale0 --permanent
sudo firewall-cmd --reload

If you need to allow direct connections through a perimeter firewall, open UDP 41641 inbound:

sudo firewall-cmd --permanent --add-port=41641/udp
sudo firewall-cmd --reload

Verify the firewall rules:

sudo firewall-cmd --list-all

For subnet routing or exit node configurations, ensure masquerading is enabled on the outbound interface:

sudo firewall-cmd --permanent --add-masquerade
sudo firewall-cmd --reload

Connecting Multiple Machines

To build out your tailnet, repeat the installation on each machine you want to connect. For large-scale deployments, use auth keys to automate the process.

Generate a reusable, pre-approved auth key from the admin console:

# On each new machine after installing Tailscale:
sudo tailscale up --authkey=tskey-auth-XXXXX --advertise-tags=tag:server
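For unattended provisioning (cloud-init, kickstart %post, and the like), the same steps can be wrapped in a small bootstrap script. A sketch – the auth key value is a placeholder to substitute with a real reusable key:

```shell
# Write a bootstrap script for unattended provisioning (illustrative sketch).
# tskey-auth-XXXXX is a placeholder for a real reusable auth key.
cat > /tmp/tailscale-bootstrap.sh <<'EOF'
#!/bin/sh
set -eu
curl -fsSL https://tailscale.com/install.sh | sh
systemctl enable --now tailscaled
tailscale up --authkey=tskey-auth-XXXXX --advertise-tags=tag:server
EOF
chmod +x /tmp/tailscale-bootstrap.sh
echo "wrote /tmp/tailscale-bootstrap.sh"
```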

For configuration management tools, here is a minimal Ansible playbook:

- name: Install and configure Tailscale
  hosts: all
  become: true
  tasks:
    - name: Install Tailscale
      shell: curl -fsSL https://tailscale.com/install.sh | sh
      args:
        creates: /usr/bin/tailscale   # skip if already installed (idempotent)

    - name: Enable tailscaled
      systemd:
        name: tailscaled
        enabled: true
        state: started

    - name: Join tailnet
      command: tailscale up --authkey={{ tailscale_authkey }} --advertise-tags=tag:server
      register: result
      changed_when: "'already' not in result.stderr"

Verify all machines are connected from any node:

tailscale status

Troubleshooting

Here are solutions to common issues you may run into.

tailscaled fails to start

Check the service logs:

sudo journalctl -u tailscaled --no-pager -n 100

Common causes: TUN device not available (check ls -la /dev/net/tun), or another VPN holding the TUN device.
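A quick read-only diagnostic along these lines (an illustrative sketch) helps narrow it down:

```shell
# Quick check of the usual tailscaled startup blockers (illustrative sketch).

# tailscaled needs a TUN character device.
if [ -c /dev/net/tun ]; then
    tun_state="available"
else
    tun_state="missing - try 'sudo modprobe tun'"
fi
echo "TUN: $tun_state"

# Another VPN may be holding the device; see which related modules are loaded.
lsmod 2>/dev/null | grep -E '^(tun|wireguard)' \
  || echo "tun/wireguard modules not listed (or lsmod unavailable)"
```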

Node not appearing in tailnet

Make sure the daemon is running and you have completed authentication:

sudo systemctl status tailscaled
sudo tailscale status
# If status shows "Logged out", re-authenticate:
sudo tailscale up

Connections are relayed instead of direct

Check the connection path:

tailscale ping other-machine

If connections go through DERP relay, it usually means a firewall is blocking UDP. Open UDP 41641 on both ends or ensure outbound UDP is not restricted. Run the built-in network check:

tailscale netcheck

This shows latency to DERP servers, whether UDP is available, and if you have a direct path to peers.

Subnet routes not working

Verify IP forwarding is enabled:

sysctl net.ipv4.ip_forward
# Should return: net.ipv4.ip_forward = 1

Check that routes are approved in the admin console. Unapproved routes will not work. Also verify masquerading:

sudo firewall-cmd --query-masquerade

DNS resolution not working with MagicDNS

Check if Tailscale is managing DNS:

resolvectl status

Look for the tailscale0 interface in the output. If DNS is not resolving, ensure MagicDNS is enabled in the admin console and restart the daemon:

sudo systemctl restart tailscaled

SELinux blocking Tailscale

On RHEL 10 with SELinux enforcing, check for denials:

sudo ausearch -m avc -ts recent | grep tailscale

If you see denials, generate and apply a custom policy module:

sudo ausearch -m avc -ts recent | audit2allow -M tailscale-custom
sudo semodule -i tailscale-custom.pp

Checking overall health

Tailscale has a built-in health check:

tailscale debug doctor

This runs a series of diagnostic checks and reports any issues it finds.

Summary

You now have Tailscale running on RHEL 10, Rocky Linux 10, or AlmaLinux 10. The key steps were: add the official repo, install with dnf, enable the daemon, authenticate with tailscale up, and verify with tailscale status. From there you can configure exit nodes, subnet routers, MagicDNS, Tailscale SSH, and lock down access with ACL policies.

Tailscale fits well into modern infrastructure – it runs quietly in the background, requires minimal maintenance, and gives you encrypted connectivity between machines regardless of where they sit. Whether you are connecting a handful of dev machines or building out a production mesh across multiple data centers, the workflow stays the same.
