Network teaming and bonding combine multiple physical NICs into a single logical interface for redundancy or increased throughput. If one link fails, traffic automatically moves to a surviving link. If your server has two or more NICs connected to the same switch, you should be using one of these.
There is a major version split to be aware of. Red Hat deprecated network teaming (teamd) in RHEL 9 and removed it entirely in RHEL 10. Network bonding is now the only supported method on Rocky Linux 10, AlmaLinux 10, and RHEL 10. Fedora 42 still ships teamd but requires an extra package for NetworkManager support. This guide covers bonding (works on all versions), teaming (RHEL 8/9 and Fedora only), and the migration path between them.
Tested March 2026 on Rocky Linux 10.1 (NetworkManager 1.54.0, kernel 6.12) and Fedora 42 (NetworkManager 1.52.0, teamd 1.32)
Teaming vs Bonding: Which One?
| Feature | Network Bonding | Network Teaming (teamd) |
|---|---|---|
| Architecture | Kernel-level (no daemon) | Userspace daemon (teamd) + kernel module |
| RHEL 8 | Supported | Supported |
| RHEL 9 / Rocky 9 | Supported (recommended) | Deprecated |
| RHEL 10 / Rocky 10 | Only option | Removed |
| Fedora 42 | Supported | Supported (needs NetworkManager-team) |
| Config tool | nmcli | nmcli + teamdctl |
| Monitoring | /proc/net/bonding/bond0 | teamdctl team0 state |
| Modes | 7 modes (0-6) | 5 runners |
For new deployments, use bonding. It works on every RHEL version, runs in the kernel (no daemon to crash), and has more active upstream development. The rest of this guide starts with bonding, then covers teaming for environments that still use it.
Prerequisites
- Rocky Linux, AlmaLinux, or RHEL 8/9/10, or Fedora 41/42
- At least two network interfaces (beyond the management NIC)
- Root or sudo access
- Both interfaces connected to the same switch (same broadcast domain)
Identify your network interfaces:
ip link show
Look for the interfaces you want to bond. In this example, ens19 and ens20 are the two ports we will combine:
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP
3: ens19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP
4: ens20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP
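Optionally, confirm that both candidate ports actually have link and comparable speeds before combining them. This assumes the interface names above and that the ethtool package is installed (it is part of the default install on these distributions):

```shell
# Show link state, speed, and duplex for each candidate port.
# Virtio NICs (common in VMs) report Speed/Duplex as Unknown; that is harmless.
for nic in ens19 ens20; do
    echo "== $nic =="
    sudo ethtool "$nic" | grep -E 'Speed|Duplex|Link detected'
done
```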
Configure Network Bonding (RHEL 8/9/10, Fedora)
Bonding works on every supported RHEL version and Fedora. All configuration is done through NetworkManager (nmcli).
Create the Bond Interface
Create a bond with active-backup mode (the most common choice for failover):
sudo nmcli connection add type bond con-name bond0 ifname bond0 \
bond.options "mode=active-backup,miimon=100"
The miimon=100 option polls link state every 100 milliseconds, so a failed link is detected, and failover triggered, within roughly that interval.
Connection 'bond0' (9d819dcf-8781-415c-9f7c-51bd482e070d) successfully added.
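If your switch ports take a moment to start forwarding after link-up (spanning tree convergence, for example), you can add updelay and downdelay to the same option string. The 200 ms values below are illustrative; both must be multiples of miimon:

```shell
# Wait 200 ms after link-up before using a port, and 200 ms after
# link-down before failing over (both values must be multiples of miimon)
sudo nmcli connection add type bond con-name bond0 ifname bond0 \
    bond.options "mode=active-backup,miimon=100,updelay=200,downdelay=200"
```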
Add Port Interfaces
Add the physical interfaces as ports of the bond. On RHEL 9/10, the syntax uses port-type and controller (replacing the older slave-type and master terminology):
sudo nmcli connection add type ethernet port-type bond con-name bond0-port1 \
ifname ens19 controller bond0
sudo nmcli connection add type ethernet port-type bond con-name bond0-port2 \
ifname ens20 controller bond0
Both ports are added successfully:
Connection 'bond0-port1' (ef9cc4be-bbb0-433d-b283-95b104bf58ed) successfully added.
Connection 'bond0-port2' (f29db42e-943f-4028-bbc8-d5dcaf37f87e) successfully added.
On RHEL 8, use the older syntax instead: type bond-slave ... master bond0.
Assign an IP Address
Set a static IP on the bond interface:
sudo nmcli connection modify bond0 ipv4.addresses 10.0.1.100/24
sudo nmcli connection modify bond0 ipv4.gateway 10.0.1.1
sudo nmcli connection modify bond0 ipv4.dns "8.8.8.8 8.8.4.4"
sudo nmcli connection modify bond0 ipv4.method manual
For DHCP, skip these commands. The bond defaults to DHCP when no static address is configured.
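The create-and-address steps can also be collapsed into the initial add command if you prefer a single invocation. This sketch assumes the same addresses as above:

```shell
# One-shot equivalent: create the bond with mode, monitoring, and
# static IPv4 settings in a single nmcli command
sudo nmcli connection add type bond con-name bond0 ifname bond0 \
    bond.options "mode=active-backup,miimon=100" \
    ipv4.method manual ipv4.addresses 10.0.1.100/24 \
    ipv4.gateway 10.0.1.1 ipv4.dns "8.8.8.8 8.8.4.4"
```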
Activate the Bond
If the physical interfaces are already assigned to other NetworkManager connections, deactivate those first:
sudo nmcli connection down "Wired connection 1" 2>/dev/null
sudo nmcli connection down "Wired connection 2" 2>/dev/null
Activate the bond and its ports:
sudo nmcli connection up bond0
sudo nmcli connection up bond0-port1
sudo nmcli connection up bond0-port2
Verify the Bond
Check the bond status through the kernel interface:
cat /proc/net/bonding/bond0
A healthy active-backup bond shows both ports up with one active:
Ethernet Channel Bonding Driver: v6.12.0-124.8.1.el10_1.x86_64
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: ens19
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: ens19
MII Status: up
Speed: Unknown
Duplex: Unknown
Link Failure Count: 0
Permanent HW addr: bc:24:11:c3:11:1a
Slave Interface: ens20
MII Status: up
Speed: Unknown
Duplex: Unknown
Link Failure Count: 0
Permanent HW addr: bc:24:11:0e:41:da
Verify the IP is assigned to the bond:
ip addr show dev bond0
The bond interface should show the configured IP:
5: bond0: <BROADCAST,MULTICAST,MASTER,UP> mtu 1500 qdisc noqueue state UP
    link/ether 42:82:e2:ab:c2:72 brd ff:ff:ff:ff:ff:ff
    inet 10.0.1.100/24 brd 10.0.1.255 scope global noprefixroute bond0
       valid_lft forever preferred_lft forever
Test Failover
Simulate a link failure by bringing down the active port:
sudo ip link set ens19 down
Check the bond status within a few seconds:
grep "Currently Active Slave" /proc/net/bonding/bond0
The active slave switches to the surviving port:
Currently Active Slave: ens20
Restore the port:
sudo ip link set ens19 up
The port rejoins the bond but does not preempt the currently active interface (this is the default active-backup behavior).
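If you want ens19 to preempt whenever it is healthy (because it is the faster link, for example), set it as the primary port. Note that nmcli modify replaces the whole bond.options string, so repeat the existing options; primary_reselect=always is the default, shown here for clarity:

```shell
# Make ens19 the preferred port: the bond fails back to it
# automatically once its link returns
sudo nmcli connection modify bond0 \
    bond.options "mode=active-backup,miimon=100,primary=ens19,primary_reselect=always"
sudo nmcli connection up bond0
```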
Bonding Modes
Linux bonding supports 7 modes. The mode you choose depends on your switch configuration and your goal (failover vs throughput):
| Mode | Name | Requires Switch Config | Use Case |
|---|---|---|---|
| 0 | balance-rr | Yes (port channel) | Throughput, round-robin across ports |
| 1 | active-backup | No | Failover only, most common |
| 2 | balance-xor | Yes | Hash-based load distribution |
| 3 | broadcast | Yes | Send on all ports simultaneously |
| 4 | 802.3ad (LACP) | Yes (LACP) | Dynamic link aggregation, best throughput |
| 5 | balance-tlb | No | Adaptive transmit load balancing |
| 6 | balance-alb | No | Adaptive load balancing (TX + RX) |
Modes 1 (active-backup) and 6 (balance-alb) are the easiest to deploy because they require no switch-side configuration. Mode 4 (802.3ad) provides the best throughput but requires LACP support on the switch.
To create a bond with a different mode, specify it in the options. For round-robin:
sudo nmcli connection add type bond con-name bond0 ifname bond0 \
bond.options "mode=balance-rr,miimon=100"
For 802.3ad LACP (switch must also be configured for LACP on the corresponding ports):
sudo nmcli connection add type bond con-name bond0 ifname bond0 \
bond.options "mode=802.3ad,miimon=100,lacp_rate=fast"
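After bringing up an 802.3ad bond, confirm that LACP actually negotiated with the switch. Each port and the bond itself should report the same aggregator ID:

```shell
# Check LACP negotiation state; mismatched or missing aggregator IDs
# usually mean the switch ports are not configured for LACP
grep -E 'Aggregator|Partner Mac' /proc/net/bonding/bond0
```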
Configure Network Teaming (RHEL 8/9, Fedora)
Teaming is still available on RHEL 8 (fully supported), RHEL 9 (deprecated), and Fedora 42 (requires the NetworkManager-team package). It is not available on RHEL 10 / Rocky 10 / AlmaLinux 10.
Install the required packages:
sudo dnf install -y teamd NetworkManager-team
Restart NetworkManager after installing the team plugin (this is required on Fedora 42 or you will get “NetworkManager plugin for ‘team’ unavailable”):
sudo systemctl restart NetworkManager
Create a team with the activebackup runner:
sudo nmcli connection add type team con-name team0 ifname team0 \
team.runner activebackup
Add the port interfaces:
sudo nmcli connection add type team-slave con-name team0-port1 \
ifname ens19 master team0
sudo nmcli connection add type team-slave con-name team0-port2 \
ifname ens20 master team0
Configure the IP and activate:
sudo nmcli connection modify team0 ipv4.addresses 10.0.1.100/24 ipv4.method manual
sudo nmcli connection up team0
sudo nmcli connection up team0-port1
sudo nmcli connection up team0-port2
Verify with teamdctl:
teamdctl team0 state
A healthy team shows both ports up with one active:
setup:
  runner: activebackup
ports:
  ens19
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
  ens20
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
runner:
  active port: ens19
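A few other teamdctl subcommands are useful for inspection and live changes to an activebackup team:

```shell
# Dump the full JSON configuration the team is running with
sudo teamdctl team0 config dump

# Query just the active port (activebackup runner)
sudo teamdctl team0 state item get runner.active_port

# Force a manual failover to ens20
sudo teamdctl team0 state item set runner.active_port ens20
```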
Teaming Runners
Teaming runners are the equivalent of bonding modes:
| Teaming Runner | Bonding Equivalent | Description |
|---|---|---|
| activebackup | mode 1 (active-backup) | One active port, failover on link loss |
| roundrobin | mode 0 (balance-rr) | Round-robin across all ports |
| loadbalance | mode 2 (balance-xor) | Hash-based load distribution |
| broadcast | mode 3 (broadcast) | Send on all ports |
| lacp | mode 4 (802.3ad) | 802.3ad Link Aggregation |
Migrate from Teaming to Bonding
If you are running teaming on RHEL 9 and planning to upgrade to RHEL 10, you must migrate to bonding before the upgrade. RHEL 9 includes the team2bond utility that converts the team configuration, but IP addresses need to be reconfigured manually.
The migration process:
- Document the current team configuration: `teamdctl team0 config dump`
- Note the IP address, gateway, and DNS settings from `nmcli connection show team0`
- Delete the team: `nmcli connection delete team0-port1 team0-port2 team0`
- Create the equivalent bond using the commands in the bonding section above
- Map the runner to the corresponding bonding mode using the table above
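The runner-to-mode mapping in the last step can be scripted if you are migrating several hosts. This is a sketch: it only prints the nmcli command for review rather than running it, and the bond0 connection name is an assumption:

```shell
# Translate a teamd runner name to the equivalent bonding mode
map_runner() {
    case "$1" in
        activebackup) echo "active-backup" ;;
        roundrobin)   echo "balance-rr" ;;
        loadbalance)  echo "balance-xor" ;;
        broadcast)    echo "broadcast" ;;
        lacp)         echo "802.3ad" ;;
        *)            echo "unknown runner: $1" >&2; return 1 ;;
    esac
}

# Print (not run) the bond-creation command for an activebackup team
mode=$(map_runner activebackup)
echo "nmcli connection add type bond con-name bond0 ifname bond0 bond.options \"mode=${mode},miimon=100\""
```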
Delete a Bond or Team
To remove a bond:
sudo nmcli connection down bond0
sudo nmcli connection delete bond0-port1 bond0-port2 bond0
To remove a team:
sudo nmcli connection down team0
sudo nmcli connection delete team0-port1 team0-port2 team0
Version Differences Summary
| Feature | RHEL 8 | RHEL 9 | RHEL 10 | Fedora 42 |
|---|---|---|---|---|
| Network teaming (teamd) | Supported | Deprecated | Removed | Supported |
| Network bonding | Supported | Recommended | Only option | Supported |
| ifcfg config files | Supported | Removed | Removed | Removed |
| nmcli port terminology | master/slave | controller/port-type | controller/port-type | controller/port-type |
| team2bond migration tool | N/A | Available | N/A | N/A |
| NetworkManager-team package | Included | Included | N/A | Separate install |
Troubleshooting
Error: “NetworkManager plugin for ‘team’ unavailable”
This occurs on Fedora when the NetworkManager-team package is not installed. The teamd package alone is not enough. Install the plugin and restart NetworkManager:
sudo dnf install -y NetworkManager-team
sudo systemctl restart NetworkManager
Bond shows “Currently Active Slave: None”
The physical interfaces are still claimed by other NetworkManager connections. Deactivate the existing connections on those interfaces before activating the bond ports:
nmcli connection show --active
Look for connections using your target interfaces and bring them down with nmcli connection down "connection name", then activate the bond ports.
Bond or team does not survive reboot
Ensure autoconnect is enabled on all connection profiles:
nmcli connection modify bond0 connection.autoconnect yes
nmcli connection modify bond0-port1 connection.autoconnect yes
nmcli connection modify bond0-port2 connection.autoconnect yes
Also delete or disable the old “Wired connection” profiles that may conflict on boot:
sudo nmcli connection delete "Wired connection 1" "Wired connection 2"
Going Further
- VLANs on bonds: create a VLAN interface on top of the bond with `nmcli connection add type vlan con-name bond0.100 dev bond0 id 100`. See the VLAN tagging guide for RHEL/Rocky/Fedora
- Bridges on bonds: for KVM virtualization, create a bridge on top of the bond to give VMs access to the bonded link
- Firewall rules: firewalld rules should reference the bond interface (`bond0`), not the individual ports
- Monitoring: watch bond health with `watch cat /proc/net/bonding/bond0` or set up SNMP traps on link state changes
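For the monitoring bullet, a small parse of the bond status file is enough for a cron check. The sample text below stands in for /proc/net/bonding/bond0, which only exists on a host with a live bond:

```shell
# Extract the active slave from bonding status text; on a real host,
# replace the sample with: cat /proc/net/bonding/bond0
sample='Bonding Mode: fault-tolerance (active-backup)
Currently Active Slave: ens20
MII Status: up'

active=$(printf '%s\n' "$sample" | awk -F': ' '/Currently Active Slave/ {print $2}')
echo "active slave: $active"
```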