1. Introduction

In Linux networking, maximizing network throughput is a key goal for many system administrators and network engineers. One effective method to achieve this is through the use of link aggregation.

In this tutorial, we’ll explore configuring and utilizing two Ethernet cards for link aggregation in Linux environments, focusing on Debian and Red Hat Enterprise Linux (RHEL) systems. We tested the steps in this tutorial on AlmaLinux 8 and Debian 12.

Link aggregation, also known as bonding or teaming, is a method that allows us to utilize multiple network connections in parallel. This is particularly useful in environments where network demand exceeds the capabilities of a single Ethernet card.

Link aggregation increases bandwidth and enhances network redundancy. It can effectively double the network capacity when we combine two links of the same speed. Moreover, it ensures that network communication remains uninterrupted even if one of the aggregated links fails, making it an excellent choice for critical systems and applications that require high availability.

2.1. Exploring Bonding Modes

Each bonding mode offers distinct behaviors and benefits. Below, we’ll discuss the most commonly used bonding modes and their specific use cases:

  • Balance-RR (Round Robin) – Mode 0: This mode provides load balancing and fault tolerance by transmitting packets in a sequential, round-robin fashion over the available slave interfaces. It’s ideal for situations where both high throughput and load distribution are necessary.
  • Active-Backup – Mode 1: In active-backup mode, only one slave remains active in the bond, and the system holds the others as backups. If the active link fails, one of the backup links automatically assumes control. This mode primarily serves to enhance network reliability over throughput.
  • Balance-XOR – Mode 2: In balance-xor mode, a hashing algorithm (based on the source and destination MAC addresses by default, and selectable via the xmit_hash_policy option) determines which slave interface transmits each packet. This mode offers a balance between load balancing and fault tolerance.
  • Broadcast – Mode 3: In broadcast mode, the system sends all transmissions through all slave interfaces, providing a high level of redundancy. This mode is beneficial in environments requiring consistent and reliable data broadcasting to ensure simultaneous data receipt by all recipients.
  • 802.3ad (LACP) – Mode 4: This mode requires a switch that supports IEEE 802.3ad Dynamic Link Aggregation. Packets are distributed based on a hashing algorithm, similar to balance-xor, but with the added benefit of accommodating more dynamic conditions and configurations.
  • Balance-TLB (Adaptive Transmit Load Balancing) – Mode 5: Balance-TLB dynamically adjusts the distribution of outgoing traffic across the available interfaces based on the current load. Furthermore, it does not require any special switch support.
  • Balance-ALB (Adaptive Load Balancing) – Mode 6: Balance-ALB mode actively balances both transmit and receive loads without needing special switch support. It combines Balance-TLB and Receive Load Balancing (RLB), making it a highly flexible mode ideal for environments that require optimized sending and receiving bandwidths.
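
For reference, once a bond interface exists, the kernel exposes its active mode through sysfs. Assuming the interface is named bond0, a quick check might look like this:

$ cat /sys/class/net/bond0/bonding/mode
balance-rr 0

The first field is the mode name and the second is its numeric identifier, matching the numbering in the list above.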

Different Linux distributions might require distinct tools and configurations, which we’ll cover in detail. By following the structured guidelines below for both Debian-based and RHEL-based systems, we can build a robust and high-performance network setup tailored to our environment’s needs. This initial preparation is crucial for a smooth and successful link aggregation configuration.

3.1. Prerequisites

Before setting up link aggregation, let’s ensure we have administrative rights on the Linux system and that both Ethernet cards are installed and recognized by the system. We’ll also need to install the necessary tools for configuring and managing link aggregation, which can vary slightly between distributions.
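
Before going further, we can confirm that the system recognizes both cards. Throughout this tutorial the interfaces are named eth0 and eth1; the names may differ on other systems (for example, enp0s3):

$ ip -br link show

Both Ethernet interfaces should appear in the output. If one of them is missing, we should resolve the hardware or driver issue before configuring the bond.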

3.2. Configuration on Debian-Based Systems

On Debian and other Debian-based systems like Ubuntu, the first step is to install the ifenslave package, which is used to configure and manage bonded interfaces. We can install it using the apt command:

$ sudo apt install ifenslave
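
The ifenslave package relies on the kernel bonding driver. The module usually loads automatically when the bond interface comes up, but we can load and verify it manually if needed:

$ sudo modprobe bonding
$ lsmod | grep bonding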

Next, we need to configure the interfaces we want to bond. This involves editing the /etc/network/interfaces file. Here’s how we might set it up:

$ cat /etc/network/interfaces
auto bond0
iface bond0 inet dhcp
bond-slaves eth0 eth1
bond-mode 0
bond-miimon 100
bond-downdelay 200
bond-updelay 200

In this configuration, eth0 and eth1 are the physical network interfaces being bonded. The bond-mode 0 option selects round-robin mode, bond-miimon 100 checks the link state every 100 ms, and bond-updelay and bond-downdelay wait 200 ms before enabling or disabling a slave after a link state change. Other bond driver options further affect the behavior of the aggregated network interface.
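
If we prefer a static address over DHCP, the same stanza can use the static method instead. The address, netmask, and gateway below are placeholders that we’d replace with values from our own network:

auto bond0
iface bond0 inet static
address 192.168.68.174
netmask 255.255.255.0
gateway 192.168.68.1
bond-slaves eth0 eth1
bond-mode 0
bond-miimon 100
bond-downdelay 200
bond-updelay 200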

Once the configuration file is updated, let’s restart the network service to apply the changes:

$ sudo systemctl restart networking
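
If the bond doesn’t come up after the restart, we can also bring it up explicitly with the ifup tool from the ifupdown package:

$ sudo ifup bond0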

3.3. Configuration on RHEL-Based Systems

To create a channel bonding interface, we first need to create a file within the /etc/sysconfig/network-scripts/ directory named ifcfg-bondN, where “N” is replaced by the interface number, for example, 0:

$ cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=dhcp
BONDING_OPTS="mode=0 miimon=100"
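
The BONDING_OPTS string accepts the same bond driver options by name as well. As a sketch, an LACP setup on a compatible switch might instead use something like:

BONDING_OPTS="mode=802.3ad miimon=100 lacp_rate=fast"

Here, lacp_rate=fast asks the link partner to send LACPDUs every second instead of every 30 seconds.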

Once we’ve created the channel bonding interface, we’ll set up the individual network interfaces that will be bonded. This involves adding the MASTER and SLAVE directives to their respective configuration files. The configuration details for each of the bonded interfaces will be almost identical:

$ cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
MASTER=bond0
SLAVE=yes

$ cat /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
ONBOOT=yes
MASTER=bond0
SLAVE=yes

After configuring the files, we’ll restart the network service to apply the new settings:

$ sudo systemctl restart NetworkManager
$ sudo nmcli networking off && sudo nmcli networking on
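
Alternatively, on RHEL-based systems where NetworkManager is the primary tool, we could build the same bond entirely with nmcli instead of ifcfg files. A minimal sketch, with connection names of our own choosing, might look like this:

$ sudo nmcli connection add type bond ifname bond0 con-name bond0 bond.options "mode=balance-rr,miimon=100"
$ sudo nmcli connection add type ethernet slave-type bond ifname eth0 con-name bond0-eth0 master bond0
$ sudo nmcli connection add type ethernet slave-type bond ifname eth1 con-name bond0-eth1 master bond0
$ sudo nmcli connection up bond0

By default, the bond connection uses DHCP for IPv4, matching the BOOTPROTO=dhcp setting above.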

This concludes the configuration section. Next, let’s test and verify our setup.

4. Testing and Verification

Once link aggregation is configured, it’s crucial that we test the setup to ensure everything is working as expected. This can include checking the bonding status and the throughput capabilities. We can use tools such as ifconfig or ip addr to verify that the bonded interface is active and functioning:

$ ip addr
...
1: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP group default qlen 1000
    link/ether 08:00:27:8c:c1:7c brd ff:ff:ff:ff:ff:ff permaddr 08:00:27:d6:2a:dd
2: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP group default qlen 1000
    link/ether 08:00:27:8c:c1:7c brd ff:ff:ff:ff:ff:ff
14: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 08:00:27:8c:c1:7c brd ff:ff:ff:ff:ff:ff
    inet 192.168.68.174/24 brd 192.168.68.255 scope global dynamic noprefixroute bond0
       valid_lft 5041sec preferred_lft 5041sec
    inet6 fe80::a00:27ff:fe8c:c17c/64 scope link
       valid_lft forever preferred_lft forever

Here, we can see that eth0 and eth1 show master bond0, indicating they are part of the link aggregation we created earlier. Furthermore, both eth0 and eth1 carry the SLAVE flag, the bond0 interface carries the MASTER flag, and all three interfaces are UP.
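
Additionally, we can list only the interfaces enslaved to bond0 by filtering on the master device:

$ ip link show master bond0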

Additionally, we can use ethtool to verify the bonded interface is operating at the intended speed:

$ ethtool bond0
Settings for bond0:
...
    Speed: 2000Mb/s
    Duplex: Full
    Auto-negotiation: off
    Port: Other
    PHYAD: 0
    Transceiver: internal
    Link detected: yes

Here, we can see that the bond interface speed is reported as 2000Mb/s, which is the combined speed of our two 1 Gbps Ethernet interfaces, since we used mode 0 (Balance-RR) for our bond configuration.
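
Since ethtool only reports the nominal link speed, we might also want to measure real throughput. One common approach is iperf3, assuming it’s installed on both ends and that another machine on the network runs the server side (192.168.68.10 below is a placeholder for its address):

$ iperf3 -s                           # on the remote machine
$ iperf3 -c 192.168.68.10 -P 4 -t 30  # on the bonded host

The -P 4 flag opens four parallel streams and -t 30 runs the test for 30 seconds; the aggregate bandwidth reported at the end shows how much of the combined capacity we actually achieve.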

Finally, we can check the content of the file /proc/net/bonding/bond0 for a comprehensive report on the bond status:

$ cat /proc/net/bonding/bond0

Ethernet Channel Bonding Driver: v6.1.0-18-amd64

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 200
Down Delay (ms): 200
Peer Notification Delay (ms): 0

Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 08:00:27:47:2e:d6
Slave queue ID: 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 08:00:27:47:62:ca
Slave queue ID: 0

4.1. Troubleshooting Common Issues

When configuring link aggregation, several issues might arise, such as incorrect bonding mode selection or issues with the underlying network configuration. Common troubleshooting steps include:

  • Checking Interface Status: As mentioned above, we can use commands such as cat /proc/net/bonding/bond0 to check the bond status and ensure all slaves are correctly attached and active.
  • Testing Failover: Additionally, we can disconnect one of the network cables to simulate a failure and observe whether the traffic seamlessly switches to the remaining active link without dropping packets, as shown in the example after this list.
  • Reviewing Log Files: System logs can provide valuable insights into what might be going wrong. We can use dmesg or journalctl to review messages related to network interfaces and bonding.
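
As a concrete example of the failover and log review steps above, we can take one slave down administratively while pinging a reachable host (the address is a placeholder) from another terminal, then inspect the bond status and kernel messages:

$ ping 192.168.68.1
$ sudo ip link set eth0 down
$ grep -A1 'Slave Interface' /proc/net/bonding/bond0
$ sudo journalctl -k | grep -i bond
$ sudo ip link set eth0 up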

4.2. A Note on Compatibility and Limitations

When implementing link aggregation, compatibility with existing network infrastructure is a critical consideration. Not all network devices support every mode of link aggregation. For instance, IEEE 802.3ad configurations require hardware compatible with LACP (Link Aggregation Control Protocol).

Moreover, some modes, such as balance-rr and 802.3ad, also require a matching link aggregation configuration on the switch side. This can pose challenges in environments where we have limited control over the network hardware or a constrained budget for LACP-capable upgrades. Understanding these compatibility issues and limitations is crucial for effectively planning and deploying link aggregation in any Linux environment.

5. Conclusion

Link aggregation is a powerful feature in Linux that helps enhance network throughput and reliability. In this article, we’ve looked at how to configure two Ethernet cards in a bonded setup, which gives us better network performance and redundancy.

It’s also particularly beneficial in high-traffic scenarios where single connections might otherwise become bottlenecks. Understanding and implementing link aggregation can significantly enhance network capabilities and resilience, whether in critical enterprise environments or home networks.