Linux uplink aggregation for high availability and network performance

A 100Mbps uplink port is standard nowadays even for low-cost dedicated servers or server co-location. When your web site's traffic grows, or you land on the Digg or Yahoo! front page, a 100Mbps uplink simply will not cope with the load. If your upstream provider offers 1Gbps ports, go for it if your MRTG graphs show the 100Mbps uplink maxing out from time to time (or even just peaking around 80Mbps).

If your upstream provider doesn't offer 1Gbps links and your server has two NIC ports, you can set up link aggregation (network bonding) in IEEE 802.3ad mode and increase your server's uplink to 200Mbps, or more if your server has additional network ports available. You can just as easily run 802.3ad (mode 4) across multiple 1Gbps ports; keep in mind that with the 802.3ad channel bonding method all of your uplinks must be connected to the same switch. The Linux bonding driver supports several channel bonding modes: balance-rr, active-backup, balance-xor, broadcast, 802.3ad, balance-tlb and balance-alb. We will look at 802.3ad mode, as it provides good performance and increases availability through multiple uplinks (although, since you are limited to one switch, the server's connectivity still fails if that switch does).
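Once the bond is up, you can confirm which mode the driver is actually running in via sysfs (this path assumes the bonding module is loaded on a reasonably recent kernel):

cat /sys/class/net/bond0/bonding/mode
# expected output: 802.3ad 4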

In our link aggregation tests with mode=4 (802.3ad) we bonded two network uplinks together (2x 1Gbps Intel NIC ports on an HP DL rackmount server with a 2GHz CPU and 512MB RAM) and ran iperf performance tests from six remote boxes on different VLANs and switches. We measured a top aggregate speed of 1590Mbps in TCP traffic mode; the one test server connected to the same switch as the bonded box achieved the best single-client speed, roughly 700Mbps. The test systems used CAT 5e cabling, and the links between the upstream Cisco switches and the servers were 1Gbps.
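For reference, a minimal sketch of the kind of test we ran, assuming iperf (version 2) is installed on both ends and using the bond0 address from the setup below:

# on the bonded server: listen for TCP test traffic
iperf -s

# on each remote box: 60 seconds of TCP traffic, 4 parallel streams
iperf -c 10.10.1.1 -t 60 -P 4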

A typical Linux setup for network channel bonding in 802.3ad mode is as follows (run the commands from the command line, or add them to rc.local so they execute at system startup):

modprobe bonding mode=4 miimon=100 downdelay=200 updelay=200
ip addr add 10.10.1.1/24 brd + dev bond0
ip link set dev bond0 up
ifenslave bond0 eth1 eth0
route add default gw 10.10.1.254
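Note that 802.3ad will only negotiate if the switch ports are configured for LACP as well. On a Cisco IOS switch, the switch-side counterpart of the setup above looks roughly like this (the port range and channel-group number are illustrative; if you do not manage the switch, ask your provider to enable LACP on your ports):

conf t
interface range GigabitEthernet0/1 - 2
 channel-group 1 mode active
end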

To make the settings persistent, we also added:

alias bond0 bonding
options bond0 mode=4 miimon=100 updelay=200 downdelay=200

to the /etc/modprobe.conf file (the delay values here match the MII polling settings shown in /proc/net/bonding/bond0 below; updelay and downdelay must be multiples of miimon).
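On Red Hat style distributions you can alternatively make the whole setup persistent with ifcfg files instead of rc.local; a minimal sketch using the device names and addresses from the example above:

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=10.10.1.1
NETMASK=255.255.255.0
GATEWAY=10.10.1.254
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth0 (repeat for eth1)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none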

Please note that we had to reboot the box for the new settings to take effect.
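If you prefer to avoid a reboot, the module can be reloaded with the new parameters instead, but only do this from a console or KVM session, since tearing the bond down drops all connectivity; a rough sequence:

ifenslave -d bond0 eth0 eth1
ip link set dev bond0 down
rmmod bonding
modprobe bonding mode=4 miimon=100 updelay=200 downdelay=200
ip addr add 10.10.1.1/24 brd + dev bond0
ip link set dev bond0 up
ifenslave bond0 eth0 eth1
route add default gw 10.10.1.254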

Once bonding has been set up, ifconfig will show a new device called bond0:

bond0     Link encap:Ethernet  HWaddr 00:0c:cf:33:99:c0
          inet addr:10.10.1.1  Bcast:10.10.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:100195129 errors:0 dropped:0 overruns:0 frame:0
          TX packets:38177579 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:2541440597 (2423.7 Mb)  TX bytes:2655446784 (2532.4 Mb)

You can also review the contents of:

cat /proc/net/bonding/bond0

In our case the output is as follows:

Ethernet Channel Bonding Driver: v3.0.3 (March 23, 2006)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 200
Down Delay (ms): 200

802.3ad info
LACP rate: slow
Active Aggregator Info:
Aggregator ID: 6
Number of ports: 2
Actor Key: 17
Partner Key: 5
Partner Mac Address: 00:0c:cf:33:99:c0

Slave Interface: eth1
MII Status: up
Link Failure Count: 1
Permanent HW addr: 00:55:66:66:77:79
Aggregator ID: 6

Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:55:66:66:77:77
Aggregator ID: 6
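A quick sanity check is that both slaves report the same Aggregator ID as the active aggregator; if they differ, one link has negotiated its own aggregate and you are not actually getting the combined bandwidth:

grep "Aggregator ID" /proc/net/bonding/bond0
# Aggregator ID: 6
# Aggregator ID: 6
# Aggregator ID: 6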

Don't forget that changing server IPs may break connectivity and leave you unable to access the box remotely, so plan the change carefully, ideally with console access at hand. I hope this helps you increase your server's performance and offers better speeds and lower latency for your web visitors as well. Enjoy!

