Link Aggregation vs IP Multipathing

We introduced Link Aggregation capabilities (based on IEEE 802.3ad) in Solaris as part of the Nemo project (a.k.a. GLDv3). I described the Solaris Link Aggregation architecture in a previous blog entry, and it is also documented in the dladm(1M) man page. Link aggregations provide high availability and higher throughput by aggregating multiple interfaces at the MAC layer. IP Multipathing (IPMP) provides similar features, such as higher availability, but at the IP layer.

Both IPMP and Link Aggregation are based on the grouping of network interfaces, and some of their features overlap, such as higher availability. These technologies are, however, implemented at different layers of the stack and have different strengths and weaknesses. The list below is my attempt to compare and contrast Link Aggregation and IPMP.

I should disclaim that I was responsible for designing and implementing Link Aggregation in Solaris, but I made every effort to keep the list below balanced and neutral :-)

  • Link aggregations are created and managed through dladm(1M). Once created, they behave like any other physical NIC to the rest of the system. The grouping of interfaces for IPMP is done using ifconfig.
  • Both link aggregations and IPMP support link-based failure detection, i.e. the health of an interface is determined from the state of the link reported by the driver.
  • The equivalent of IPMP's probe-based failure detection is provided by LACP (Link Aggregation Control Protocol) in the case of link aggregations. LACP is lighter weight than the ICMP-based probing implemented by IPMP, since it operates at the MAC layer and doesn't require test addresses.
  • Link aggregations currently don't allow you to have separate standby interfaces that are not used until a failure is detected. If a link is part of an aggregation, it will be used to send and receive traffic if it is healthy. We're looking at providing that feature as part of an RFE.
  • Aggregating links from one host to two different switches through a single aggregation is not supported. Link aggregations form trunks between two end-points, which can be hosts or switches, as per IEEE 802.3ad. They are implemented at the MAC layer and require all the constituent interfaces of an aggregation to use the same MAC address. Since IPMP is implemented at the network layer, it doesn't have that limitation.
  • Link aggregations currently require the underlying driver to use Nemo/GLDv3 interfaces (the list currently includes bge, e1000g, xge, nge, rge, ixgb).
  • Link aggregations require hardware support, i.e. if an aggregation is created on Solaris, the corresponding ports on the switches need to be aggregated as well. Some of this configuration can be automated using LACP. IPMP does not require such configuration.
  • IPMP is link-layer agnostic, link aggregations are Ethernet-specific.
  • Link aggregations provide finer-grained control over the load balancing used to spread outbound traffic across aggregated links; for example, traffic can be distributed based on transport protocol port numbers instead of MAC addresses. dladm(1M) also allows the inbound and outbound distribution of traffic over the constituent NICs to be easily observed.
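To make the administrative contrast above concrete, here is a sketch of what each configuration might look like on Solaris. The device names (bge0, bge1), addresses, and the IPMP group name "prod" are illustrative assumptions, not taken from the original post:

```
# Link aggregation: create aggregation key 1 over bge0 and bge1,
# enabling LACP in active mode and an L4 (transport port) policy.
dladm create-aggr -l active -P L4 -d bge0 -d bge1 1
dladm show-aggr -s 1     # observe per-port traffic distribution

# IPMP: group interfaces at the IP layer with ifconfig instead.
# (Probe-based detection additionally needs test addresses,
# typically configured with "addif ... -failover deprecated".)
ifconfig bge0 plumb 192.168.1.10/24 group prod up
ifconfig bge1 plumb 192.168.1.11/24 group prod up
```

Note how the aggregation appears to the rest of the system as a single interface (aggr1), while the IPMP group leaves the individual interfaces visible at the IP layer.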

It's also worth pointing out that IPMP can be deployed on top of link aggregations to maximize performance and availability. Of course, these two technologies are still being actively developed, and some of the shortcomings of either technology listed above will be addressed with time. IPMP, for instance, is currently undergoing a rearchitecture. Several improvements to Nemo are also in progress, such as making link aggregations available to any device on the system, as described in the Nemo Unification design document. Thanks to Peter Memishian for checking my facts on IPMP and for his contributions.
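A combined deployment could look like the following sketch: two aggregations, each trunked to a different switch (working around the single-switch limitation of 802.3ad), grouped by IPMP at the IP layer. Again, the device names, addresses, and group name are hypothetical:

```
# Two aggregations, each cabled to a different switch.
dladm create-aggr -d bge0 -d bge1 1
dladm create-aggr -d e1000g0 -d e1000g1 2

# Group the resulting aggr1 and aggr2 interfaces under IPMP,
# with aggr2 held as a standby until a failure is detected.
ifconfig aggr1 plumb 192.168.1.10/24 group prod up
ifconfig aggr2 plumb standby group prod up
```

This gives MAC-layer load spreading within each trunk plus IP-layer failover between switches.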



Hello Nicolas, I have been thinking about deploying some sort of HA-like solution at our server but didn't know which would be better in our case. Your article came right on time :-) (thanks!) but I am still not sure which would be better. We use S10 GA + zones + ce NICs. In the future we are going to set up a firewall on the server. Which solution would you recommend for us? przemol

Posted by przemol on May 04, 2006 at 12:12 AM PDT #

przemol: thanks for your comment. Since you're using ce interfaces you can't use the native Solaris link aggregation, at least until the Nemo Unification project integrates. Unless you want to try a different NIC (see the supported list above), you are limited to IPMP or the unbundled SunTrunking software for ce. The best solution for you will depend on your requirements and constraints, it's difficult to say without all the data.

Posted by Nicolas on May 05, 2006 at 09:51 AM PDT #

Thanks Nicolas for your answer! You don't even know how helpful it was. Really! First, as you pointed out, ce doesn't allow us to use link aggregation. Second, I just forgot about Sun Trunking. So I have to compare ST and IPMP. By the way, if you (Sun) offer similar products (IPMP, ST, LA), it would be good to have a page which compares their advantages and disadvantages.

Posted by przemol on May 06, 2006 at 02:06 AM PDT #
