Solaris 11 datalink multipath

The release notes of Solaris 11.1 mention a new network trunking feature that requires neither LACP nor the setup of IPMP. I finally had time to take a closer look at it. The motivations for trunking or aggregation are availability and load-balancing: the idea is to combine multiple interfaces so that the system survives the failure of a NIC, cable or switch (availability) and to allow higher throughput than a single interface could provide (load-balancing). With Solaris 11.1 there are three basic methods to choose from, and I want to briefly introduce them. There is actually a pretty good comparison chart in the documentation.

IPMP has its strengths in being able to test and verify network connectivity beyond simply looking at the link state by introducing probe IP addresses. It does not require any special setup on the switch side, and it at least spreads outgoing traffic across all active interfaces. My major pain point with IPMP is that you cannot (easily) do link-level things with an IPMP group, like creating VNICs on top of it, or use it with applications that need access to a (layer 2) network interface, such as Sun Ray dedicated networks or RAC cluster private networks (the clusterware expects an interface on which to configure VIPs and SCAN IPs).
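For reference, a basic IPMP group in Solaris 11 is built with ipadm roughly like this (the link names and the address are just examples):

    # create IP interfaces on the two physical links
    ipadm create-ip net0
    ipadm create-ip net1
    # create the IPMP group interface and add both links to it
    ipadm create-ipmp ipmp0
    ipadm add-ipmp -i net0 -i net1 ipmp0
    # the data address lives on the group, not on the individual links (example address)
    ipadm create-addr -T static -a 192.168.0.10/24 ipmp0/v4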

The “traditional” link aggregation feature solves that problem by aggregating on the device level and providing a new device, e.g. aggr1, that can be used just like any other interface (net0, e1000g0, …). The downside here is that you need LACP support on your network for things to work smoothly, and it can be a bit of a pain to set this up across separate switches. This is the method we use in our datacenter.
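Setting up such a trunk aggregation with LACP is a one-liner on the Solaris side; the link names and address below are examples, and the switch ports still need a matching LACP configuration:

    # trunk-mode aggregation with active LACP across net0 and net1
    dladm create-aggr -L active -l net0 -l net1 aggr1
    # plumb the aggregation like any other interface (example address)
    ipadm create-ip aggr1
    ipadm create-addr -T static -a 192.168.0.11/24 aggr1/v4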

But there is good news for you with Solaris 11.1 if you cannot or do not want to use LACP. It is called DLMP, short for datalink multipathing, and it is what this blog post focuses on.

DLMP does not require LACP or any other configuration on the switch, works across multiple switches and is perfectly suited for use with Solaris 11 network virtualization because you can easily create VNICs on a DLMP aggregation. Each VNIC sticks to one physical interface, so a single VNIC never gets more bandwidth than a single underlying link provides, but when you configure multiple VNICs, the load is spread across all links.
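A minimal sketch of such a setup could look like this (interface and VNIC names are examples):

    # DLMP aggregation across two physical links, no switch configuration needed
    dladm create-aggr -m dlmp -l net0 -l net1 aggr0
    # VNICs are created on the aggregation just like on any other datalink
    dladm create-vnic -l aggr0 vnic0
    dladm create-vnic -l aggr0 vnic1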

Unfortunately, as of now, there is no way to influence the binding of a VNIC to an interface. However, tests have shown that this mapping is simply done in the order of creation. So if you create an aggregation of two interfaces and then create 5 VNICs on top of that, vnic0, vnic2 and vnic4 will be mapped to net0 and the other two to net1. If any of the underlying interfaces fails (we tested this by turning off the port on the switch side), the VNICs fail over to the remaining interface without an issue. When we bring the port back up, the VNICs switch back. This was somewhat nondeterministic, though: sometimes 0, 2 and 4 switched back to their original interface, and at other times 1 and 3 switched over to the other interface, thereby swapping the original mapping. In any case, the “group” of VNICs always stayed together.
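If you want to reproduce the test, one way (assuming the aggr0 and vnic names from the sketch above) is to create the five VNICs in one go and keep an eye on the port and VNIC state while toggling the switch port:

    # create five VNICs on the DLMP aggregation (names are examples)
    for i in 0 1 2 3 4; do dladm create-vnic -l aggr0 vnic$i; done
    # extended view of the aggregation shows the state of the underlying ports
    dladm show-aggr -x aggr0
    # list the VNICs and the datalink they were created over
    dladm show-vnic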
