We are using Cisco AnyConnect to provide VPN for our users (which we authenticate via RADIUS). Recently we needed to provide some users with static IP addresses.
For IPv4 this was easy since the ASA we are using supports RADIUS attribute 8 (Framed-IP-Address). For IPv6 there is an equivalent RADIUS attribute 168 (Framed-IPv6-Address) defined in RFC 6911. Unfortunately the ASA doesn’t support that attribute.
But thanks to Cisco support we found out that the ASA (starting with version 9.0(1)) does support RFC 3162 (Cisco bug ID CSCtr65342). RFC 3162 defines RADIUS attributes 96 (Framed-Interface-Id) and 97 (Framed-IPv6-Prefix). With these two you can easily provide static IPv6 addresses to your AnyConnect users.
To assign a user the address 2001:db8::42/64 you just set the following attributes in your RADIUS server:
Framed-IPv6-Prefix = 2001:db8:0:0::/64
Framed-Interface-Id = 0:0:0:42
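The mapping from one assigned address to these two attributes is mechanical: the upper 64 bits become the prefix, the lower 64 bits become the interface ID. A small illustrative sketch in Python (the helper name is our own; note that Python prints the prefix in compressed form rather than as 2001:db8:0:0::/64):

```python
import ipaddress

def to_radius_attrs(assignment: str):
    """Split a static IPv6 assignment like 2001:db8::42/64 into the
    values for Framed-IPv6-Prefix and Framed-Interface-Id."""
    iface = ipaddress.IPv6Interface(assignment)
    prefix = iface.network                      # -> Framed-IPv6-Prefix
    iid = int(iface.ip) & (2**64 - 1)           # low 64 bits of the address
    # Format the interface ID as four 16-bit hex groups
    iid_str = ":".join(f"{(iid >> s) & 0xFFFF:x}" for s in (48, 32, 16, 0))
    return str(prefix), iid_str

prefix, iid = to_radius_attrs("2001:db8::42/64")
print(prefix)  # 2001:db8::/64
print(iid)     # 0:0:0:42
```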
To set a static IPv6 address for local users you can use vpn-framed-ipv6-address, which is documented in the Release Notes and the Command Reference.
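For a local user this ends up as a sketch along these lines in the ASA config (the username here is a placeholder):

```
username alice attributes
 vpn-framed-ipv6-address 2001:db8::42/64
```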
Update November 2015:
Support for this is currently broken in ASA 9.1(6) and a couple of other versions; see the open Cisco bug CSCus34033. In the 9.1 release train (which we use since we are on an ASA 5520) it should be fixed in ASA 9.1(7), coming up in January.
While working with Oracle VM one thing we stumbled across was that it always creates a bond interface for management purposes. While it is perfectly fine to have a bond interface for failover reasons, the interface is created in Active Backup mode by default and, according to the documentation, cannot be modified. In Active Backup mode one interface is basically wasted (because only one interface is used at a time). OVM does offer two other modes though: Dynamic Link Aggregation and Adaptive Load Balancing.
Since we have LACP-capable network infrastructure we wanted to use Dynamic Link Aggregation instead of Active Backup, but OVM doesn't offer a way to change the mode of the bond0 interface. That said, this is only true for the web interface. When you SSH into the OVM server itself (the actual server where you want to change the bond, not the manager server) you can change the mode of the bond. This is the way we chose.
When doing this you should make sure that there are no virtual machines running on that box at the moment, because it might lead to a reboot (and if it doesn't, you want to reboot anyway). Once connected via SSH you simply execute the following command:
/etc/xen/scripts/ovs-bond-mode bondm=bond0 bondmode=4
This will change the config files for that interface and reconfigure the interface. On 2 out of 3 servers this led to an instant reboot of the machine. On the one that didn't reboot, the interface came back up right away and was now in Dynamic Link Aggregation mode. The web interface however still showed Active Backup mode; only after a reboot did it show the correct mode (that is why you want to reboot anyway). The servers which rebooted instantly came up with the interface in the right mode, and the web interface showed the right mode too.
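For reference, bonding mode 4 is the Linux bonding driver's 802.3ad (LACP) mode, which OVM calls Dynamic Link Aggregation. After the change, the interface config ends up with something along these lines (a sketch only; the exact file layout and the miimon value on an OVM server are assumptions):

```
# /etc/sysconfig/network-scripts/ifcfg-bond0 (excerpt, sketch)
DEVICE=bond0
BONDING_OPTS="mode=4 miimon=100"   # mode 4 = 802.3ad / Dynamic Link Aggregation
```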
From now on when provisioning new OVM servers this is the very first thing we execute on every server.
UPDATE 2013-06-17 Bjoern Rost: The script does not exist anymore in version 3.2 of OVM, but changing the bond mode is now possible through the OVM Manager GUI. It still led to a reboot every time I tried (without a warning), but other than that it works and looks much more like a supported action than the old way of calling the script directly.
When we upgraded our datacenters to full native IPv6 we also began to enable IPv6 on our Solaris servers.
What works really well on global Solaris zones and on Solaris LDOMs can be a big pain on shared Solaris zones.
So what is the catch with Solaris zones? When you create a new Solaris zone you can add an IPv6 address just as easily as an IPv4 address, but as you may know, IPv6 needs a so-called 'link-local' address to work properly, or else IPv6 Neighbor Discovery won't work.
We have two problems here. When you use an auto-configured route on the global zone, the shared zone will know about it (because they share the network stack) but can't use it, since it is a link-local route and the shared zone doesn't have a link-local interface. This problem can easily be avoided by setting a route to a global address in the global zone. Now the shared zone knows about the correct gateway, but that is where we get to the second problem. The shared zone can neatly resolve the MAC address of the router via Neighbor Discovery, but the router cannot resolve the MAC address of the zone the same way. When you snoop on the interface in the global zone you will see that the "neighbor solicitation" request arrives on the interface, but somehow it is not answered.
So how do we get around the second problem? We have to add another IPv6 address to the shared zone: a link-local one. What does this usually look like? The link-local segment is fe80::/10. Usually the link-local address is generated on a per-interface basis as fe80::xxxx:xxxx:xxxx:xxxx/10, where xxxx:xxxx:xxxx:xxxx is the modified EUI-64 address. Since the modified EUI-64 address is already used by the link-local interface in the global zone, we have to come up with another address. In our case I just use the following address for our shared zones: fe80::xxxx/10, where xxxx is the last 32 bits of the zone's global IPv6 address (written in compressed form). Let's assume our global IPv6 address for the zone is 2001:db8:0:113::133/64; this makes our link-local address for that zone fe80::133/10.
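This derivation is easy to express in code. A small sketch using Python's ipaddress module (the function name is our own, and this address scheme is our local convention, not a standard):

```python
import ipaddress

def zone_link_local(global_addr: str) -> str:
    """Derive a link-local address for a shared zone by reusing the
    last 32 bits of the zone's global IPv6 address (our own scheme)."""
    addr = ipaddress.IPv6Address(global_addr)
    low32 = int(addr) & 0xFFFFFFFF          # keep only the last 32 bits
    # Put them on top of the fe80:: base address
    ll = ipaddress.IPv6Address((0xFE80 << 112) | low32)
    return str(ll)

print(zone_link_local("2001:db8:0:113::133"))  # fe80::133
```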
This is what a shared zone config with global and link-local IPv6 addresses could look like:
root@solaris10u9:~# zonecfg -z test info net
net:
	address: 2001:db8:0:113::133/64
	physical: ...
	defrouter not specified
net:
	address: fe80::133/10
	physical: ...
	defrouter not specified
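Adding the link-local address to an existing zone works like adding any other net resource; a sketch of the zonecfg session (the physical interface name e1000g0 is a placeholder):

```
zonecfg -z test
zonecfg:test> add net
zonecfg:test:net> set address=fe80::133/10
zonecfg:test:net> set physical=e1000g0
zonecfg:test:net> end
zonecfg:test> commit
zonecfg:test> exit
```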
With this setup we now have fully working IPv6 in Solaris shared zones.
Recently we acquired rack space in another data center in Munich, since we outgrew our capacity in the old one. In the new data center we have uplinks to two different internet providers and one to the ALP-IX, in contrast to the old one, where we had just a redundant uplink to a single internet provider. This gives us a lot of room to grow, both in space and in bandwidth.
Additionally, while working on both our old and new infrastructure, we managed to enable IPv6 on our whole network, which means we are now able to provide every customer with native IPv6 support for all of their machines in both data centers.
RIPE NCC Member
Last week portrix Systems became a RIPE NCC LIR. As a LIR you are able to request IPv4 and IPv6 address space directly from the RIPE NCC. This makes us independent of other service providers and gives us the ability to keep our IP addresses even if we change providers or move to another data center.
We also got approval for our first IPv4 address space (a /21 network, which equals 2.048 IP addresses) and our first IPv6 address space (a /32 network, which equals 79.228.162.514.264.337.593.543.950.336 IP addresses).
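The address counts follow directly from the prefix lengths; a quick sketch:

```python
def addr_count(total_bits: int, prefix_len: int) -> int:
    """Number of addresses covered by a routing prefix:
    2 ** (address_bits - prefix_length)."""
    return 2 ** (total_bits - prefix_len)

print(addr_count(32, 21))    # IPv4 /21 -> 2048 addresses
print(addr_count(128, 32))   # IPv6 /32 -> 79228162514264337593543950336 addresses
```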