We are using Cisco AnyConnect to provide VPN for our users (which we authenticate via RADIUS). Recently we needed to provide some users with static IP addresses.
For IPv4 this was easy, since the ASA we are using supports RADIUS attribute 8 (Framed-IP-Address). For IPv6 there is an equivalent RADIUS attribute, 168 (Framed-IPv6-Address), defined in RFC 6911. Unfortunately the ASA doesn’t support that attribute.
But thanks to Cisco support we found out that the ASA does support RFC 3162, starting with version 9.0(1) (Cisco bug ID CSCtr65342). RFC 3162 defines RADIUS attributes 96 (Framed-Interface-Id) and 97 (Framed-IPv6-Prefix). With these two you can easily assign static IPv6 addresses to your AnyConnect users.
To assign a user the address 2001:db8::42/64 you just set the following attributes in your RADIUS server:
Framed-IPv6-Prefix = 2001:db8:0:0::/64
Framed-Interface-Id = 0:0:0:42
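In a FreeRADIUS users file, for example, the whole entry could look like this (a sketch assuming FreeRADIUS users-file syntax; the username, password and IPv4 address are placeholders):

```
alice Cleartext-Password := "s3cr3t"
      Service-Type = Framed-User,
      Framed-IP-Address = 192.0.2.42,
      Framed-IPv6-Prefix = 2001:db8:0:0::/64,
      Framed-Interface-Id = 0:0:0:42
```

The ASA combines the prefix from attribute 97 with the interface identifier from attribute 96 to form the full address.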
To set a static IPv6 address for local users, one can use vpn-framed-ipv6-address, which is documented in the Release Notes and the Command Reference.
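For a local user on the ASA itself, that could look like this (a sketch based on the documented vpn-framed-ipv6-address command; the username, password and address are examples):

```
username alice password s3cr3t
username alice attributes
 vpn-framed-ipv6-address 2001:db8::42/64
```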
Update November 2015:
Support for this is currently broken in ASA 9.1(6) and a couple of other versions; there is an open Cisco bug, CSCus34033. In the 9.1 release train (which we use, since we are running an ASA 5520) it should be fixed in ASA 9.1(7), coming up in January.
I just stumbled across this and could not find it anywhere else on the net. I set up a ZFS appliance with Oracle VM and their StorageConnect plugin according to the documentation PDF (which provides pretty easy step-by-step instructions), but in this case the OVM server and the ZFS appliance were not in the same network, and access between those nets is denied by default by the firewall. So trying to register the appliance as FC storage led to an error that only told me that the connection timed out.
I have previously written about my experience with Wi-Fi on a plane; today I am at the OUGN conference on a cruise ship between Oslo and Kiel, which offers a satellite internet link. Simon Haslam just asked via Twitter what the experience was like, since he was unfortunately not able to join this part of the trip but was curious. Obviously the connection is good enough to use Twitter and also do a bit of work on the internet, but due to the high latency of several seconds it is not as interactive and usable as one might be used to. It also makes you appreciate websites that deliver as much information as possible with as few extra elements as possible. But there is no reason to complain: there are a bunch of fantastic Oracle speakers on this boat and the view is awesome.
The release notes of Solaris 11.1 mention a new network trunking feature that does not require LACP or the setup of IPMP. I finally had time to take a closer look at it. The motivation for trunking or aggregation is availability and load balancing: the idea is to combine multiple interfaces so that the system survives the failure of a NIC, cable or switch (availability) and also allows a higher throughput than a single interface could provide (load balancing). With Solaris 11.1 there are three basic methods to choose from, and I want to briefly introduce them. There actually is a pretty good comparison chart in the documentation.
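The new method in question is DLMP (datalink multipathing) aggregation, which can be set up with a single dladm command (a sketch; net0 and net1 are placeholder link names):

```
# create a DLMP (datalink multipathing) aggregation over two links,
# no LACP support needed on the switch side
dladm create-aggr -m dlmp -l net0 -l net1 aggr0

# verify the aggregation and its member links
dladm show-aggr aggr0
```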
Yesterday, when flying Lufthansa from Munich to San Francisco on an Airbus A340-600 for Oracle OpenWorld, I had the pleasure of using the internet for the full duration of the flight. This is not really new, but it was the first time it was available on any of my flights, and I was as happy as a little boy in a candy store about not having to be “offline” for the 11+ hours. Instead of dozing off to in-flight movies I spent most of the time nagging people on Facebook and Twitter and answering all those emails that had piled up in my inbox over the past few days.
I get a huge load of commercial mails from network, hardware and software vendors telling me to check the latest results of some very important benchmark you have never heard of, download a biased whitepaper or attend a useless webcast with sales people. Most of these messages end up in the trash really fast, but I did like Cisco’s most recent campaign. It made me smile, read the whole thing and even watch the video. I am still not signing up for their webcast though.
When we upgraded our datacenters to full native IPv6 we also began to enable IPv6 on our Solaris servers.
What works really well in global Solaris zones and in Solaris LDOMs can be a big pain in shared-IP Solaris zones.
So what is the catch with Solaris zones? When you create a new Solaris zone you can add an IPv6 address just as easily as an IPv4 address, but as you may know, IPv6 needs a so-called link-local address to work properly, or else IPv6 Neighbor Discovery won’t work.
There are two problems here. When you use an auto-configured route in the global zone, the shared-IP zone will know about it (because they share the network stack) but cannot use it, since it is a link-local route and the shared-IP zone does not have a link-local interface. This problem can easily be avoided by setting a route to a global address in the global zone. Now the shared-IP zone knows about the correct gateway, but that is where we get to the second problem: the shared-IP zone can neatly resolve the MAC address of the router via ND, but the router cannot resolve the MAC address of the zone via ND. When you snoop on the interface in the global zone you will see that the neighbor solicitation request arrives on the interface but is somehow not answered.
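The workaround for the first problem is simply a static IPv6 default route via the router’s global address, set in the global zone (a sketch; the gateway address is a placeholder):

```
# in the global zone: persistent IPv6 default route via the router's
# global address instead of the auto-configured link-local one
route -p add -inet6 default 2001:db8:0:113::1
```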
So how do we get around the second problem? We have to add another IPv6 address to the shared-IP zone: a link-local one. What does this usually look like? The link-local segment is fe80::/10. Usually the link-local address is generated on a per-interface basis as fe80::xxxx:xxxx:xxxx:xxxx/10, where xxxx:xxxx:xxxx:xxxx is the modified EUI-64 address. Since the modified EUI-64 address is already used by the link-local interface in the global zone, we have to come up with another address. In our case I simply use fe80::xxxx/10 for our shared-IP zones, where xxxx is the last 32 bits of the zone’s global IPv6 address. Let’s assume the zone’s global IPv6 address is 2001:db8:0:113::133/64; this would make its link-local address fe80::133/10.
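The extra link-local address can then be added to the zone like any other address (a sketch; e1000g0 is a placeholder interface name):

```
root@solaris10u9:~# zonecfg -z test
zonecfg:test> add net
zonecfg:test:net> set address=fe80::133/10
zonecfg:test:net> set physical=e1000g0
zonecfg:test:net> end
zonecfg:test> commit
zonecfg:test> exit
```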
This is what a shared-IP zone config with a global and a link-local IPv6 address could look like (the interface name will of course vary):
root@solaris10u9:~# zonecfg -z test info
...
net:
        address: 2001:db8:0:113::133/64
        physical: e1000g0
        defrouter not specified
net:
        address: fe80::133/10
        physical: e1000g0
        defrouter not specified
With this setup we now have fully working IPv6 in Solaris shared-IP zones.
Recently we acquired rack space in another data center in Munich, since we outgrew our capacity in the old one. In the new data center we have uplinks to two different internet providers and one to the ALP-IX, in contrast to the old one, where we had just a redundant uplink to a single internet provider. This gives us a lot of room to grow, both space- and bandwidth-wise.
Additionally, while working on both our old and new infrastructure, we managed to enable IPv6 on our whole network, which means we are now able to provide every customer with native IPv6 support for all of their machines in both data centers.
RIPE NCC Member
Last week portrix Systems became a RIPE NCC LIR. As a LIR we are able to request IPv4 and IPv6 address space directly from the RIPE NCC. This makes us independent of other service providers and allows us to keep our IP addresses even when changing providers or moving to another data center.
We also got the approval of our first IPv4 address space (a /21 network, which leaves 32 − 21 = 11 host bits and thus equals 2.048 IP addresses) and our first IPv6 address space (a /32 network, which leaves 128 − 32 = 96 bits and thus equals 2^96 = 79.228.162.514.264.337.593.543.950.336 IP addresses).