random notes on the Oracle Database Appliance (ODA)

I recently had the chance to work with an Oracle Database Appliance again; my last experience was a while ago. That is really a shame, since I think this is an excellent system that lets you focus on rolling out systems and working with data instead of spending a lot of time fiddling with infrastructure.

The evolution of 10GbE network interfaces

The original ODA V1 offered six 1000Base-T copper ports for external connections, plus two additional 10GbE SFP+ ports for high-speed fiber uplinks.

The next generation, the ODA X3-2, reduced the externally available port count to four, but all of them now support 10/100/1000Base-T as well as 10GBase-T. The advantage: if your network supports 10GbE over copper, you can use it without having to change anything. But if you had invested in SFP+-based switches earlier, you were now left out in the cold. And since 10GBase-T requires more power than an SFP+ slot can provide, you won't find adapters to make this work. The interconnect between the nodes is implemented with crossover cables on an extra dual-port 10GbE PCIe card.

The newest model, the ODA X4-2, improves the situation. The PCIe card is now dual-port SFP+ and is still used for the interconnect with crossover cables by default. But customers can choose to use two of the other four ports for the interconnect instead, which frees up the two SFP+ ports for fiber uplinks.

Streamlined delivery? Everything preinstalled?

This one really annoys me. It takes Oracle several weeks to deliver the hardware, apparently because they "streamlined" the process to build-to-order. Just why, oh why, can they not at least try to put the latest firmware on the box? Why is it my task to always check for and download updates? The system was delivered in December with release 2.6. The latest release (as of today) is 2.8 (November 2013), and 2.7 had already been released in August 2013. So the first hour(s) were spent downloading and applying the 2.8 base release. It takes a while, but it is also quite impressive to see all the components being patched with just one command: BIOS, ILOM, controller firmware, disk firmware, the OS and possibly even more.
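On the ODA this one-command patching is driven by the oakcli utility. As a rough sketch of the workflow, assuming the 2.8 patch bundle has already been downloaded from My Oracle Support and copied to the appliance (the bundle file name below is a placeholder, not the real name):

```shell
# Unpack the downloaded patch bundle on the appliance
# (file name is a placeholder -- use the actual bundle from My Oracle Support)
oakcli unpack -package /tmp/oda_patch_bundle_2.8.zip

# Dry run: report which components would be updated, without changing anything
oakcli update -patch 2.8.0.0.0 --verify

# Apply the patch to the whole stack: BIOS, ILOM, firmware, OS, ...
oakcli update -patch 2.8.0.0.0
```

The verify step is what makes the long wait bearable: you can see up front which firmware versions are out of date before committing to the rolling update.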

Re-Imaging for virtual deployments

After waiting for all the patches it was time to wait yet again, this time for the reimaging to the virtualized ODA environment. This basically reinstalls the base OS with Dom0 and the management tools. I don't have much to complain about here; it works quite well, but be prepared to sit in front of a blue screen that says "running post-install scripts" for up to two hours. You can spend that time reading Yury's analysis of what is happening during that period.

[Image: ACFS Cloud Filesystem export for VM disks]

ODA shared storage repository

This is a clever new feature that impressed me. In a virtualized ODA deployment, the shared disks are only available from the database VMs because the SAS controllers are mapped directly to those machines. At first, that seems to prevent a shared storage repository for other virtual machines, which would be needed for HA (restart a VM when the host crashes) or live migration. The way Oracle worked around this is by exporting an ACFS mount via NFS from the database VMs to Dom0. Clever. I'd be interested to benchmark performance on these virtual disks, though. From disk through ASM, ADVM, ACFS, NFS and vdisk to some filesystem in another DomU is quite an impressive path.
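Conceptually this is a plain NFS share from the database VM to Dom0, with VM virtual disks stored on the exported ACFS filesystem. A minimal sketch of what that looks like (the paths, IP addresses and mount options here are assumptions for illustration, not the actual ODA configuration):

```shell
# /etc/exports on the database VM: export the ACFS mount that hosts
# the shared repository to Dom0 (address and options are assumptions)
/u01/app/sharedrepo  192.168.16.24(rw,no_root_squash,sync)

# On Dom0: mount the export so that virtual disks stored on ACFS
# become reachable from the hypervisor for other DomUs
mount -t nfs 192.168.16.27:/u01/app/sharedrepo /OVS/Repositories/sharedrepo
```

Every read and write from a guest's vdisk then travels the full ASM/ADVM/ACFS/NFS path described above, which is why I'd want to benchmark it before putting anything I/O-heavy on it.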

The virtualized ODA also supports and implements Linux huge pages, even across reconfigurations. This was not supported in Oracle VM 2, and release 3 only has very basic support for it: you have to hack the setting into the VM config file and will lose it again after making changes to the VM in the GUI.
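Inside the guest, huge pages themselves are configured the usual Linux way, which is what the ODA tooling preserves for you. For example, reserving a pool of 2 MiB huge pages for the SGA via sysctl (the page count is an arbitrary example; size it to your SGA):

```shell
# /etc/sysctl.conf in the database VM: reserve 2 MiB huge pages
# (2048 pages = 4 GiB; the count here is an example, not a recommendation)
vm.nr_hugepages = 2048

# Apply without a reboot and verify the pool was actually reserved
sysctl -p
grep Huge /proc/meminfo
```

The annoying part on plain Oracle VM 3 is not this guest-side setting but keeping the corresponding hypervisor-side configuration alive across GUI edits; the virtualized ODA handles that for you.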
