While Oracle may – even with the newest release – not be web scale, it sure has flex everything in RAC. These are actually really great features, but with the names being so similar (just like flash and flashback in current releases), it is easy to get them confused, and I had a very hard time explaining these technologies to Martin Berger and Allan Robertson via twitter. The two features I want to write about today are Flex ASM and Flex Cluster, and I was lucky enough to sit in two excellent presentations by Markus Michalewicz and Nitin Vengurlekar at Collaborate 2013.
Flex ASM basically allows a database instance to use an ASM instance that is not local to its server node and even fail over to another remote instance if the ASM instance it is connected to fails. Why is this important? In 11g, the more instances you run on a single server, the more dependent you become on that one ASM instance. If it fails or has to be brought down for maintenance, the database instances on that node have to shut down as well. When setting up 12c clusters, DBAs have the option to choose between “normal mode” (required if you want to run 11g databases on your 12c grid) and the new flex mode for ASM. In flex mode, clusters with more than 3 nodes only run 3 ASM instances, much like SCAN works. Unlike SCAN though, a two node cluster would have 2 ASM instances, not 3. The overall architecture has not actually changed dramatically. ASM has never been in the I/O path of an instance; the job of the ASM instance was merely to manage diskgroups (add, drop, rebalance disks) and transfer extent maps, basically mapping database extents to physical block locations. These extent maps have always been transferred over the network, but in 10g and 11g that was simply done over the loopback interface. Now this traffic can also traverse the interconnect network between cluster nodes. Administrators have the additional choice of creating yet another network link in the cluster solely for ASM traffic, but that feature will not make much sense until a later release of 12c which may extend this functionality even further.
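To make this a little more concrete, here is a sketch of how you could check and manage the ASM mode from the Grid Infrastructure command line. The commands (`asmcmd showclustermode`, `srvctl config asm`, `srvctl modify asm`) are from the 12c tool set; the exact output wording and the hostnames are illustrative, and you would run these as the grid owner on a cluster node.

```shell
# Show whether ASM is running in flex mode or normal (standard) mode
asmcmd showclustermode

# Show the current ASM configuration, including the instance cardinality
# (in flex mode this reports the ASM instance count, e.g. 3)
srvctl config asm

# Adjust how many ASM instances the cluster should run in flex mode
# (hypothetical example: raise the cardinality from 3 to 4)
srvctl modify asm -count 4

# Check which database instances are served by which ASM instance
srvctl status asm -detail
```

The `-count` setting is what makes flex ASM behave like SCAN: the clusterware keeps that many ASM instances alive somewhere in the cluster, and database instances on nodes without a local ASM instance connect to a remote one over the (ASM) network.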
Flex Cluster on the other hand may sound similar, but it is a totally different animal. 12c distinguishes between two different types of cluster nodes. The traditional type (as we know it) is now called a “hub node” and it comes with the full clusterware stack: shared storage with a voting disk, interconnect network, sophisticated fencing and so on. These have been and will be used to run database instances. The new node type is called a “leaf node”; leaf nodes run a lightweight version of the clusterware without shared storage, and cluster communication is done only over the interconnect network. Those nodes would typically be used to run generic services or applications so that those can be managed within the cluster framework as well. They also benefit from startup/restart capabilities and a basic failover to a different node if one of the leaf nodes fails. It is definitely worth mentioning that leaf nodes won’t need to be licensed separately (as long as the hub nodes are under license/support) and that they don’t need to run on the same machines as the hub nodes. That makes it possible to run the hub nodes on physical hardware to get the most performance out of your database while leaf nodes with app servers are run in a virtual environment.
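The hub/leaf distinction also shows up on the command line. As a sketch (again, run as the grid owner; the commands are 12c `crsctl` operations, while the output text is only indicative), you can inspect the cluster mode and a node's role like this:

```shell
# Check whether the cluster runs as a standard or a flex cluster
crsctl get cluster mode status

# Show the role (hub or leaf) configured for the local node
crsctl get node role config

# Show the role the local node is currently running with
crsctl get node role status

# Change the local node's role; takes effect after the
# clusterware stack on that node is restarted
crsctl set node role leaf
```

Since a leaf node has no direct access to the shared storage, changing a hub node into a leaf node is essentially a demotion to the lightweight stack; the node then talks to the rest of the cluster purely over the network through a hub node.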