Tuesday May 19, 2009

Open HA Cluster at CommunityOne West June 1-2

In addition to the Cluster Summit on May 31, Open HA Cluster will be well represented at the CommunityOne West conference at Moscone Center in San Francisco.

We'll have an Open HA Cluster demo running the whole week. Come visit us in the Sun Pavilion to see Open HA Cluster running on OpenSolaris.

I'll also be giving a talk, "High Availability with OpenSolaris", as part of the Deploying OpenSolaris in your DataCenter deep dive track on Tuesday, June 2. Contrary to the "official" CommunityOne information you might find elsewhere, this deep dive track is completely free. Just register with the "OSDDT" registration code. The other talks in this track, on ZFS and Zones, should be quite interesting as well.

You can see the entire lineup of the OpenSolaris presence at CommunityOne here, and even more details here. I hope to see you in San Francisco in a couple of weeks!

Nick Solter
Tech lead, Open HA Cluster 2009.06

Monday Apr 27, 2009

Open HA Cluster Summit - 31st May 2009

Make High Availability Work for You - Open HA Cluster Summit 31st May 2009

Have you signed up for this event yet?

Join us as we explore the latest trends in High Availability Cluster technologies, as well as key insights from HA Clusters community members, technologists, and users of High Availability and Business Continuity software. Learn how to increase the availability of your favorite applications, from blogs to enterprise-level infrastructure. If you are a student, you may want to consider the industrial-strength Open HA Cluster software for your thesis research. You will also have the unique opportunity to hear one of the featured guest speakers, Dr. David Cheriton, industry expert and professor at Stanford University.

Come, learn, mingle with HA Clusters Community members, network, and enjoy free food and casino games!

If you are not going to be in San Francisco, you can still participate by following the technical sessions on Twitter and Ustream.

Jatin Jhala
HA Clusters Community Manager

This event is sponsored by Sun Microsystems, Inc.  Spread the word.  Inform your friends and colleagues.

Tuesday Apr 21, 2009

Solaris Cluster + MySQL Demo at an Exhibition Hall near you


If by any chance you are at the MySQL User Conference at the Santa Clara Convention Center, there are some great opportunities to see MySQL and Solaris Cluster in action. We have a demo in the exhibition hall, where you can see MySQL with a zone cluster and MySQL with Solaris Cluster Geographic Edition.

I will host a birds-of-a-feather session on Wednesday evening, 4/22/09, at 7 pm in meeting room 205. The title is "Configuring MySQL in Open HA Cluster, an Easy Exercise"; check out the details here. I will also give a presentation on Thursday morning, 4/23/09, at 10:50 in Ballroom G, titled "Solutions for High Availability and Disaster Recovery with MySQL". Check out the details here.

I hope to see you there in person.

-Detlef Ulherr

Availability Products Group

Friday Mar 06, 2009

Cluster Chapter in OpenSolaris Bible

In addition to my day job as an engineer on the Sun Cluster team, I spent most of my nights and weekends last year writing a tutorial and reference book on OpenSolaris. OpenSolaris Bible, as it's titled, was released by Wiley last month and is available from amazon.com and all other major booksellers. At almost 1000 pages, the book gave my co-authors Dave and Jerry and me room to be fairly comprehensive, covering topics from the bash shell to the xVM Hypervisor, and almost everything in between. You can examine the table of contents and index on the book website.

Of particular interest to readers of this blog will be Chapter 16, “Clustering OpenSolaris for High Availability.” (After working on Sun Cluster for more than 8 years, I couldn't write a book like this without a chapter on HA clusters!) Coming at the end of Part IV, “OpenSolaris Reliability, Availability, and Serviceability”, this chapter is a 70-page tutorial on using Sun Cluster / Open HA Cluster. After the requisite introduction to HA clustering, the chapter jumps in with instructions for configuring a cluster. Next, it goes through two detailed examples. The first shows how to make Apache highly available in failover mode using a ZFS failover file system. The second demonstrates how to configure Apache in scalable mode using the global file system. Following the two examples, the chapter covers the details of resources, resource types, and resource groups, shows how to use zones as logical nodes, and goes into more detail on network load balancing. After a section on writing your own agents using the SMF Proxy or the GDS, the chapter concludes with an introduction to Geographic Edition.

This chapter should be useful both as a tutorial for novices as well as a reference for more advanced users. I enjoyed writing it (and even learned a thing or two in the process), and hope you find it helpful. Please don't hesitate to give me your feedback!

Nicholas Solter

Monday Mar 02, 2009

Why is a logical IP marked as DEPRECATED?

Reported Firewall Problems

We've had many questions, comments, and complaints about IP address "problems" when using highly available services in a Sun Cluster environment. We found that most, if not all, of these were related to configurations where a firewall sits between the service running on the cluster and the clients connecting to the cluster.

So, what is the problem? Firewall administrators often assume that a packet sent from a client to the logical IP address of an HA service will generate a response packet with exactly that logical IP address as its source address. So they configure an appropriate firewall rule and wonder why it does not work: the IP packets coming back from the HA service do not match the rule.

Then they start researching the network configuration on the cluster node that hosts the HA service and find that the logical IP address used by that service is in a state called "DEPRECATED". They conclude that this is the root cause of their problem - which (we think) is not the case.

How does Address Selection really work?

As address selection can become very complicated in complex network setups, the following assumes the typical simple network setup found at most installations.

Let's look at the address selection for an outgoing packet a bit more closely. First, we must distinguish between TCP (RFC 793) and UDP (RFC 768). TCP is a connection-oriented protocol, i.e., a connection is established between a client and a service, and both endpoint addresses are fixed for the lifetime of that connection. In a Sun Cluster environment, the source address of a packet sent by the service to a client will usually be the logical IP address of that HA service - but only if the client used the logical service address to send its request to the service.

So, this will not cause any problems with firewalls, because you know exactly which IP addresses will be used as source addresses for outgoing IP packets.
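This TCP behavior can be observed directly with getsockname() on an accepted connection: the local address of that socket, and therefore the source address of every response packet, is exactly the address the client connected to. Below is a minimal sketch; it uses the loopback address and an ephemeral port as stand-ins for a logical host IP, and the function name is ours, not part of any Sun Cluster API:

```c
#include <assert.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Show that a TCP service answers from the address the client connected
 * to: accept a loopback connection and report the accepted socket's
 * local address via getsockname(). */
int tcp_local_address(const char *listen_ip, char *buf, size_t buflen)
{
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    if (lfd < 0)
        return -1;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof (addr));
    addr.sin_family = AF_INET;
    addr.sin_port = 0;                    /* let the kernel pick a port */
    inet_pton(AF_INET, listen_ip, &addr.sin_addr);
    if (bind(lfd, (struct sockaddr *) &addr, sizeof (addr)) != 0 ||
        listen(lfd, 1) != 0) {
        close(lfd);
        return -1;
    }

    socklen_t len = sizeof (addr);
    getsockname(lfd, (struct sockaddr *) &addr, &len);  /* learn the port */

    /* Connect a client socket to the listener over loopback. */
    int cfd = socket(AF_INET, SOCK_STREAM, 0);
    if (cfd < 0) {
        close(lfd);
        return -1;
    }
    if (connect(cfd, (struct sockaddr *) &addr, sizeof (addr)) != 0) {
        close(cfd);
        close(lfd);
        return -1;
    }

    int afd = accept(lfd, NULL, NULL);
    if (afd < 0) {
        close(cfd);
        close(lfd);
        return -1;
    }

    struct sockaddr_in local;
    len = sizeof (local);
    getsockname(afd, (struct sockaddr *) &local, &len);
    inet_ntop(AF_INET, &local.sin_addr, buf, buflen);

    close(afd);
    close(cfd);
    close(lfd);
    return 0;
}
```

Running this with "127.0.0.1" reports "127.0.0.1" as the local address of the accepted connection - the service never answers from any other address on the node.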

Let's look into UDP now. UDP is a connectionless protocol, i.e., there is no established connection between a client and a server (service). A UDP-based service can choose the source address of its outgoing packets by binding itself to a fixed address, but most services don't do this. Instead, they accept incoming packets on all configured network addresses. For readers familiar with network programming, the typical code segment has the following lines in it:

struct sockaddr_in address;
memset(&address, 0, sizeof (address));
address.sin_family = AF_INET;
address.sin_addr.s_addr = INADDR_ANY;
if (bind(sock, (struct sockaddr *) &address, sizeof (address)) == 0)

With this typical piece of code, the UDP service listens on all configured IP addresses, and the outbound source address is chosen by the IP layer; the selection algorithm is complex and cannot be influenced. Details can be found in Infodoc 204569 (accessible on SunSolve for SPECTRUM contract holders only), but they are not that relevant here, except for this quote: "IP addresses associated with interfaces marked as DEPRECATED will not normally be used as source addresses by IP unless deprecated interfaces are all that is available, in which case they will be used."
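The kernel's choice can be observed from user code: bind a UDP socket to INADDR_ANY, connect() it to a destination (on a UDP socket this sends no traffic, it only fixes the peer and triggers source address selection), then ask with getsockname() which source address the IP layer picked. A minimal sketch, using the loopback address as a stand-in destination; the function name is ours:

```c
#include <assert.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Report which source address the IP layer selects for a UDP socket
 * bound to INADDR_ANY when it talks to dest_ip. */
int udp_source_for(const char *dest_ip, char *buf, size_t buflen)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s < 0)
        return -1;

    struct sockaddr_in any;
    memset(&any, 0, sizeof (any));
    any.sin_family = AF_INET;
    any.sin_addr.s_addr = INADDR_ANY;     /* the typical service binding */
    if (bind(s, (struct sockaddr *) &any, sizeof (any)) != 0) {
        close(s);
        return -1;
    }

    struct sockaddr_in dest;
    memset(&dest, 0, sizeof (dest));
    dest.sin_family = AF_INET;
    dest.sin_port = htons(9);             /* discard port; nothing is sent */
    inet_pton(AF_INET, dest_ip, &dest.sin_addr);

    /* connect() on a UDP socket makes the kernel run its source
     * address selection for this destination. */
    if (connect(s, (struct sockaddr *) &dest, sizeof (dest)) != 0) {
        close(s);
        return -1;
    }

    struct sockaddr_in local;
    socklen_t len = sizeof (local);
    getsockname(s, (struct sockaddr *) &local, &len);
    inet_ntop(AF_INET, &local.sin_addr, buf, buflen);
    close(s);
    return 0;
}
```

On a multi-homed cluster node, calling this with the address of an external client would reveal which of the node's addresses the replies of an INADDR_ANY-bound service actually carry.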


So, now DEPRECATED comes into play. A DEPRECATED address will - normally - not be used as a source address! Why does Sun Cluster set HA IP addresses, i.e., logical or floating addresses, to the state DEPRECATED? Because they are floating addresses - there is no guarantee that they will stay on one node. In failure situations, an HA IP address floats to another node together with its service. The same happens when the administrator decides to migrate a service; and when the service is stopped, the logical IP address disappears from the node altogether.

Now let's have a look at services where IP communication is initiated from a cluster node. For example, a cluster node might temporarily mount an external NFS share. Whether this is UDP- or TCP-based NFS does not matter in this case! The IP layer would choose a source address; it could be the logical IP address of an HA service that happens to run on the same system - if that address were not DEPRECATED. Now imagine the NFS mount succeeds using the logical IP address, and NFS transfers work fine. Then the HA service that owns the HA IP address is switched to another node in the cluster, and its IP address switches with it. What would happen to the NFS traffic between this node and the external NFS server? It would fail: packets coming from the NFS server would now reach a different node, namely the one the HA service switched to, having taken its IP address with it. (And the NFS client on the cluster node would fail as well.)

So, that is the reason for setting the DEPRECATED flag on HA IP addresses; remember the quote above: "...marked as DEPRECATED will not normally be used...". Although clearing the DEPRECATED flag would increase the probability that the address is used by the IP layer as a source address, there is still no guarantee, so in the end it would not help. The DEPRECATED flag, on the other hand, helps to prevent major problems on cluster nodes.

The Solution

Back to the original question: how can I make my firewall rules work? There are four possibilities - in prioritized order, best practice first:

  1. change your firewall rules to accept all possible addresses from the nodes where packets could originate;
  2. change your service by binding it only to the HA service IP address - which is possible only if its configuration lets you do this or if you have access to the source code;
  3. move your HA service into a Solaris 10 container that uses only the logical IP address; in this configuration the logical IP address will always be used as the source address, even though it is in the state DEPRECATED;
  4. try to manipulate the decision process of the IP layer - which is a very bad idea.
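For option 2, binding to one fixed address is a small change to the typical bind() call shown earlier: replace INADDR_ANY with the service's logical IP address. A sketch, with 127.0.0.1 standing in for the logical host IP of the HA service and the function name our own invention:

```c
#include <assert.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Bind a UDP service socket to one specific address instead of
 * INADDR_ANY, so replies always carry that address as their source.
 * Port 0 lets the kernel pick a free port; a real service would use
 * its well-known port instead. Returns the socket, or -1 on error. */
int bind_to_service_ip(const char *service_ip)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s < 0)
        return -1;

    struct sockaddr_in address;
    memset(&address, 0, sizeof (address));
    address.sin_family = AF_INET;
    address.sin_port = htons(0);
    if (inet_pton(AF_INET, service_ip, &address.sin_addr) != 1 ||
        bind(s, (struct sockaddr *) &address, sizeof (address)) != 0) {
        close(s);
        return -1;
    }
    return s;   /* caller owns the socket */
}
```

A service bound this way only receives packets addressed to the logical host IP, which is usually exactly what an HA service wants; the trade-off is that the bind must be re-issued after the address moves to another node, which the cluster's start method handles anyway.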

To summarize

Sun Cluster sets the DEPRECATED flag on HA service IP addresses by design, and this is a good thing, as it prevents subtle problems with IP-based clients on cluster nodes. Not setting it would not solve the reported firewall problems.

Hartmut Streppel
Principal Field Technologist
Systems Practice

Oracle Solaris Cluster Engineering Blog

