Friday Mar 06, 2009

Cluster Chapter in OpenSolaris Bible

In addition to my day job as an engineer on the Sun Cluster team, I spent most of my nights and weekends last year writing a tutorial and reference book on OpenSolaris. OpenSolaris Bible, as it's titled, was released by Wiley last month and is available from amazon.com and all other major booksellers. At almost 1000 pages, the book gave my co-authors Dave, Jerry, and me room to be fairly comprehensive, covering topics from the bash shell to the xVM Hypervisor and most everything in between. You can examine the table of contents and index on the book website.

Of particular interest to readers of this blog will be Chapter 16, “Clustering OpenSolaris for High Availability.” (After working on Sun Cluster for more than 8 years, I couldn't write a book like this without a chapter on HA clusters!) Coming at the end of Part IV, “OpenSolaris Reliability, Availability, and Serviceability,” this chapter is a 70-page tutorial on using Sun Cluster / Open HA Cluster. After the requisite introduction to HA clustering, the chapter jumps in with instructions for configuring a cluster. Next, it goes through two detailed examples: the first shows how to make Apache highly available in failover mode using a ZFS failover file system; the second demonstrates how to configure Apache in scalable mode using the global file system. Following the two examples, the chapter covers the details of resources, resource types, and resource groups, shows how to use zones as logical nodes, and goes into more detail on network load balancing. After a section on writing your own agents using the SMF Proxy or the GDS, the chapter concludes with an introduction to Geographic Edition.

This chapter should be useful both as a tutorial for novices and as a reference for more advanced users. I enjoyed writing it (and even learned a thing or two in the process), and hope you find it helpful. Please don't hesitate to give me your feedback!

Nicholas Solter

Monday Mar 02, 2009

Why Is a Logical IP Marked as DEPRECATED?

Reported Firewall Problems

We've had many questions, comments and complaints about IP address "problems" when using highly available services in a Sun Cluster environment. We found out that most, if not all of these were related to configurations where firewalls were configured between the service running on the cluster, and the clients connecting to the cluster.

So, what is the problem? Firewall administrators often assume that a packet sent from a client to the logical IP address of an HA service will generate a response packet with exactly that logical IP address as its source address. They configure a firewall rule accordingly and then wonder why it does not work: packets coming back from the HA service carry a source address that does not match the rule.

Then they start researching the network configuration on the cluster node that hosts the HA service and find out that the logical IP address used by that service was set to a state called "DEPRECATED". And they think this is the root cause of their problem - which (we think) is not the case.

How does Address Selection really work?

As address selection can become very complicated in complex network setups, the following will be true for the typical simple network setup found at most installations.

Let's look at the address selection for an outgoing packet a bit more closely. First we must distinguish between TCP (RFC 793) and UDP (RFC 768). TCP is a connection-oriented protocol: a connection is established between a client and a service, and the source and target addresses of every packet on that connection are fixed when it is set up. In a Sun Cluster environment the source address of a packet sent by the service to a client will therefore be the logical IP address of the HA service - provided the client used that logical service address to contact the service.

So, this will not cause any problems with firewalls, because you know exactly which IP addresses will be used as source addresses for outgoing packets.

Let's look into UDP now. UDP is a connectionless protocol, i.e., there is no established connection between a client and a server. A UDP-based service can choose the source address for its outgoing packets by binding itself to a fixed address, but most services don't do this. Instead, they accept incoming packets on all configured network addresses. For those readers who are familiar with network programming, the typical code segment has the following lines in it:


struct sockaddr_in address;
...
address.sin_addr.s_addr = INADDR_ANY;   /* listen on all configured addresses */
...
if (bind (..., (struct sockaddr *) &address, ...) == 0)

With this typical piece of code, the UDP service listens on all configured IP addresses, and the outbound source address is set by the IP layer; the selection algorithm is complex and cannot be influenced by the application. Details can be found in Infodoc 204569 (accessible on SunSolve for SPECTRUM contract holders only), but they are not that relevant here, except for this quote: "IP addresses associated with interfaces marked as DEPRECATED will not normally be used as source addresses by IP unless deprecated interfaces are all that is available, in which case they will be used."

The DEPRECATED flag

So, now DEPRECATED comes into play. A DEPRECATED address will normally not be used as a source address! But first: why does Sun Cluster put HA IP addresses, i.e. logical or floating addresses, into the DEPRECATED state? Because they are floating addresses - there is no guarantee that they will stay on one node. In a failure situation an HA IP address floats to another node together with its service; the same happens when the administrator decides to migrate the service, and when the service is stopped, the logical IP address disappears from the node altogether.

Let's have a look at services where IP communication is initiated from a cluster node, e.g. a cluster node temporarily mounting an external NFS share. Whether the NFS traffic is UDP or TCP based does not matter in this case. The IP layer would choose a source address; it could be the logical IP address of an HA service that happens to run on the same node - if that address were not DEPRECATED. Now imagine the NFS mount succeeds using the logical IP address, and NFS transfers work fine. Then the HA service that owns that IP address is switched to another node in the cluster, and its IP address switches with it. What would happen to the NFS traffic between this node and the external NFS server? It would fail: packets coming from the NFS server would now reach a different node, namely the one the HA service switched to, taking its IP address along. (And the NFS client on the cluster node would fail as well.)

So, that is the reason for setting the DEPRECATED flag on HA IP addresses; remember the quote above: "...marked as DEPRECATED will not normally be used...". Not setting the DEPRECATED flag would merely increase the probability that the IP layer picks the address as a source address; there is still no guarantee, so in the end it would not help. The DEPRECATED flag, on the other hand, helps prevent major problems on cluster nodes.

The Solution

Back to the original question: how can I make my firewall rules work? There are four possibilities, in prioritized order with the best practice first:

  1. change your firewall rules to accept all possible addresses of the nodes from which packets could originate;
  2. change your service to bind only to the HA service IP address - which is possible only if its configuration lets you do this or if you have access to the source code;
  3. move your HA service into a Solaris 10 container that uses only the logical IP address; in this configuration the logical IP address will always be used as the source address, even though it is in the DEPRECATED state;
  4. try to manipulate the decision process of the IP layer - which is a very bad idea.

To summarize

Sun Cluster sets the DEPRECATED flag on HA service IP addresses by design, and that is a good thing, as it prevents strange problems for IP-based clients on cluster nodes. Not setting it would not solve the problems reported.

Hartmut Streppel
Principal Field Technologist
Systems Practice

Wednesday Dec 17, 2008

Availability of SCX 12/08

We are pleased to announce the availability of the latest Solaris Cluster Express (SCX)! You can download the software here.

What's new?

* This release runs on SXCE build 101a. The shared components have been upgraded to versions compatible with this Solaris Express Community Edition (SXCE) release.

* Many bug fixes have gone into this release for the new features that shipped with SCX 9/08 and SXCE 97.

Meanwhile, we have made considerable progress on Project Colorado.  Do take time to visit the site and familiarize yourself with the objectives and the progress.

Enjoy clustering your systems till the next announcement!

SCX Release Team

Wednesday Dec 03, 2008

Editing Cluster Properties on the Fly

Ajax-based in-page editing is fast becoming a standard feature in various web applications nowadays. It is a very useful feature when one wants to quickly edit data that is displayed on a web page.

We have pages in Sun Cluster Manager (SCM) that list all the properties of the related resource groups and resources. To edit any of these properties, the latest performance-enhanced SCM version for Solaris Cluster 3.2 2/08 offers in-page editing. The idea is simple: just click on the data and you are ready to edit!
Compared with the earlier, more tedious approach of invoking the Edit Properties wizard for the same task, in-page editing is a much quicker way to get things done.



As with the Edit Properties wizard, the benefits of in-page editing include not having to remember the exact command needed to change a property. For the ever-curious, this feature also displays the command that was executed and any errors (if any) encountered while editing a property.

Care is taken to permit users to edit only editable properties; properties like Resource Group Name cannot be modified. Properties that can be edited are differentiated by the use of a rectangular “sweet spot” around the property value. On clicking this space, a popup appears that prompts the user to provide a modified value. Lastly, the modification is done by clicking the OK button on the popup, which then executes the corresponding command on the cluster.



To edit more than one property at a time, which is not possible with this feature, use the Edit Properties wizard instead, invoked by clicking the Edit Properties button near the top of the Properties page. The wizard also performs some extra validation checks before executing the command.


Happy Editing...
Madhur Bansal
Solaris Cluster Engineering

Monday Aug 18, 2008

Solaris Cluster at Oracle Open World - September 22-25 at Moscone Center

Perhaps you've noticed the usual stream of Solaris Cluster blogs has slowed a bit in recent weeks. In case you're wondering, we're not blogged out, and we haven't "shot our blog" and run out of things to tell you about. In fact we've been very busy with lots of big things -- so watch for an upcoming "flog" (flood of blogs) as we release several new Agents and tools over the next several weeks.

And be sure to MARK YOUR CALENDARS for September 22 - 25 -- this is your chance to come to Oracle Open World in San Francisco's Moscone Center and head straight to Sun's booth where you'll be able to see a great magic show and -- better yet -- a live demo of Solaris Cluster Geographic Edition managing several Oracle 11g databases in disaster-tolerant configurations. And one of those configurations will be a joint demo with one of Sun's key storage partners .. this is the good stuff!

But wait -- there's more! -- besides the demo you'll have a chance to meet some of Solaris Cluster's lead developers and architects, and learn about the benefits of using the Solaris Cluster product suite with applications such as Oracle RAC. See you there!

Burt Clouse
Solaris Engineering Manager
