Tuesday Aug 18, 2009

Discussing High Availability with a Puppet

Last month at OSCON I had the pleasure of discussing High Availability and Open HA Cluster with "Jack Adams." Here's the video for your viewing pleasure:

Nick Solter
Solaris Cluster / Open HA Cluster developer and author of OpenSolaris Bible

Monday Jun 01, 2009

Announcing Open HA Cluster 2009.06

We are pleased to announce the release of High Availability Cluster software for OpenSolaris 2009.06! If you've been following along, this release is the fruit of project Colorado. Open HA Cluster 2009.06 is based on Solaris Cluster 3.2, including many of the features from the most recent update. Additionally, Open HA Cluster 2009.06 contains the following new features:

  • The ability to use Crossbow VNICs as endpoints for the cluster private interconnects. You can even send the cluster traffic over the public network and secure it with IPsec.
  • Support for exporting locally attached storage as iSCSI targets with COMSTAR iSCSI. You can obtain redundant “shared storage” without true shared storage by creating a mirrored zpool out of iSCSI-accessible local disks on two different nodes of the cluster.

Taken together, these features contribute to “hardware minimization,” allowing you to form a cluster with fewer physical hardware requirements.

This release runs on both SPARC and x86/x64 systems and includes the following agents:

  • Apache Webserver
  • Apache Tomcat
  • MySQL
  • GlassFish
  • NFS
  • DHCP
  • Kerberos
  • Samba
  • Solaris Containers (for ipkg Zones)

Open HA Cluster 2009.06 is distributed as IPS packages from the https://pkg.sun.com/opensolaris/ha-cluster repository. To gain access, accept the license agreement at https://pkg.sun.com to receive a certificate and key. Follow the instructions given at registration to configure your system's access to the ha-cluster publisher.
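Configuring the publisher might look like the following sketch. The certificate and key file names and the destination directory are illustrative placeholders; use the exact paths and instructions given at registration:

```shell
# Store the certificate and key where pkg(5) can read them
# (paths and file names below are examples, not the registered names).
mkdir -p /var/pkg/ssl
cp -i ~/Desktop/ha-cluster.key.pem /var/pkg/ssl
cp -i ~/Desktop/ha-cluster.certificate.pem /var/pkg/ssl

# Register the ha-cluster publisher using the key and certificate.
pfexec pkg set-publisher \
    -k /var/pkg/ssl/ha-cluster.key.pem \
    -c /var/pkg/ssl/ha-cluster.certificate.pem \
    -O https://pkg.sun.com/opensolaris/ha-cluster ha-cluster
```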

To install the complete cluster, including agents, install the “ha-cluster-full” package. To install a minimal cluster, without agents and other optional components, install the “ha-cluster-minimal” package instead. You can then install the individual agents and other optional components.
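Once the ha-cluster publisher is configured, installation is a single pkg(1) command; for example, using the package names above:

```shell
# Full installation, including all agents and optional components:
pfexec pkg install ha-cluster-full

# Or a minimal installation, to which individual agents
# can be added later:
# pfexec pkg install ha-cluster-minimal
```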

Open HA Cluster 2009.06 is free to use, with production-level support offerings available for two-node clusters. This release runs on OpenSolaris 2009.06 only.

For more information, see the documentation landing page and the OpenSolaris Availability page. If you don't have physical hardware available to create a cluster, try it out on VirtualBox! (PDF link).

Please direct your questions and comments to ha-clusters-discuss@opensolaris.org

The Colorado Team

Tuesday May 19, 2009

Open HA Cluster at CommunityOne West June 1-2

In addition to the Cluster Summit on May 31, Open HA Cluster will be well represented at the CommunityOne West conference at Moscone Center in San Francisco.

We'll have an Open HA Cluster demo running the whole week. Come visit us in the Sun Pavilion to see Open HA Cluster running on OpenSolaris.

I'll also be giving a talk, "High Availability with OpenSolaris", as part of the Deploying OpenSolaris in your DataCenter deep dive track on Tuesday, June 2. Contrary to the "official" CommunityOne information you might find elsewhere, this deep dive track is completely free. Just register with the "OSDDT" registration code. The other talks in this track, on ZFS and Zones, should be quite interesting as well.

You can see the entire lineup of the OpenSolaris presence at CommunityOne here, and even more details here. I hope to see you in San Francisco in a couple of weeks!

Nick Solter
Tech lead, Open HA Cluster 2009.06

Monday Apr 27, 2009

Open HA Cluster Summit - 31st May 2009

Make High Availability Work for You - Open HA Cluster Summit 31st May 2009

Have you signed up for this event yet?

Join us as we explore the latest trends in High Availability Cluster technologies, along with key insights from HA Clusters community members, technologists, and users of High Availability and Business Continuity software. Learn how to increase the availability of your favorite applications, from blogs to enterprise-level infrastructure. If you are a student, you may want to consider the industrial-strength Open HA Cluster software for your thesis research. You will also have the unique opportunity to hear one of the featured guest speakers, Dr. David Cheriton, industry expert and professor at Stanford University.

Come learn, mingle with HA Clusters Community members, network, and enjoy free food and casino games!

If you are not going to be in San Francisco, you can still participate by following the technical sessions on Twitter and Ustream.

Jatin Jhala
HA Clusters Community Manager

This event is sponsored by Sun Microsystems, Inc.  Spread the word.  Inform your friends and colleagues.

Friday Dec 05, 2008

pconsole now available in Solaris Express Community Edition build 103

pconsole, an open source parallel console tool, is now available in Solaris Express Community Edition starting with build 103 (it will also be available soon in OpenSolaris -- stay tuned for updates!). It is provided as an alternative to the existing cconsole tool (and its associated programs crlogin, cssh, and ctelnet) from the SUNWccon (Sun Cluster Console) package included in the Sun Cluster product. pconsole provides the same basic functionality as cconsole but has a different interface; it is being included because it is more familiar to some open source users, a number of whom have requested it. It is delivered in the Solaris package SUNWpconsole, which is part of the following metaclusters:

  • SUNWCall (Entire distribution)
  • SUNWCprog (Developer)
  • SUNWCuser (End-User)

The full description of pconsole is in the pconsole man page.

After installing the package, you can run the program /usr/bin/pconsole. This is actually a shell script which invokes xterm and /usr/bin/pconsole-bin. It also uses ssh to establish the remote session. These defaults can be changed by setting environment variables before running it. Here are the environment variables you can set:

  • P_TERM: By default, pconsole uses xterm(1) to create a window to the remote system. You can specify another command by setting P_TERM to the chosen command.
  • P_TERM_OPTIONS: By default, pconsole passes the options "-geometry 80x24 -fn 10x20" to the command specified by P_TERM, or to xterm(1) if P_TERM is unspecified. You can specify different options by setting P_TERM_OPTIONS to the chosen options.
  • P_CONNECT_CMD: By default, pconsole uses ssh(1) to make connections. You can use a different command, such as rlogin(1), by setting P_CONNECT_CMD to the chosen command.
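For example, the defaults can be overridden from the shell before launching pconsole. The variable names come from the description above; the particular option string and host names below are just illustrative choices:

```shell
# Override pconsole defaults via environment variables.
export P_TERM="xterm"                      # terminal command (xterm is the default)
export P_TERM_OPTIONS="-geometry 100x30"   # options passed to P_TERM
export P_CONNECT_CMD="ssh"                 # remote connection command (ssh is the default)

# Then launch as usual; the variables take effect for this invocation:
# pconsole oolong1 oolong2 oolong3
```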

If you are satisfied with the defaults, you do not need to set any variables. pconsole is designed to be run as root. Here is an example of how to use pconsole. Let's assume you have a 3-node cluster called "oolong", with host nodes oolong1, oolong2, and oolong3. Then you can type the following:

# pconsole oolong1 oolong2 oolong3

Four windows will then come up: three ordinary terminal windows, one for each of the named hosts, and a fourth special multiplexing window.  This window appears smaller than the others, and any input typed here gets sent to all the other windows.  This is extremely useful when you want to perform the same administrative actions on all the nodes at once.

The multiplexing window also supports a "command" mode, which is entered by typing CTRL-A. This results in a ">>> " prompt. In command mode several commands are available. The "help" command lists all of them:

>>> help
 help           Give help about the available commands
 ?              short-cut for 'help'
 version        Display version information
 echo           Turn echo on or off
 attach         Attach to a tty device
 detach         Detach from a tty device
 list           Show devices currently attached to
 connect        Leave command mode
 quit           Exit pconsole
 exit           Exit pconsole


Otherwise, the usage of pconsole should be straightforward.  Again, consult the man page for additional information.

Achut Reddy
Solaris Cluster Engineering


Oracle Solaris Cluster Engineering Blog

