Monday Dec 19, 2011

More robust control of zfs in Solaris Cluster 3.x

In some situations a zpool may not be exported correctly when it is controlled by a SUNW.HAStoragePlus resource. Please refer to the details in Document 1364018.1: Potential Data Integrity Issues After Switching Over a Solaris Cluster High Availability Resource Group With Zpools

I'd like to mention this because ZFS is used more and more in Solaris Cluster environments. Therefore I highly recommend installing the following patches to get a more reliable Solaris Cluster environment in combination with zpools on SC3.3 and SC3.2. So, if you are already running such a setup, start planning NOW to install the following patch revision (or higher) for your environment...

Solaris Cluster 3.3:
145333-10 Oracle Solaris Cluster 3.3: Core Patch for Oracle Solaris 10
145334-10 Oracle Solaris Cluster 3.3_x86: Core Patch for Oracle Solaris 10_x86

Solaris Cluster 3.2
144221-07 Solaris Cluster 3.2: CORE patch for Solaris 10
144222-07 Solaris Cluster 3.2: CORE patch for Solaris 10_x86
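
A quick way to check which core patch revision is currently installed on a node is the standard Solaris patch tooling (grep for the base patch id of your release, e.g. 145333/145334 on SC 3.3 or 144221/144222 on SC 3.2):
    # showrev -p | grep 145333
If the reported revision is lower than the one listed above, start planning the update.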

Tuesday Aug 17, 2010

New numbers of Solaris Cluster 3.2 core patches

The Solaris Cluster 3.2 core patch has been renumbered. The new patches are:

144220 Solaris Cluster 3.2: CORE patch for Solaris 9
144221 Solaris Cluster 3.2: CORE patch for Solaris 10
144222 Solaris Cluster 3.2: CORE patch for Solaris 10_x86
At this time these patches do NOT have the requirement to be installed in non-cluster-single-user-mode. They can be installed one node at a time while the cluster is running, but they require a reboot.

Beware: the new patches require the previous revision -42 of the SC 3.2 core patch.
126105-42 Sun Cluster 3.2: CORE patch for Solaris 9
126106-42 Sun Cluster 3.2: CORE patch for Solaris 10
126107-42 Sun Cluster 3.2: CORE patch for Solaris 10_x86
And the -42 still has the requirement to be installed in non-cluster-single-user-mode. Furthermore, carefully study the special install instructions and the earlier entries of this blog.

The advantage: once -42 is applied, patching Solaris Cluster 3.2 becomes easier.

Certainly, it's possible to apply the new SC core patch together with the -42 core patch in non-cluster-single-user-mode.
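
Assuming -42 is already active, a per-node rolling update with the new core patches might look like this (the staging directory /var/tmp is my assumption):
    # patchadd /var/tmp/144221-07     (the cluster can stay running)
    # init 6                          (reboot this node, then repeat on the next node)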

Friday Mar 26, 2010

Summary of install instructions for 126106-40 and 126107-40

My last blog entry describes some issues around these patches (please read it):
126106-40 Sun Cluster 3.2: CORE patch for Solaris 10
126107-40 Sun Cluster 3.2: CORE patch for Solaris 10_x86
This is a follow-up with a summary of best practices for installing these patches. There is a difference between new installations, 'normal' patching and Live Upgrade patching.
Important: The instructions below work if Solaris Cluster 3.2 1/09 Update2 (or Solaris Cluster 3.2 core patch revision -27 (sparc) / -28 (x86)) or higher is already installed. If a lower revision of the Solaris Cluster 3.2 core patch is running, additional steps are necessary. Please refer to the Special Install Instructions of the patches for these additional steps.
Update: 28.Apr 2010
This also applies to the already released -41 and -42 SC core patches, when -40 is not already active.

A) In case of new installations:

Install the SC core patch -40 immediately after the installation of the Solaris Cluster 3.2 software.
In brief:
    1.) Install Solaris Cluster 3.2 via JES installer
    2.) Install the SC core patch -40
    3.) Run scinstall
    4.) Do the reboot
Note: Do NOT reboot between 1.) and 2.). Follow the EIS Solaris Cluster 3.2 checklist, which also has a note about this issue. If it is not available, follow the standard installation process of Sun Cluster 3.2.
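
For illustration, the same sequence as commands (the installer path on the media is an assumption; adjust to your environment):
    # ./installer                        (step 1: JES installer of the Solaris Cluster 3.2 media)
    # patchadd /var/tmp/126106-40        (step 2: core patch, BEFORE any reboot)
    # /usr/cluster/bin/scinstall         (step 3: configure the cluster)
    # init 6                             (step 4: reboot)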


B) In case of 'normal' patching

It is vital to use the following approach when patching, because otherwise Solaris Cluster 3.2 can no longer boot:
    0.) Only if using AVS 4.0
    # patchadd 12324[67]-05 (Follow Special Install Instructions)
    1.) # boot in non-cluster mode
    2.) # svcadm disable svc:/system/cluster/loaddid
    3.) # svccfg delete svc:/system/cluster/loaddid
    4.) # patchadd 12610[67]-40
    5.) # init 6
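
After the node is back in the cluster I'd suggest a quick sanity check of the loaddid service (my suggestion, not part of the official instructions):
    # svcs svc:/system/cluster/loaddid:default
    # tail /var/svc/log/system-cluster-loaddid:default.log
The log should end with a 'Rereading configuration.' line (see the details further below).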


C) In case of LU (Live Upgrade feature) to install patches
    1.) Create ABE:
    For zfs root within the same root pool:
    # lucreate -n patchroot
    For ufs on different root drive:
    # prtvtoc /dev/rdsk/c1t3d0s2 | fmthard -s - /dev/rdsk/c1t2d0s2
    # lucreate -c "c1t3d0s0-root" -m /:/dev/dsk/c1t2d0s0:ufs -m /global/.devices/node@2:/dev/dsk/c1t2d0s6:ufs -n "c1t2d0s0-patchroot"
    2.) Install patch into ABE ( patch is already unpacked in /var/tmp )
    # luupgrade -t -n c1t2d0s0-patchroot -s /var/tmp 126106-40
    3.) Activate ABE
    # luactivate patchroot
    4.) # init 6
    # Some errors come up at this point
    (dependency cycle & ORB error - please see the examples below)
    5.) # init 6 (the second reboot fixes the problem) Bug 6938144





My personal recommendation to minimize the risk of installing the SC core patch -40:
Step 1) Upgrade the cluster to
a) Solaris 10 10/09 update8 and Solaris Cluster 3.2 11/09 update3
or
b) EIS Baseline 26JAN10, which includes the Solaris kernel update 14144[45]-09 and the SC core patch -39. If the EIS baseline is not available, use another patch set which includes the mentioned patches.
Step 2) After the successful upgrade, install the SC core patch -40 as a single patch using installation instruction B) above. In this software state the -40 can be applied 'rolling' to the cluster.

Note: 'Rolling' means: Boot node1 in non-cluster-mode -> install -40 (see B) -> boot node1 back into cluster -> boot node2 in non-cluster-mode -> install -40 (see B) -> boot node2 back into cluster.
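
In commands, one rolling pass per node might look like this (SPARC boot syntax shown; adapt for x86):
    ok boot -x                        (boot the node in non-cluster mode)
    # svcadm disable svc:/system/cluster/loaddid
    # svccfg delete svc:/system/cluster/loaddid
    # patchadd 126106-40
    # init 6                          (the node boots back into the cluster; repeat on the next node)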

Wednesday Mar 03, 2010

Oracle Solaris Cluster core patch 126106-40 and 126107-40

This is a heads-up because there is some trouble with the following Sun Cluster 3.2 -40 core patches:
126106-40 Sun Cluster 3.2: CORE patch for Solaris 10
126107-40 Sun Cluster 3.2: CORE patch for Solaris 10_x86
Before installing the patch carefully read the Special Install Instructions.
Update: 28.Apr 2010
This also applies to the already released -41 and -42 SC core patches, when -40 is not already active.

Two new notes were added to these patches:


NOTE 16: Remove the loaddid SMF service by running the following
commands before installing this patch, if current patch level
(before installing this patch) is less than -40:
svcadm disable svc:/system/cluster/loaddid
svccfg delete svc:/system/cluster/loaddid


      So, the right approach is:
      # boot in non-cluster mode
      # svcadm disable svc:/system/cluster/loaddid
      # svccfg delete svc:/system/cluster/loaddid
      # patchadd 126106-40
      # init 6


NOTE 17:
Installing this patch on a machine with Availability Suite
software installed will cause the machine to fail to boot with
dependency errors due to BugId 6896134 (AVS does not wait for
did devices to startup in a cluster).
Please contact your Sun
Service Representative for relief before installing this patch.


The solution for Bug 6896134 is now available, please follow the right approach below for installation...
123246-05 Sun StorEdge Availability Suite 4.0: Patch for Solaris 10
123247-05 Sun StorEdge Availability Suite 4.0: Patch for Solaris 10_x86
       # patchadd 12324[67]-05 (Follow Special Install Instructions)
       # boot in non-cluster mode
       # svcadm disable svc:/system/cluster/loaddid
       # svccfg delete svc:/system/cluster/loaddid
       # patchadd 12610[67]-40
       # init 6



Important to know: These 2 issues only come up if using Solaris 10 10/09 Update8 or the kernel patch 141444-09 or higher. There are changes in the startup of the iSCSI initiator (it is now an SMF service) - please refer to Bug 6888193 for details.

Hint: If using LU (live upgrade) for patching please refer to my blog Summary of installation instructions for 126106-40 and 126107-40

ATTENTION: NOTE 16 also applies when removing the patch - carefully read the 'Special Removal Instructions'.


Additional information around NOTE 16
1) What happens if you forget to delete the loaddid service before the patch installation?

The following error (or similar) comes up during patch installation on the console of the server:
Mar 2 12:01:46 svc.startd[7]: Transitioning svc:/system/cluster/loaddid:default to maintenance because it completes a dependency cycle (see svcs -xv for details):
svc:/network/iscsi/initiator:default
svc:/network/service
svc:/network/service:default
svc:/network/rpc/nisplus
svc:/network/rpc/nisplus:default
svc:/network/rpc/keyserv
svc:/network/rpc/keyserv:default
svc:/network/rpc/bind
svc:/network/rpc/bind:default
svc:/system/sysidtool:net
svc:/milestone/single-user:default
svc:/system/cluster/loaddid
svc:/system/cluster/loaddid:default
Mar 2 12:01:46 svc.startd[7]: system/cluster/loaddid:default transitioned to maintenance by request (see 'svcs -xv' for details)

But this should NOT be a problem because the patch 126106-40 is installed in non-cluster mode, so after the next boot into cluster mode the error should disappear. This is reported in Bug 6911030.

But to be sure that the system boots correctly, do the following (see the command sketch below):
- check the log file /var/svc/log/system-cluster-loaddid:default.log
- verify that one of the last lines is: [ Mar 16 10:31:15 Rereading configuration. ]
- if not, go to the 'Recovery procedure' below
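
In commands (the timestamp is just the example from above):
    # tail -1 /var/svc/log/system-cluster-loaddid:default.log
    [ Mar 16 10:31:15 Rereading configuration. ]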


2) What happens if you delete the loaddid service after the patch installation?

You may see the problem mentioned in 1). If you disable and delete the 'svc:/system/cluster/loaddid' service after the patch installation, the system will no longer join the cluster. The following errors come up:
...
Mar 16 11:42:54 Cluster.Framework: Could not initialize the ORB. Exiting.
Mar 16 11:42:54 : problem waiting the deamon, read errno 0
Mar 16 11:42:54 Cluster.Framework: Could not initialize the ORB. Exiting.
Mar 16 11:42:54 svc.startd[8]: svc:/system/cluster/sc_failfast:default: Method "/usr/cluster/lib/svc/method/sc_failfast start" failed with exit status 1.
Mar 16 11:42:54 svc.startd[8]: svc:/system/cluster/cl_execd:default: Method "/usr/cluster/lib/svc/method/sc_cl_execd start" failed with exit status 1.
Mar 16 11:42:54 Cluster.Framework: Could not initialize the ORB. Exiting.
Mar 16 11:42:54 : problem waiting the deamon, read errno 0
Configuring devices.
Mar 16 11:42:54 svc.startd[8]: svc:/system/cluster/sc_failfast:default: Method "/usr/cluster/lib/svc/method/sc_failfast start" failed with exit status 1.
Mar 16 11:42:54 Cluster.Framework: Could not initialize the ORB. Exiting.
Mar 16 11:42:54 svc.startd[8]: svc:/system/cluster/cl_execd:default: Method "/usr/cluster/lib/svc/method/sc_cl_execd start" failed with exit status 1.
Mar 16 11:42:54 Cluster.Framework: Could not initialize the ORB. Exiting.
Mar 16 11:42:54 : problem waiting the deamon, read errno 0
Mar 16 11:42:54 svc.startd[8]: svc:/system/cluster/sc_failfast:default: Method "/usr/cluster/lib/svc/method/sc_failfast start" failed with exit status 1.
Mar 16 11:42:54 svc.startd[8]: system/cluster/sc_failfast:default failed: transitioned to maintenance (see 'svcs -xv' for details)
Mar 16 11:42:54 Cluster.Framework: Could not initialize the ORB. Exiting.
Mar 16 11:42:55 svc.startd[8]: svc:/system/cluster/cl_execd:default: Method "/usr/cluster/lib/svc/method/sc_cl_execd start" failed with exit status 1.
Mar 16 11:42:55 svc.startd[8]: system/cluster/cl_execd:default failed: transitioned to maintenance (see 'svcs -xv' for details)

If you see these errors, refer to the 'Recovery procedure' below.


Recovery procedure

1) boot in non-cluster mode if you are unable to log in
2) put the files loaddid and loaddid.xml in place (normally using the files from SC core patch -40)
ONLY in case of trouble with the files from SC core patch -40 use the old files!
Note: If you restore the old files without the dependency on the iSCSI initiator, there can be problems when trying to use iSCSI storage within Sun Cluster.
3) Repair loaddid service
# svcadm disable svc:/system/cluster/loaddid
# svccfg delete svc:/system/cluster/loaddid
# svccfg import /var/svc/manifest/system/cluster/loaddid.xml
# svcadm restart svc:/system/cluster/loaddid:default
4) check the log file /var/svc/log/system-cluster-loaddid:default.log
# tail /var/svc/log/system-cluster-loaddid:default.log
for the following line (which should be at the end of the log file)
[ Mar 16 11:43:06 Rereading configuration. ]
Note: Rereading configuration is necessary before booting!
5) reboot the system
# init 6


Additional information: differences in the loaddid files after installation of SC core patch -40.

A) /var/svc/manifest/system/cluster/loaddid.xml
The SC core patch -40 delivers a new version with the following changes:
<       ident "@(#)loaddid.xml 1.3 06/05/12 SMI"
---
>       ident "@(#)loaddid.xml 1.5 09/11/04 SMI"
56,61c79,92
<       <dependent
<              name='loaddid_single-user'
<              grouping='optional_all'
<             restart_on='none'>
<             <service_fmri value='svc:/milestone/single-user:default' />
<       </dependent>
---
>       <!--
>              The following dependency is for did drivers to get loaded
>              properly for iSCSI based quorum and data devices. We want to
>              start loaddid service after the time when iSCSI connections
>              can be made.
>        -->
>       <dependency
>              name='cl_iscsi_initiator'
>              grouping='optional_all'
>              restart_on='none'
>              type='service'>
>              <service_fmri
>              value='svc:/network/iscsi/initiator:default' />
>       </dependency>


Before patch -40 is applied:
node1 # svcs -d loaddid:default
STATE STIME FMRI
online 11:29:45 svc:/system/cluster/cl_boot_check:default
online 11:29:49 svc:/system/coreadm:default
online 11:30:52 svc:/milestone/devices:default

node1 # svcs -D svc:/system/cluster/loaddid:default
STATE STIME FMRI
online 15:34:41 svc:/system/cluster/bootcluster:default
online 15:34:46 svc:/milestone/single-user:default


After patch -40 is applied:
node1 # svcs -d loaddid:default
STATE STIME FMRI
online 12:09:18 svc:/system/coreadm:default
online 12:09:20 svc:/system/cluster/cl_boot_check:default
online 12:09:21 svc:/network/iscsi/initiator:default
online 12:10:21 svc:/milestone/devices:default

node1 # svcs -D svc:/system/cluster/loaddid:default
STATE STIME FMRI
online 16:08:19 svc:/system/cluster/bootcluster:default


B) /usr/cluster/lib/svc/method/loaddid
The SC core patch -40 delivers a new version with the following changes:
< #ident "@(#)loaddid 1.7 06/08/07 SMI"
---
> #ident "@(#)loaddid 1.9 09/11/04 SMI"
15,16c36,44
<        svcprop -q -p system/reconfigure system/svc/restarter:default 2>/dev/null
<        if [ $? -eq 0 ] && [ `svcprop -p system/reconfigure system/svc/restarter:default` = "true" ]
---
>        # The property "reconfigure" is used to store whether the boot is
>        # a reconfiguration boot or not. The property "system/reconfigure"
>        # of the "system/svc/restarter" Solaris SMF service can be used
>        # for this purpose as well. However the system/reconfigure
>        # property is reset at the single-user milestone. SC requires this
>        # property for use by service after the single-user milestone as
>        # well.
>        svcprop -q -p clusterdata/reconfigure system/cluster/cl_boot_check 2>/dev/null
>        if [ $? -eq 0 ] && [ `svcprop -p clusterdata/reconfigure system/cluster/cl_boot_check` = "true" ]


Sunday Dec 06, 2009

Sun Cluster 3.2 11/09 Update3 Patches

The Sun Cluster 3.2 11/09 Update3 is released. Click here for further information.

The package versions of Sun Cluster 3.2 11/09 Update3 are the same for the core framework and the agents as for Sun Cluster 3.2, Sun Cluster 3.2 2/08 Update1 and Sun Cluster 3.2 1/09 Update2. Therefore it's possible to patch up an existing Sun Cluster 3.2, Sun Cluster 3.2 2/08 Update1 or Sun Cluster 3.2 1/09 Update2 installation.

The package versions of Sun Cluster Geographic Edition 3.2 11/09 Update3 are NOT the same as for Sun Cluster Geographic Edition 3.2. But it's possible to upgrade Geographic Edition 3.2 without service interruption. See the documentation for details.

The following patches (with the mentioned revisions) are included/updated in Sun Cluster 3.2 11/09 Update3. If these patches are installed on a Sun Cluster 3.2, Sun Cluster 3.2 2/08 Update1 or Sun Cluster 3.2 1/09 Update2 release, then the framework & agent features are identical with Sun Cluster 3.2 11/09 Update3. It's always necessary to read the "Special Install Instructions Of The Patch" (shortcut: SIIOTP), but I added a note behind the patches where reading them is especially important.

Included/updated patch revisions of Sun Cluster 3.2 11/09 Update3 for Solaris 10 05/09 Update7 or higher
126106-38 Sun Cluster 3.2: CORE patch for Solaris 10 Note: Please read SIIOTP
125992-05 Sun Cluster 3.2: SC Checks patch for Solaris 10
126017-03 Sun Cluster 3.2: HA-DNS Patch for Solaris 10
126032-09 Sun Cluster 3.2: Ha-MYSQL Patch for Solaris 10 Note: Please read SIIOTP
126035-06 Sun Cluster 3.2: HA-NFS Patch for Solaris 10
126044-06 Sun Cluster 3.2: HA-PostgreSQL Patch for Solaris 10 Note: Please read SIIOTP
126047-12 Sun Cluster 3.2: Ha-Oracle patch for Solaris 10 Note: Please read SIIOTP
126050-04 Sun Cluster 3.2: HA-Oracle E-business suite Patch for Solaris 10 (-04 not yet on SunSolve)
126059-05 Sun Cluster 3.2: HA-SAPDB Patch for Solaris 10
126071-02 Sun Cluster 3.2: HA-Tomcat Patch for Solaris 10
126080-04 Sun Cluster 3.2: HA-Sun Java Systems App Server Patch for Solaris 10
126083-04 Sun Cluster 3.2: HA-Sun Java Message Queue Patch for Solaris 10 Note: Please read SIIOTP
126095-06 Sun Cluster 3.2: Localization patch for Solaris 9 sparc and Solaris 10 sparc
128556-04 Sun Cluster 3.2: Man Pages Patch for Solaris 9 and Solaris 10, sparc
137931-02 Sun Cluster 3.2: HA-Informix patch for Solaris 10


Included/updated patch revisions of Sun Cluster 3.2 11/09 Update3 for Solaris 10 x86 05/09 Update7 or higher
126107-38 Sun Cluster 3.2: CORE patch for Solaris 10_x86 Note: Please read SIIOTP
125993-05 Sun Cluster 3.2: SC Checks patch for Solaris 10_x86
126018-05 Sun Cluster 3.2: HA-DNS Patch for Solaris 10_x86
126033-10 Sun Cluster 3.2: Ha-MYSQL Patch for Solaris 10_x86 Note: Please read SIIOTP
126036-07 Sun Cluster 3.2: HA-NFS Patch for Solaris 10_x86
126045-07 Sun Cluster 3.2: HA-PostgreSQL Patch for Solaris 10_x86 Note: Please read SIIOTP
126048-12 Sun Cluster 3.2: Ha-Oracle patch for Solaris 10_x86 Note: Please read SIIOTP
126060-06 Sun Cluster 3.2: HA-SAPDB Patch for Solaris 10_x86
126072-02 Sun Cluster 3.2: HA-Tomcat Patch for Solaris 10_x86
126081-05 Sun Cluster 3.2: HA-Sun Java Systems App Server Patch for Solaris 10_x86
126084-06 Sun Cluster 3.2: HA-Sun Java Message Queue Patch for Solaris 10_x86 Note: Please read SIIOTP
126096-06 Sun Cluster 3.2: Localization patch for Solaris 10 amd64
128557-04 Sun Cluster 3.2: Man Pages Patch for Solaris 10_x86
137932-02 Sun Cluster 3.2: HA-Informix patch for Solaris 10_x86


Included/updated patch revisions of Sun Cluster 3.2 11/09 Update3 for Solaris 9 5/09 or higher
126105-38 Sun Cluster 3.2: CORE patch for Solaris 9 Note: Please read SIIOTP
125991-05 Sun Cluster 3.2: SC Checks patch for Solaris 9
126016-03 Sun Cluster 3.2: HA-DNS Patch for Solaris 9
126031-09 Sun Cluster 3.2: Ha-MYSQL Patch for Solaris 9 Note: Please read SIIOTP
126034-06 Sun Cluster 3.2: HA-NFS Patch for Solaris 9
126043-06 Sun Cluster 3.2: HA-PostgreSQL Patch for Solaris 9 Note: Please read SIIOTP
126046-12 Sun Cluster 3.2: HA-Oracle patch for Solaris 9 Note: Please read SIIOTP
126049-04 Sun Cluster 3.2: HA-Oracle E-business suite Patch for Solaris 9 (-04 not yet on SunSolve)
126058-05 Sun Cluster 3.2: HA-SAPDB Patch for Solaris 9
126070-02 Sun Cluster 3.2: HA-Tomcat Patch for Solaris 9
126079-04 Sun Cluster 3.2: HA-Sun Java Systems App Server Patch for Solaris 9
126082-04 Sun Cluster 3.2: HA-Sun Java Message Queue Patch for Solaris 9 Note: Please read SIIOTP
126095-06 Sun Cluster 3.2: Localization patch for Solaris 9 sparc and Solaris 10 sparc
128556-04 Sun Cluster 3.2: Man Pages Patch for Solaris 9 and Solaris 10, sparc


The quorum server is an alternative to the traditional quorum disk. The quorum server sits outside of the cluster and is accessed through the public network; therefore it can run on a different architecture.

Included/updated patch revisions in Sun Cluster 3.2 11/09 Update3 for quorum server feature:
127404-03 Sun Cluster 3.2: Quorum Server Patch for Solaris 9
127405-04 Sun Cluster 3.2: Quorum Server Patch for Solaris 10
127406-04 Sun Cluster 3.2: Quorum Server Patch for Solaris 10_x86
Please beware of the following note, which appears in the Special Install Instructions of the Sun Cluster 3.2 core patch -38 and higher:
NOTE 17: Quorum server patch 127406-04 (or greater) needs to be installed on quorum server host first, before installing 126107-37 (or greater) Core Patch on cluster nodes.
127408-02 Sun Cluster 3.2: Quorum Man Pages Patch for Solaris 9 and Solaris 10, sparc
127409-02 Sun Cluster 3.2: Quorum Man Pages Patch for Solaris 10_x86


If some patches must be applied when the node is in noncluster mode, you can apply them in a rolling fashion, one node at a time, unless a patch's instructions require that you shut down the entire cluster. Follow procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and boot it into noncluster mode. For ease of installation, consider applying all patches at once to a node that you place in noncluster mode.
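
Condensed into commands, the rebooting-patch procedure from that guide looks roughly like this per node (see the guide for the authoritative steps; the patch id is a placeholder):
    # clnode evacuate node1           (move resource groups and device groups off the node)
    # shutdown -g0 -y -i0
    ok boot -x                        (boot in noncluster mode, SPARC syntax)
    # patchadd <patch-id>
    # init 6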

Friday Oct 30, 2009

Kernel patch 141444-09 or 141445-09 with Sun Cluster 3.2

As stated in my last blog the following kernel patches are included in Solaris 10 10/09 Update8.
141444-09 SunOS 5.10: kernel patch or
141445-09 SunOS 5.10_x86: kernel patch

Update 10.Dec.2009:
Support of Solaris 10 10/09 Update8 with Sun Cluster 3.2 1/09 Update2 is now announced. The recommendation is to use the 126106-39 (sparc) / 126107-39 (x86) with Solaris 10 10/09 Update8. Note: The -39 Sun Cluster core patch is a feature patch because the -38 Sun Cluster core patch is part of Sun Cluster 3.2 11/09 Update3 which is already released.
For new installations/upgrades with Solaris 10 10/09 Update8 use:
* Sun Cluster 3.2 11/09 Update3 with Sun Cluster core patch -39 (fix problem 1)
* Use the patches 142900-02 / 142901-02 (fix problem 2)
* Add "set nautopush=64" to /etc/system (workaround for problem 3)

For patch updates to 141444-09/141445-09 use:
* Sun Cluster core patch -39 (fix problem 1)
* Also use patches 142900-02 / 142901-02 (fix problem 2)
* Add "set nautopush=64" to /etc/system (workaround for problem 3)


It's time to point out some issues with these kernel patches in combination with Sun Cluster 3.2:

1.) The patch breaks the zpool cachefile feature if using SUNW.HAStoragePlus

a.) If the kernel patch 141444-09 (sparc) / 141445-09 (x86) is installed on a Sun Cluster 3.2 system where the Sun Cluster core patch 126106-33 (sparc) / 126107-33 (x86) is already installed then hastorageplus_prenet_start will fail with the following error message:
...
Oct 26 17:51:45 nodeA SC[,SUNW.HAStoragePlus:6,rg1,rs1,hastorageplus_prenet_start]: Started searching for devices in '/dev/dsk' to find the importable pools.
Oct 26 17:51:53 nodeA SC[,SUNW.HAStoragePlus:6,rg1,rs1,hastorageplus_prenet_start]: Completed searching the devices in '/dev/dsk' to find the importable pools.
Oct 26 17:51:54 nodeA zfs: [ID 427000 kern.warning] WARNING: pool 'zpool1' could not be loaded as it was last accessed by another system (host: nodeB hostid: 0x8516ced4). See: http://www.sun.com/msg/ZFS-8000-EY
...


b.) If the kernel patch 141444-09 (sparc) / 141445-09 (x86) is installed on a Sun Cluster 3.2 system where the Sun Cluster core patch 126106-35 (sparc) / 126107-35 (x86) is already installed then hastorageplus_prenet_start will work but the zpool cachefile feature of SUNW.HAStoragePlus is disabled. Without the zpool cachefile feature the time of zpool import increases because the import will scan all available disks. The messages look like:
...
Oct 30 15:37:45 nodeA SC[,SUNW.HAStoragePlus:8,nfs-rg,zpool1-rs,hastorageplus_validate]: [ID 148650 daemon.notice] Started searching for devices in '/dev/dsk' to find the importable pools.
Oct 30 15:37:45 nodeA SC[,SUNW.HAStoragePlus:8,nfs-rg,zpool1-rs,hastorageplus_validate]: [ID 148650 daemon.notice] Started searching for devices in '/dev/dsk' to find the importable pools.
Oct 30 15:37:49 nodeA SC[,SUNW.HAStoragePlus:8,nfs-rg,zpool1-rs,hastorageplus_validate]: [ID 547433 daemon.notice] Completed searching the devices in '/dev/dsk' to find the importable pools.
Oct 30 15:37:49 nodeA SC[,SUNW.HAStoragePlus:8,nfs-rg,zpool1-rs,hastorageplus_validate]: [ID 547433 daemon.notice] Completed searching the devices in '/dev/dsk' to find the importable pools.
Oct 30 15:37:49 nodeA SC[,SUNW.HAStoragePlus:8,nfs-rg,zpool1-rs,hastorageplus_validate]: [ID 792255 daemon.warning] Failed to update the cachefile contents in /var/cluster/run/HAStoragePlus/zfs/zpool1.cachefile to CCR table zpool1.cachefile for pool zpool1 : file /var/cluster/run/HAStoragePlus/zfs/zpool1.cachefile open failed: No such file or directory.
Oct 30 15:37:49 nodeA SC[,SUNW.HAStoragePlus:8,nfs-rg,zpool1-rs,hastorageplus_validate]: [ID 792255 daemon.warning] Failed to update the cachefile contents in /var/cluster/run/HAStoragePlus/zfs/zpool1.cachefile to CCR table zpool1.cachefile for pool zpool1 : file /var/cluster/run/HAStoragePlus/zfs/zpool1.cachefile open failed: No such file or directory.
Oct 30 15:37:49 nodeA SC[,SUNW.HAStoragePlus:8,nfs-rg,zpool1-rs,hastorageplus_validate]: [ID 205754 daemon.info] All specified device services validated successfully.
...


If the ZFS cachefile feature is not required AND the above kernel patches are installed, problem a.) is resolved by installing Sun Cluster core patch 126106-35 (sparc) / 126107-35 (x86).
Solution for a) and b):
126106-39 Sun Cluster 3.2: CORE patch for Solaris 10
126107-39 Sun Cluster 3.2: CORE patch for Solaris 10_x86

Alert 1021629.1: A Solaris Kernel Change Stops Sun Cluster Using "zpool.cachefiles" to Import zpools Resulting in ZFS pool Import Performance Degradation or Failure to Import the zpools
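
One simple (unofficial) check, derived from the messages above, of whether the cachefile feature is working again after patching: the per-pool cachefile should exist while the resource is online, e.g.
    # ls -l /var/cluster/run/HAStoragePlus/zfs/zpool1.cachefile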

2.) The patch breaks probe-based IPMP if more than one interface is in the same IPMP group

After installing the already mentioned kernel patch:
141444-09 SunOS 5.10: kernel patch or
141445-09 SunOS 5.10_x86: kernel patch
the probe-based IPMP feature is broken if the system uses more than one interface in the same IPMP group. This means ALL Solaris 10 systems which use more than one interface in the same probe-based IPMP group are affected!

After installing this kernel patch the following errors will be sent to the system console after a reboot:
...
nodeA console login: Oct 26 19:34:41 in.mpathd[210]: NIC failure detected on bge0 of group ipmp0
Oct 26 19:34:41 in.mpathd[210]: Successfully failed over from NIC bge0 to NIC e1000g0
...

Workarounds:
a) Use link-based IPMP instead of probe-based IPMP (see the sketch below)
b) Use only one interface per IPMP group if using probe-based IPMP
See the blog "Tips to configure IPMP with Sun Cluster 3.x" for more details if you would like to change the configuration.
c) Do not install the kernel patch listed above. Note: A fix is already in progress and can be obtained via a service request. I will update this blog when the general fix is available.
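
For workaround a), a minimal link-based IPMP sketch (interface names and the hostname are assumptions; link-based IPMP needs no test addresses):
    # cat /etc/hostname.bge0
    nodeA group ipmp0 up
    # cat /etc/hostname.e1000g0
    group ipmp0 up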

Solution:
142900-02 SunOS 5.10: kernel patch
142901-02 SunOS 5.10_x86: kernel patch

Alert 1021262.1 : Solaris 10 Kernel Patches 141444-09 and 141445-09 May Cause Interface Failure in IP Multipathing (IPMP)
This is reported in Bug 6888928

3.) When applying the patch Sun Cluster can hang on reboot

After installing the already mentioned kernel patch:
141444-09 SunOS 5.10: kernel patch or
141511-05 SunOS 5.10_x86: ehci, ohci, uhci patch
the Sun Cluster nodes can hang during boot because the node has exhausted the default number of autopush structures. When the clhbsndr module is loaded, it causes many more autopushes than would otherwise happen on a non-clustered system. By default, only nautopush=32 of these structures are allocated.

Workarounds:
a) Do not use the mentioned kernel patch with Sun Cluster
b) Boot in non-cluster mode and add the following to /etc/system (see the sketch below):
set nautopush=64
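
Workaround b) in commands (appending with echo is just one way; any editor works):
    ok boot -x
    # echo 'set nautopush=64' >> /etc/system
    # init 6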

Solution:
126106-42 Sun Cluster 3.2: CORE patch for Solaris 10
126107-42 Sun Cluster 3.2: CORE patch for Solaris 10_x86
For Sun Cluster 3.1 the issue is fixed in:
120500-26 Sun Cluster 3.1: Core Patch for Solaris 10

Alert 1021684.1: Solaris autopush(1M) Changes (with patches 141444-09/141511-04) May Cause Sun Cluster 3.1 and 3.2 Nodes to Hang During Boot
This is reported in Bug 6879232

Monday Feb 16, 2009

Sun Cluster 3.2 1/09 Update2 Patches

The Sun Cluster 3.2 1/09 Update2 is released. Click here for further information.

The package versions of Sun Cluster 3.2 1/09 Update2 are the same for the core framework and the agents as for Sun Cluster 3.2 and Sun Cluster 3.2 2/08 Update1. Therefore it's possible to patch up an existing Sun Cluster 3.2 or Sun Cluster 3.2 2/08 Update1 installation.

The package versions of Sun Cluster Geographic Edition 3.2 1/09 Update2 are NOT the same as for Sun Cluster Geographic Edition 3.2. Therefore an upgrade is necessary for the Geographic Edition.
But don't worry about that: unlike core Sun Cluster 3.2, the Geographic Edition framework does not deliver updates through patches. The update can be done without service interruption. Click here for details.

The following patches (with the mentioned revisions) are included in Sun Cluster 3.2 1/09 Update2, so the complete list is a combination of the Sun Cluster 3.2 2/08 Update1 patches and this list. If these patches are installed on a Sun Cluster 3.2 or Sun Cluster 3.2 2/08 Update1 release, then the framework & agent features are identical. It's always necessary to read the "Special Install Instructions Of The Patch" (shortcut: SIIOTP), but I added a note behind the patches where reading them is especially important. Furthermore, I added a note when a new resource type comes with the patch.

New additional included patch revisions of Sun Cluster 3.2 1/09 Update2 for Solaris 10 05/08 update5 or higher
126106-27 Sun Cluster 3.2: CORE patch for Solaris 10
Note:
Delivers SUNW.rac_udlm:3, SUNW.rac_framework:4, SUNW.crs_framework:2, SUNW.ScalMountPoint:3, SUNW.ScalDeviceGroup:3, SUNW.rac_svm:3, SUNW.rac_cvm:3 and SUNW.LogicalHostname:3 (but LogicalHostname was introduced in revision -17). Please read SIIOTP
125514-05 Sun Cluster 3.2: Solaris Volume Manager (Mediator) Patch
125992-03 Sun Cluster 3.2: SC Checks patch for Solaris 10
126008-02 Sun Cluster 3.2: HA-DB Patch for Solaris 10
126014-05 Sun Cluster 3.2: Ha-Apache Patch for Solaris 10
126017-02 Sun Cluster 3.2: HA-DNS Patch for Solaris 10
126020-04 Sun Cluster 3.2: HA-Containers Patch for Solaris 10 Note: Please read SIIOTP
126023-03 Sun Cluster 3.2: Sun Cluster HA for Java Web Server, Patch for Solaris 10
126026-02 Sun Cluster 3.2: HA-Kerberos Patch for Solaris 10
126032-05 Sun Cluster 3.2: Ha-MYSQL Patch for Solaris 10 Note: Please read SIIOTP
126035-05 Sun Cluster 3.2: HA-NFS Patch for Solaris 10
126044-04 Sun Cluster 3.2: HA-PostgreSQL Patch for Solaris 10 Note: Please read SIIOTP
126047-10 Sun Cluster 3.2: Ha-Oracle patch for Solaris 10 Note: Please read SIIOTP
126050-03 Sun Cluster 3.2: HA-Oracle E-business suite Patch for Solaris 10
126059-04 Sun Cluster 3.2: HA-SAPDB Patch for Solaris 10
126062-06 Sun Cluster 3.2: HA-SAP-WEB-AS Patch for Solaris 10
126068-05 Sun Cluster 3.2: HA-Sybase Patch for Solaris 10 Note: Please read SIIOTP
126080-03 Sun Cluster 3.2: HA-Sun Java Systems App Server Patch for Solaris 10
126083-02 Sun Cluster 3.2: HA-Sun Java Message Queue Patch for Solaris 10
126092-03 Sun Cluster 3.2: HA-Websphere MQ Patch Note: Please read SIIOTP
126095-05 Sun Cluster 3.2: Localization patch for Solaris 9 sparc and Solaris 10 sparc
128556-03 Sun Cluster 3.2: Man Pages Patch for Solaris 9 and Solaris 10, sparc
139921-02 Sun Cluster 3.2: JFreeChart patch for Solaris 10


New additional included patch revisions of Sun Cluster 3.2 1/09 Update2 for Solaris 10 x86 05/08 update5 or higher
126107-28 Sun Cluster 3.2: CORE patch for Solaris 10_x86
Note:
Delivers SUNW.rac_framework:4, SUNW.crs_framework:2, SUNW.ScalMountPoint:3, SUNW.ScalDeviceGroup:3, SUNW.rac_svm:3 and SUNW.LogicalHostname:3 (but LogicalHostname was introduced in revision -17). Please read SIIOTP
125515-05 Sun Cluster 3.2: Solaris Volume Manager (Mediator) Patch
125993-03 Sun Cluster 3.2: SC Checks patch for Solaris 10_x86
126009-04 Sun Cluster 3.2: HA-DB Patch for Solaris 10_x86
126015-06 Sun Cluster 3.2: HA-Apache Patch for Solaris 10_x86
126018-04 Sun Cluster 3.2: HA-DNS Patch for Solaris 10_x86
126021-04 Sun Cluster 3.2: HA-Containers Patch for Solaris 10_x86 Note: Please read SIIOTP
126024-04 Sun Cluster 3.2: Sun Cluster HA for Java Web Server, Patch for Solaris 10_x86
126027-04 Sun Cluster 3.2: HA-Kerberos Patch for Solaris 10_x86
126033-06 Sun Cluster 3.2: Ha-MYSQL Patch for Solaris 10_x86 Note: Please read SIIOTP
126036-06 Sun Cluster 3.2: HA-NFS Patch for Solaris 10_x86
126045-05 Sun Cluster 3.2: HA-PostgreSQL Patch for Solaris 10_x86 Note: Please read SIIOTP
126048-10 Sun Cluster 3.2: Ha-Oracle patch for Solaris 10_x86 Note: Please read SIIOTP
126060-05 Sun Cluster 3.2: HA-SAPDB Patch for Solaris 10_x86
126063-07 Sun Cluster 3.2: HA-SAP-WEB-AS Patch for Solaris 10_x86
126069-04 Sun Cluster 3.2: HA_Sybase Patch for Solaris 10_x86 Note: Please read SIIOTP
126081-04 Sun Cluster 3.2: HA-Sun Java Systems App Server Patch for Solaris 10_x86
126084-04 Sun Cluster 3.2: HA-Sun Java Message Queue Patch for Solaris 10_x86
126093-05 Sun Cluster 3.2: HA-Websphere MQ Patch for Solaris 10_x86 Note: Please read SIIOTP
126096-05 Sun Cluster 3.2: Localization patch for Solaris 10 amd64
128557-03 Sun Cluster 3.2: Man Pages Patch for Solaris 10_x86
139922-02 Sun Cluster 3.2: JFreeChart patch for Solaris 10_x86


New additional included patch revisions of Sun Cluster 3.2 1/09 Update2 for Solaris 9 8/05 update8 or higher
126105-26 Sun Cluster 3.2: CORE patch for Solaris 9
Note:
Delivers SUNW.rac_udlm:3, SUNW.rac_framework:4, SUNW.crs_framework:2, SUNW.ScalMountPoint:3, SUNW.ScalDeviceGroup:3, SUNW.rac_svm:3, SUNW.rac_cvm:3 and SUNW.LogicalHostname:3 (but LogicalHostname was introduced in revision -18). Please read SIIOTP
125513-04 Sun Cluster 3.2: Solaris Volume Manager (Mediator) Patch
125991-03 Sun Cluster 3.2: SC Checks patch for Solaris 9
126007-02 Sun Cluster 3.2: HA-DB Patch for Solaris 9
126013-05 Sun Cluster 3.2: HA-Apache Patch for Solaris 9
126016-02 Sun Cluster 3.2: HA-DNS Patch for Solaris 9
126022-03 Sun Cluster 3.2: Sun Cluster HA for Java Web Server, Patch for Solaris 9
126031-05 Sun Cluster 3.2: Ha-MYSQL Patch for Solaris 9 Note: Please read SIIOTP
126034-05 Sun Cluster 3.2: HA-NFS Patch for Solaris 9
126043-04 Sun Cluster 3.2: HA-PostgreSQL Patch for Solaris 9 Note: Please read SIIOTP
126046-10 Sun Cluster 3.2: HA-Oracle patch for Solaris 9 Note: Please read SIIOTP
126049-03 Sun Cluster 3.2: HA-Oracle E-business suite Patch for Solaris 9
126058-04 Sun Cluster 3.2: HA-SAPDB Patch for Solaris 9
126061-06 Sun Cluster 3.2: HA-SAP-WEB-AS Patch for Solaris 9
126067-05 Sun Cluster 3.2: HA-Sybase Patch for Solaris 9 Note: Please read SIIOTP
126079-03 Sun Cluster 3.2: HA-Sun Java Systems App Server Patch for Solaris 9
126082-02 Sun Cluster 3.2: HA-Sun Java Message Queue Patch for Solaris 9
126091-03 Sun Cluster 3.2: HA-Websphere MQ Patch Note: Please read SIIOTP
126095-05 Sun Cluster 3.2: Localization patch for Solaris 9 sparc and Solaris 10 sparc
128556-03 Sun Cluster 3.2: Man Pages Patch for Solaris 9 and Solaris 10, sparc
139920-02 Sun Cluster 3.2: JFreeChart patch for Solaris 9


The quorum server is an alternative to the traditional quorum disk. The quorum server sits outside of the cluster and is accessed through the public network; therefore it can run on a different architecture.

Included patch revisions in Sun Cluster 3.2 1/09 Update2 for quorum server feature:
127404-02 Sun Cluster 3.2: Quorum Server Patch for Solaris 9
127405-03 Sun Cluster 3.2: Quorum Server Patch for Solaris 10
127406-03 Sun Cluster 3.2: Quorum Server Patch for Solaris 10_x86


If some patches must be applied when the node is in noncluster mode, you can apply them in a rolling fashion, one node at a time, unless a patch's instructions require that you shut down the entire cluster. Follow procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and boot it into noncluster mode. For ease of installation, consider applying all patches at once to a node that you place in noncluster mode.

Information about patch management is available at Oracle Enterprise Manager Ops Center.

Wednesday Jan 28, 2009

private interconnect and patch 138888/138889


In specific Sun Cluster 3.x configurations a cluster node cannot join the cluster. Most of the time this issue comes up after the installation of kernel update patch
138888-01 through 139555-08 (or higher) SunOS 5.10: Kernel Patch OR
138889-01 through 139556-08 (or higher) SunOS 5.10_x86: Kernel Patch
AND
Sun Cluster 3.x using an Ethernet switch (with VLAN) for the private interconnect
AND
Sun Cluster 3.x using e1000g, nxge, bge or ixgb (GLDv3) interfaces for the private interconnect.

The issue shows up with messages similar to the following during boot of the cluster node:
...
Jan 25 15:46:14 node1 genunix: [ID 279084 kern.notice] NOTICE: CMM: node reconfiguration #2 completed.
Jan 25 15:46:15 node1 genunix: [ID 884114 kern.notice] NOTICE: clcomm: Adapter e1000g1 constructed
Jan 25 15:46:15 node1 ip: [ID 856290 kern.notice] ip: joining multicasts failed (18) on clprivnet0 - will use link layer broadcasts for multicast
Jan 25 15:46:16 node1 genunix: [ID 884114 kern.notice] NOTICE: clcomm: Adapter e1000g3 constructed
Jan 25 15:47:15 node1 genunix: [ID 604153 kern.notice] NOTICE: clcomm: Path node1:e1000g1 - node2:e1000g1 errors during initiation
Jan 25 15:47:15 node1 genunix: [ID 618107 kern.warning] WARNING: Path node1:e1000g1 - node2:e1000g1 initiation encountered errors, errno = 62. Remote node may be down or unreachable through this path.
Jan 25 15:47:16 node1 genunix: [ID 604153 kern.notice] NOTICE: clcomm: Path node1:e1000g3 - node2:e1000g3 errors during initiation
Jan 25 15:47:16 node1 genunix: [ID 618107 kern.warning] WARNING: Path node1:e1000g3 - node2:e1000g3 initiation encountered errors, errno = 62. Remote node may be down or unreachable through this path.
...
Jan 25 16:33:51 node1 genunix: [ID 224783 kern.notice] NOTICE: clcomm: Path node1:e1000g1 - node2:e1000g1 has been deleted
Jan 25 16:33:51 node1 genunix: [ID 638544 kern.notice] NOTICE: clcomm: Adapter e1000g1 has been disabled
Jan 25 16:33:51 node1 genunix: [ID 224783 kern.notice] NOTICE: clcomm: Path node1:e1000g3 - node2:e1000g3 has been deleted
Jan 25 16:33:51 node1 genunix: [ID 638544 kern.notice] NOTICE: clcomm: Adapter e1000g3 has been disabled
Jan 25 16:33:51 node1 ip: [ID 856290 kern.notice] ip: joining multicasts failed (18) on clprivnet0 - will use link layer broadcasts for multicast

Update 6.Mar.2009:
Available now:
Alert 1020193.1 Kernel Patches/Changes may Stop Sun Cluster Nodes From Joining the Cluster

Update 26.Jun.2009:
The issue is fixed in the patches
141414-01 or higher SunOS 5.10: kernel patch OR
137104-02 or higher SunOS 5.10_x86: dls patch

Both patches require the 13955[56]-08 kernel update patch, which is included in Solaris 10 5/09 Update7. When using Solaris 10 5/09 Update7, Sun Cluster 3.2 requires the Sun Cluster core patch in revision -33 or higher. So, to get this fixed it's recommended to use Solaris 10 5/09 Update7 (patch 13955[56]-08 or higher & 141414-01 (sparc) or 137104-02 (x86)) with the Sun Cluster 3.2 core patch -33 or higher.


Choose one of the following corrective actions (if you do not install the patch with the fix):
  • Before installing the mentioned patches, configure VLAN tagging on the Sun interface and on the switch. This makes VLAN-tagged packets expected and prevents drops. The interface name then changes to e.g. e1000g810000. After the configuration change it's recommended to reboot the Sun Cluster hosts. Configuration details.

  • If using the above mentioned kernel update patch, enable QoS (Quality of Service) on the Ethernet switch. The switch should be able to handle priority tagging. Please refer to the switch documentation because each switch is different.

  • Do not install the above mentioned kernel update patch if using a VLAN in the Sun Cluster 3.x private interconnect.

The mentioned kernel update patch delivers some new features in the GLDv3 architecture. It makes packets 802.1q standard compliant by including priority tagging. Therefore the following Sun Cluster 3.x configurations should not be affected:
* Sun Cluster 3.x which uses ce, ge, hme, qfe, ipge or ixge network interfaces.
* Sun Cluster 3.x which has back-to-back connections for the private interconnect.
* Sun Cluster 3.x on Solaris 8 or Solaris 9.

Thursday Oct 02, 2008

Sun Cluster 3.2 and VxVM 5.0 patch 124361-06

Some issues show up after you have installed:

Patch-ID# 124361-06
Synopsis: VRTSvxvm 5.0_MP1_RP5: Rolling Patch 5 for Volume Manager 5.0 MP1


This patch changes the handling of VxVM devices, which leads to conflicts with Sun Cluster 3.2.

Seen errors:
a)
host0 # scswitch -z -D testdg -h host1
Sep 26 10:20:17 host0 Cluster.CCR: build_devlink_list: readlink failed for /dev/vx/dsk//global/.devices/node@1/dev/vx/dsk/testdg: No such file or directory

Sep 26 10:23:41 host0 SC[SUNW.HAStoragePlus:6,test-rg,test-hastp-rs,hastorageplus_prenet_start]: Failed to analyze the device special file associated with file system mount point /test/data/AB: No such file or directory


b)
host0 # clrg create test-rg
host0 # clresource create -g test-rg -t SUNW.HAStoragePlus -p FileSystemMountPoints="/testdata" test-rs
clresource: host1 - Failed to analyze the device special file associated with file system mount point /testdata: No such file or directory.

clresource: (C189917) VALIDATE on resource test-rs, resource group test-rg, exited with non-zero exit status.
clresource: (C720144) Validation of resource test-rs in resource group test-rg on node node1 failed.
clresource: (C891200) Failed to create resource "test-rs".


On the other node:
host1# Sep 26 14:27:38 host1 SC[SUNW.HAStoragePlus:4,test-rg,test-rs,hastorageplus_validate]: Failed to analyze the device special file associated with file system mount point /testdata: No such file or directory.
Sep 26 14:27:38 host1 Cluster.RGM.rgmd: VALIDATE failed on resource , resource group , time used: 0% of timeout <1800, seconds>

Workaround:
Do not install patch 124361-06; use patch 124361-05 instead.
Important if the patch is already installed: Before backing out 124361-06, ensure that Solaris 10 patch 125731-02 is installed to avoid bug 6622037.


Update 10.Oct.2008: New patch 122058-11 is released which fixes the problem and obsoletes 124361-06.
Patch-ID# 122058-11
Synopsis: VRTSvxvm 5.0MP3: Maintenance Patch for Volume Manager 5.0


Update 24.Oct.2008:
Basically the problems all arise when 124361-06 is installed and a VxVM volume is created on a Sun Cluster configuration. With patch 124361-06, when a VxVM volume is created the special device is created under /devices, and symbolic links under /dev/vx/[r]dsk/<dg>/ point to these /devices entries. This behaviour does not happen when 122058-11 is installed: the special files are created under /dev/vx/[r]dsk/<dg>/ and NOT under /devices.

Check if the devices are correct. It's quite important that NO symbolic links exist in the mentioned directory of a device group. Two workarounds are available if the wrong links exist. Be sure that 122058-11 is already installed and the volumes are inactive.

Workaround1:
node1# cd /global/.devices/node@1/dev/vx/dsk/testdg
node1# ls -l
total 4
lrwxrwxrwx 1 root root 46 Oct 15 16:27 vol01 -> /devices/pseudo/vxio@0:testdg,vol01,59000,blk
lrwxrwxrwx 1 root root 46 Oct 15 16:27 vol02 -> /devices/pseudo/vxio@0:testdg,vol02,59001,blk
node1# rm vol01 vol02
node1# cd /global/.devices/node@1/dev/vx/rdsk/testdg
node1# ls -l
total 4
lrwxrwxrwx 1 root root 46 Oct 15 16:27 vol01 -> /devices/pseudo/vxio@0:testdg,vol01,59000,raw
lrwxrwxrwx 1 root root 46 Oct 15 16:27 vol02 -> /devices/pseudo/vxio@0:testdg,vol02,59001,raw
node1# rm vol01 vol02
node1#
node1# cldg sync testdg
node1#
node1# ls -l /global/.devices/node@1/dev/vx/dsk/testdg
total 0
brw------- 1 root root 282, 59000 Oct 15 16:32 vol01
brw------- 1 root root 282, 59001 Oct 15 16:32 vol02
node1# ls -l /global/.devices/node@1/dev/vx/rdsk/testdg
total 0
crw------- 1 root root 282, 59000 Oct 15 16:32 vol01
crw------- 1 root root 282, 59001 Oct 15 16:32 vol02

Workaround2:
If symbolic links exist, remove them:
node1# rm /dev/vx/[r]dsk/testdg/symlink
and then recreate the special files using the cluster command:
node1# /usr/cluster/lib/dcs/scvxvmlg

Afterwards, to be safe, a reconfiguration boot of all nodes is recommended.

Update 30.Oct.2008:
Available now:
Alert 1019694.1 Sun Cluster Resource "HAstoragePlus" May Fail if Veritas Volume Manager Patch 124361-06 is Installed

Wednesday Aug 06, 2008

Sun SPARC Enterprise Mx000 with active bge interface

A Sun SPARC Enterprise Server M4000, M5000, M8000 or M9000 can sporadically hang at boot time
a) if the system is part of a Sun Cluster
and
b) if the system has a configured bge network interface


Example of boot hang:
...
Booting as part of a cluster
NOTICE: CMM: Node node1 (nodeid = 1) with votecount = 1 added.
NOTICE: CMM: Node node2 (nodeid = 2) with votecount = 1 added.
NOTICE: CMM: Quorum device 2 (/dev/did/rdsk/d7s2) added; votecount = 5, bitmask of nodes with configured paths = 0x3f.
NOTICE: clcomm: Adapter bge3 constructed
... now the system hang at this point ...


Solution: Install 138042-02 (or higher), the SunOS 5.10: MAC patch.

Tuesday May 27, 2008

Missing preremove script in Sun Cluster 3.2 core patch revision -12 and higher

In my last blog entry I stated that the Sun Cluster 3.2 GA release with the -12 Sun Cluster core patch is the same as Sun Cluster 3.2 2/08 aka Update1. This is still true, but the preremove script of the SUNWscr package is missing in Sun Cluster 3.2 core patch revision -12 and higher. This is documented as internal bug 6676771. Therefore it's NOT possible to remove the SUNWscr package when revision -12 or higher of the Sun Cluster 3.2 core patch is installed. (Lower revisions of the core patch are NOT affected.) The removal of the SUNWscr package is necessary in case of an upgrade using the command "scinstall -u update".


The fastest workaround is described in the Special Install Instructions of the Sun Cluster core patch revision -12:
NOTE 5: After removing this patch, remove the SunCluster smf service for
        service tag.
        svcadm disable /system/cluster/sc_svtag:default
        svccfg delete /system/cluster/sc_svtag:default
Execute these commands before the start of the Sun Cluster upgrade.


To fix the issue immediately it's possible to change the preremove script of the SUNWscr package. At the moment the preremove script of SUNWscr is NOT delivered with the Sun Cluster core patch; therefore the workaround persists.

Add the following to the /var/sadm/pkg/SUNWscr/install/preremove script (version 1.3):

1.) New subroutine (before the main part)
remove_svtag()
{
      STCLIENT=${BASEDIR}/usr/bin/stclient
      CL_URN_FILE=${BASEDIR}/var/sadm/servicetag/cl.urn
      if [ -f ${CL_URN_FILE} ]; then
         # read the urn from the file
         URN=`cat ${CL_URN_FILE}`
         if [ -f ${STCLIENT} ]; then
            ${STCLIENT} -d -i ${URN} >/dev/null 2>&1
         fi
         rm -f ${CL_URN_FILE}
      fi
      return 0
}


2.) In the part of SVCADMD="/usr/sbin/svcadm disable -s" add
$SVCADMD svc:/system/cluster/sc_svtag:default

3.) Near the end of the main routine, before the line "if [ ${errs} -ne 0 ]; then", add
# Remove service tag for cluster
remove_svtag || errs=`expr ${errs} + 1`


Or download the new preremove version 1.5 script for the SUNWscr package and replace the 1.3 version:
# cd /var/sadm/pkg/SUNWscr/install/
# cp preremove preremove.old
# cp preremove_version1.5_SUNWscr preremove

Sunday Mar 09, 2008

The nscd does not cache hosts for Sun Cluster

After installation of patch 120011-14 (Solaris 10 sparc) or patch 120012-14 (Solaris 10 x86), nscd no longer caches hosts in a Sun Cluster configuration. Solaris 10 8/07 Update4 is also affected because the mentioned patches are bundled there.


By default the 'cluster' database is the first entry for hosts and netmasks in the file /etc/nsswitch.conf. _nscd_get_smf_state() does not recognise 'cluster' as a backend, thus the cluster nss entry is not invoked by the nscd process and hence not cached by nscd. This issue is due to the new interface between the nss switch engine and the backend layers which was introduced by the Sparks project.

Update 2.Mar 2009 - Start -
The issue is fixed in the following patches for sparc:
126106-27 Sun Cluster 3.2: CORE patch for Solaris 10 (included in Sun Cluster 3.2 1/09 update2)
138263-03 SunOS 5.10: nscd patch (Solaris 10 10/08 Update6 only include 138263-02)
or for x86:
126107-28 Sun Cluster 3.2: CORE patch for Solaris 10_x86 (included in Sun Cluster 3.2 1/09 update2)
138264-03 SunOS 5.10_x86: nscd patch (Solaris 10 10/08 Update6 only include 138264-02)
This means that with the mentioned patches installed, the 'cluster' database can again be the first entry for hosts and netmasks in the file /etc/nsswitch.conf.
Update 2.Mar 2009 - End -


Known issues which occur:
- slow name lookups
- Applications linked with '-library=stlport4' abort on gethostbyname

There is no general approach to identify the issue. But if you suspect a hostname resolution issue, turn on nscd debug mode
  % more /etc/nscd.conf
  [...]
       logfile         /var/adm/nscd.log    <---- uncomment
  #    enable-cache    hosts           no
       debug-level     9                    <---- set debug level here
  [...]

  Then stop and restart nscd.
  # svcadm restart svc:/system/name-service-cache:default


Workarounds:

A) Example of a default configuration for a 2-node Sun Cluster 3.x

Add private cluster interconnect addresses to hosts, netmasks and remove 'cluster' from nsswitch.conf on ALL nodes.
Add to /etc/hosts:
172.16.0.129   clusternode1-priv-physical1
172.16.1.1     clusternode1-priv-physical2
172.16.4.1     clusternode1-priv
172.16.0.130   clusternode2-priv-physical1
172.16.1.2     clusternode2-priv-physical2
172.16.4.2     clusternode2-priv
Add to /etc/netmasks:
172.16.0.128       255.255.255.128
172.16.1.0         255.255.255.128
172.16.4.0         255.255.254.0
Remove 'cluster' entry for hosts and netmasks in /etc/nsswitch.conf e.g:
hosts: files <any other hosts database>
netmasks: files <any other netmasks database>
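
To verify that the private hostnames now resolve from the local files, a quick check on every node:
    # getent hosts clusternode1-priv
    172.16.4.1     clusternode1-priv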

For non-default installations or configurations with more than 2 nodes, look at the next workaround, or identify the values with the commands 'ifconfig' and 'scconf -pvv | grep -i private' on all nodes.


B) For all individual Sun Cluster configurations.


 1) Backup configuration files.
   # cp /etc/nsswitch.conf /etc/nsswitch.conf.cluster
   # cp /etc/inet/hosts /etc/inet/hosts.cluster
   # cp /etc/netmasks /etc/netmasks.cluster

 2) Add Private Cluster interconnect addresses to each cluster node's local /etc/hosts file.
  NOTE: Make sure the 'cluster' is still in nsswitch.conf for the hosts entry whilst performing the following.
   # getent hosts clusternode1-priv-physical1 >>/etc/hosts
   # getent hosts clusternode1-priv-physical2 >>/etc/hosts
   # getent hosts clusternode1-priv >>/etc/hosts
   # getent hosts clusternode2-priv-physical1 >>/etc/hosts
   # getent hosts clusternode2-priv-physical2 >>/etc/hosts
   # getent hosts clusternode2-priv >>/etc/hosts
   The above is an example for a default two node cluster with two private interconnects. If you have more nodes, more interconnects or non-default hostnames then identify them by using 'ifconfig' and 'scconf -pvv|grep -i private' on all nodes.

 3) Add private cluster interconnect netmasks to netmasks.
   Note that the netmasks file should contain the network number in the first column and the corresponding netmask in the second column.
   The following script will collect these from 'cluster' before you remove it in the next step:
   # ifconfig -a | nawk '/flags/&&!/PRIVATE/{p=0}/flags/&&/PRIVATE/{p=1} \
   p==1&&$3 ~ /netmask/{d=0;h=tolower($4);j=length(h); \
   for (i=1;i <= j; i++) {
     d=d*16+index("123456789abcdef",substr(h,i,1));
     if (!(i%2)){n[(i/2)]=d;d=0}
   } al=split($2,a,".");bl=split($6,b,"."); \
   for (i=1; i<5; i++) if(n[i] != 255) a[i]=b[i] - (255 - n[i]);
     printf("%d.%d.%d.%d\t%d.%d.%d.%d\n",
     a[1],a[2],a[3],a[4],n[1],n[2],n[3],n[4]);
   }' > /tmp/netmasks
   # cat /tmp/netmasks
   172.16.0.128      255.255.255.128
   172.16.1.0        255.255.255.128
   172.16.4.0        255.255.254.0
  Check the output for errors. Depending on the configuration maybe it's necessary to remove some entries.
   Append new entries to netmasks:
   # cat /tmp/netmasks >> /etc/netmasks

 4) Edit nsswitch.conf to remove 'cluster' entries.
In the following example we look at the hosts and netmasks entries before the change, change the relevant lines with nawk, verify the changes, and update nsswitch.conf:
   # egrep -n '^hosts|^netmasks' /etc/nsswitch.conf
   28:hosts: cluster files dns nisplus
   42:netmasks: cluster files nisplus
   # nawk '/^(hosts|netmasks)/&&/cluster/ \
   {gsub(/cluster/, "")}{print}' /etc/nsswitch.conf > /tmp/nsswitch.conf
   # diff /etc/nsswitch.conf /tmp/nsswitch.conf
   28c28
   < hosts: cluster files dns nisplus
   ---
   > hosts: files dns nisplus
   42c42
   < netmasks: cluster files nisplus
   ---
   > netmasks: files nisplus
   # cat /tmp/nsswitch.conf > /etc/nsswitch.conf


There are 3 internal bugs which address this issue:
Bug 6632298: nscd doesn't cache hosts for cluster after sparks project (120011-14)
Bug 6634592: nss_cluster mods to match new nsswitch API in s10u4
Bug 6644077: nscd rejects - foreign nsswitch backend
A fix for Solaris 10 is expected by the end of this year.

Tuesday Nov 13, 2007

nxge driver and patch 120011-14

This information is about the different nxge packages which are used for network interface cards (NICs). At the moment, after installation of patch 120011-14 the system gets the following panic on the next boot when the unbundled nxge package of Solaris 10 11/06 Update3 is installed.

panic[cpu1]/thread=2a100aa9cc0: BAD TRAP: type=31 rp=2a100aa8f90 addr=0 mmu_fsr=0 occurred in module "genunix" due to a NULL pointer dereference
sched: trap type = 0x31
pid=0, pc=0x1200598, sp=0x2a100aa8831, tstate=0x80001601, context=0x0
g1-g7: 7ae2d074, 1, 0, 7ae2d000, 7ae2cfd4, 10, 2a100aa9cc0


For patch 120011-14 the bundled package version of Solaris 10 8/07 is necessary. Get the package SUNWnxge.u (sun4u architecture) or SUNWnxge.v (sun4v architecture) from the Solaris 10 8/07 Update4 distribution or higher, or download the packages here.


The NICs which use the nxge driver are:
Sun Dual 10Gbe Fibre PCIe x8 Low Profile Card (X1027A-z)
Dual 10GbE XFP PCIe x8 ExpressModule for Blade Servers (X1028A-z)
Sun Quad Gigabit Ethernet UTP PCIe x8 Card (X4447A-z and X7287A-z)


Unbundled package version for Solaris 10 11/06 Update3.

t2000# pkginfo -l SUNWnxge
   PKGINST: SUNWnxge
      NAME: Sun x8 10G/1G Ethernet Adapter Driver
  CATEGORY: system
      ARCH: sparc.sun4u
   VERSION: 1.0,REV=2007.01.12.10.0
   BASEDIR: /
    VENDOR: Sun Microsystems, Inc.
      DESC: Sun x8 10G/1G Ethernet Adapter Driver
    PSTAMP: miro20070112193338
...

Bundled package version in Solaris 10 8/07 Update4.

t2000# pkginfo -l SUNWnxge
   PKGINST: SUNWnxge
      NAME: Sun NIU leaf driver
  CATEGORY: system
      ARCH: sparc.sun4u
   VERSION: 11.10.0,REV=2007.07.08.17.44
   BASEDIR: /
    VENDOR: Sun Microsystems, Inc.
      DESC: Sun NIU 10Gb/1Gb driver
    PSTAMP: on10ptchfeat20070708174804
...


Keep in mind (currently):
Solaris 10 11/06 Update3 only works with the unbundled version 1.0,REV=2007.01.12.10.0
Solaris 10 11/06 Update3 with 120011-14 only works with the bundled version 11.10.0,REV=2007.07.08.17.44
Solaris 10 08/07 Update4 only works with the bundled version 11.10.0,REV=2007.07.08.17.44


Workarounds:

1) If the system already suffered the mentioned panic
  a) Boot the system from net or DVD.
  b) Mount the / filesystem to /a (see the sketch after this procedure).
     Also mount /var to /a/var if /var is a separate partition.
     NOTE: If you have a root mirror - take care of it. Additional steps are necessary.
  c) Remove the unbundled SUNWnxge package with
     # pkgrm -R /a SUNWnxge
  d) Add the bundled package. Use SUNWnxge.v for sun4v architecture
     or SUNWnxge.u for sun4u architecture. e.g: for sun4v
     # pkgadd -R /a -d . SUNWnxge.v
  e) Install the nxge patch for bundled nxge package 127741-01
     # patchadd -R /a 127741-01
  f) umount the filesystems and reboot the system
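A minimal sketch of steps a) and b), assuming the root filesystem is on c0t0d0s0 and /var on c0t0d0s3 (both device names are only examples, adjust them to your configuration):
   ok boot net -s
   # mount /dev/dsk/c0t0d0s0 /a
   # mount /dev/dsk/c0t0d0s3 /a/var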
2) Prevent the panic (tested)
  a) Remove the unbundled package SUNWnxge with
     # pkgrm SUNWnxge
  b) Add the bundled package. Use SUNWnxge.v for sun4v architecture
     or SUNWnxge.u for sun4u architecture. e.g: for sun4v
     # pkgadd -d . SUNWnxge.v
     Note: Do NOT reboot now. The system will panic if 120011-14 is not installed!
  c) Install nxge patch for bundled nxge package 127741-01
     # patchadd 127741-01
  d) Install patch 120011-14 on Solaris 10 11/06 Update3
     Note: The patch 120011-14 requires a lot of other patches. A patch management tool may be helpful!
  e) Reboot the system
Attention: What's NOT working
It's NOT possible to install the patch 120011-14 before the replacement of the package SUNWnxge. After the installation of 120011-14 the system is blocked for package operations. The message is:
# pkgrm SUNWnxge
pkgrm: ERROR: unable to remove any package from the system until it is rebooted.
One or more patches have updated the system but these changes are not yet enabled.
Additional package operations are not permitted until the system is rebooted.



TAKE CARE ABOUT ALERT 1000628.1: Solaris 10 Systems May Fail to Come up if Patches Are Applied After Kernel Patches 120011-14 (SPARC) and 120012-14 (x86) and Before the Reboot. Workaround: To avoid this issue, reboot the system immediately after installing patch 120011-14 or 120012-14.
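One way to follow that recommendation (just a sketch): chain the reboot directly to a successful patch installation so that no other patch operation can slip in between:
   # patchadd 120011-14 && init 6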


Additional information for i386 architecture.
The kernel update patch is 120012-14.
In Solaris 10 8/07 the bundled package name is SUNWnxge.i with version 11.10.0,REV=2007.07.08.17.21
The nxge patch for bundled Solaris 10 8/07 Update4 nxge package is 127742-01.

Tuesday Jul 17, 2007

Patchlist for Sun Cluster 3.2 Geographic Edition

The first patches for Sun Cluster 3.2 Geographic Edition are released. If you have a contract you can download the required patches.


Patchlist for Solaris 9 8/05 update8 or higher & Solaris 10 11/06 update3 or higher
126607-02 Sun Cluster Geographic Edition: Core, Utilities and Man Pages Patch
126611-02 Sun Cluster Geographic Edition: Availability Suite Data Replication Patch
126613-02 Sun Cluster Geographic Edition: TrueCopy Data Replication Patch
126746-02 Sun Cluster Geographic Edition: SRDF Data Replication Patch


Patchlist for Solaris 10 x86 11/06 update3 or higher
126608-02 Sun Cluster Geographic Edition_x86: Core, Utilities and Man Pages Patch
126612-02 Sun Cluster Geographic Edition_x86: Availability Suite Data Replication Patch


Friday Jun 15, 2007

Patchlist for Sun Cluster 3.2

The first patches for Sun Cluster 3.2 are released. It's highly recommended to install the core patch and the necessary agent patches before production. If you have a contract you can download the required patches.
Update 31.Aug.2007: Added new Sun Cluster 3.2 core patches 126105, 126106 and 126107.
Update 7.Jan.2008: Added new Sun Cluster 3.2 HA-Oracle E-business suite patches 126049 and 126050.
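To check whether a given patch is already installed, and in which revision, showrev can be used. Example for the SC 3.2 core patch for Solaris 10:
   # showrev -p | grep 126106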


Patchlist for Solaris 10 11/06 update3 or higher
125511-02 Sun Cluster 3.2: Core Patch for Solaris 10
126106-01 Sun Cluster 3.2: CORE patch for Solaris 10
125508-03 Sun Cluster 3.2: Manageability and Serviceability Agent
125514-01 Sun Cluster 3.2: Solaris Volume Manager (Mediator) Patch
125517-02 Sun Cluster 3.2: OPS Core Patch for Solaris 10
126011-01 Sun Cluster 3.2: HA-DHCP Patch for Solaris 10
126014-01 Sun Cluster 3.2: HA-Apache Patch
126032-01 Sun Cluster 3.2: HA-MySQL Patch
126047-01 Sun Cluster 3.2: HA-Oracle Patch
126050-02 Sun Cluster 3.2: HA-Oracle E-business suite Patch for Solaris 10
126062-01 Sun Cluster 3.2: HA-SAPWebAS Patch
126065-01 Sun Cluster 3.2: HA-Siebel Patch
126068-01 Sun Cluster 3.2: HA-Sybase Patch for Solaris 10
126092-01 Sun Cluster 3.2: HA-Websphere MQ Patch


Patchlist for Solaris 10 x86 11/06 update3 or higher
125512-02 Sun Cluster 3.2_x86: Core Patch for Solaris 10_x86
126107-01 Sun Cluster 3.2_x86: CORE patch for Solaris 10_x86
125509-03 Sun Cluster 3.2_x86: Manageability and Serviceability Agent
125515-01 Sun Cluster 3.2_x86: Solaris Volume Manager (Mediator) Patch
125518-02 Sun Cluster 3.2_x86: OPS Core Patch for Solaris 10_x86
126012-01 Sun Cluster 3.2_x86: HA-DHCP Patch for Solaris 10
126015-01 Sun Cluster 3.2_x86: HA-Apache Patch
126033-01 Sun Cluster 3.2_x86: HA-MySQL Patch
126048-01 Sun Cluster 3.2_x86: HA-Oracle Patch
126063-01 Sun Cluster 3.2_x86: HA-SAPWebAS Patch
126093-01 Sun Cluster 3.2_x86: HA-Websphere MQ Patch


Patchlist for Solaris 9 8/05 update8 or higher
125510-02 Sun Cluster 3.2: Core Patch for Solaris 9
126105-01 Sun Cluster 3.2: CORE patch for Solaris 9
125507-03 Sun Cluster 3.2: Manageability and Serviceability Agent
125513-01 Sun Cluster 3.2: Solaris Volume Manager (Mediator) Patch
125516-02 Sun Cluster 3.2: OPS Core Patch for Solaris 9
126010-01 Sun Cluster 3.2: HA-DHCP Patch for Solaris 9
126013-01 Sun Cluster 3.2: HA-Apache Patch
126031-01 Sun Cluster 3.2: HA-MySQL Patch
126046-01 Sun Cluster 3.2: HA-Oracle Patch
126049-02 Sun Cluster 3.2: HA-Oracle E-business suite Patch for Solaris 9
126061-01 Sun Cluster 3.2: HA-SAPWebAS Patch
126064-01 Sun Cluster 3.2: HA-Siebel Patch
126067-01 Sun Cluster 3.2: HA-Sybase Patch for Solaris 9
126085-01 Sun Cluster 3.2: HA-SWIFTAlliance Access Patch
126091-01 Sun Cluster 3.2: HA-Websphere MQ Patch


In case of a localized version you additionally need:
126095-01 Sun Cluster 3.2: Localization patch for Solaris 9 sparc and Solaris 10 sparc
126096-01 Sun Cluster 3.2_x86: Localization patch for Solaris 10 amd64


In case of cacao 2.x for Sun Cluster Manager GUI:
123893-04 SunOS 5.10: Common Agent Container (cacao) runtime 2.1 upgrade patch
123894-03 SunOS 5.10: Common Agent Container (cacao) secure web server 2.1 upgrade patch
123895-03 SunOS 5.10: Common Agent Container (cacao) monitoring 2.1 upgrade patch
123896-04 SunOS 5.10_x86: Common Agent Container (cacao) runtime 2.1 upgrade patch
123897-03 SunOS 5.10_x86: Common Agent Container (cacao) secure web server 2.1 upgrade patch
123898-03 SunOS 5.10_x86: Common Agent Container (cacao) monitoring 2.1 upgrade patch


Wednesday Feb 07, 2007

Start with Sun Cluster 3.2


The next powerful version of Sun Cluster is available. Sun Cluster 3.2 is part of the Solaris Cluster product suite which includes Sun Cluster, Sun Cluster Geographic Edition, developer tools and support for commercial and open-source applications through agents. Some resources I like to mention:

For the installation of Sun Cluster 3.2 you should know:
--> Solaris 10 11/06 (Update3) is required. No earlier release of Solaris 10 is supported.
--> Additionally, the following patches are highly recommended (the table lists the minimum revision of each patch; a verification sketch follows the lists).

SPARC:
118833-36 SunOS 5.10: kernel patch
124918-02 SunOS 5.10: devfsadm, devlinks, drvconfig patch
124916-01 SunOS 5.10: sd, ssd drivers patch
121010-04 SunOS 5.10: rpc.metad patch
120986-10 SunOS 5.10: mkfs and newfs patch
X86:
118855-36 SunOS 5.10_x86: kernel patch
124920-02 SunOS 5.10_x86: Solaris boot filelist.ramdisk patch
124919-02 SunOS 5.10_x86: devfsadm patch support
124917-02 SunOS 5.10_x86: sd driver patch
121011-04 SunOS 5.10_x86: rpc.metad patch
120987-10 SunOS 5.10_x86: mkfs, newfs, other ufs utils patch
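
A small sketch to verify the SPARC list in one go. Beware: it only matches the exact revision, so a patch installed in a higher revision is reported as 'missing' and must be checked manually:
   # for p in 118833-36 124918-02 124916-01 121010-04 120986-10; do
   >   showrev -p | grep "$p" >/dev/null && echo "$p installed" || echo "$p missing"
   > done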

Note: Sun Cluster 3.2 can also run on Solaris 9 8/05 (Update8). Certainly all Solaris 10 specific features are not available in Solaris 9.

Sunday May 07, 2006

Patching to Sun Cluster 3.1 Update4

Patching a Sun Cluster 3.1 on Solaris 8 or Solaris 9 is not always straightforward. Especially if you install the Sun Cluster core patch which is equivalent to the Sun Cluster 3.1 8/05 aka Update4 release.

The following patches are equivalent to Sun Cluster 3.1 8/05 aka Update4:
Solaris 9: 117949-12 or higher
Solaris 9_x86: 117909-12 or higher
Solaris 8: 117950-12 or higher

If you install one of the mentioned SC core patches without shutting down the whole cluster, then patching a node is similar to a rolling upgrade. This is also called the "Rebooting Patch (Node)" procedure. For these procedures it is a requirement to have the -11 revision of the SC core patch installed and active. Active means you need to install the -11 revision of the patch and reboot the nodes. If the -11 SC core patch is not installed AND active then you will hit bug 6210440 and your whole cluster will go down during patch installation. The issue is documented in several places but many Sun Cluster 3.1 installations still get affected. Therefore never forget to read the special install instructions of these patches.
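Before patching, verify which revision of the SC core patch is installed on each node (Solaris 9 example; use 117909 or 117950 accordingly):
   # showrev -p | grep 117949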

Beware: if you use TLP (Traffic Light Patch Management), only one revision of the SC core patch is in the patchset. If the patchset includes revision -12 or higher, then you should have an active -11 SC core patch before you install the TLP patchset.

It is not necessary to look at the Solaris 10 SC core patches because Sun Cluster 3.1 8/05 is the first release which supports Solaris 10. But I like to mention that it is highly recommended to use Solaris 10 1/06 aka Update1 for Sun Cluster 3.1 8/05 on Solaris 10.
