Monday Mar 31, 2014

Enhanced Oracle Solaris Cluster Security Framework


Besides providing high availability (HA) and reliability to applications, Oracle Solaris Cluster data services (agents) strive to provide a very secure HA environment by leveraging some of the salient security features built into the Oracle Solaris Cluster software. Oracle Solaris Cluster Resource Group Manager (RGM) callback methods such as Start, Stop, or Validate execute with a high level of privilege and must be protected against modification by a non-privileged user. These callback methods in turn might execute application programs that often do not require elevated privilege. If an application program is to be executed with elevated privilege, it must similarly be protected against modification by an unprivileged user.

In summary, an agent developer needs to perform ownership and permission checks on programs that are to be executed with elevated privilege, and the framework should make it easy to reduce privilege when executing such programs. For these purposes, several enhancements were introduced in the Oracle Solaris Cluster 4.1 software:

1) New checks on the ownership and permission of RGM callback method executable files.

2) A new Application_user standard property to provide a standard hook for the end user to specify a non-root username for execution of application programs.

3) A new scha_check_app_user command, which implements ownership and permission checks on application executable files.

4) A new cluster property, resource_security, to control the new ownership and permission checks.

This blog post describes these enhancements.

Invoking applications with limited privilege using data services in Oracle Solaris Cluster software

The security enhancements mentioned above allow Oracle Solaris Cluster software to follow the principle of least privilege and limit any damage that might result from accidental or deliberate misuse of the unlimited access rights bestowed on the superuser (root). Oracle Solaris Cluster data services adhere to the principle of least privilege when invoking the applications running in HA mode on Oracle Solaris Cluster software. Because the resource methods that are responsible for interacting with the applications always run with superuser privilege, it is highly recommended from a security perspective that this privilege level be dropped to the minimum while executing the application program.

It is therefore crucial that the Oracle Solaris Cluster agent methods should run all external programs using a wrapper, to ensure that the external program is executed with the correct username and privileges.

Oracle Solaris Cluster software implements this concept by providing the scha_check_app_user command and properties such as Application_user and resource_security. Agent developers can use the Application_user property and the scha_check_app_user command to enforce least privilege for their own applications. The behaviour of the scha_check_app_user command depends heavily on the values of these two properties; the picture becomes clearer with the simple example described below.

For full details about these features, refer to the man pages for scha_check_app_user(1HA) and r_properties(5), and to the resource_security description in cluster(1CL). Meanwhile, let us take a quick tour of these features by creating a very simple application (referred to as "My Example App" in this article) and a minimal skeleton of an agent that binds this application to the Oracle Solaris Cluster HA environment. Note that real agent code is much more complex when all aspects of an HA environment are taken into consideration. Our goal here is only to show how to use these security features, so only the minimal modules required to bind an application to a resource and start or stop the program are described.

Application_user example: My Example App

Oracle Solaris Cluster software supports a myriad of applications in its high-availability environment, and many of them are quite complex. "My Example App", on the other hand, is a concise sample shell script and will serve as an example of the usage of the Application_user property and the scha_check_app_user command.

My Example App sends a mail notification to a cluster administrator on failover or switchover. Although short, this application can be used to let a cluster administrator check email on a smartphone and see whether action might be required when a critical resource or resource group comes online or goes offline. For this purpose, the admin can put the My Example App resource in the same resource group as another important resource that needs to be monitored.

Please note that this agent doesn't cover all the modules, and it is not recommended to run it on a production system.

My Example App modules

- Online module (executes when resource goes up)

/opt/myexapp/online.ksh (Executable permissions: 755 owned by user app_user)

- Offline module (executes when resource goes down)

/opt/myexapp/offline.ksh (Executable permissions: 755 owned by user app_user)

Note: app_user is the UNIX user expected to execute the application. A valid app_user account must exist on all the nodes on which the resource is planned to be configured.
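As an illustration, the online module might look like the following sketch. The message wording is an assumption, and for self-containment the sketch only builds and prints the notification; a real module would pipe the body to mailx (the recipient address shown in the comment is hypothetical):

```shell
#!/bin/sh
# Hypothetical sketch of /opt/myexapp/online.ksh (details are illustrative).
# Builds the notification that tells the administrator the resource is up.

build_body() {
    printf 'myexapp resource ONLINE on node %s at %s\n' "$(hostname)" "$(date)"
}

# A real module would deliver the message to the administrator, for example:
#   build_body | mailx -s "myexapp resource ONLINE" admin@example.com
# Here the message goes to stdout so the sketch stays self-contained.
build_body
```

The offline module would be the mirror image, reporting that the resource has gone offline.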

Agent development for My Example App


To create a minimally functional agent for My Example App, we will at least require the Resource Type Registration file, start script and stop script.

 - myexapp RTR file

/opt/myexapp/myexapp.rtr is a Resource Type Registration (RTR) file for this agent. All resources of type myexapp directly inherit the properties declared in this file.
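A minimal RTR file for this resource type might look like the following sketch. The property values here are assumptions for illustration, not the actual agent's file; see the rt_reg(4) man page for the full declaration grammar:

```
# /opt/myexapp/myexapp.rtr -- hypothetical minimal RTR file
Resource_type = "myexapp";
RT_description = "Mail notification agent for My Example App";
RT_version = "1.0";
API_version = 2;
RT_basedir = /opt/myexapp/bin;

Start = myexapp_start;
Stop  = myexapp_stop;
```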

 - Start script

myexapp_start will serve as the start script and will be executed every time the resource goes up.

/opt/myexapp/bin/myexapp_start

- Stop script

myexapp_stop will serve as the stop script and will be executed every time the configured resource goes down.

/opt/myexapp/bin/myexapp_stop
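Pulling the pieces together, the start method can call scha_check_app_user on the application executable before dropping privilege. The following is a sketch under stated assumptions, not the actual agent code: the argument parsing assumes the standard -R/-G/-T method arguments that RGM passes to callback methods, and the exact exit behaviour of scha_check_app_user also depends on the resource_security setting (see scha_check_app_user(1HA)):

```shell
#!/bin/sh
# Hypothetical sketch of /opt/myexapp/bin/myexapp_start.
# RGM invokes callback methods with -R <resource> -G <group> -T <type>.
while getopts 'R:G:T:' opt; do
    case "$opt" in
        R) RESOURCE=$OPTARG ;;
        G) RG=$OPTARG ;;
        T) RT=$OPTARG ;;
    esac
done

APP=/opt/myexapp/online.ksh

# Check ownership and permissions of the application executable against
# the configured Application_user and the cluster's resource_security value.
if ! /usr/cluster/bin/scha_check_app_user -R "$RESOURCE" "$APP"; then
    echo "${RESOURCE}: security checks failed for ${APP}" >&2
    exit 1
fi

# Drop privilege: run the application as app_user rather than as root.
su app_user -c "$APP"
exit $?
```

The stop script would follow the same pattern with the offline module.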

Resource configuration for My Example App

- Register the RTR file:

# /usr/cluster/bin/clresourcetype register -f /opt/myexapp/myexapp.rtr myexapp

- Create the resource group and the resource:
# /usr/cluster/bin/clresourcegroup create myexappRG

# /usr/cluster/bin/clresource create -g myexappRG -t myexapp myexappRS

This completes the configuration phase of the application to be run on Oracle Solaris Cluster software. Now, let us dive deeper into the behaviour of the scha_check_app_user command when it is invoked in different scenarios. Refer to the scha_check_app_user(1HA) man page for more details.

Behaviour of scha_check_app_user in various scenarios

The most prominent scenarios are described below.

Note: The following guidelines apply to all the example scenarios:

- The user specified as Application_user should be present on all the participating nodes of the cluster.

- The user specified with the -U option is taken as the application user, ignoring the configured Application_user, the file owner, and the resource_security value.

This completes a comprehensive case study of the use of the scha_check_app_user command in a sample application running on Oracle Solaris Cluster software. Now, let us explore one more important feature, which helps in storing sensitive information across an Oracle Solaris Cluster configuration.

Handling passwords and sensitive information across Oracle Solaris Cluster

The Oracle Solaris Cluster 4.1 software provides a secure and easy to use mechanism for storing passwords and other sensitive information through private strings. A private string is identified with a unique name, and has an encoded value that can only be obtained by using the scha_cluster_get command.

The clpstring command manages Oracle Solaris Cluster private strings. Private strings are used by resources to store and retrieve private values securely. A typical use might be for an internal password used by an agent. The clps command is the short form of the clpstring command.

Let’s say our agent uses a password string. In order to harness the private strings feature of the Oracle Solaris Cluster security framework for this agent, we are required to bind the private strings with the data service resource.

# /usr/cluster/bin/clpstring create -b RS1 -v pw_str

Enter string value:

Enter string value again:

Private string pw_str is created for the global cluster and is bound to the resource RS1.

Private strings are never exposed to non-privileged users, in either obfuscated or clear-text form. Privileged users, however, can retrieve a private string by using the scha_cluster_get query as follows:

# /usr/cluster/bin/scha_cluster_get -O PSTRING pw_str
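Within an agent method, a privileged script can retrieve and use the value without the clear-text password ever appearing in a resource property or a log file. A sketch of such a fragment, using the pw_str name created above:

```shell
# Hypothetical fragment of a privileged agent method using the private string.
# Retrieve the decoded value; fail the method if the query fails.
PASSWD=$(/usr/cluster/bin/scha_cluster_get -O PSTRING pw_str) || exit 1
# Use $PASSWD to authenticate to the application; never echo or log it.
```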

For more detailed information about the clpstring command, refer to the clpstring(1CL) man page.

Although the Oracle Solaris Cluster 4.1 software supports these security enhancements, it is important to note that not all agents currently use these new features. Some existing agents might execute application programs with elevated privileges; for example, they might be executed as root. So it is judicious to validate the contents of such programs installed on a cluster, and to make sure that they are installed with ownership and permissions that prevent a non-privileged user from modifying them.

Posted By:
Tapan Avasthi
Data Services, Availability Engineering, Oracle Solaris Cluster

Tuesday Oct 08, 2013

solaris10 Brand Zone Clusters


Introduction

The solaris10 brand zone cluster, released in Oracle Solaris Cluster version 4.1 software, provides a virtualized Oracle Solaris 10 cluster environment in an Oracle Solaris 11 configuration. Using this feature enables customers to run or migrate cluster applications that are deployed on the Oracle Solaris 10 operating system, without any modification to the application.

The following diagram depicts the coexistence of Oracle Solaris 11 and Oracle Solaris 10 cluster applications, which are isolated by using the zone cluster feature.

Overview of Creating a solaris10 Brand Zone Cluster

You perform the following tasks to create a solaris10 brand zone cluster:

    1. Pick a zone image to migrate or install

    2. Create an archive

    3. Configure the zone cluster

    4. Install the zone image for the zone cluster

    5. Install the cluster software

    6. Boot the zone cluster

    7. Log in to the zone and verify the status

The following procedure describes these tasks in detail.

How to Create a solaris10 Brand Zone Cluster

Before You Begin - Ensure that all requirements in Planning the Oracle Solaris Cluster Environment in Oracle Solaris Cluster Software Installation Guide are met.

1. Pick a zone image to migrate or install.

The archive types that are supported for installing a zone cluster are the following:

  • A native brand zone on an Oracle Solaris 10 system

  • A cluster brand zone on an Oracle Solaris Cluster node at the proper patch level (an archive derived from a physical system installed with Oracle Solaris 10 software)

  • A solaris10 brand zone (an archive derived from an installed solaris10 brand zone)

  • An Oracle Solaris 10 physical system

  • An Oracle Solaris 10 physical cluster node

For more details about support for configuring and installing a solaris10 brand zone cluster, see the Oracle Solaris Cluster 4.1 Release Notes and How to Create a Zone Cluster in Oracle Solaris Cluster Software Installation Guide.

In this example, an archive derived from a physical node that is installed with Oracle Solaris 10 software, and that has no Oracle Solaris Cluster software installed, is used.

2. Create an archive and store it in a shared location.

For more details about creating archives, see Assessing an Oracle Solaris 10 System and Creating an Archive in System Administration Guide: Oracle Solaris Zones, Oracle Solaris 10 Zones, and Resource Management.

# flarcreate -S -n s10-system -L cpio /net/mysharehost/share/s10-system.flar

This archiver format is NOT VALID for flash installation of ZFS root pool.
This format is useful for installing the system image into a zone.
Reissue command without -L option to produce an archive for root pool install.
Full Flash
Checking integrity...
Integrity OK.
Running precreation scripts...
Precreation scripts done.
Creating the archive...
6917057 blocks
Archive creation complete.
Running postcreation scripts...
Postcreation scripts done.

Running pre-exit scripts...
Pre-exit scripts done.

3. Configure the zone cluster.

Create and configure the zone cluster, named s10-zc in this example, on the global cluster.

The main differences between the solaris and solaris10 brand zone clusters are setting the brand to solaris10 and adding the sysid configuration.
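For example, a configuration session might look like the following sketch. The zonepath, network address, interface name, and hostnames are illustrative assumptions for this example; see the clzonecluster(1CL) man page for the full syntax:

```
# clzonecluster configure s10-zc
clzc:s10-zc> create -b
clzc:s10-zc> set brand=solaris10
clzc:s10-zc> set zonepath=/zones/s10-zc
clzc:s10-zc> add sysid
clzc:s10-zc:sysid> set root_password=<encrypted-password>
clzc:s10-zc:sysid> end
clzc:s10-zc> add node
clzc:s10-zc:node> set physical-host=phys-host-1
clzc:s10-zc:node> set hostname=zc-host-1
clzc:s10-zc:node> add net
clzc:s10-zc:node:net> set address=192.168.1.11
clzc:s10-zc:node:net> set physical=net0
clzc:s10-zc:node:net> end
clzc:s10-zc:node> end
clzc:s10-zc> commit
clzc:s10-zc> exit
```

A second add node block would be repeated for phys-host-2/zc-host-2.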


4. Install the zone image for the zone cluster.

The zone image is the archive created in Step 2.

# clzonecluster install -a /net/mysharehost/share/s10-system.flar s10-zc

5. Install the cluster software.

Perform this step only if the archive does not contain cluster software in the image.

a. Boot the zone cluster into Offline/Running mode.

# clzonecluster boot -o s10-zc

b. Access the zone on all nodes of zone-cluster and make sure that system configuration is complete.

# zlogin -C s10-zc

If configuration is not complete, finish any pending system configuration.

c. From the global zone, check the zone cluster status.

# clzonecluster status s10-zc

=== Zone Clusters ===

--- Zone Cluster Status ---

Name     Brand       Node Name     Zone Host Name   Status    Zone Status
----     -----       ---------     --------------   ------    -----------
s10-zc   solaris10   phys-host-1   zc-host-1        Offline   Running
                     phys-host-2   zc-host-2        Offline   Running

d. Install the zone cluster software.

# clzonecluster install-cluster -d /net/mysharehost/osc-dir/ \
-p patchdir=/net/mysharehost/osc-dir,patchlistfile=plist-sparc \
-s all s10-zc

-p patchdir

Specifies the location of the patches to be installed along with the cluster software.

patchlistfile

Specifies the file that contains the list of patches to be installed inside the zone cluster along with the cluster software. In this example, the contents of the file plist-sparc are as follows:

# cat /net/mysharehost/osc-dir/plist-sparc

145333-15

Note - Both the patchdir and patchlistfile locations must be accessible to all nodes of the cluster.

-s

Specifies the agent packages that should be installed along with core cluster software. In this example, all is specified to install all the agent packages.

6. Boot the zone cluster.

a. Reboot the zone cluster to boot the zone into Online/Running mode.

You might have to wait some time for the status to reach Online/Running.

# clzonecluster reboot s10-zc

b. From the global zone, check the zone cluster status.

The zone cluster will now be in Online/Running mode.

# clzonecluster status s10-zc

=== Zone Clusters ===

--- Zone Cluster Status ---

Name     Brand       Node Name     Zone Host Name   Status    Zone Status
----     -----       ---------     --------------   ------    -----------
s10-zc   solaris10   phys-host-1   zc-host-1        Online    Running
                     phys-host-2   zc-host-2        Online    Running

7. Log in to the zone and verify the status.

# zlogin s10-zc

[Connected to zone 's10-zc' pts/2]

Last login: Mon Nov 5 21:20:31 on pts/2

# /usr/cluster/bin/clnode status

=== Cluster Nodes ===

--- Node Status ---

Node Name    Status
---------    ------
zc-host-1    Online
zc-host-2    Online

Next Steps

The zone cluster configuration is now complete. You can now install and bring up any Oracle Solaris 10 applications and make them highly available by creating the necessary resources and resource groups.

-- Mahesh Subramanya

Wednesday Mar 13, 2013

Announcing Oracle Solaris Cluster 3.3 3/13!

We are pleased to announce Oracle Solaris Cluster 3.3 3/13 today! This release is an important milestone for our Oracle Solaris 10 customers, as it delivers to them the same advanced Oracle Solaris Cluster 4.1 features.



As more and more customers deploy Oracle Solaris Cluster on Oracle Solaris 11, some enterprises still have various factors to consider before moving to a different OS version. However, their need for the latest and greatest HA technologies should not be compromised. Oracle Solaris Cluster 3.3 3/13 bridges that gap. Customers will also become familiar with Oracle Solaris Cluster 4.x features for when they are ready to move on to Oracle Solaris 11.


Let's look at some of the highlights in this release:


Best for Oracle Environments


·       Disaster recovery with ZFS Storage Appliance replication 


·      Increased availability for Oracle applications such as Oracle Web-Tier for Fusion Middleware and PeopleSoft Job Scheduler with new agents


·       Multi-cluster dependencies management for Oracle database 11g through Oracle External Proxy


·       Automated setup and configuration for PeopleSoft and WebLogic Server through configuration wizards 


Built for Cloud Infrastructure


·       Faster deployment of virtualized HA configurations through Zone cluster configuration wizard


·       Multi-level, labeled security in Zone clusters using Oracle Solaris Trusted Extensions


Best Availability for Enterprise Solutions

·       Faster failure detection and failfast for storage devices, reducing storage failure detection time and client timeout from minutes to seconds


·       Improved per-node dependencies management, decreasing client timeout to seconds


·       Increased availability for enterprise applications such as new application versions for SAP NetWeaver


The Oracle Solaris Cluster team tested all supported applications together with Oracle Sun systems and with Oracle and third-party storage and networking technologies. Out of the box and in a few steps, your application is ready to take on your business-critical missions and run in local, campus, metropolitan, and worldwide clusters, in physical and virtual environments.








 -Amour



Friday Oct 26, 2012

Announcing Release of Oracle Solaris Cluster 4.1!


Tuesday Sep 25, 2012

Oracle Solaris Cluster at Oracle OpenWorld 2012

Once again Oracle OpenWorld is taking over San Francisco's Moscone Center, and once again Oracle Solaris Cluster will be present at the event. Please come and visit us at the Oracle DEMOgrounds in Moscone South. Take the time to stop by the Oracle Solaris Cluster demo pod (S-116): you will meet some of our architects, tech leads, and product managers. And if you are interested in sessions showing the use of Oracle Solaris Cluster, check our Focus On document.


Have a great show and hope to meet you there.



