Thursday Aug 22, 2013

Hadoop Cluster with Oracle Solaris Hands-on Lab at Oracle OpenWorld 2013

If you want to learn how to build a Hadoop cluster using Oracle Solaris 11 technologies, please join me at the following Oracle OpenWorld 2013 lab.
How to Set Up a Hadoop Cluster with Oracle Solaris [HOL10182]

In this hands-on lab, we will present and demonstrate, using exercises, how to set up a Hadoop cluster using Oracle Solaris 11 technologies like Zones, ZFS, DTrace, and network virtualization.
Key topics include the Hadoop Distributed File System (HDFS) and MapReduce.
We will also cover the Hadoop installation process and the cluster building blocks: the NameNode, a secondary NameNode, and DataNodes.
In addition, we will show how to combine the Oracle Solaris 11 technologies for better scalability and data security.
During the lab, users will learn how to load data into the Hadoop cluster and run a MapReduce job.
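To give a flavor of these last two steps, loading data into HDFS and running a MapReduce job typically looks like the following sketch (the file names and the examples JAR path are placeholders and may differ in the lab environment):

```shell
# Create an input directory in HDFS and load a local text file into it
hadoop fs -mkdir /user/hadoop/input
hadoop fs -put /tmp/words.txt /user/hadoop/input

# Run the classic WordCount example shipped with the Hadoop distribution
hadoop jar hadoop-examples.jar wordcount /user/hadoop/input /user/hadoop/output

# Inspect the job output
hadoop fs -cat /user/hadoop/output/part-r-00000
```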
This hands-on training lab is for system administrators and others responsible for managing Apache Hadoop clusters in production or development environments.

This Lab will cover the following topics:

    1. How to install Hadoop

    2. Edit the Hadoop configuration files

    3. Configure the Network Time Protocol

    4. Create the Virtual Network Interfaces

    5. Create the NameNode and the Secondary NameNode Zones

    6. Configure the NameNode

    7. Set Up SSH between the Hadoop cluster members

    8. Format the HDFS File System

    9. Start the Hadoop Cluster

   10. Run a MapReduce Job

   11. How to secure data at rest using ZFS encryption

   12. Performance monitoring using Solaris DTrace

Register Now

Monday Jul 08, 2013

Oracle SPARC Software on Silicon

This past week (1-July-2013), Oracle ISV Engineering participated in Oracle Technology Day, one of the largest IT events in Israel with over 1,000 participants.
During the event, Oracle showed its latest technology, including Oracle Database 12c and the new SPARC T5 CPU.

Angelo Rajadurai presented the new SPARC T5 CPU and covered the latest features of this technology.

The topics that Angelo presented were:

The SPARC T5 CPU architecture is unique in how it handles multithreaded workloads while also delivering very good single-thread performance.
When Oracle moved from 40-nanometer to 28-nanometer process technology, T5 performance doubled. For example:
Sixteen cores versus eight cores on the T4 doubled the throughput of this CPU.

The number of memory links doubled, from two on the T4 to four on the T5. Each memory link is 12 lanes southbound and 12 lanes northbound and operates at 12.8 Gb/sec.

The I/O subsystem uses PCI Express Rev 3 versus Rev 2 on the previous model, which means the I/O bandwidth doubled as well.

A new directory-based coherency protocol allows near-linear scaling from one to eight sockets. In addition, there are seven coherence links, each with 12 lanes in each direction, running at 153.6 Gb/sec.

Sixteen cryptography units per SPARC T5 processor.

In addition, the CPU clock increased from 3 GHz to 3.6 GHz, which improves single-thread performance by ~20% without any code modification.

Another capability of the SPARC T5 CPU is the ability to change core behavior based on the workload.
For example, if the Solaris operating system recognizes that a workload is single-threaded, it can automatically change the core characteristics for single-thread performance.
If needed, the user can change the core behavior manually.

SPARC T5 brings all of these features at the same pricing model, so the new SPARC T5 servers double your price/performance.
No wonder this server won so many world records!

Oracle is the only vendor that has published a public roadmap for its future CPUs, and it delivered ahead of schedule!

Oracle used SPARC T5 servers as the building block for the Oracle SuperCluster T5-8, Oracle's fastest engineered system, combining powerful virtualization with unique Exadata and Exalogic optimizations.

You can use the Oracle SuperCluster T5-8 to run the most demanding enterprise applications.

Although the SPARC T5 doubled the performance, we are approaching the limits of physics, and we need to think about new approaches to CPU performance acceleration.

The technology that will allow us to keep doubling performance every two years is the “Software on Silicon” CPU technology.
It runs CPU-intensive instructions in silicon rather than in software, so it can accelerate workload performance by an order of magnitude.

The first implementation of “Software on Silicon” is the Encryption Accelerator.
This intrinsic CPU capability allows us to accelerate the most common bulk encryption ciphers, like AES and DES.
SPARC T5 also supports asymmetric key exchange with RSA and ECC, and authentication/hash functions like SHA and MD5.

This built-in encryption capability provides end-to-end data center encryption without the performance penalty usually associated with multilayer data protection.

During our encryption performance benchmarks, we saw negligible performance overhead (<5%) when running the same workload using the CPU encryption accelerator.
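One quick way to confirm that the processor exposes these cryptographic instructions is to ask the operating system. On SPARC T4/T5 systems running Solaris, isainfo lists the instruction-set extensions (the output below is abbreviated and illustrative):

```shell
# List the instruction-set extensions visible to applications
isainfo -v
# On a SPARC T4/T5 the 64-bit sparcv9 section includes entries such as:
#   sha512 sha256 sha1 md5 des aes vis3 vis2 vis ...
```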

Potentially, we can take the “Software on Silicon” concept and implement it for other CPU-intensive tasks such as:

Java acceleration
Database Query
Cluster Interconnect
Application Data Protection

See Angelo's presentation here

Conclusion: In this post, we described how Oracle improved SPARC T5 performance in the CPU subsystem, I/O, and coherency capabilities.
In addition, we took a look at possible future plans for the “Software on Silicon” CPU technology.

Monday Jun 24, 2013

How to Set Up a MongoDB NoSQL Cluster Using Oracle Solaris Zones

This article starts with a brief overview of MongoDB and follows with an example of setting up a three-node MongoDB cluster using Oracle Solaris Zones.
The following are benefits of using Oracle Solaris for a MongoDB cluster:

• You can add new MongoDB hosts to the cluster in minutes instead of hours using the zone cloning feature. Using Oracle Solaris Zones, you can easily scale out your MongoDB cluster.
• In case there is a user error or software error, the Service Management Facility ensures the high availability of each cluster member and ensures that MongoDB replication failover will occur only as a last resort.
• You can discover performance issues in minutes versus days by using DTrace, which provides increased operating system observability. DTrace provides a holistic performance overview of the operating system and allows deep performance analysis through cooperation with the built-in MongoDB tools.
• ZFS built-in compression provides optimized disk I/O utilization for better I/O performance.
In the example presented in this article, all the MongoDB cluster building blocks will be installed using the Oracle Solaris Zones, Service Management Facility, ZFS, and network virtualization technologies.
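As a sketch of the compression benefit mentioned above, ZFS compression can be enabled per file system when creating the dataset that will hold the MongoDB data files (the pool and dataset names here are illustrative):

```shell
# Create a compressed ZFS file system for the MongoDB data files
zfs create -o compression=on rpool/mongodb-data

# After loading data, check how well it compresses
zfs get compression,compressratio rpool/mongodb-data
```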

Figure 1 shows the architecture:

Friday Jun 14, 2013

Public Cloud Security Anti Spoofing Protection

This past week (9-Jun-2013) Oracle ISV Engineering participated in the IGT cloud meetup, the largest cloud community in Israel with 4,000 registered members.
During the meetup, ISV Engineering presented two presentations: 

Introduction to Oracle Cloud Infrastructure presented by Frederic Pariente
Use case : Cloud Security Design and Implementation presented by me 
In addition, there was a partner presentation from ECI Telecom
Using Oracle Solaris11 Technologies for Building ECI R&D and Product Private Clouds presented by Mark Markman from ECI Telecom 
The Solaris 11 feature that received the most attention from the audience was the new Solaris 11 network virtualization technology.
The Solaris 11 network virtualization allows us to build any physical network topology inside the Solaris operating system, including virtual network cards (VNICs), virtual switches (vSwitches), and more sophisticated network components (e.g., load balancers, routers, and firewalls).
The benefit of using this technology is reduced infrastructure cost, since there is no need to invest in superfluous network equipment. In addition, infrastructure deployment is much faster, since all the network building blocks are implemented in software rather than hardware.
One of the key features of this network virtualization technology is data link protection. With this capability, we can provide the flexibility our partners need in a cloud environment and allow them root access from inside the Solaris zone, while disabling their ability to mount spoofing attacks by sending outgoing packets with a different source IP or MAC address, or packets that are not IPv4, IPv6, or ARP.

The following example demonstrates how to enable this feature:
Create the VNIC (in a later step, we will associate this VNIC with the Solaris zone):

# dladm create-vnic -l net0 vnic0

Setup the Solaris zone:
# zonecfg -z secure-zone
Use 'create' to begin configuring a new zone:
zonecfg:secure-zone> create
create: Using system default template 'SYSdefault'
zonecfg:secure-zone> set zonepath=/zones/secure-zone
zonecfg:secure-zone> add net
zonecfg:secure-zone:net> set physical=vnic0
zonecfg:secure-zone:net> end
zonecfg:secure-zone> verify
zonecfg:secure-zone> commit
zonecfg:secure-zone> exit

Install the zone:
# zoneadm -z secure-zone install
Boot the zone:
# zoneadm -z secure-zone boot
Log In to the zone:
# zlogin -C secure-zone

NOTE - During the zone setup select the vnic0 network interface and assign the IP address.

From the global zone, enable link protection on vnic0.

We can set four different protection modes: ip-nospoof, dhcp-nospoof, mac-nospoof, and restricted.
ip-nospoof: any outgoing IP, ARP, or NDP packet must have an address field that matches either a DHCP-configured IP address or one of the addresses listed in the allowed-ips link property.
mac-nospoof: prevents the root user from changing the zone's MAC address. An outbound packet's source MAC address must match the datalink's configured MAC address.
dhcp-nospoof: prevents Client ID/DUID spoofing for DHCP.
restricted: only allows IPv4, IPv6, and ARP protocols. Using this protection type prevents the link from generating potentially harmful L2 control frames.

# dladm set-linkprop -p protection=mac-nospoof,restricted,ip-nospoof vnic0

Specify the IP address as the value of the allowed-ips property for the vnic0 link:

# dladm set-linkprop -p allowed-ips= vnic0

Verify the link protection property values:
# dladm show-linkprop -p protection,allowed-ips vnic0
LINK   PROPERTY     PERM VALUE          DEFAULT  POSSIBLE
vnic0  protection   rw   mac-nospoof,   --       mac-nospoof,
                         restricted,             restricted,
                         ip-nospoof              ip-nospoof,
                                                 dhcp-nospoof
vnic0  allowed-ips  rw   --             --       --

We can see that the IP address is set as the allowed IP address.

Log in to the zone:

# zlogin secure-zone

After we log in to the zone, let's try to change the zone's IP address:

root@secure-zone:~# ifconfig vnic0
ifconfig:could not create address: Permission denied

As we can see, we can't change the zone's IP address!

Optional - disable the link protection from the global zone:

# dladm reset-linkprop -p protection,allowed-ips vnic0

NOTE - we don't need to reboot the zone in order to disable this property.

Verify the change:

# dladm show-linkprop -p protection,allowed-ips vnic0
LINK PROPERTY PERM VALUE DEFAULT POSSIBLE
vnic0 protection rw -- -- mac-nospoof,
vnic0 allowed-ips rw -- -- --

As we can see, we no longer have restrictions on the allowed-ips property.

Conclusion: In this blog entry, I demonstrated how we can leverage Solaris 11 data link protection to prevent spoofing attacks.

Monday May 20, 2013

How To Protect Public Cloud Using Solaris 11 Technologies

When we meet with our partners, we often ask them: “What are your main security challenges for public cloud infrastructure? What worries you in this regard?”
This is what we've gathered from our partners regarding the security challenges:

1.    Protect data at rest, in transit, and in use using encryption
2.    Prevent denial of service attacks against their infrastructure
3.    Segregate network traffic between different cloud users
4.    Disable hostile code (e.g., ‘rootkit’ attacks)
5.    Minimize operating system attack surface
6.    Secure data deletion once we are done with our project
7.    Enable strong authorization and authentication for non-secure protocols

Based on these guidelines, we began to design our Oracle Developer Cloud. Our vision was to leverage Solaris 11 technologies in order to meet those security requirements.

First: Our partners would like to encrypt everything from the disk up through the layers to the application, without the performance overhead usually associated with this type of technology.
The SPARC T4 (and lately the SPARC T5) integrated cryptographic accelerator allows us to encrypt data using the ZFS encryption capability.
We can encrypt all the network traffic using SSL, from the client connection to the cloud main portal using the Secure Global Desktop (SGD) technology, and also encrypt the network traffic between the application tier and the database tier. In addition, we can protect our database tables using Oracle Transparent Data Encryption (TDE).
During our performance tests, we saw that the performance impact was very low (less than 5%) when we enabled those encryption technologies.
The following example shows how we created an encrypted file system:

# zfs create -o encryption=on rpool/zfs_file_system

Enter passphrase for 'rpool/zfs_file_system':
Enter again:

NOTE - In the above example, we used a passphrase that is requested interactively, but we can use SSL or a key repository instead.
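For example, instead of an interactive passphrase, the wrapping key can be read from a raw key file; a minimal sketch (the key file location is illustrative, and should live outside the encrypted dataset):

```shell
# Generate a 256-bit raw AES key in a file
pktool genkey keystore=file outkey=/media/stick/mykey keytype=aes keylen=256

# Create the encrypted file system using the raw key file as the keysource
zfs create -o encryption=aes-256-ccm \
    -o keysource=raw,file:///media/stick/mykey rpool/secret_fs
```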
Second: How can we mitigate denial-of-service attacks?
The new Solaris 11 network virtualization technology allows us to apply virtualization to our network by splitting the physical network card into multiple virtual network ‘cards’. In addition, it provides the capability to set up flows, a sophisticated quality-of-service mechanism.
Flows allow us to limit the network bandwidth for a specific network port on a specific network interface.

In the following example, we limit SSL traffic to 100 Mb/sec on the vnic0 network interface:

# dladm create-vnic vnic0 –l net0
# flowadm add-flow -l vnic0 -a transport=TCP,local_port=443 https-flow
# flowadm set-flowprop -p maxbw=100M https-flow

During a denial-of-service (DoS) attack against this web server, we can minimize its impact on the rest of the infrastructure.
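To confirm the cap is in place, we can inspect the flow from the global zone; a sketch based on the https-flow created above:

```shell
# List the flows configured on the vnic0 link
flowadm show-flow -l vnic0

# Show the bandwidth limit set on the https-flow
flowadm show-flowprop -p maxbw https-flow

# Watch per-flow traffic statistics at 5-second intervals
flowstat -i 5
```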
Third: How can we isolate network traffic between different tenants of the public cloud?
The new Solaris 11 network technology allows us to segregate network traffic at multiple layers.

For example, we can limit the network traffic at layer 2 using VLANs:

# dladm create-vnic -l net0  -v 2 vnic1

We can also implement firewall rules for layer 3 separation using the Solaris 11 built-in firewall software.
For an example of the Solaris 11 firewall, see the Solaris 11 Firewall entry in this blog.
In addition to the firewall software, Solaris 11 has built-in load balancer and routing software. In a cloud-based environment, this means that new functionality can be added promptly, since we don't need extra hardware in order to implement those functions.
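A minimal /etc/ipf/ipf.conf illustrating layer 3 separation between tenants might look like this (a sketch; the addresses and interface names are illustrative):

```shell
# /etc/ipf/ipf.conf -- example IP Filter rules (illustrative)
# Block all inbound traffic by default
block in all

# Allow inbound HTTPS to this tenant's web server, keeping state
pass in quick on vnic0 proto tcp from any to any port = 443 keep state

# Block traffic arriving from another tenant's subnet
block in quick on vnic0 from 192.168.2.0/24 to any
```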

Fourth: Rootkits have become a serious threat, allowing the insertion of hostile code using custom kernel modules.
The Solaris Zones technology prevents loading or unloading kernel modules (since local zones lack the sys_config privilege).
This way we can limit the attack surface and prevent this type of attack.

In the following example, we can see that even the root user is unable to load a custom kernel module inside a Solaris zone:

# ppriv -De modload -p /tmp/systrace

modload[21174]: missing privilege "ALL" (euid = 0, syscall = 152) needed at modctl+0x52
Insufficient privileges to load a module

Fifth: The Solaris immutable zones technology allows us to minimize the operating system attack surface.
For example, it disables the ability to install new IPS packages and to modify file systems like /etc.
We can set up Solaris immutable zones using the zonecfg command:

# zonecfg -z secure-zone
Use 'create' to begin configuring a new zone.
zonecfg:secure-zone> create
create: Using system default template 'SYSdefault'
zonecfg:secure-zone> set zonepath=/zones/secure-zone
zonecfg:secure-zone> set file-mac-profile=fixed-configuration
zonecfg:secure-zone> commit
zonecfg:secure-zone> exit

# zoneadm -z secure-zone install

We can combine ZFS encryption and immutable zones; for more examples, see:

Sixth: The main challenge of building a secure Big Data solution is the lack of built-in security mechanisms for authorization and authentication.
The integrated Solaris Kerberos allows us to enable strong authorization and authentication for distributed systems that are not secure by default, like Apache Hadoop.

The following example demonstrates how easy it is to install and set up a Kerberos infrastructure on Solaris 11:

# pkg install pkg://solaris/system/security/kerberos-5
# kdcmgr -a kws/admin -r EXAMPLE.COM create master

Finally: Our partners want assurance that when a project is finished, all of its data is erased, with no ability to recover the data by reading the disk blocks directly, bypassing the file system layer.
The ZFS assured delete feature allows us to implement this kind of secure deletion.
The following example shows how we change the ZFS wrapping key to random data (the output of /dev/random); we then unmount the file system and finally destroy it.

# zfs key -c -o  keysource=raw,file:///dev/random rpool/zfs_file_system
# zfs key -u rpool/zfs_file_system
# zfs destroy rpool/zfs_file_system

In this blog entry, I covered how we can leverage the SPARC T4/T5 and Solaris 11 features to build a secure cloud infrastructure. Those technologies allow us to build highly protected environments without the need to invest extra budget in special hardware. They also allow us to protect our data and network traffic from various threats.
If you would like to hear more about those technologies, please join us at the next IGT cloud meetup.

Wednesday Feb 06, 2013

Building your developer cloud using Solaris 11

Solaris 11 combines all the good stuff that Solaris 10 had in terms
of enterprise scalability, performance, and security, and adds new cloud technologies.

During the last year, I had the opportunity to work on one of the
most challenging engineering projects at Oracle:
building a developer cloud for our OPN Gold members in order to qualify their applications on Solaris 11.

This cloud platform provides an intuitive user interface for VM provisioning, by selecting VM images pre-installed with Oracle Database 11g Release 2 or Oracle WebLogic Server 12c, in addition to simple file upload and download mechanisms.

We used the following Solaris 11 cloud technologies to build this platform:

Oracle Solaris 11 Network Virtualization
Oracle Solaris Zones
ZFS encryption and cloning capability
NFS server inside a Solaris 11 zone
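Of the technologies above, the ZFS cloning capability is what makes fast VM provisioning possible: each new VM is a writable clone of a golden image snapshot. A minimal sketch (the dataset names are illustrative):

```shell
# Snapshot the golden image once
zfs snapshot rpool/zones/golden@initial

# Each new VM gets a near-instant, space-efficient writable clone
zfs clone rpool/zones/golden@initial rpool/zones/vm-101
zfs list -r rpool/zones
```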

You can find the technology building blocks slide deck here.

For more information about this unique cloud offering see .

Thursday Jan 24, 2013

How to Set Up a Hadoop Cluster Using Oracle Solaris Zones

This article starts with a brief overview of Hadoop and follows with an
example of setting up a Hadoop cluster with a NameNode, a secondary
NameNode, and three DataNodes using Oracle Solaris Zones.

The following are benefits of using Oracle Solaris Zones for a Hadoop cluster:

· Fast provisioning of new cluster members using the zone cloning feature

· Very high network throughput between the zones for data node replication

· Optimized disk I/O utilization for better I/O performance with ZFS built-in compression

· Secure data at rest using ZFS encryption
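The zone cloning mentioned above is a short procedure: export the source zone's configuration, reuse it for the new zone, and clone the zonepath (which on ZFS becomes a near-instant ZFS clone). A sketch, assuming a configured data-node1 zone (names and paths are illustrative):

```shell
# Halt the source zone before cloning
zoneadm -z data-node1 halt

# Export its configuration and reuse it for the new zone
# (edit the zonepath, VNIC, and IP settings in the file first)
zonecfg -z data-node1 export -f /tmp/data-node1.cfg
zonecfg -z data-node2 -f /tmp/data-node1.cfg

# Clone the zone; on ZFS this takes seconds rather than a full install
zoneadm -z data-node2 clone data-node1
```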

Hadoop uses the Hadoop Distributed File System (HDFS) to store data.
HDFS provides high-throughput access to application data and is suitable
for applications that have large data sets.

The Hadoop cluster building blocks are as follows:

· NameNode: The centerpiece of HDFS, which stores file system metadata,
directs the slave DataNode daemons to perform the low-level I/O tasks,
and also runs the JobTracker process.

· Secondary NameNode: Performs internal checks of the NameNode transaction log.

· DataNodes: Nodes that store the data in the HDFS file system, which are also known as slaves and run the TaskTracker process.

In the example presented in this article, all the Hadoop cluster
building blocks will be installed using the Oracle Solaris Zones, ZFS,
and network virtualization technologies.

Monday Dec 24, 2012

Accelerate your Oracle DB startup and shutdown using Solaris 11 vmtasks

Last week, I co-presented the Solaris 11.1 new features at the Solaris user group.
I would like to thank the organizers and the audience.
You can find the slide deck here.
I presented the following Solaris 11.1 new features:

1. Installation enhancements: adding iSCSI disks as a boot device, and Oracle Configuration Manager (OCM) and Auto Service Request (ASR) registration during operating system installation

2. New SMF tool svcbundle for creation of manifests and the brand new svccfg sub-command delcust

3. The pfedit command for secure system configuration editing

4. New logging daemon rsyslog for better scalability and reliable system logs transport

5. New aggregation option for the mpstat, cpustat, and trapstat commands in order to display the data in a more condensed format

6. Edge Virtual Bridging (EVB) which extends network virtualization features into the physical network infrastructure

7. Data Center Bridging (DCB) which provides guaranteed bandwidth and lossless Ethernet transport
for converged network environments where storage protocols share the same fabric as regular network traffic

8. VNIC Migration for better flexibility in network resource allocation

9.  Fast Zone updates for improved system up-time and short system upgrades

10. Zones on shared storage for a faster Solaris Zones mobility

11. File system statistics for Oracle Solaris Zones for better I/O performance observability

12. Oracle Optimized Shared Memory
The Solaris 11.1 feature that got the most attention was the new memory process called vmtasks.
This process accelerates shared memory operations like creation, locking, and destruction.
Now you can improve your system up-time, because your Oracle DB startup and shutdown are much faster.
Any application that needs fast access to shared memory can benefit from this process.
For more information about vmtasks and the Solaris 11.1 new features, see:

Tuesday Sep 11, 2012

Oracle Solaris 8 P2V with Oracle database 10.2 and ASM

Background information

In this document I will demonstrate the following scenario:

Migration of a physical Solaris 8 system, with an Oracle database using an ASM file system located on SAN storage, into a Solaris 8 branded zone inside a Solaris 10 guest domain on top of a Solaris 11 control domain.

In the first example, we will preserve the host information.

In the second example, we will modify the host name.

Executive summary

In this document, I will demonstrate how we managed to leverage the Solaris 8 P2V tool to migrate a physical Solaris 8 system, with an Oracle database and ASM file system, into a Solaris 8 branded zone.

The ASM file system is located on a LUN in SAN storage connected via an FC HBA.

During the migration we used the same LUN on the source and target servers in order to avoid data migration.

The P2V tool successfully migrated the Solaris 8 physical system into the Solaris 8 branded zone, and the zone was able to access the ASM file system.

Architecture layout

Source system

Hardware details:
Sun Fire V440 server with four 1593 MHz UltraSPARC IIIi CPUs and 8 GB of RAM

Operating system: Solaris 8 2/04 + latest recommended patch set

Target system

Oracle's SPARC T4-1 server with a single 8-core, 2.85 GHz SPARC T4 processor and 32 GB of RAM

Install Solaris 11

Setting up Control Domain

primary# ldm add-vcc port-range=5000-5100 primary-vcc0 primary
primary# ldm add-vds primary-vds0 primary
primary# ifconfig -a
net0: flags=1000843 mtu 1500 index 2
inet netmask ffffff00 broadcast
ether 0:14:4f:ab:e3:7a
primary# ldm add-vsw net-dev=net0 primary-vsw0 primary
primary# ldm list-services primary
primary-vcc0 primary 5000-5100

primary-vsw0 primary 00:14:4f:fb:44:4d net0 0 switch@0 1 1 1500 on

primary-vds0 primary

primary# ldm set-mau 1 primary
primary# ldm set-vcpu 16 primary
primary# ldm start-reconf primary
primary# ldm set-memory 8G primary
primary# ldm add-config initial
primary# ldm list-config

initial [current]

primary# shutdown -y -g0 -i6

Enable the virtual console service

primary# svcadm enable vntsd

primary# svcs vntsd


online 15:55:10 svc:/ldoms/vntsd:default

Setting Up Guest Domain

primary# ldm add-domain ldg1
primary# ldm add-vcpu 32 ldg1
primary# ldm add-memory 8G ldg1
primary# ldm add-vnet vnet0 primary-vsw0 ldg1

primary# ldm add-vnet vnet1 primary-vsw0 ldg1

primary# ldm add-vdsdev /dev/dsk/c3t1d0s2 vol1@primary-vds0

primary# ldm add-vdisk vdisk1 vol1@primary-vds0 ldg1

primary# ldm set-var auto-boot\?=true ldg1

primary# ldm set-var boot-device=vdisk1 ldg1

primary# ldm bind-domain ldg1

primary# ldm start-domain ldg1

primary# telnet localhost 5000

{0} ok boot net - install

Install Solaris 10 Update 10 (Solaris 10 08/11).

Verify that all the Solaris services on the guest LDom are up and running:

guest # svcs -xv

Oracle Solaris Legacy Containers install

The Oracle Solaris Legacy Containers download includes two versions of the product:

- Oracle Solaris Legacy Containers 1.0.1

- For Oracle Solaris 10 10/08 or later

- Oracle Solaris Legacy Containers 1.0

- For Oracle Solaris 10 08/07

- For Oracle Solaris 10 05/08

Both product versions contain identical features. The 1.0.1 product depends

on Solaris packages introduced in Solaris 10 10/08. The 1.0 product delivers

these packages to pre-10/08 versions of Solaris.

We will use the Oracle Solaris Legacy Containers 1.0.1,
since our Solaris 10 version is 08/11. To install the Oracle Solaris Legacy Containers 1.0.1 software:

1. Download the Oracle Solaris Legacy Containers software bundle

2. Unarchive and install 1.0.1 software package:
guest # unzip
guest # cd solarislegacycontainers/1.0.1/Product
guest # pkgadd -d `pwd` SUNWs8brandk

Starting the migration

On the source system

sol8# su - oracle

Shut down the Oracle database and ASM instance:
sol8$ sqlplus "/as sysdba"

SQL*Plus: Release - Production on Sun Aug 26 13:19:48 2012

Copyright (c) 1982, 2010, Oracle. All Rights Reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> shutdown immediate
sol8$ export ORACLE_SID=+ASM

sol8$ sqlplus "/as sysdba"

SQL*Plus: Release - Production on Sun Aug 26 13:21:38 2012

Copyright (c) 1982, 2010, Oracle. All Rights Reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> shutdown
ASM diskgroups dismounted
ASM instance shutdown

Stop the listener

sol8 $ lsnrctl stop

LSNRCTL for Solaris: Version - Production on 26-AUG-2012 13:23:49

Copyright (c) 1991, 2010, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
The command completed successfully

Creating the archive

sol8 # flarcreate -S -n s8-system /export/home/s8-system.flar

Copy the archive to the target guest domain

On the target system

Move and connect the SAN storage to the target system.

On the control domain, add the SAN storage LUN to the guest domain:

primary # ldm add-vdsdev /dev/dsk/c5t40d0s6 oradata@primary-vds0
primary # ldm add-vdisk oradata oradata@primary-vds0 ldg1

On the guest domain, verify that you can access the LUN:

guest# format
Searching for disks...done

0. c0d0
1. c0d2

Set up the Oracle Solaris 8 branded zone on the guest domain

The Oracle Solaris 8 branded zone s8-zone is configured with the zonecfg command.

Here is the output of the zonecfg -z s8-zone info command after configuration is completed:

guest# zonecfg -z s8-zone info

zonename: s8-zone

zonepath: /zones/s8-zone

brand: solaris8

autoboot: true



limitpriv: default,proc_priocntl,proc_lock_memory

scheduling-class: FSS

ip-type: exclusive



net:
        address not specified
        physical: vnet1
        defrouter not specified


device:
        match: /dev/rdsk/c0d2s0


attr:
        name: machine
        type: string
        value: sun4u

 Install the Solaris 8 zone

guest# zoneadm -z s8-zone install -p -a /export/home/s8-system.flar

Boot the Solaris 8 zone

guest# zoneadm -z s8-zone boot

guest# zlogin -C s8-zone

sol8_zone# su - oracle

Modify the ASM disk ownership 

sol8_zone# chown oracle:dba /dev/rdsk/c0d2s0

Start the listener

sol8_zone$ lsnrctl start

Start the ASM instance

sol8_zone$ export ORACLE_SID=+ASM
sol8_zone$ sqlplus / as sysdba

SQL*Plus: Release - Production on Sun Aug 26 14:36:44 2012

Copyright (c) 1982, 2010, Oracle. All Rights Reserved.

Connected to an idle instance.

SQL> startup
ASM instance started

Total System Global Area 130023424 bytes
Fixed Size 2050360 bytes
Variable Size 102807240 bytes
ASM Cache 25165824 bytes
ASM diskgroups mounted
SQL> quit
Disconnected from Oracle Database 10g Enterprise Edition Release - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Start the database

sol8_zone$ export ORACLE_SID=ORA10
sol8_zone$ sqlplus / as sysdba

SQL*Plus: Release - Production on Sun Aug 26 14:37:13 2012

Copyright (c) 1982, 2010, Oracle. All Rights Reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 1610612736 bytes
Fixed Size 2052448 bytes
Variable Size 385879712 bytes
Database Buffers 1207959552 bytes
Redo Buffers 14721024 bytes
Database mounted.
Database opened.

Second example

In this example we will modify the host name.

guest # zoneadm -z s8-zone install -u -a /net/server/s8_image.flar

Boot the Zone

guest # zoneadm -z s8-zone boot

Configure the Zone with a new ip address and new host name

guest # zlogin -C s8-zone

Modify the ASM disk ownership

sol8_zone # chown oracle:dba /dev/rdsk/c0d2s0

sol8_zone # cd $ORACLE_HOME/bin

Reconfigure the ASM parameters:

sol8_zone # ./localconfig delete

Aug 27 05:17:11 s8-zone last message repeated 3 times
Aug 27 05:17:28 s8-zone root: Oracle CSSD being stopped
Stopping CSSD.
Unable to communicate with the CSS daemon.
Shutdown has begun. The daemons should exit soon.
sol8_zone # ./localconfig add

Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'other'..
Operation successful.
Configuration for local CSS has been initialized

sol8_zone # su - oracle

Start the listener, the ASM instance, and the Oracle database.

Thursday Aug 02, 2012

Solaris 11 Firewall

Oracle Solaris 11 includes a software firewall.
In the cloud, this means the need for expensive network hardware can be reduced, while changes to network configurations can be made quickly and easily.
You can use the following script to manage the Solaris 11 firewall.
The script runs in the Solaris 11 global zone and in Solaris 11 zones with an exclusive IP stack (the default).

Script usage and examples:

Enable and start the firewall service

# fw.ksh start

This enables and starts the firewall service; in addition, it loads the firewall rules from /etc/ipf/ipf.conf.
For more firewall rules examples see here.

Disable and stop the firewall service

# fw.ksh stop

Restart the firewall service after modifying the rules in /etc/ipf/ipf.conf:

# fw.ksh restart

Check the firewall status:

# fw.ksh status

The script prints the firewall status (online or offline) and the active rules.

This section provides the script. The recommendation is to copy the content and paste it into the suggested file name using gedit to create the file on Oracle Solaris 11.

# more fw.ksh

#!/bin/ksh
#
# FILENAME:    fw.ksh
# Manage Solaris firewall script
#
# Usage:
# fw.ksh {start|stop|restart|status}

case "$1" in
'start')
        # Enable the ipfilter service and wait until it leaves the transient state
        /usr/sbin/svcadm enable svc:/network/ipfilter:default
        serviceStatus=`/usr/bin/svcs -H -o STATE svc:/network/ipfilter:default`
        while [[ $serviceStatus != online && $serviceStatus != maintenance ]] ; do
            sleep 5
            serviceStatus=`/usr/bin/svcs -H -o STATE svc:/network/ipfilter:default`
        done
        # Flush the active rules and load the rule set from /etc/ipf/ipf.conf
        /usr/sbin/ipf -Fa -f /etc/ipf/ipf.conf
        ;;
'restart')
        $0 stop
        $0 start
        ;;
'stop')
        /usr/sbin/svcadm disable svc:/network/ipfilter:default
        ;;
'status')
        serviceStatus=`/usr/bin/svcs -H -o STATE svc:/network/ipfilter:default`
        if [[ $serviceStatus != "online" ]] ; then
            /usr/bin/echo "The Firewall service is offline"
        else
            /usr/bin/echo "\nThe Firewall service is online\n"
            /usr/sbin/ipfstat -io
        fi
        ;;
*)
        /usr/bin/echo "Usage: $0 {start|stop|restart|status}"
        exit 1
        ;;
esac

exit 0

Thursday Jul 05, 2012

LDoms with Solaris 11

Oracle VM Server for SPARC (LDoms) release 2.2 came out on May 24. You can get the software, see the release notes, reference manual, and admin guide here on the Oracle VM for SPARC page.

Oracle VM Server for SPARC enables you to create multiple virtual systems on a single physical system. Each virtual system is called a logical domain and runs its own instance of Oracle Solaris 10 or Oracle Solaris 11.

The version of the Oracle Solaris OS software that runs on a guest domain is independent of the Oracle Solaris OS version that runs on the primary domain. So, if you run the Oracle Solaris 10 OS in the primary domain, you can still run the Oracle Solaris 11 OS in a guest domain, and if you run the Oracle Solaris 11 OS in the primary domain, you can still run the Oracle Solaris 10 OS in a guest domain.

In addition, starting with the Oracle VM Server for SPARC 2.2 release, you can migrate a guest domain even if the source and target machines have different processor types.

For example, you can migrate a guest domain from a system with an UltraSPARC T2+ or SPARC T3 CPU to a system with a SPARC T4 CPU. To enable cross-CPU migration, the guest domain on the source and target systems must run Solaris 11, and you must change the cpu-arch property value on the source system.
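As a sketch, the property change and the migration could look like this; the domain name ldg1 and the target host name are examples, and cpu-arch can only be changed while the domain is inactive.

```
# On the source system, with the guest domain in the inactive state:
primary# ldm set-domain cpu-arch=generic ldg1
primary# ldm bind-domain ldg1
primary# ldm start-domain ldg1

# Migrate the running guest domain to the target system:
primary# ldm migrate-domain ldg1 target-host
```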

For more information about Oracle VM Server for SPARC (LDoms) with Solaris 11 and cross-CPU migration, refer to the following white paper.

Monday Jun 04, 2012

Oracle Solaris Zones Physical to virtual (P2V)

This document describes the process of creating a Solaris 10 image from a physical system and migrating it into a virtualized operating system environment using the Oracle Solaris 10 Zones Physical-to-Virtual (P2V) capability.
Using an example and various scenarios, this paper describes how to combine the
Oracle Solaris 10 Zones Physical-to-Virtual (P2V) capability with other Oracle Solaris features: optimizing performance with Solaris 10 resource management, using advanced storage management with Solaris ZFS, and improving operating system visibility with Solaris DTrace.

The most common use for this tool is consolidating existing systems onto virtualization-enabled platforms. In addition, the Physical-to-Virtual (P2V) capability can be used for other tasks, for example, backing up physical systems and moving them into a virtualized operating system environment hosted on the Disaster Recovery (DR) site, or building an Oracle Solaris 10 image repository with various configurations and different software packages in order to reduce provisioning time.

Oracle Solaris Zones
Oracle Solaris Zones is a virtualization and partitioning technology supported on Oracle Sun servers powered by SPARC and Intel processors.
This technology provides an isolated and secure environment for running applications.
A zone is a virtualized operating system environment created within a single instance of the Solaris 10 Operating System.
Each virtual system is called a zone and runs a unique and distinct copy of the Solaris 10 operating system.

Oracle Solaris Zones Physical-to-Virtual (P2V)
Oracle Solaris Zones Physical-to-Virtual (P2V) is a new feature in Solaris 10 9/10. It provides the ability to build a Solaris 10 image from a physical
system and migrate it into a virtualized operating system environment.
There are three main steps when using this tool:

1. Image creation on the source system; this image includes the operating system and, optionally, the software that we want to include in the image.
2. Preparing the target system by configuring a new zone that will host the new image.
3. Image installation on the target system using the image we created on step 1.

The host, where the image is built, is referred to as the source system and the host, where the
image is installed, is referred to as the target system.

Benefits of Oracle Solaris Zones Physical-to-Virtual (P2V)
Here are some benefits of this new feature:

  •  Simple: an easy build process using Oracle Solaris 10 built-in commands.

  •  Robust: based on Oracle Solaris Zones, a robust and well-known virtualization technology.

  •  Flexible: supports migration from V-series servers to T- or M-series systems. For the latest server information, refer to the Sun Servers web page.

    The minimum Solaris version on the target system should be Solaris 10 9/10.
    Refer to the latest Administration Guide for Oracle Solaris 10 for a complete procedure on how to
    download and install Oracle Solaris.

  • NOTE: If the source system used to build the image runs an older release than the target
    system, the operating system will be upgraded to Solaris 10 9/10 during the process
    (update on attach).

    Creating the Image Used to Distribute the Software
    We will create an image on the source machine. We can create the image on the local file system and then transfer it to the target machine,

    or build it on NFS shared storage and
    mount the NFS file system from the target machine.
    Optionally, before creating the image, we need to complete the installation of any software that we want to include with the Solaris 10 image.
    An image is created by using the flarcreate command:
    Source # flarcreate -S -n s10-system -L cpio /var/tmp/solaris_10_up9.flar
    The command does the following:

  •  -S specifies that we skip the disk space check and do not write archive size data to the archive (faster).

  •  -n specifies the image name.

  •  -L specifies the archive format (i.e., cpio).

    Optionally, we can add descriptions to the archive identification section, which can help to identify the archive later.
    Source # flarcreate -S -n s10-system -e "Oracle Solaris with Oracle DB" -a "oracle" -L cpio /var/tmp/solaris_10_up9.flar
    You can see an example of the archive identification section in Appendix A: archive identification section.
    We can compress the flar image using the gzip command or by adding the -c option to the flarcreate command:
    Source # gzip /var/tmp/solaris_10_up9.flar
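Alternatively, the -c option compresses the archive while it is being created, for example:

```
Source # flarcreate -S -n s10-system -c -L cpio /var/tmp/solaris_10_up9.flar
```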
    An MD5 checksum can be created for the image in order to detect data tampering:
    Source # digest -v -a md5 /var/tmp/solaris_10_up9.flar

    Moving the image into the target system.
    If we created the image on the local file system, we need to transfer the flar archive from the source machine to the target machine.

    Source # scp /var/tmp/solaris_10_up9.flar target:/var/tmp
    Configuring the Zone on the target system
    After copying the software to the target machine, we need to configure a new zone that will host the new image.
    To install the new zone on the target machine, first we need to configure the zone (for the full zone creation options see the following link:  )

    ZFS integration
    A flash archive can be created on a system that is running a UFS or a ZFS root file system.
    NOTE: If you create a Solaris Flash archive of a Solaris 10 system that has a ZFS root, then by
    default, the flar will actually be a ZFS send stream, which can be used to recreate the root pool.
    This image cannot be used to install a zone. You must create the flar with an explicit cpio or pax
    archive when the system has a ZFS root.
    Use the flarcreate command with the -L archiver option, specifying cpio or pax as the
    method to archive the files. (For example, see Step 1 in the previous section).
    Optionally, on the target system you can create the zone root folder on a ZFS file system in
    order to benefit from the ZFS features (clones, snapshots, etc...).

    Target # zpool create zones c2t2d0

    Create the zone root folder:

    Target # chmod 700 /zones
    Target # zonecfg -z solaris10-up9-zone
    solaris10-up9-zone: No such zone configured
    Use 'create' to begin configuring a new zone.
    zonecfg:solaris10-up9-zone> create -b
    zonecfg:solaris10-up9-zone> set zonepath=/zones
    zonecfg:solaris10-up9-zone> set autoboot=true
    zonecfg:solaris10-up9-zone> add net
    zonecfg:solaris10-up9-zone:net> set address=
    zonecfg:solaris10-up9-zone:net> set physical=nxge0
    zonecfg:solaris10-up9-zone:net> end
    zonecfg:solaris10-up9-zone> verify
    zonecfg:solaris10-up9-zone> commit
    zonecfg:solaris10-up9-zone> exit

    Installing the Zone on the target system using the image
    Install the configured zone solaris10-up9-zone by using the zoneadm command with the
    install -a option and the path to the archive.
    The following example shows how to install the image and sys-unconfig the zone.
    Target # zoneadm -z solaris10-up9-zone install -u -a
    Log File: /var/tmp/solaris10-up9-zone.install_log.AJaGve
    Installing: This may take several minutes...
    The following example shows how we can preserve the system identity.
    Target # zoneadm -z solaris10-up9-zone install -p -a /var/tmp/solaris_10_up9.flar

    Resource management

    Some applications are sensitive to the number of CPUs in the target zone. You can
    match the number of CPUs in the zone using the zonecfg command:
    zonecfg:solaris10-up9-zone> add dedicated-cpu
    zonecfg:solaris10-up9-zone> set ncpus=16
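After the zone is booted, the CPU count can be verified from the global zone; this quick check uses the standard psrinfo command:

```
Target # zlogin solaris10-up9-zone psrinfo | wc -l
```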

    DTrace integration
    Some applications might need to be analyzed using DTrace in the target zone. You can
    add DTrace support to the zone using the zonecfg command:
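For example, the DTrace-related privileges can be granted through the zone's limitpriv property; dtrace_proc and dtrace_user are the standard DTrace privileges, but verify which ones your application actually needs:

```
Target # zonecfg -z solaris10-up9-zone
zonecfg:solaris10-up9-zone> set limitpriv=default,dtrace_proc,dtrace_user
zonecfg:solaris10-up9-zone> commit
zonecfg:solaris10-up9-zone> exit
```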

    Exclusive IP

    An Oracle Solaris Container running in Oracle Solaris 10 can have a
    shared IP stack with the global zone, or it can have an exclusive IP
    stack (which was released in Oracle Solaris 10 8/07). An exclusive IP
    stack provides a complete, tunable, manageable and independent
    networking stack to each zone. A zone with an exclusive IP stack can
    configure Scalable TCP (STCP), IP routing, IP multipathing, or IPsec.
    For an example of how to configure an Oracle Solaris zone with an
    exclusive IP stack, see the following:

    zonecfg:solaris10-up9-zone> set ip-type=exclusive
    zonecfg:solaris10-up9-zone> add net
    zonecfg:solaris10-up9-zone:net> set physical=nxge0
    zonecfg:solaris10-up9-zone:net> end

    When the installation completes, use the zoneadm list -i -v command to list the installed
    zones and verify the status.
    Target # zoneadm list -i -v
    See that the new zone's status is installed:
    ID NAME               STATUS     PATH    BRAND   IP
    0  global             running    /       native  shared
    -  solaris10-up9-zone installed  /zones  native  shared
    Now boot the Zone
    Target # zoneadm -z solaris10-up9-zone boot
    We need to log in to the zone in order to complete the zone setup, or insert a sysidcfg file before
    booting the zone for the first time; see the example sysidcfg file in Appendix B: sysidcfg file.
    Target # zlogin -C solaris10-up9-zone

    If an installation fails, review the log file. On success, the log file is in /var/log inside the zone. On
    failure, the log file is in /var/tmp in the global zone.
    If a zone installation is interrupted or fails, the zone is left in the incomplete state. Use
    uninstall -F to reset the zone to the configured state.
    Target # zoneadm -z solaris10-up9-zone uninstall -F
    Target # zonecfg -z solaris10-up9-zone delete -F
    The Oracle Solaris Zones P2V tool provides the flexibility to build pre-configured
    images with different software configurations for faster deployment and server consolidation.
    In this document, I demonstrated how to build and install images and how to integrate them with other Oracle Solaris features like ZFS and DTrace.

    Appendix A: archive identification section
    We can use the head -n 20 /var/tmp/solaris_10_up9.flar command to view the
    identification section that contains the detailed description.
    Target # head -n 20 /var/tmp/solaris_10_up9.flar
    begin 755 predeployment.cpio.Z

    Appendix B: sysidcfg file section
    Target # cat sysidcfg
    network_interface=primary {hostname= solaris10-up9-zone

    We need to copy this file into the zone before booting it:
    Target # cp sysidcfg /zones/solaris10-up9-zone/root/etc/

    Wednesday Feb 15, 2012

    Oracle VM for SPARC (LDoms) Live Migration

    This document describes how we can increase application
    availability by using the Oracle VM Server for SPARC software
    (previously called Sun Logical Domains).

    We tested an Oracle 11gR2 database on a SPARC T4 while migrating the
    guest domain from one SPARC T4 server to another without shutting down the
    Oracle database.

    In the testing, the Swingbench OrderEntry workload was used to
    generate load; OrderEntry is based on the oe schema that ships
    with Oracle 11g.

    The scenario had the following characteristics:
    a 30 GB database disk size with an 18 GB SGA, 50 workload
    users, and 100 ms between actions taken by each user.

    The paper shows the complete configuration process, including the
    creation and configuration of guest domains using the ldm command,
    as well as the storage configuration and layout and the software requirements
    that were used to demonstrate live migration between two T4 systems.

    For more information see Increasing Application Availability by Using the Oracle VM Server for SPARC Live Migration Feature: An Oracle Database Example

    Tuesday Sep 28, 2010

    IT 2020 technology optimism

    Please read the white paper "IT 2020: Technology Optimism: An Oracle Scenario," to which I contributed.

    The paper describes a scenario of technology optimism in which IT has solved many of the difficult issues we struggle with today and broadens everyone's horizons.

