Sunday Dec 29, 2013

Presentations from Oracle & ilOUG Solaris Forum meeting, Dec 18th, Tel-Aviv

Thank you for attending the Israel Oracle User Group (ilOUG) Solaris Forum meeting on Wednesday. I am posting the presentations from the event here. I am also pointing you to the dim_STAT tool if you are looking for a powerful monitoring solution for Solaris systems.
The event was reasonably well attended with 20 people from various companies.

The meeting was broken into two sections: presentations about the Solaris ecosystem and Solaris as a big data platform with Hadoop as a use case; and a comparison between Oracle Solaris and Linux, together with a customer use case of peta-scale data migration using Solaris ZFS.

I am posting here responses and links for the topics that came up during the evening:
During the meeting the participants asked us, "How can we verify if an application that is running on Solaris 10 will run smoothly on Solaris 11, and how can we build an application that will be able to leverage the new SPARC CPUs?"
For this task you can use the Oracle Solaris Preflight Applications Checker which enables you to determine the Oracle Solaris 11 “readiness” of an application by analyzing a working application on Oracle Solaris 10.
A successful check with this tool is a strong indicator that an ISV may run a given application without modifications on Oracle Solaris 11.
In addition, you can analyze and improve the application's performance during the development phase using Oracle Solaris Studio 12.3.

Another useful tool for performance analysis is Solaris DTrace. You can leverage the DTrace Toolkit, a collection of ~200 scripts that help troubleshoot problems on a system. You can find the latest version of these scripts under /usr/demo/dtrace in Oracle Solaris 11.
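
As a minimal illustration (not one of the toolkit scripts), a DTrace one-liner such as the following counts system calls by process name, which is often a good starting point for troubleshooting:

# dtrace -n 'syscall:::entry { @[execname] = count(); }'

Press Ctrl-C to stop the trace and print the aggregated counts.
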
Another topic that came up was, “How to accelerate application deployment time by using pre-built Solaris images”.
For this task, you can use Oracle VM Templates, which provide an innovative approach to deploying a fully configured software stack by offering pre-installed and pre-configured software images.
Use of Oracle VM Templates eliminates the installation and configuration costs, and reduces the ongoing maintenance costs, thus helping organizations achieve faster time to market and lower cost of operations.

During the Solaris vs. Linux comparison presentation, the topic that received the most attention from the audience was, "What are the benefits of Oracle Solaris ZFS versus Linux btrfs?"
In addition to the link above, here is a link to COMSTAR, a Solaris storage virtualization technology.
Common Multiprotocol SCSI Target (COMSTAR) is a software framework that enables you to convert any Oracle Solaris 11 host into a SCSI target device that can be accessed over a storage network by initiator hosts.

In the final section of the meeting, Haim Tzadok from Oracle partner Grigale and Avi Avraham from Walla jointly presented the customer use case of peta-scale data migration using Oracle Solaris ZFS.
Walla is one of the most popular web portals in Israel, offering unlimited storage for email accounts.
Walla uses Solaris ZFS for its mail server storage. The ZFS feature that allows them to reduce storage cost and improve disk I/O performance is the ability to change the block size (4 KB up to 1 MB) when creating the file system.
Like many mail systems, theirs has many small files, and by changing the default ZFS block size you can optimize the storage subsystem and align it with the application (the mail server).
The end result is a disk I/O performance improvement without the need to invest extra budget in superfluous storage hardware.
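
As a hedged sketch (the pool and file system names below are made up for illustration), the block size is controlled by the ZFS recordsize property, which can be set when the file system is created and inspected afterwards:

# zfs create -o recordsize=8k mailpool/mailstore
# zfs get recordsize mailpool/mailstore

The idea is to match the record size to the mail server's typical I/O size instead of keeping the 128 KB default.
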
For more information about ZFS block size, see the ZFS documentation.
A big thank-you to the participants for the engaged discussions and to ilOUG for a great event!
Register to ilOUG and get the latest updates.







Tuesday Dec 17, 2013

Performance Analysis in a Multitenant Cloud Environment Using Hadoop Cluster and Oracle Solaris 11

Oracle Solaris 11 comes with a new set of commands that provide the ability to conduct
performance analysis in a virtualized multitenant cloud environment. Performance analysis in a
virtualized multitenant cloud environment with different users running various workloads can be a
challenging task for the following reasons:

Every virtualization technology adds an abstraction layer to enable better manageability. Although this makes it much simpler to manage the virtualized resources, it makes it very difficult to find the physical system resources that are overloaded.

Each Oracle Solaris Zone can run a different workload; it can be disk I/O, network I/O, CPU, memory, or a combination of these.

In addition, a single Oracle Solaris Zone can overload the entire system's resources. It is very difficult to observe the environment; you need to be able to monitor it from the top level, seeing all the virtual instances (non-global zones) in real time, with the ability to drill down to specific resources.


The benefits of using Oracle Solaris 11 for virtualized performance analysis are:

Observability. The Oracle Solaris global zone is a fully functioning operating system, not a proprietary hypervisor or a minimized operating system that lacks the ability to observe the entire environment (including the host and the VMs) at the same time. The global zone can see all the non-global zones' performance metrics.

Integration. All the subsystems are built inside the same operating system. For example, the ZFS file system and the Oracle Solaris Zones virtualization technology are integrated together. This is preferable to mixing many vendors’ technology, which causes a lack of integration between the different operating system (OS) subsystems and makes it very difficult to analyze all the different OS subsystems at the same time.

Virtualization awareness. The built-in Oracle Solaris commands are virtualization-aware, and they can provide performance statistics for the entire system (the Oracle Solaris global zone). In addition to providing the ability to drill down into every resource (Oracle Solaris non-global zones), these commands provide accurate results during the performance analysis process.

In this article, we are going to explore four examples that show how we can monitor a virtualized environment with Oracle Solaris Zones using the built-in Oracle Solaris 11 tools. These tools provide the ability to drill down to specific resources, for example, CPU, memory, disk, and network. In addition, they provide the ability to print statistics per Oracle Solaris Zone and provide information on the running applications.
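
As a small example of this kind of built-in, virtualization-aware command in Oracle Solaris 11 (not necessarily the exact invocations used in the article, and the intervals are arbitrary):

# zonestat 5
# fsstat zfs 5
# zpool iostat 5

zonestat summarizes per-zone CPU and memory utilization every 5 seconds, fsstat reports ZFS file system activity, and zpool iostat shows pool-level disk I/O; the article walks through similar per-resource drill-downs.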


Read the article: Performance Analysis in a Multitenant Cloud Environment

Tuesday Oct 22, 2013

How to Set Up a Hadoop Cluster Using Oracle Solaris (Hands-On Lab)


Oracle Technology Network (OTN) published the "How to Set Up a Hadoop Cluster Using Oracle Solaris" OOW 2013 Hands-On Lab.
This hands-on lab presents exercises that demonstrate how to set up an Apache Hadoop cluster using Oracle Solaris
11 technologies such as Oracle Solaris Zones, ZFS, and network virtualization. Key topics include the Hadoop Distributed File System
(HDFS) and the Hadoop MapReduce programming model.
We will also cover the Hadoop installation process and the cluster building blocks:
NameNode, a secondary NameNode, and DataNodes. In addition, you will see how you can combine the Oracle Solaris 11 technologies for better
scalability and data security, and you will learn how to load data into the Hadoop cluster and run a MapReduce job.
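
As a rough sketch of the network virtualization and zone steps (the link name net0, the VNIC name, and the zone name are placeholders, not the lab's actual values), a DataNode zone wired to its own VNIC might look like this:

# dladm create-vnic -l net0 data_node1
# zonecfg -z data-node1
zonecfg:data-node1> create
zonecfg:data-node1> set zonepath=/zones/data-node1
zonecfg:data-node1> remove anet
zonecfg:data-node1> add net
zonecfg:data-node1:net> set physical=data_node1
zonecfg:data-node1:net> end
zonecfg:data-node1> commit
zonecfg:data-node1> exit
# zoneadm -z data-node1 install

The remove anet step assumes the default zone template, which would otherwise create an automatic VNIC in addition to the one created above.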

Summary of Lab Exercises
This hands-on lab consists of 13 exercises covering various Oracle Solaris and Apache Hadoop technologies:
    Install Hadoop.
    Edit the Hadoop configuration files.
    Configure the Network Time Protocol.
    Create the virtual network interfaces (VNICs).
    Create the NameNode and the secondary NameNode zones.
    Set up the DataNode zones.
    Configure the NameNode.
    Set up SSH.
    Format HDFS from the NameNode.
    Start the Hadoop cluster.
    Run a MapReduce job.
    Secure data at rest using ZFS encryption.
    Use Oracle Solaris DTrace for performance monitoring.
 

Read it now

Thursday Sep 12, 2013

Solaris 11 Integrated Load Balancer Hands on Lab at Oracle Open World 2013

Oracle Solaris 11 offers a new feature called the Integrated Load Balancer (ILB). To find out more about this capability and how to set up and configure ILB from scratch, join the hands-on lab I'm hosting at Oracle OpenWorld 2013, Oracle Solaris Integrated Load Balancer in 60 Minutes [HOL10181].



The objective of this lab is to demonstrate how Oracle Solaris Integrated Load Balancer (ILB) provides an easy and fast way of deploying a load balancing system. Various Solaris 11 technologies such as Zones, ZFS, and network virtualization are used to create a virtual "data center in a box" in order to test different load balancing scenarios.

During this session, you are going to configure and use ILB inside an Oracle Solaris Zone to balance the load across three Apache Tomcat web servers running a simple JavaServer Pages (JSP) application specially developed for this lab.
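
As a rough idea of what the ILB configuration looks like (the addresses, ports, and names below are placeholders rather than the lab's values): a server group for the three Tomcat instances and a round-robin rule on a virtual IP.

# ilbadm create-servergroup -s servers=192.168.1.50:8080,192.168.1.51:8080,192.168.1.52:8080 tomcatgroup
# ilbadm create-rule -e -p -i vip=10.0.2.20,port=80,protocol=tcp -m lbalg=roundrobin,type=HALF-NAT -o servergroup=tomcatgroup rule1
# ilbadm show-rule

The lab itself covers the full setup, including the zones and VNICs behind these commands.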

Register Now

Thursday Aug 22, 2013

Hadoop Cluster with Oracle Solaris Hands-on Lab at Oracle OpenWorld 2013

If you want to learn how to build a Hadoop cluster using Solaris 11 technologies, please join me at the following Oracle OpenWorld 2013 lab.
How to Set Up a Hadoop Cluster with Oracle Solaris [HOL10182]



In this hands-on lab we will present and demonstrate, using exercises, how to set up a Hadoop cluster using Oracle Solaris 11 technologies such as Zones, ZFS, DTrace, and network virtualization.
Key topics include the Hadoop Distributed File System and MapReduce.
We will also cover the Hadoop installation process and the cluster building blocks: NameNode, a secondary NameNode, and DataNodes.
In addition, we will see how to combine the Oracle Solaris 11 technologies for better scalability and data security.
During the lab, users will learn how to load data into the Hadoop cluster and run a MapReduce job.
This hands-on training lab is for system administrators and others responsible for managing Apache Hadoop clusters in production or development environments.

This Lab will cover the following topics:

    1. How to install Hadoop

    2. Edit the Hadoop configuration files

    3. Configure the Network Time Protocol

    4. Create the Virtual Network Interfaces

    5. Create the NameNode and the Secondary NameNode Zones

    6. Configure the NameNode

    7. Set Up SSH between the Hadoop cluster members

    8. Format the HDFS File System

    9. Start the Hadoop Cluster

   10. Run a MapReduce Job

   11. How to secure data at rest using ZFS encryption (see the sketch after this list)

   12. Performance monitoring using Solaris DTrace
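
A hedged sketch of the ZFS encryption step in item 11 (the dataset name is a placeholder; the command prompts for a passphrase):

# zfs create -o encryption=on -o keysource=passphrase,prompt rpool/export/hdfs-data
# zfs get encryption rpool/export/hdfs-data

Because encryption is set at dataset creation time, the HDFS data directories can simply be placed on the encrypted dataset.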

Register Now


Monday Jun 24, 2013

How to Set Up a MongoDB NoSQL Cluster Using Oracle Solaris Zones

This article starts with a brief overview of MongoDB and follows with an example of setting up a three-node MongoDB cluster using Oracle Solaris Zones.
The following are benefits of using Oracle Solaris for a MongoDB cluster:

• You can add new MongoDB hosts to the cluster in minutes instead of hours using the zone cloning feature (see the example below). Using Oracle Solaris Zones, you can easily scale out your MongoDB cluster.
• In case there is a user error or software error, the Service Management Facility ensures the high availability of each cluster member and ensures that MongoDB replication failover will occur only as a last resort.
• You can discover performance issues in minutes versus days by using DTrace, which provides increased operating system observability. DTrace provides a holistic performance overview of the operating system and allows deep performance analysis through cooperation with the built-in MongoDB tools.
• ZFS built-in compression provides optimized disk I/O utilization for better I/O performance.
In the example presented in this article, all the MongoDB cluster building blocks will be installed using the Oracle Solaris Zones, Service Management Facility, ZFS, and network virtualization technologies.
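
As a hedged illustration of the zone-cloning benefit listed above (the zone names are hypothetical, and the exported configuration would need its zonepath and VNIC changed before the import; the source zone must be halted during the clone):

# zonecfg -z mongo-node1 export -f /var/tmp/mongo-node2.cfg
# zonecfg -z mongo-node2 -f /var/tmp/mongo-node2.cfg
# zoneadm -z mongo-node2 clone mongo-node1
# zoneadm -z mongo-node2 boot

On a ZFS zonepath, cloning copies the source zone's dataset via a snapshot, which is what makes adding a new MongoDB host a matter of minutes.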

Figure 1 in the article shows the architecture.


Thursday Jan 24, 2013

How to Set Up a Hadoop Cluster Using Oracle Solaris Zones

This article starts with a brief overview of Hadoop and follows with an
example of setting up a Hadoop cluster with a NameNode, a secondary
NameNode, and three DataNodes using Oracle Solaris Zones.

The following are benefits of using Oracle Solaris Zones for a Hadoop cluster:

· Fast provision of new cluster members using the zone cloning feature

· Very high network throughput between the zones for data node replication

· Optimized disk I/O utilization for better I/O performance with ZFS built-in compression

· Secure data at rest using ZFS encryption






Hadoop uses the Hadoop Distributed File System (HDFS) to store data.
HDFS provides high-throughput access to application data and is suitable
for applications that have large data sets.

The Hadoop cluster building blocks are as follows:

· NameNode: The centerpiece of HDFS, which stores file system metadata,
directs the slave DataNode daemons to perform the low-level I/O tasks,
and also runs the JobTracker process.

· Secondary NameNode: Performs internal checks of the NameNode transaction log.

· DataNodes: Nodes that store the data in the HDFS file system, which are also known as slaves and run the TaskTracker process.

In the example presented in this article, all the Hadoop cluster
building blocks will be installed using the Oracle Solaris Zones, ZFS,
and network virtualization technologies.
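
For example, a minimal sketch of the ZFS part (the pool name, disk device, and dataset are placeholders) that a DataNode could use for compressed HDFS storage:

# zpool create hadooppool c0t1d0
# zfs create -o compression=on hadooppool/dfs
# zfs get compressratio hadooppool/dfs

compression=on enables the default lzjb algorithm; compressratio shows the space saving actually achieved.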





Monday Dec 24, 2012

Accelerate your Oracle DB startup and shutdown using Solaris 11 vmtasks

Last week I co-presented the Solaris 11.1 new features at the Solaris user group meeting.
I would like to thank the organizers and the audience.
You can find the slide deck here.
I presented the following Solaris 11.1 new features:


1. Installation enhancements: the ability to add iSCSI disks as a boot device, plus Oracle Configuration Manager (OCM) and Auto Service Request (ASR) registration during operating system installation

2. New SMF tool svcbundle for creating manifests, and the brand-new svccfg subcommand delcust (see the sketch after this list)

3. The pfedit command for secure system configuration editing

4. New logging daemon rsyslog for better scalability and reliable system logs transport

5. New aggregation option for the mpstat, cpustat, and trapstat commands to display the data in a more condensed format

6. Edge Virtual Bridging (EVB) which extends network virtualization features into the physical network infrastructure

7. Data Center Bridging (DCB) which provides guaranteed bandwidth and lossless Ethernet transport for
converged network environments where storage protocols share the same
fabric as regular network traffic

8. VNIC Migration for better flexibility in network resource allocation

9. Fast zone updates for improved system uptime and shorter system upgrades

10. Zones on shared storage for faster Solaris Zones mobility

11. File system statistics for Oracle Solaris Zones for better I/O performance observability

12. Oracle Optimized Shared Memory
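
For items 2 and 3, a hedged sketch (the service name and start method path are invented for illustration):

# svcbundle -i -s service-name=site/myapp -s start-method=/opt/myapp/bin/start
# pfedit /etc/system

svcbundle generates and installs a basic SMF manifest from the supplied name/value pairs, and pfedit lets an authorized user edit a specific configuration file without full root access, with the change recorded in the audit trail.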



The Solaris 11.1 feature that got the most attention was the new vmtasks process.

This process accelerates shared memory operations such as creation, locking, and destruction.
Now you can improve your system uptime because your Oracle DB startup and shutdown are much faster.

Any application that needs fast access to shared memory can benefit from this process.

For more information about vmtasks and the Solaris 11.1 new features, see the Oracle Solaris 11.1 documentation.



Tuesday Sep 11, 2012

Oracle Solaris 8 P2V with Oracle database 10.2 and ASM

Background information

In this document I will demonstrate the following scenario:

Migration of a physical Solaris 8 system running Oracle Database 10.2.0.5 with an ASM file system located on SAN storage into a Solaris 8 branded zone inside a Solaris 10 guest domain on top of a Solaris 11 control domain.


In the first example we will preserve the host information.

In the second example we will modify the host name.

Executive summary

In this document I will demonstrate how we leveraged the Solaris 8 P2V tool to migrate a physical Solaris 8 system running an Oracle database with an ASM file system into a Solaris 8 branded zone.

The ASM file system is located on a LUN in SAN storage connected via an FC HBA.

During the migration we used the same LUN on the source and target servers in order to avoid data migration.

The P2V tool successfully migrated the Solaris 8 physical system into the Solaris 8 branded Zone and the Zone was able to access the ASM file system.

Architecture layout



Source system

Hardware details:
Sun Fire V440 server with 4 UltraSPARC IIIi CPUs at 1593 MHz and 8 GB of RAM

Operating system: Solaris 8 2/04 + latest recommended patch set


Target system


Oracle's SPARC T4-1 server with a single 8-core, 2.85 GHz SPARC T4 processor and 32 GB of RAM

Install Solaris 11


Setting up Control Domain

primary# ldm add-vcc port-range=5000-5100 primary-vcc0 primary
primary# ldm add-vds primary-vds0 primary
primary# ifconfig -a
net0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet 10.162.49.45 netmask ffffff00 broadcast 10.162.49.255
ether 0:14:4f:ab:e3:7a
primary# ldm add-vsw net-dev=net0 primary-vsw0 primary
primary# ldm list-services primary
VCC
NAME LDOM PORT-RANGE
primary-vcc0 primary 5000-5100

VSW
NAME LDOM MAC NET-DEV ID DEVICE LINKPROP DEFAULT-VLAN-ID PVID VID MTU MODE INTER-VNET-LINK
primary-vsw0 primary 00:14:4f:fb:44:4d net0 0 switch@0 1 1 1500 on

VDS
NAME LDOM VOLUME OPTIONS MPGROUP DEVICE
primary-vds0 primary

primary# ldm set-mau 1 primary
primary# ldm set-vcpu 16 primary
primary# ldm start-reconf primary
primary# ldm set-memory 8G primary
primary# ldm add-config initial
primary# ldm list-config

factory-default
initial [current]
primary-with-clients

primary# shutdown -y -g0 -i6

Enable the virtual console service

primary# svcadm enable vntsd

primary# svcs vntsd

STATE STIME FMRI

online 15:55:10 svc:/ldoms/vntsd:default

Setting Up Guest Domain

primary# ldm add-domain ldg1
primary# ldm add-vcpu 32 ldg1
primary# ldm add-memory 8G ldg1
primary# ldm add-vnet vnet0 primary-vsw0 ldg1

primary# ldm add-vnet vnet1 primary-vsw0 ldg1

primary# ldm add-vdsdev /dev/dsk/c3t1d0s2 vol1@primary-vds0

primary# ldm add-vdisk vdisk1 vol1@primary-vds0 ldg1

primary# ldm set-var auto-boot\?=true ldg1

primary# ldm set-var boot-device=vdisk1 ldg1

primary# ldm bind-domain ldg1

primary# ldm start-domain ldg1

primary# telnet localhost 5000

{0} ok boot net - install

Install Solaris 10 Update 10 (Solaris 10 08/11)


Verify that all the Solaris services on the guest LDom are up and running


guest # svcs -xv

Oracle Solaris Legacy Containers install

The Oracle Solaris Legacy Containers download includes two versions of the product:


- Oracle Solaris Legacy Containers 1.0.1


- For Oracle Solaris 10 10/08 or later


- Oracle Solaris Legacy Containers 1.0


- For Oracle Solaris 10 08/07


- For Oracle Solaris 10 05/08


Both product versions contain identical features. The 1.0.1 product depends


on Solaris packages introduced in Solaris 10 10/08. The 1.0 product delivers


these packages to pre-10/08 versions of Solaris.


We will use the Oracle Solaris Legacy Containers 1.0.1
since our Solaris 10 version is 08/11. To install the Oracle Solaris Legacy Containers 1.0.1 software:

1. Download the Oracle Solaris Legacy Containers software bundle
from http://www.oracle.com.

2. Unarchive and install the 1.0.1 software package:
guest # unzip solarislegacycontainers-solaris10-sparc.zip
guest # cd solarislegacycontainers/1.0.1/Product
guest # pkgadd -d `pwd` SUNWs8brandk

Starting the migration

On the source system


sol8# su - oracle


Shut down the Oracle database and ASM instance
sol8$ sqlplus "/as sysdba"

SQL*Plus: Release 10.2.0.5.0 - Production on Sun Aug 26 13:19:48 2012

Copyright (c) 1982, 2010, Oracle. All Rights Reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> shutdown immediate
sol8$ export ORACLE_SID=+ASM


sol8$ sqlplus "/as sysdba"

SQL*Plus: Release 10.2.0.5.0 - Production on Sun Aug 26 13:21:38 2012

Copyright (c) 1982, 2010, Oracle. All Rights Reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> shutdown
ASM diskgroups dismounted
ASM instance shutdown

Stop the listener


sol8 $ lsnrctl stop

LSNRCTL for Solaris: Version 10.2.0.5.0 - Production on 26-AUG-2012 13:23:49

Copyright (c) 1991, 2010, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
The command completed successfully

Creating the archive

sol8 # flarcreate -S -n s8-system /export/home/s8-system.flar

Copy the archive to the target guest domain


On the target system

Move and connect the SAN storage to the target system

On the control domain add the SAN storage LUN into the guest domain

primary # ldm add-vdsdev /dev/dsk/c5t40d0s6 oradata@primary-vds0
primary # ldm add-vdisk oradata oradata@primary-vds0 ldg1

On the guest domain verify that you can access the LUN

guest# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
0. c0d0
/virtual-devices@100/channel-devices@200/disk@0
1. c0d2
/virtual-devices@100/channel-devices@200/disk@2


Set up the Oracle Solaris 8 branded zone on the guest domain


The Oracle Solaris 8 branded zone s8-zone is configured with the zonecfg command.

Here is the output of the zonecfg -z s8-zone info command after the configuration is completed:


guest# zonecfg -z s8-zone info

zonename: s8-zone

zonepath: /zones/s8-zone

brand: solaris8

autoboot: true

bootargs:

pool:

limitpriv: default,proc_priocntl,proc_lock_memory

scheduling-class: FSS

ip-type: exclusive

hostid:

net:

        address not specified

        physical: vnet1

        defrouter not specified

device

        match: /dev/rdsk/c0d2s0

attr:

        name: machine

        type: string

        value: sun4u

 Install the Solaris 8 zone

guest# zoneadm -z s8-zone install -p -a /export/home/s8-system.flar

Boot the Solaris 8 zone

guest# zoneadm -z s8-zone boot

guest # zlogin -C s8-zone

sol8_zone# su - oracle

Modify the ASM disk ownership 

sol8_zone# chown oracle:dba /dev/rdsk/c0d2s0

Start the listener


sol8_zone$ lsnrctl start

Start the ASM instance

sol8_zone$ export ORACLE_SID=+ASM
sol8_zone$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.5.0 - Production on Sun Aug 26 14:36:44 2012

Copyright (c) 1982, 2010, Oracle. All Rights Reserved.

Connected to an idle instance.

SQL> startup
ASM instance started

Total System Global Area 130023424 bytes
Fixed Size 2050360 bytes
Variable Size 102807240 bytes
ASM Cache 25165824 bytes
ASM diskgroups mounted
SQL> quit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Start the database

sol8_zone$ export ORACLE_SID=ORA10
sol8_zone$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.5.0 - Production on Sun Aug 26 14:37:13 2012

Copyright (c) 1982, 2010, Oracle. All Rights Reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 1610612736 bytes
Fixed Size 2052448 bytes
Variable Size 385879712 bytes
Database Buffers 1207959552 bytes
Redo Buffers 14721024 bytes
Database mounted.
Database opened.

Second example


In this example we will modify the host name.

guest # zoneadm -z s8-zone install -u -a /net/server/s8_image.flar

Boot the Zone

guest # zoneadm -z s8-zone boot

Configure the zone with a new IP address and a new host name

guest # zlogin -C s8-zone

Modify the ASM disk ownership

sol8_zone # chown oracle:dba /dev/rdsk/c0d2s0

sol8_zone # cd $ORACLE_HOME/bin

Reconfigure the ASM parameters

sol8_zone # ./localconfig delete

Aug 27 05:17:11 s8-zone last message repeated 3 times
Aug 27 05:17:28 s8-zone root: Oracle CSSD being stopped
Stopping CSSD.
Unable to communicate with the CSS daemon.
Shutdown has begun. The daemons should exit soon.
sol8_zone # ./localconfig add

Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'other'..
Operation successful.
Configuration for local CSS has been initialized

sol8_zone # su - oracle

Start the listener, the ASM instance, and the Oracle database

Thursday Jul 05, 2012

LDoms with Solaris 11


Oracle VM Server for SPARC (LDoms) release 2.2 came out on May 24. You can get the software, see the release notes, reference manual, and admin guide here on the Oracle VM for SPARC page.

Oracle VM Server for SPARC enables you to create multiple virtual systems on a single physical system. Each virtual system is called a logical domain and runs its own instance of Oracle Solaris 10 or Oracle Solaris 11.

The version of the Oracle Solaris OS software that runs on a guest domain is independent of the Oracle Solaris OS version that runs on the primary domain. So, if you run the Oracle Solaris 10 OS in the primary domain, you can still run the Oracle Solaris 11 OS in a guest domain, and if you run the Oracle Solaris 11 OS in the primary domain, you can still run the Oracle Solaris 10 OS in a guest domain.


In addition, starting with the Oracle VM Server for SPARC 2.2 release, you can migrate a guest domain even if the source and target machines have different processor types.




You can migrate a guest domain from a system with an UltraSPARC T2+ or SPARC T3 CPU to a system with a SPARC T4 CPU. To enable cross-CPU migration, the guest domain on the source and target systems must run Solaris 11. In addition, you need to change the cpu-arch property value on the source system.
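
A hedged sketch of that property change on the source system (ldg1 follows the domain name used elsewhere in this blog; the domain must be stopped before the property can be changed):

primary# ldm stop-domain ldg1
primary# ldm set-domain cpu-arch=generic ldg1
primary# ldm start-domain ldg1

Setting cpu-arch=generic makes the domain use a common subset of CPU features so it can move between the different processor types.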


For more information about Oracle VM Server for SPARC (LDoms) with Solaris 11 and cross-CPU migration, refer to the following white paper.



Monday Jun 04, 2012

Oracle Solaris Zones Physical to virtual (P2V)

Introduction
This document describes the process of creating a Solaris 10 image built from a physical system and migrating it into a virtualized operating system environment using the Oracle Solaris 10 Zones Physical-to-Virtual (P2V) capability.
Using an example and various scenarios, this paper describes how to take advantage of the
Oracle Solaris 10 Zones Physical-to-Virtual (P2V) capability together with other Oracle Solaris features to optimize performance using Solaris 10 resource management, provide advanced storage management using Solaris ZFS, and improve operating system visibility with Solaris DTrace.


The most common use for this tool is consolidation of existing systems onto virtualization-enabled platforms. In addition, we can use the Physical-to-Virtual (P2V) capability for other tasks, for example, backing up a physical system and moving it into a virtualized operating system environment hosted on a Disaster
Recovery (DR) site. Another option is building an Oracle Solaris 10 image repository with various configurations and different software packages in order to reduce provisioning time.

Oracle Solaris Zones
Oracle Solaris Zones is a virtualization and partitioning technology supported on Oracle Sun servers powered by SPARC and Intel processors.
This technology provides an isolated and secure environment for running applications.
A zone is a virtualized operating system environment created within a single instance of the Solaris 10 Operating System.
Each virtual system is called a zone and runs a unique and distinct copy of the Solaris 10 operating system.

Oracle Solaris Zones Physical-to-Virtual (P2V)
This is a new feature in Solaris 10 9/10. It provides the ability to build a Solaris 10 image from a physical
system and migrate it into a virtualized operating system environment.
There are three main steps when using this tool:

1. Image creation on the source system; this image includes the operating system and, optionally, the software that we want to include within the image.
2. Preparing the target system by configuring a new zone that will host the new image.
3. Image installation on the target system using the image we created in step 1.

The host, where the image is built, is referred to as the source system and the host, where the
image is installed, is referred to as the target system.



Benefits of Oracle Solaris Zones Physical-to-Virtual (P2V)
Here are some benefits of this new feature:


  •  Simple: an easy build process using Oracle Solaris 10 built-in commands.

  •  Robust: based on Oracle Solaris Zones, a robust and well-known virtualization technology.

  •  Flexible: supports migration from V-series servers to T-series or M-series systems. For the latest server information, refer to the Sun Servers web page.

    Prerequisites
    The minimum Solaris version on the target system should be Solaris 10 9/10.
    Refer to the latest Administration Guide for Oracle Solaris 10 for a complete procedure on how to
    download and install Oracle Solaris.



  • NOTE: If the source system used to build the image runs an older version than the target
    system, then during the process the operating system will be upgraded to Solaris 10 9/10
    (update on attach).
    Creating the Image Used to Distribute the Software
    We will create an image on the source machine. We can create the image on the local file system and then transfer it to the target machine, or build it on NFS shared storage and mount the NFS file system from the target machine.
    Optionally, before creating the image we need to complete the installation of the software that we want to include with the Solaris 10 image.
    An image is created by using the flarcreate command:
    Source # flarcreate -S -n s10-system -L cpio /var/tmp/solaris_10_up9.flar
    The command does the following:



  •  -S specifies that we skip the disk space check and do not write archive size data to the archive (faster).

  •  -n specifies the image name.

  •  -L specifies the archive format (for example, cpio).

    Optionally, we can add descriptions to the archive identification section, which can help to identify the archive later.
    Source # flarcreate -S -n s10-system -e "Oracle Solaris with Oracle DB
    10.2.0.4" -a "oracle" -L cpio /var/tmp/solaris_10_up9.flar
    You can see example of the archive identification section in Appendix A: archive identification section.
    We can compress the flar image using the gzip command or by adding the -c option to the flarcreate command:
    Source # gzip /var/tmp/solaris_10_up9.flar
    An MD5 checksum can be created for the image in order to detect data tampering:
    Source # digest -v -a md5 /var/tmp/solaris_10_up9.flar


    Moving the image into the target system.
    If we created the image on the local file system, we need to transfer the flar archive from the source machine to the target machine.

    Source # scp /var/tmp/solaris_10_up9.flar target:/var/tmp
    Configuring the Zone on the target system
    After copying the software to the target machine, we need to configure a new zone in order to host the new image on that zone.
    To install the new zone on the target machine, first we need to configure the zone (for the full zone creation options see the following link: http://docs.oracle.com/cd/E18752_01/html/817-1592/index.html  )

    ZFS integration
    A flash archive can be created on a system that is running a UFS or a ZFS root file system.
    NOTE: If you create a Solaris Flash archive of a Solaris 10 system that has a ZFS root, then by
    default, the flar will actually be a ZFS send stream, which can be used to recreate the root pool.
    This image cannot be used to install a zone. You must create the flar with an explicit cpio or pax
    archive when the system has a ZFS root.
    Use the flarcreate command with the -L archiver option, specifying cpio or pax as the
    method to archive the files. (For example, see Step 1 in the previous section).
    Optionally, on the target system you can create the zone root folder on a ZFS file system in
    order to benefit from the ZFS features (clones, snapshots, etc...).

    Target # zpool create zones c2t2d0


    Restrict the permissions of the zone root folder (the zones pool created above is mounted at /zones by default):

    Target # chmod 700 /zones
    Target # zonecfg -z solaris10-up9-zone
    solaris10-up9-zone: No such zone configured
    Use 'create' to begin configuring a new zone.
    zonecfg:solaris10-up9-zone> create
    zonecfg:solaris10-up9-zone> set zonepath=/zones
    zonecfg:solaris10-up9-zone> set autoboot=true
    zonecfg:solaris10-up9-zone> add net
    zonecfg:solaris10-up9-zone:net> set address=192.168.0.1
    zonecfg:solaris10-up9-zone:net> set physical=nxge0
    zonecfg:solaris10-up9-zone:net> end
    zonecfg:solaris10-up9-zone> verify
    zonecfg:solaris10-up9-zone> commit
    zonecfg:solaris10-up9-zone> exit

    Installing the Zone on the target system using the image
    Install the configured zone solaris10-up9-zone by using the zoneadm command with the install -a option and the path to the archive.
    The following example shows how to install the image and sys-unconfig the zone.
    Target # zoneadm -z solaris10-up9-zone install -u -a
    /var/tmp/solaris_10_up9.flar
    Log File: /var/tmp/solaris10-up9-zone.install_log.AJaGve
    Installing: This may take several minutes...
    The following example shows how we can preserve system identity.
    Target # zoneadm -z solaris10-up9-zone install -p -a /var/tmp/solaris_10_up9.flar


    Resource management


    Some applications are sensitive to the number of CPUs on the target Zone. You need to
    match the number of CPUs on the Zone using the zonecfg command:
    zonecfg:solaris10-up9-zone> add dedicated-cpu
    zonecfg:solaris10-up9-zone> set ncpus=16


    DTrace integration
    Some applications might need to be analyzed using DTrace on the target zone; you can
    add DTrace support to the zone using the zonecfg command:
    zonecfg:solaris10-up9-zone> set limitpriv="default,dtrace_proc,dtrace_user"


    Exclusive IP stack

    An Oracle Solaris Container running in Oracle Solaris 10 can have a
    shared IP stack with the global zone, or it can have an exclusive IP
    stack (which was released in Oracle Solaris 10 8/07). An exclusive IP
    stack provides a complete, tunable, manageable and independent
    networking stack to each zone. A zone with an exclusive IP stack can
    configure Scalable TCP (STCP), IP routing, IP multipathing, or IPsec.
    For an example of how to configure an Oracle Solaris zone with an
    exclusive IP stack, see the following example

    zonecfg:solaris10-up9-zone> set ip-type=exclusive
    zonecfg:solaris10-up9-zone> add net
    zonecfg:solaris10-up9-zone:net> set physical=nxge0
    zonecfg:solaris10-up9-zone:net> end

    When the installation completes, use the zoneadm list -i -v options to list the installed
    zones and verify the status.
    Target # zoneadm list -i -v
    See that the new Zone status is installed
    ID NAME STATUS PATH BRAND IP
    0 global running / native shared
    - solaris10-up9-zone installed /zones native shared
    Now boot the Zone
    Target # zoneadm -z solaris10-up9-zone boot
    We need to log in to the zone in order to complete the zone setup, or insert a sysidcfg file before
    booting the zone for the first time (see the example sysidcfg file in Appendix B: sysidcfg file
    section).
    Target # zlogin -C solaris10-up9-zone

    Troubleshooting
    If an installation fails, review the log file. On success, the log file is in /var/log inside the zone. On
    failure, the log file is in /var/tmp in the global zone.
    If a zone installation is interrupted or fails, the zone is left in the incomplete state. Use uninstall -F
    to reset the zone to the configured state.
    Target # zoneadm -z solaris10-up9-zone uninstall -F
    Target # zonecfg -z solaris10-up9-zone delete -F
    Conclusion
    The Oracle Solaris Zones P2V tool provides the flexibility to build pre-configured
    images with different software configurations for faster deployment and server consolidation.
    In this document, I demonstrated how to build and install images and how to integrate them with other Oracle Solaris features like ZFS and DTrace.

    Appendix A: archive identification section
    We can use the head -n 20 /var/tmp/solaris_10_up9.flar command in order to access the
    identification section that contains the detailed description.
    Target # head -n 20 /var/tmp/solaris_10_up9.flar
    FlAsH-aRcHiVe-2.0
    section_begin=identification
    archive_id=e4469ee97c3f30699d608b20a36011be
    files_archived_method=cpio
    creation_date=20100901160827
    creation_master=mdet5140-1
    content_name=s10-system
    creation_node=mdet5140-1
    creation_hardware_class=sun4v
    creation_platform=SUNW,T5140
    creation_processor=sparc
    creation_release=5.10
    creation_os_name=SunOS
    creation_os_version=Generic_142909-16
    files_compressed_method=none
    content_architectures=sun4v
    type=FULL
    section_end=identification
    section_begin=predeployment
    begin 755 predeployment.cpio.Z

    Appendix B: sysidcfg file section
    Target # cat sysidcfg
    system_locale=C
    timezone=US/Pacific
    terminal=xterms
    security_policy=NONE
    root_password=HsABA7Dt/0sXX
    timeserver=localhost
    name_service=NONE
    network_interface=primary {hostname=solaris10-up9-zone
    netmask=255.255.255.0
    protocol_ipv6=no
    default_route=192.168.0.1}
    name_service=NONE
    nfs4_domain=dynamic

    We need to copy this file before booting the zone
    Target # cp sysidcfg /zones/solaris10-up9-zone/root/etc/


    Wednesday Feb 15, 2012

    Oracle VM for SPARC (LDoms) Live Migration

    This document provides information on how to increase application
    availability by using the Oracle VM Server for SPARC software
    (previously called Sun Logical Domains).

    We tested an Oracle 11gR2 database on a SPARC T4 server while migrating the
    guest domain from one SPARC T4 server to another without shutting down the
    Oracle DB.

    In the testing, the Swingbench OrderEntry workload was used to
    generate load. OrderEntry is based on the oe schema that ships
    with Oracle 11g.

    The scenario included the following characteristics:
    a 30 GB database disk size with an 18 GB SGA, 50 workload
    users, and 100 ms between actions taken by each user.


    The paper shows the complete configuration process, including the
    creation and configuration of guest domains using the ldm command.
    It also covers the storage configuration and layout, plus the software requirements,
    that were used to demonstrate live migration between two T4 systems.
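
    The migration itself is a single ldm command; here is a hedged sketch with a placeholder target host name (the -n form performs a dry run that only checks whether the migration would succeed):

    primary# ldm migrate-domain -n ldg1 root@target-t4
    primary# ldm migrate-domain ldg1 root@target-t4

    While the command runs, the guest domain and the Oracle database inside it keep running, which is the behavior the paper measures with the Swingbench workload.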


    For more information see Increasing Application Availability by Using the Oracle VM Server for SPARC Live Migration Feature: An Oracle Database Example


    Wednesday Mar 23, 2011

    How Traffix Systems Optimized Its LTE Diameter Load Balancing and Routing Solutions Using Oracle Hardware and Software

    Please read the following paper "How Traffix Systems Optimized Its LTE Diameter Load Balancing and Routing Solutions Using Oracle Hardware and Software"


    This paper provides technical information on how Traffix Systems,
    the leading Diameter protocol solutions vendor, optimized its
    Long Term Evolution (LTE) Traffix Diameter Load Balancer and Traffix Diameter Router to benefit from Oracle's software and
    hardware.


    This paper also includes brief technical descriptions of how specific
    Oracle Solaris features and capabilities are implemented in the Traffix
    solutions.





    Tuesday Sep 28, 2010

    IT 2020 technology optimism

    Please read the following white paper "IT 2020: Technology Optimism: An Oracle Scenario" which I contributed to the writing.


    The major idea behind the paper is to describe a scenario of technology optimism, in which IT has solved many of the difficult issues we struggle with today and broadens everyone's horizons.



    Friday Jun 04, 2010

    Increasing Application Availability Using Oracle VM Server for SPARC (LDoms): An Oracle Database Example

    This white paper by Orgad Kimchi and Roman Ivanov of Oracle's ISV Engineering Team describes how to use the warm migration feature of Oracle VM Server for SPARC to move a running Oracle database application from one server to another without disruption. It includes explanations of the process and instructions for setting up the source and target servers.


    For the full white paper see Increasing Application Availability Using Oracle VM Server for SPARC
