Monday Jun 24, 2013

How to Set Up a MongoDB NoSQL Cluster Using Oracle Solaris Zones

This article starts with a brief overview of MongoDB and follows with an example of setting up a three-node MongoDB cluster using Oracle Solaris Zones.
The following are benefits of using Oracle Solaris for a MongoDB cluster:

• You can add new MongoDB hosts to the cluster in minutes instead of hours using the zone cloning feature. Using Oracle Solaris Zones, you can easily scale out your MongoDB cluster (see the cloning sketch after this list).
• In case of a user error or software error, the Service Management Facility ensures the high availability of each cluster member and ensures that MongoDB replication failover occurs only as a last resort.
• You can discover performance issues in minutes instead of days by using DTrace, which provides increased operating system observability. DTrace provides a holistic performance overview of the operating system and allows deep performance analysis through cooperation with the built-in MongoDB tools.
• ZFS built-in compression provides optimized disk I/O utilization for better I/O performance.
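As a rough illustration of the zone cloning mentioned above (a minimal sketch; the zone names mongo-node1 and mongo-node2 are hypothetical), a new cluster member can be created by exporting the configuration of an existing MongoDB zone and cloning it:

# zonecfg -z mongo-node1 export | sed 's/mongo-node1/mongo-node2/g' > /tmp/mongo-node2.cfg
# zonecfg -z mongo-node2 -f /tmp/mongo-node2.cfg
# zoneadm -z mongo-node2 clone mongo-node1
# zoneadm -z mongo-node2 boot

Note that the source zone must be halted before it is cloned, and zone-specific settings such as the VNIC name and IP address would need to be adjusted in the exported configuration.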
In the example presented in this article, all the MongoDB cluster building blocks will be installed using the Oracle Solaris Zones, Service Management Facility, ZFS, and network virtualization technologies.

Figure 1 shows the architecture:


Friday Jun 14, 2013

Public Cloud Security: Anti-Spoofing Protection

This past week (9-Jun-2013) Oracle ISV Engineering participated in the IGT cloud meetup, the largest cloud community in Israel with 4,000 registered members.
During the meetup, ISV Engineering delivered two presentations:

• Introduction to Oracle Cloud Infrastructure, presented by Frederic Pariente
• Use case: Cloud Security Design and Implementation, presented by me

In addition, there was a partner presentation from ECI Telecom:

• Using Oracle Solaris 11 Technologies for Building ECI R&D and Product Private Clouds, presented by Mark Markman from ECI Telecom
The Solaris 11 feature that received the most attention from the audience was the new Solaris 11 network virtualization technology.
Solaris 11 network virtualization allows us to build any physical network topology inside the Solaris operating system, including virtual network cards (VNICs), virtual switches (vSwitches), and more sophisticated network components (e.g., load balancers, routers, and firewalls).
The benefit of using this technology is reduced infrastructure cost, since there is no need to invest in superfluous network equipment. In addition, infrastructure deployment is much faster, since all the network building blocks are implemented in software rather than hardware.
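For example (a minimal sketch, with hypothetical etherstub and VNIC names), an internal virtual switch with two virtual network cards attached to it can be built entirely in software:

# dladm create-etherstub stub0
# dladm create-vnic -l stub0 vnic1
# dladm create-vnic -l stub0 vnic2
# dladm show-link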
One of the key features of this network virtualization technology is data link protection. With this capability we can give our partners the flexibility they need in a cloud environment and allow them root access inside the Solaris zone, while preventing them from launching spoofing attacks by sending outgoing packets with a different source IP or MAC address, or packets that are not of type IPv4, IPv6, or ARP.

The following example demonstrates how to enable this feature:
Create the VNIC (in a later step, we will associate this VNIC with the Solaris zone):

# dladm create-vnic -l net0 vnic0

Setup the Solaris zone:
# zonecfg -z secure-zone
Use 'create' to begin configuring a new zone:
zonecfg:secure-zone> create
create: Using system default template 'SYSdefault'
zonecfg:secure-zone> set zonepath=/zones/secure-zone
zonecfg:secure-zone> add net
zonecfg:secure-zone:net> set physical=vnic0
zonecfg:secure-zone:net> end
zonecfg:secure-zone> verify
zonecfg:secure-zone> commit
zonecfg:secure-zone> exit

Install the zone:
# zoneadm -z secure-zone install
Boot the zone:
# zoneadm -z secure-zone boot
Log In to the zone:
# zlogin -C secure-zone

NOTE - During the zone setup, select the vnic0 network interface and assign the 10.0.0.1 IP address.

From the global zone enable link protection on vnic0:

We can set different modes: ip-nospoof, dhcp-nospoof, mac-nospoof and restricted.
ip-nospoof: Any outgoing IP, ARP, or NDP packet must have an address field that matches either a DHCP-configured IP address or one of the addresses listed in the allowed-ips link property.
mac-nospoof: Prevents the root user from changing the zone's MAC address. An outbound packet's source MAC address must match the datalink's configured MAC address.
dhcp-nospoof: Prevents Client ID/DUID spoofing for DHCP.
restricted: Only allows IPv4, IPv6, and ARP protocols. Using this protection type prevents the link from generating potentially harmful L2 control frames.

# dladm set-linkprop -p protection=mac-nospoof,restricted,ip-nospoof vnic0

Specify the 10.0.0.1 IP address as values for the allowed-ips property for the vnic0 link:

# dladm set-linkprop -p allowed-ips=10.0.0.1 vnic0

Verify the link protection property values:
# dladm show-linkprop -p protection,allowed-ips vnic0

LINK   PROPERTY     PERM VALUE          DEFAULT   POSSIBLE
vnic0  protection   rw   mac-nospoof,   --        mac-nospoof,
                         restricted,              restricted,
                         ip-nospoof               ip-nospoof,
                                                  dhcp-nospoof
vnic0  allowed-ips  rw   10.0.0.1       --        --

We can see that 10.0.0.1 is set as an allowed IP address.

Log In to the zone

# zlogin secure-zone

After logging in to the zone, let's try to change the zone's IP address:

root@secure-zone:~# ifconfig vnic0 10.0.0.2
ifconfig:could not create address: Permission denied

As we can see, we can't change the zone's IP address!

Optional - disable the link protection from the global zone:

# dladm reset-linkprop -p protection,allowed-ips vnic0

NOTE - we don't need to reboot the zone in order to disable this property.

Verify the change

# dladm show-linkprop -p protection,allowed-ips vnic0

LINK   PROPERTY     PERM VALUE   DEFAULT   POSSIBLE
vnic0  protection   rw   --      --        mac-nospoof,
                                           restricted,
                                           ip-nospoof,
                                           dhcp-nospoof
vnic0  allowed-ips  rw   --      --        --

As we can see, there is no longer any restriction on the allowed-ips property.

Conclusion

In this blog entry, I demonstrated how we can leverage the Solaris 11 data link protection feature in order to prevent spoofing attacks.

Monday May 20, 2013

How To Protect Public Cloud Using Solaris 11 Technologies

When we meet with our partners, we often ask them: “What are your main security challenges for public cloud infrastructure? What worries you in this regard?”
This is what we've gathered from our partners regarding the security challenges:

1.    Protect data at rest, in transit, and in use using encryption
2.    Prevent denial of service attacks against their infrastructure
3.    Segregate network traffic between different cloud users
4.    Disable hostile code (e.g., 'rootkit' attacks)
5.    Minimize the operating system attack surface
6.    Securely delete data once the project is complete
7.    Enable strong authorization and authentication for non-secure protocols

Based on these guidelines, we began to design our Oracle Developer Cloud. Our vision was to leverage Solaris 11 technologies in order to meet those security requirements.


First - Our partners would like to encrypt everything, from the disk up through the layers to the application, without the performance overhead usually associated with this type of technology.
The SPARC T4 (and lately the SPARC T5) integrated cryptographic accelerator allows us to encrypt data using the ZFS encryption capability.
We can encrypt all the network traffic using SSL, from the client connection to the cloud main portal using the Secure Global Desktop (SGD) technology, and also encrypt the network traffic between the application tier and the database tier. In addition, we can protect our database tables using Oracle Transparent Data Encryption (TDE).
During our performance tests we saw that the performance impact was very low (less than 5%) when we enabled those encryption technologies.
The following example shows how we created an encrypted file system:

# zfs create -o encryption=on rpool/zfs_file_system

Enter passphrase for 'rpool/zfs_file_system':
Enter again:

NOTE - In the above example, we used a passphrase that is requested interactively, but we can also use SSL or a key repository.
Second - How can we mitigate denial of service attacks?
The new Solaris 11 network virtualization technology allows us to apply virtualization to our network by splitting the physical network card into multiple virtual network 'cards'. In addition, it provides the capability to set up flows, a sophisticated quality-of-service mechanism.
Flows allow us to limit the network bandwidth for a specific network port on a specific network interface.

In the following example, we limit the SSL traffic to 100 Mb/s on the vnic0 network interface:

# dladm create-vnic -l net0 vnic0
# flowadm add-flow -l vnic0 -a transport=TCP,local_port=443 https-flow
# flowadm set-flowprop -p maxbw=100M https-flow


During a denial of service (DoS) attack against this web server, we can minimize the impact on the rest of the infrastructure.
Third -  How can we isolate network traffic between different tenants of the public cloud?
The new Solaris 11 network technology allows us to segregate the network traffic at multiple layers.

For example, we can separate the network traffic at layer 2 using VLANs:

# dladm create-vnic -l net0  -v 2 vnic1

We can also implement firewall rules for layer 3 separation using the Solaris 11 built-in firewall software.
For an example of Solaris 11 firewall see
In addition to the firewall software, Solaris 11 has built-in load balancer and routing software. In a cloud-based environment this means that new functionality can be added promptly, since we don't need extra hardware in order to implement those functions.
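As a rough illustration of the layer 3 separation mentioned above (a minimal sketch; the tenant subnet addresses are hypothetical), IP Filter rules that block traffic between two tenant subnets can be placed in /etc/ipf/ipf.conf and the firewall service enabled:

# cat /etc/ipf/ipf.conf
block in quick from 10.0.1.0/24 to 10.0.2.0/24
block in quick from 10.0.2.0/24 to 10.0.1.0/24

# svcadm enable svc:/network/ipfilter:default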

Fourth - Rootkits have become a serious threat, allowing the insertion of hostile code using custom kernel modules.
The Solaris Zones technology prevents loading or unloading kernel modules (since local zones lack the sys_config privilege).
This way we can limit the attack surface and prevent this type of attack.

In the following example, we can see that even the root user is unable to load a custom kernel module inside a Solaris zone:

# ppriv -De modload -p /tmp/systrace

modload[21174]: missing privilege "ALL" (euid = 0, syscall = 152) needed at modctl+0x52
Insufficient privileges to load a module

Fifth - The Solaris immutable zones technology allows us to minimize the operating system attack surface.
For example, it disables the ability to install new IPS packages or to modify file systems like /etc.
We can set up a Solaris immutable zone using the zonecfg command:

# zonecfg -z secure-zone
Use 'create' to begin configuring a new zone.
zonecfg:secure-zone> create
create: Using system default template 'SYSdefault'
zonecfg:secure-zone> set zonepath=/zones/secure-zone
zonecfg:secure-zone> set file-mac-profile=fixed-configuration
zonecfg:secure-zone> commit
zonecfg:secure-zone> exit

# zoneadm -z secure-zone install

We can combine ZFS encryption and immutable zones; for more examples, see:

Sixth - The main challenge in building a secure Big Data solution is the lack of built-in security mechanisms for authorization and authentication.
The integrated Solaris Kerberos allows us to enable strong authorization and authentication for distributed systems that are not secure by default, such as Apache Hadoop.

The following example demonstrates how easy it is to install and set up a Kerberos infrastructure on Solaris 11:

# pkg install pkg://solaris/system/security/kerberos-5
# kdcmgr -a kws/admin -r EXAMPLE.COM create master


Finally - Our partners want to be sure that once a project is finished, all of its data is erased with no way to recover it by reading the disk blocks directly and bypassing the file system layer.
The ZFS assured delete feature allows us to implement this kind of secure deletion.
The following example shows how we can change the ZFS wrapping key to random data (the output of /dev/random), then unmount the file system, and finally destroy it:

# zfs key -c -o  keysource=raw,file:///dev/random rpool/zfs_file_system
# zfs key -u rpool/zfs_file_system
# zfs destroy rpool/zfs_file_system


Conclusion
In this blog entry, I covered how we can leverage the SPARC T4/T5 and Solaris 11 features in order to build a secure cloud infrastructure. Those technologies allow us to build highly protected environments without the need to invest extra budget in special hardware. They also allow us to protect our data and network traffic from various threats.
If you would like to hear more about those technologies, please join us at the next IGT cloud meetup.

Wednesday Feb 06, 2013

Building your developer cloud using Solaris 11

Solaris 11 combines all the good stuff that Solaris 10 has in terms of enterprise scalability, performance, and security with the new cloud technologies.

During the last year I had the opportunity to work on one of the most challenging engineering projects at Oracle: building a developer cloud for our OPN Gold members in order to qualify their applications on Solaris 11.

This cloud platform provides an intuitive user interface for VM provisioning by selecting VM images pre-installed with Oracle Database 11g Release 2 or Oracle WebLogic Server 12c, in addition to simple file upload and file download mechanisms.

We used the following Solaris 11 cloud technologies to build this platform:

• Oracle Solaris 11 Network Virtualization
• Oracle Solaris Zones
• ZFS encryption and cloning capability (see the cloning sketch after this list)
• NFS server inside a Solaris 11 zone
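As a rough illustration of the cloning capability above (a minimal sketch; the pool, dataset, and snapshot names are hypothetical), a new VM image can be provisioned almost instantly by cloning a snapshot of a golden-image dataset:

# zfs snapshot rpool/vm-images/golden@base
# zfs clone rpool/vm-images/golden@base rpool/vm-images/vm01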


You can find the technology building blocks slide deck here.


For more information about this unique cloud offering see .

Thursday Jan 24, 2013

How to Set Up a Hadoop Cluster Using Oracle Solaris Zones

This article starts with a brief overview of Hadoop and follows with an
example of setting up a Hadoop cluster with a NameNode, a secondary
NameNode, and three DataNodes using Oracle Solaris Zones.

The following are benefits of using Oracle Solaris Zones for a Hadoop cluster:

· Fast provisioning of new cluster members using the zone cloning feature

· Very high network throughput between the zones for data node replication

· Optimized disk I/O utilization for better I/O performance with ZFS built-in compression

· Secure data at rest using ZFS encryption






Hadoop uses the Hadoop Distributed File System (HDFS) to store data.
HDFS provides high-throughput access to application data and is suitable
for applications that have large data sets.

The Hadoop cluster building blocks are as follows:

· NameNode: The centerpiece of HDFS, which stores file system metadata,
directs the slave DataNode daemons to perform the low-level I/O tasks,
and also runs the JobTracker process.

· Secondary NameNode: Performs internal checks of the NameNode transaction log.

· DataNodes: Nodes that store the data in the HDFS file system, which are also known as slaves and run the TaskTracker process.

In the example presented in this article, all the Hadoop cluster
building blocks will be installed using the Oracle Solaris Zones, ZFS,
and network virtualization technologies.
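As a rough illustration of the ZFS-related benefits listed above (a minimal sketch; the pool and dataset names are hypothetical), a compressed and encrypted dataset could be created to hold a DataNode's HDFS blocks:

# zfs create -o compression=on -o encryption=on rpool/hdfs-data1
Enter passphrase for 'rpool/hdfs-data1':
Enter again: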





Monday Jun 04, 2012

Oracle Solaris Zones Physical to virtual (P2V)

Introduction
This document describes the process of creating a Solaris 10 image from a physical system and installing it into a virtualized operating system environment using the Oracle Solaris 10 Zones Physical-to-Virtual (P2V) capability.
Using an example and various scenarios, this paper describes how to take advantage of the
Oracle Solaris 10 Zones Physical-to-Virtual (P2V) capability together with other Oracle Solaris features: optimizing performance with Solaris 10 resource management, using advanced storage management with Solaris ZFS, and improving operating system observability with Solaris DTrace.


The most common use for this tool is consolidating existing systems onto virtualization-enabled platforms. In addition, we can use the Physical-to-Virtual (P2V) capability for other tasks, for example, backing up physical systems and moving them into a virtualized operating system environment hosted on a Disaster Recovery (DR) site, or building an Oracle Solaris 10 image repository with various configurations and different software packages in order to reduce provisioning time.

Oracle Solaris Zones
Oracle Solaris Zones is a virtualization and partitioning technology supported on Oracle Sun servers powered by SPARC and Intel processors.
This technology provides an isolated and secure environment for running applications.
A zone is a virtualized operating system environment created within a single instance of the Solaris 10 Operating System.
Each virtual system is called a zone and runs a unique and distinct copy of the Solaris 10 operating system.

Oracle Solaris Zones Physical-to-Virtual (P2V)
Physical-to-Virtual (P2V) is a new feature in Solaris 10 9/10. It provides the ability to build a Solaris 10 image from a physical
system and migrate it into a virtualized operating system environment.
There are three main steps when using this tool:

1. Image creation on the source system; this image includes the operating system and, optionally, the software that we want to include within the image.
2. Preparing the target system by configuring a new zone that will host the new image.
3. Image installation on the target system using the image we created on step 1.

The host, where the image is built, is referred to as the source system and the host, where the
image is installed, is referred to as the target system.



Benefits of Oracle Solaris Zones Physical-to-Virtual (P2V)
Here are some benefits of this new feature:


  •  Simple - an easy build process using Oracle Solaris 10 built-in commands.

  •  Robust - based on Oracle Solaris Zones, a robust and well-known virtualization technology.

  •  Flexible - supports migration from V-Series servers to T-Series or M-Series systems. For the latest server information, refer to the Sun Servers web page.

    Prerequisites
    The minimum Solaris version on the target system should be Solaris 10 9/10.
    Refer to the latest Administration Guide for Oracle Solaris 10 for a complete procedure on how to
    download and install Oracle Solaris.



  • NOTE: If the source system used to build the image runs an older release than the target
    system, then during the process the operating system will be upgraded to Solaris 10 9/10
    (update on attach).
    Creating the Image Used to Distribute the Software
    We will create an image on the source machine. We can create the image on the local file system and then transfer it to the target machine, or build it on NFS shared storage and mount the NFS file system from the target machine.
    Optionally, before creating the image, we should complete the installation of the software that we want to include with the Solaris 10 image.
    An image is created by using the flarcreate command:
    Source # flarcreate -S -n s10-system -L cpio /var/tmp/solaris_10_up9.flar
    The command does the following:



  •  -S specifies that we skip the disk space check and do not write archive size data to the archive (faster).

  •  -n specifies the image name.

  •  -L specifies the archive format (i.e., cpio).

    Optionally, we can add descriptions to the archive identification section, which can help to identify the archive later.
    Source # flarcreate -S -n s10-system -e "Oracle Solaris with Oracle DB
    10.2.0.4" -a "oracle" -L cpio /var/tmp/solaris_10_up9.flar
    You can see example of the archive identification section in Appendix A: archive identification section.
    We can compress the flar image using the gzip command, or by adding the -c option to the flarcreate command:
    Source # gzip /var/tmp/solaris_10_up9.flar
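    As an alternative (a minimal sketch based on the -c option mentioned above), the archive can be compressed at creation time:
    Source # flarcreate -S -n s10-system -c -L cpio /var/tmp/solaris_10_up9.flar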
    An MD5 checksum can be created for the image in order to detect data tampering:
    Source # digest -v -a md5 /var/tmp/solaris_10_up9.flar


    Moving the image to the target system
    If we created the image on the local file system, we need to transfer the flar archive from the source machine to the target machine.

    Source # scp /var/tmp/solaris_10_up9.flar target:/var/tmp
    Configuring the Zone on the target system
    After copying the software to the target machine, we need to configure a new zone in order to host the new image.
    To install the new zone on the target machine, first we need to configure the zone (for the full zone creation options, see the following link: http://docs.oracle.com/cd/E18752_01/html/817-1592/index.html).

    ZFS integration
    A flash archive can be created on a system that is running a UFS or a ZFS root file system.
    NOTE: If you create a Solaris Flash archive of a Solaris 10 system that has a ZFS root, then by
    default, the flar will actually be a ZFS send stream, which can be used to recreate the root pool.
    This image cannot be used to install a zone. You must create the flar with an explicit cpio or pax
    archive when the system has a ZFS root.
    Use the flarcreate command with the -L archiver option, specifying cpio or pax as the
    method to archive the files. (For example, see Step 1 in the previous section).
    Optionally, on the target system you can create the zone root folder on a ZFS file system in
    order to benefit from the ZFS features (clones, snapshots, etc...).

    Target # zpool create zones c2t2d0


    Create the zone root folder:

    Target # chmod 700 /zones
    Target # zonecfg -z solaris10-up9-zone
    solaris10-up9-zone: No such zone configured
    Use 'create' to begin configuring a new zone.
    zonecfg:solaris10-up9-zone> create
    zonecfg:solaris10-up9-zone> set zonepath=/zones
    zonecfg:solaris10-up9-zone> set autoboot=true
    zonecfg:solaris10-up9-zone> add net
    zonecfg:solaris10-up9-zone:net> set address=192.168.0.1
    zonecfg:solaris10-up9-zone:net> set physical=nxge0
    zonecfg:solaris10-up9-zone:net> end
    zonecfg:solaris10-up9-zone> verify
    zonecfg:solaris10-up9-zone> commit
    zonecfg:solaris10-up9-zone> exit

    Installing the Zone on the target system using the image
    Install the configured zone solaris10-up9-zone by using the zoneadm command with the
    install -a option and the path to the archive.
    The following example shows how to install the image and sys-unconfig the zone:
    Target # zoneadm -z solaris10-up9-zone install -u -a
    /var/tmp/solaris_10_up9.flar
    Log File: /var/tmp/solaris10-up9-zone.install_log.AJaGve
    Installing: This may take several minutes...
    The following example shows how we can preserve the system identity:
    Target # zoneadm -z solaris10-up9-zone install -p -a /var/tmp/solaris_10_up9.flar


    Resource management


    Some applications are sensitive to the number of CPUs on the target Zone. You need to
    match the number of CPUs on the Zone using the zonecfg command:
    zonecfg:solaris10-up9-zone> add dedicated-cpu
    zonecfg:solaris10-up9-zone:dedicated-cpu> set ncpus=16
    zonecfg:solaris10-up9-zone:dedicated-cpu> end


    DTrace integration
    Some applications might need to be analyzed using DTrace on the target zone. You can
    add DTrace support to the zone using the zonecfg command:
    zonecfg:solaris10-up9-zone> set limitpriv="default,dtrace_proc,dtrace_user"


    Exclusive IP stack

    An Oracle Solaris Container running in Oracle Solaris 10 can have a
    shared IP stack with the global zone, or it can have an exclusive IP
    stack (which was released in Oracle Solaris 10 8/07). An exclusive IP
    stack provides a complete, tunable, manageable and independent
    networking stack to each zone. A zone with an exclusive IP stack can
    configure Scalable TCP (STCP), IP routing, IP multipathing, or IPsec.
    For an example of how to configure an Oracle Solaris zone with an
    exclusive IP stack, see the following:

    zonecfg:solaris10-up9-zone> set ip-type=exclusive
    zonecfg:solaris10-up9-zone> add net
    zonecfg:solaris10-up9-zone:net> set physical=nxge0
    zonecfg:solaris10-up9-zone:net> end

    When the installation completes, use the zoneadm list -i -v options to list the installed
    zones and verify the status.
    Target # zoneadm list -i -v
    See that the new zone's status is installed:
    ID  NAME                 STATUS     PATH    BRAND   IP
     0  global               running    /       native  shared
     -  solaris10-up9-zone   installed  /zones  native  shared
    Now boot the Zone
    Target # zoneadm -z solaris10-up9-zone boot
    We need to log in to the zone in order to complete the zone setup, or insert a sysidcfg file before
    booting the zone for the first time; see the example sysidcfg file in Appendix B: sysidcfg file
    section.
    Target # zlogin -C solaris10-up9-zone

    Troubleshooting
    If an installation fails, review the log file. On success, the log file is in /var/log inside the zone. On
    failure, the log file is in /var/tmp in the global zone.
    If a zone installation is interrupted or fails, the zone is left in the incomplete state. Use uninstall -F
    to reset the zone to the configured state.
    Target # zoneadm -z solaris10-up9-zone uninstall -F
    Target # zonecfg -z solaris10-up9-zone delete -F
    Conclusion
    The Oracle Solaris Zones P2V tool provides the flexibility to build pre-configured
    images with different software configurations for faster deployment and server consolidation.
    In this document, I demonstrated how to build and install images and how to integrate them with other Oracle Solaris features like ZFS and DTrace.

    Appendix A: archive identification section
    We can use the head -n 20 /var/tmp/solaris_10_up9.flar command in order to access the
    identification section that contains the detailed description.
    Target # head -n 20 /var/tmp/solaris_10_up9.flar
    FlAsH-aRcHiVe-2.0
    section_begin=identification
    archive_id=e4469ee97c3f30699d608b20a36011be
    files_archived_method=cpio
    creation_date=20100901160827
    creation_master=mdet5140-1
    content_name=s10-system
    creation_node=mdet5140-1
    creation_hardware_class=sun4v
    creation_platform=SUNW,T5140
    creation_processor=sparc
    creation_release=5.10
    creation_os_name=SunOS
    creation_os_version=Generic_142909-16
    files_compressed_method=none
    content_architectures=sun4v
    type=FULL
    section_end=identification
    section_begin=predeployment
    begin 755 predeployment.cpio.Z

    Appendix B: sysidcfg file section
    Target # cat sysidcfg
    system_locale=C
    timezone=US/Pacific
    terminal=xterms
    security_policy=NONE
    root_password=HsABA7Dt/0sXX
    timeserver=localhost
    name_service=NONE
    network_interface=primary {hostname= solaris10-up9-zone
    netmask=255.255.255.0
    protocol_ipv6=no
    default_route=192.168.0.1}
    name_service=NONE
    nfs4_domain=dynamic

    We need to copy this file before booting the zone
    Target # cp sysidcfg /zones/solaris10-up9-zone/root/etc/


    Wednesday Nov 25, 2009

    Solaris Zones migration with ZFS

    ABSTRACT
    In this entry I will demonstrate how to migrate a Solaris Zone running
    on a T5220 server to a new T5220 server, using ZFS as the file system for
    this zone.
    Introduction to Solaris Zones

    Solaris Zones provide a new isolation primitive for the Solaris OS,
    which is secure, flexible, scalable, and lightweight. Virtualized OS
    services look like different Solaris instances. Together with the
    existing Solaris Resource Management framework, Solaris Zones form the
    basis of Solaris Containers.

    Introduction to ZFS


    ZFS is a new kind of file system that provides simple administration,
    transactional semantics, end-to-end data integrity, and immense
    scalability.
    Architecture layout :





    Prerequisites :

    The Global Zone on the target system must be running the same Solaris release as the original host.

    To ensure that the zone will run properly, the target system must have
    the same versions of the following required operating system packages
    and patches as those installed on the original host.


    Packages that deliver files under an inherit-pkg-dir resource
    Packages where SUNW_PKG_ALLZONES=true
    Other packages and patches, such as those for third-party products, can be different.

    Note for Solaris 10 10/08: If the new host has later versions of the
    zone-dependent packages and their associated patches, using zoneadm
    attach with the -u option updates those packages within the zone to
    match the new host. The update on attach software looks at the zone
    that is being migrated and determines which packages must be updated to
    match the new host. Only those packages are updated. The rest of the
    packages, and their associated patches, can vary from zone to zone.
    This option also enables automatic migration between machine classes,
    such as from sun4u to sun4v.


    Create the ZFS pool for the zone
    # zpool create zones c2t5d2
    # zpool list

    NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
    zones   298G    94K   298G     0%  ONLINE  -

    Create a ZFS file system for the zone
    # zfs create zones/zone1
    # zfs list

    NAME          USED  AVAIL  REFER  MOUNTPOINT
    zones         130K   293G    18K  /zones
    zones/zone1    18K   293G    18K  /zones/zone1

    Change the file system permission
    # chmod 700 /zones/zone1

    Configure the zone
    # zonecfg -z zone1

    zone1: No such zone configured
    Use 'create' to begin configuring a new zone.

    zonecfg:zone1> create -b
    zonecfg:zone1> set autoboot=true
    zonecfg:zone1> set zonepath=/zones/zone1
    zonecfg:zone1> add net
    zonecfg:zone1:net> set address=192.168.1.1
    zonecfg:zone1:net> set physical=e1000g0
    zonecfg:zone1:net> end
    zonecfg:zone1> verify
    zonecfg:zone1> commit
    zonecfg:zone1> exit
    Install the new Zone
    # zoneadm -z zone1 install

    Boot the new zone
    # zoneadm -z zone1 boot

    Login to the zone
    # zlogin -C zone1

    Answer all the setup questions

    How to Validate a Zone Migration Before the Migration Is Performed

    Generate the manifest on a source host named zone1 and pipe the output
    to a remote command that will immediately validate the target host:
    # zoneadm -z zone1 detach -n | ssh targethost zoneadm -z zone1 attach -n -

    Start the migration process

    Halt the zone to be moved, zone1 in this procedure.
    # zoneadm -z zone1 halt

    Create a snapshot of this zone in order to save its original state:
    # zfs snapshot zones/zone1@snap
    # zfs list

    NAME               USED  AVAIL  REFER  MOUNTPOINT
    zones             4.13G   289G    19K  /zones
    zones/zone1       4.13G   289G  4.13G  /zones/zone1
    zones/zone1@snap      0      -  4.13G  -
    Detach the zone.
    # zoneadm -z zone1 detach

    Export the ZFS pool using the zpool export command:
    # zpool export zones


    On the target machine
     Connect the storage to the machine and then import the ZFS pool on the target machine
    # zpool import zones
    # zpool list

    NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
    zones   298G  4.13G   294G     1%  ONLINE  -
    # zfs list

    NAME               USED  AVAIL  REFER  MOUNTPOINT
    zones             4.13G   289G    19K  /zones
    zones/zone1       4.13G   289G  4.13G  /zones/zone1
    zones/zone1@snap  2.94M      -  4.13G  -

    On the new host, configure the zone.
    # zonecfg -z zone1

    You will see the following system message:

    zone1: No such zone configured

    Use 'create' to begin configuring a new zone.

    To create the zone zone1 on the new host, use the zonecfg command with the -a option and the zonepath on the new host.

    zonecfg:zone1> create -a /zones/zone1
    Commit the configuration and exit.
    zonecfg:zone1> commit
    zonecfg:zone1> exit
    Attach the zone with a validation check.
    # zoneadm -z zone1 attach

    The system administrator is notified of required actions to be taken if either or both of the following conditions are present:

    Required packages and patches are not present on the new machine.

    The software levels are different between machines.

    Note for Solaris 10 10/08: Attach the zone with a validation check and
    update the zone to match a host running later versions of the dependent
    packages or having a different machine class upon attach.
    # zoneadm -z zone1 attach -u

    Solaris 10 5/09 and later: Also use the -b option to back out specified patches, either official or IDR, during the attach.
    # zoneadm -z zone1 attach -u -b IDR246802-01 -b 123456-08

    Note that you can use the -b option independently of the -u option.

    Boot the zone
    # zoneadm -z zone1 boot

    Login to the new zone
    # zlogin -C zone1

    [Connected to zone 'zone1' console]

    Hostname: zone1

    The whole process took approximately five minutes.

    For more information about Solaris ZFS and Zones
