Thursday Aug 22, 2013

Hadoop Cluster with Oracle Solaris Hands-on Lab at Oracle OpenWorld 2013

If you want to learn how to build a Hadoop cluster using Solaris 11 technologies, please join me at the following Oracle OpenWorld 2013 lab.
How to Set Up a Hadoop Cluster with Oracle Solaris [HOL10182]



In this hands-on lab we will present and demonstrate, using exercises, how to set up a Hadoop cluster using Oracle Solaris 11 technologies such as Zones, ZFS, DTrace, and network virtualization.
Key topics include the Hadoop Distributed File System (HDFS) and MapReduce.
We will also cover the Hadoop installation process and the cluster building blocks: a NameNode, a secondary NameNode, and DataNodes.
In addition, we will show how these Oracle Solaris 11 technologies can be combined for better scalability and data security.
During the lab, users will learn how to load data into the Hadoop cluster and run a MapReduce job.
This hands-on training lab is for system administrators and others responsible for managing Apache Hadoop clusters in production or development environments.

This Lab will cover the following topics:

    1. How to install Hadoop

    2. Edit the Hadoop configuration files

    3. Configure the Network Time Protocol

    4. Create the Virtual Network Interfaces

    5. Create the NameNode and the Secondary NameNode Zones

    6. Configure the NameNode

    7. Set Up SSH between the Hadoop cluster members

    8. Format the HDFS File System

    9. Start the Hadoop Cluster

   10. Run a MapReduce Job

   11. How to secure data at rest using ZFS encryption (see the sketch after this list)

   12. Performance monitoring using Solaris DTrace
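
As a quick illustration of topic 11, here is a minimal sketch of protecting data at rest with ZFS encryption on Oracle Solaris 11 (the dataset name is hypothetical and not part of the lab material). Encryption must be enabled when the dataset is created, and by default you are prompted for a passphrase:

# zfs create -o encryption=on rpool/export/hdfs-data
# zfs get encryption rpool/export/hdfs-data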

Register Now


Monday Jun 24, 2013

How to Set Up a MongoDB NoSQL Cluster Using Oracle Solaris Zones

This article starts with a brief overview of MongoDB and follows with an example of setting up a three-node MongoDB cluster using Oracle Solaris Zones.
The following are benefits of using Oracle Solaris for a MongoDB cluster:

• You can add new MongoDB hosts to the cluster in minutes instead of hours by using the zone cloning feature. Using Oracle Solaris Zones, you can easily scale out your MongoDB cluster.
• In case of a user error or software error, the Service Management Facility ensures the high availability of each cluster member and ensures that MongoDB replication failover will occur only as a last resort.
• You can discover performance issues in minutes versus days by using DTrace, which provides increased operating system observability. DTrace provides a holistic performance overview of the operating system and allows deep performance analysis through cooperation with the built-in MongoDB tools.
• ZFS built-in compression provides optimized disk I/O utilization for better I/O performance.
In the example presented in this article, all the MongoDB cluster building blocks will be installed using the Oracle Solaris Zones, Service Management Facility, ZFS, and network virtualization technologies.
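
As a rough sketch of how zone cloning speeds up provisioning (the zone and VNIC names below are hypothetical, not the ones used in the article), adding a new cluster member could look like this:

# dladm create-vnic -l net0 mongo_node2                    # virtual NIC for the new cluster member
# zoneadm -z mongo-node1 halt                              # the source zone must be halted before cloning
# zonecfg -z mongo-node1 export -f /tmp/mongo-node2.cfg    # export the source configuration, then edit the zonepath and network settings
# zonecfg -z mongo-node2 -f /tmp/mongo-node2.cfg
# zoneadm -z mongo-node2 clone mongo-node1                 # the clone is ready to boot in minutes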

Figure 1 shows the architecture:


Wednesday Feb 06, 2013

Building your developer cloud using Solaris 11

Solaris 11 combines all the good stuff that Solaris 10 has in terms
of enterprise scalability, performance, and security with the new cloud
technologies.

During the last year I had the opportunity to work on one of the
most challenging engineering projects at Oracle:
building a developer cloud for our OPN Gold members in order to qualify their applications on Solaris 11.

This cloud platform provides an intuitive user interface for VM provisioning by selecting VM images pre-installed with Oracle Database 11g Release 2 or Oracle WebLogic Server 12c, in addition to simple file upload and download mechanisms.

We used the following Solaris 11 cloud technologies to build this platform (a short sketch follows the list):


Oracle Solaris 11 Network Virtualization
Oracle Solaris Zones
ZFS encryption and cloning capability
NFS server inside Solaris 11 zone
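
As an example, here is a minimal sketch of how ZFS encryption and cloning can back a VM image store (the dataset and snapshot names are hypothetical, not the ones used in the actual platform):

# zfs create -o encryption=on rpool/vmimages                    # encrypted parent dataset; child datasets inherit encryption
# zfs create rpool/vmimages/weblogic12c-gold                    # golden image dataset holding the pre-installed VM image
# zfs snapshot rpool/vmimages/weblogic12c-gold@ready
# zfs clone rpool/vmimages/weblogic12c-gold@ready rpool/vmimages/tenant1-vm   # near-instant, space-efficient copy for a new VM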


You can find the technology building blocks slide deck here.


For more information about this unique cloud offering see .

Thursday Jan 24, 2013

How to Set Up a Hadoop Cluster Using Oracle Solaris Zones

This article starts with a brief overview of Hadoop and follows with an
example of setting up a Hadoop cluster with a NameNode, a secondary
NameNode, and three DataNodes using Oracle Solaris Zones.

The following are benefits of using Oracle Solaris Zones for a Hadoop cluster:

· Fast provisioning of new cluster members using the zone cloning feature

· Very high network throughput between the zones for data node replication

· Optimized disk I/O utilization for better I/O performance with ZFS built-in compression

· Secure data at rest using ZFS encryption






Hadoop uses the Hadoop Distributed File System (HDFS) to store data.
HDFS provides high-throughput access to application data and is suitable
for applications that have large data sets.
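
As a brief illustration (standard Apache Hadoop 1.x commands; the paths and the examples JAR version are assumptions, not taken from the article), loading data into HDFS and running a MapReduce job looks roughly like this:

# hadoop fs -mkdir /user/hadoop/input
# hadoop fs -put /var/tmp/sample.txt /user/hadoop/input
# hadoop jar hadoop-examples-1.2.1.jar wordcount /user/hadoop/input /user/hadoop/output
# hadoop fs -ls /user/hadoop/output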

The Hadoop cluster building blocks are as follows:

· NameNode: The centerpiece of HDFS, which stores file system metadata,
directs the slave DataNode daemons to perform the low-level I/O tasks,
and also runs the JobTracker process.

· Secondary NameNode: Performs internal checks of the NameNode transaction log.

· DataNodes: Nodes that store the data in the HDFS file system, which are also known as slaves and run the TaskTracker process.

In the example presented in this article, all the Hadoop cluster
building blocks will be installed using the Oracle Solaris Zones, ZFS,
and network virtualization technologies.
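
For instance, a minimal sketch of preparing one DataNode on Oracle Solaris 11 (the VNIC, dataset, and zone names are hypothetical, not the ones used in the article) could look like this:

# dladm create-vnic -l net0 data_node1                 # virtual NIC dedicated to the DataNode zone
# zfs create -o compression=on rpool/zones             # compressed parent dataset for the zone roots
# zfs create -o encryption=on rpool/zones/hdfs-data    # encrypted dataset for HDFS data at rest (prompts for a passphrase)
# zonecfg -z data-node1
zonecfg:data-node1> create
zonecfg:data-node1> set zonepath=/rpool/zones/data-node1
zonecfg:data-node1> add net
zonecfg:data-node1:net> set physical=data_node1
zonecfg:data-node1:net> end
zonecfg:data-node1> commit
zonecfg:data-node1> exit
# zoneadm -z data-node1 install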





Wednesday Feb 15, 2012

Oracle VM for SPARC (LDoms) Live Migration

This document provides information about how to increase application
availability by using the Oracle VM Server for SPARC software
(previously called Sun Logical Domains).

We tested an Oracle 11g Release 2 database on a SPARC T4 while migrating the
guest domain from one SPARC T4 server to another without shutting down the
Oracle database.

In the testing, the Swingbench OrderEntry workload was used to
generate load; OrderEntry is based on the oe schema that ships
with Oracle 11g.

The scenario had the following characteristics:
a 30 GB database with an 18 GB SGA, 50 workload
users, and 100 ms between actions taken by each
user.


The paper shows the complete configuration process, including the
creation and configuration of guest domains using the ldm command,
as well as the storage configuration and layout and the software requirements that
were used to demonstrate live migration between two T4 systems.
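
As a rough illustration only (the guest domain and target host names are hypothetical, and these are not the exact commands from the paper), a live migration driven by the ldm command looks roughly like this:

# ldm list-domain ldg1                        # on the source: verify the guest domain is active
# ldm migrate-domain ldg1 root@target-t4      # migrate the running guest to the target server
Target Password:
# ldm list-domain ldg1                        # on the target: the guest domain now runs there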


For more information see Increasing Application Availability by Using the Oracle VM Server for SPARC Live Migration Feature: An Oracle Database Example


Tuesday Dec 15, 2009

VirtualBox Teleporting


ABSTRACT

In this entry, I will demonstrate how to use the migration feature
(a.k.a. teleporting) that is new in VirtualBox version 3.1 to move a
virtual machine over a network from one VirtualBox host to another
while the virtual machine is running.


Introduction to VirtualBox


VirtualBox
is a general-purpose full virtualizer for x86 hardware.
Targeted at server, desktop and embedded use, it is now the only
professional-quality virtualization solution that is also Open Source
Software.

Introduction to Teleporting


Teleporting requires that a machine be currently running on one host,
which is then called the "source". The host to which the virtual
machine will be teleported will then be called the "target". The
machine on the target is then configured to wait for the source to
contact the target. The machine's running state will then be
transferred from the source to the target with minimal downtime.

This works regardless of the host operating system that is running on
the hosts: you can teleport virtual machines between Solaris and Mac
hosts, for example.


Architecture layout:




Prerequisites:

1. The target and source machines should both be running VirtualBox version 3.1 or later.

2. The target machine must be configured with the same amount of memory
(machine and video memory) and other hardware settings as the source
machine. Otherwise, teleporting will fail with an error message.

3. The virtual machines on the source and the target must share the same storage:
they either use the same iSCSI targets, or both hosts have access to the same storage via NFS or SMB/CIFS.

4. The source and target machines cannot have any snapshots.

5. The hosts must have fairly similar CPUs. Teleporting between Intel and AMD CPUs will probably fail with an error message.

Preparing the storage environment


For this example, I will use OpenSolaris x86 as a CIFS server in order to
enable shared storage for the source and target machines, but you can
use any iSCSI, NFS, or CIFS server for this task.

Install the packages from the OpenSolaris.org repository:

# pfexec pkg install SUNWsmbs SUNWsmbskr


Reboot the system to activate the SMB server in the kernel.

# pfexec reboot


Enable  the CIFS service:

# pfexec svcadm enable -r smb/server


If the following warning is issued, you can ignore it:

svcadm: svc:/milestone/network depends on svc:/network/physical, which has multiple instances

Verify the service:

# pfexec svcs smb/server


STATE          STIME    FMRI

online          8:38:22 svc:/network/smb/server:default

The Solaris CIFS SMB service uses WORKGROUP as the default group. If
the workgroup needs to be changed, use the following command to change
the workgroup name:

# pfexec smbadm join -w workgroup-name


Next, edit the /etc/pam.conf file to enable encrypted passwords to be used for CIFS.

Add the following line to the end of the file:

other password required pam_smb_passwd.so.1     nowarn

# pfexec  echo "other password required pam_smb_passwd.so.1     nowarn" >> /etc/pam.conf


Each user currently in the /etc/passwd file needs to re-encrypt his or her password to be able to use the CIFS service:
# pfexec passwd user-name

Note -
After the PAM module is installed, the passwd command
automatically generates CIFS-suitable passwords for new users. You must
also run the passwd command to generate CIFS-style passwords for
existing users.

Create a mixed-case ZFS file system.

# pfexec zfs create -o casesensitivity=mixed  rpool/vboxstorage


Enable SMB sharing for the ZFS file system.

# pfexec  zfs set sharesmb=on rpool/vboxstorage


Verify how the file system is shared.

# pfexec sharemgr show -vp


Now, you can access the share by connecting to \\solaris-hostname\share-name


Create a new virtual machine. For the virtual hard disk, select “Create new hard disk” and then click Next.



Click the Next button.



For the disk location, enter the network drive that you mapped in the previous section, and then click the Next button.



Verify the disk settings and then click the Finish button.



Continue with the installation process. After finishing the installation, shut down the virtual machine in order to avoid any storage locking.


On the target machine :


Map the same network drive


Configure a new virtual machine, but instead of selecting “Create new hard disk”, select “Use existing hard disk”.




In the Virtual Media Manager window, click the Add button and point to the same
location as the source machine's hard drive (the network drive).




Don't start the virtual machine yet.

To make it wait for a teleport request to arrive when it is started, use the following VBoxManage command:

VBoxManage modifyvm <targetvmname> --teleporter on --teleporterport <port>


where <targetvmname> is the name of the virtual machine on the
target (in this use case, opensolaris), and <port> is a TCP/IP
port number to be used on both the source and the target. In this example, I used port 6000.

C:\Program Files\Sun\VirtualBox>VBoxManage modifyvm opensolaris --teleporter on --teleporterport 6000


Next, start the VM on the target. You will see that instead of actually
running, it will show a progress dialog indicating that it is waiting
for a teleport request to arrive.



You can see that the machine status changed to Teleporting 



On the source machine: 

Start the Virtual machine

When it is running and you want it to be teleported, issue the following command:

VBoxManage controlvm <sourcevmname> teleport --host <targethost> --port <port>

where <sourcevmname> is the name of the virtual machine on the
source (the machine that is currently running), <targethost> is
the hostname or IP address of the target that has the machine waiting
for the teleport request, and <port> must be the same number as
specified in the command on the target (in this example, 6000).

C:\Program Files\Sun\VirtualBox>VBoxManage controlvm opensolaris teleport --host target_machine_ip --port 6000
VirtualBox Command Line Management Interface Version 3.1.0
(C) 2005-2009 Sun Microsystems, Inc.
All rights reserved.
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
You can see that the machine status changed to Teleported



The teleporting process took ~5 seconds

For more information about VirtualBox

Wednesday Nov 25, 2009

Solaris Zones migration with ZFS

ABSTRACT
In this entry I will demonstrate how to migrate a Solaris Zone running
on a T5220 server to a new T5220 server, using ZFS as the file system for
this zone.
Introduction to Solaris Zones

Solaris Zones provide a new isolation primitive for the Solaris OS,
which is secure, flexible, scalable, and lightweight. Virtualized OS
services look like different Solaris instances. Together with the
existing Solaris resource management framework, Solaris Zones form the
basis of Solaris Containers.

Introduction to ZFS


ZFS is a new kind of file system that provides simple administration,
transactional semantics, end-to-end data integrity, and immense
scalability.
Architecture layout:





Prerequisites:

The Global Zone on the target system must be running the same Solaris release as the original host.

To ensure that the zone will run properly, the target system must have
the same versions of the following required operating system packages
and patches as those installed on the original host.


Packages that deliver files under an inherit-pkg-dir resource
Packages where SUNW_PKG_ALLZONES=true
Other packages and patches, such as those for third-party products, can be different.

Note for Solaris 10 10/08: If the new host has later versions of the
zone-dependent packages and their associated patches, using zoneadm
attach with the -u option updates those packages within the zone to
match the new host. The update on attach software looks at the zone
that is being migrated and determines which packages must be updated to
match the new host. Only those packages are updated. The rest of the
packages, and their associated patches, can vary from zone to zone.
This option also enables automatic migration between machine classes,
such as from sun4u to sun4v.


Create the ZFS pool for the zone
# zpool create zones c2t5d2
# zpool list

NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
zones   298G    94K   298G     0%  ONLINE  -

Create a ZFS file system for the zone
# zfs create zones/zone1
# zfs list

NAME          USED  AVAIL  REFER  MOUNTPOINT
zones         130K   293G    18K  /zones
zones/zone1    18K   293G    18K  /zones/zone1

Change the file system permissions
# chmod 700 /zones/zone1

Configure the zone
# zonecfg -z zone1

zone1: No such zone configured
Use 'create' to begin configuring a new zone.

zonecfg:zone1> create -b
zonecfg:zone1> set autoboot=true
zonecfg:zone1> set zonepath=/zones/zone1
zonecfg:zone1> add net
zonecfg:zone1:net> set address=192.168.1.1
zonecfg:zone1:net> set physical=e1000g0
zonecfg:zone1:net> end
zonecfg:zone1> verify
zonecfg:zone1> commit
zonecfg:zone1> exit
Install the new Zone
# zoneadm -z zone1 install

Boot the new zone
# zoneadm -z zone1 boot

Login to the zone
# zlogin -C zone1

Answer all the setup questions

How to Validate a Zone Migration Before the Migration Is Performed

Generate the manifest on a source host named zone1 and pipe the output
to a remote command that will immediately validate the target host:
# zoneadm -z zone1 detach -n | ssh targethost zoneadm -z zone1 attach -n -

Start the migration process

Halt the zone to be moved, zone1 in this procedure.
# zoneadm -z zone1 halt

Create a snapshot of this zone in order to save its original state
# zfs snapshot zones/zone1@snap
# zfs list

NAME               USED  AVAIL  REFER  MOUNTPOINT
zones             4.13G   289G    19K  /zones
zones/zone1       4.13G   289G  4.13G  /zones/zone1
zones/zone1@snap      0      -  4.13G  -
Detach the zone.
# zoneadm -z zone1 detach

Export the ZFS pool using the zpool export command
# zpool export zones


On the target machine
 Connect the storage to the machine and then import the ZFS pool on the target machine
# zpool import zones
# zpool list

NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
zones   298G  4.13G   294G     1%  ONLINE  -
# zfs list

NAME               USED  AVAIL  REFER  MOUNTPOINT
zones             4.13G   289G    19K  /zones
zones/zone1       4.13G   289G  4.13G  /zones/zone1
zones/zone1@snap  2.94M      -  4.13G  -

On the new host, configure the zone.
# zonecfg -z zone1

You will see the following system message:

zone1: No such zone configured

Use 'create' to begin configuring a new zone.

To create the zone zone1 on the new host, use the zonecfg command with the -a option and the zonepath on the new host.

zonecfg:zone1> create -a /zones/zone1
Commit the configuration and exit.
zonecfg:zone1> commit
zonecfg:zone1> exit
Attach the zone with a validation check.
# zoneadm -z zone1 attach

The system administrator is notified of required actions to be taken if either or both of the following conditions are present:

Required packages and patches are not present on the new machine.

The software levels are different between machines.

Note for Solaris 10 10/08: Attach the zone with a validation check and
update the zone to match a host running later versions of the dependent
packages or having a different machine class upon attach.
# zoneadm -z zone1 attach -u

Solaris 10 5/09 and later: Also use the -b option to back out specified patches, either official or IDR, during the attach.
# zoneadm -z zone1 attach -u -b IDR246802-01 -b 123456-08

Note that you can use the -b option independently of the -u option.

Boot the zone
# zoneadm -z zone1 boot

Login to the new zone
# zlogin -C zone1

[Connected to zone 'zone1' console]

Hostname: zone1

The whole process took approximately five minutes

For more information about Solaris ZFS and Zones

Thursday Nov 12, 2009

Performance Study / Best Practices for running MySQL on Xen Based Hypervisors

ABSTRACT


This blog entry provides technical insight into a benchmark of the MySQL database in a Xen virtualization environment based on the xVM hypervisor.


Introduction to the xVM Hypervisor


The xVM hypervisor can securely execute multiple virtual machines simultaneously, each running its own operating system, on a single physical system. Each virtual machine instance is called a domain. There are two kinds of domains. The control domain is called domain0, or dom0. A guest OS, or unprivileged domain, is called a domainU or domU. Unlike virtualization using zones, each domain runs a full instance of an operating system.


Introduction to the MySQL database


The MySQL database is the world's most popular open source database because of its fast performance, high reliability, ease of use, and dramatic cost savings.


Tests Objective:


The main objective is to provide an understanding of how MySQL behaves within a virtualized environment, using the UFS or ZFS file system.


Tests Description:


We built a test environment using a Sun X4450. MySQL 5.4 was installed on OpenSolaris 2009_06 because of the OS's built-in integration with the xVM hypervisor. A separate set of performance tests was run with the MySQL data placed on a SAN disk. The xVM guest OS is OpenSolaris 2009_06.


When running under xVM, the server resources (CPU, memory) were divided between the dom0 and the domU guest OS.


 dom0 - 2 vcpu and 2GB RAM


domU -  4 vcpu and 6GB RAM



  • We used a paravirtualized domU operating system in order to get the best performance.

  • We chose to analyze the performance behavior of the InnoDB storage engine due to its high popularity.

  • We chose to analyze the performance behavior of two file systems (ZFS and UFS) in order to check which file system performs better for MySQL.


SysBench was used as the load generation tool to test base performance for each configuration.


The tool is simple to use, modular, cross-platform, and multi-threaded. It can also give a good indication of the performance of a simple database workload.


Hardware configuration:


Server: Sun X4450, with 2 x 2.9 GHz dual-core CPUs, 8 GB RAM, 2 x 146 GB internal disks.


Storage: StorageTek 6140, configured as RAID 0+1, directly attached to the server.


Software: MySQL 5.4, OpenSolaris 2009_06 .


The SysBench script


sysbench --test=oltp --mysql-table-engine=innodb --oltp-table-size=10000000 --mysql-socket=/tmp/mysql.sock --mysql-user=root prepare

sysbench --num-threads=8 --max-time=900 --max-requests=500000 --test=oltp --mysql-user=root --mysql-host=localhost --mysql-port=3306 --mysql-table-engine=innodb --oltp-test-mode=complex --oltp-table-size=80000000 run


We tested with different numbers of threads (4, 8, 16, and 32) by changing the --num-threads parameter.


The benchmark layout





After the creation of OpenSolaris 2009_06 in domU, we attached the SAN storage.


We attached the disk to the guest:

xm block-attach para-opensolaris phy:/dev/dsk/c0t600A0B8000267DD400000A8D494DB1A6d0p0 3 w

We verified access to the disk from the guest:

root@para-opensolaris:~# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
     0. c7d3 <DEFAULT cyl 4096 alt 0 hd 128 sec 32>
        /xpvd/xdf@3
     1. c7t0d0 <DEFAULT cyl 3915 alt 0 hd 255 sec 63>
        /xpvd/xdf@51712

zpool create -f xvmpool c7d3


root@para-opensolaris:~# zpool list
NAME      SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
rpool    29.8G  10.6G  19.2G    35%  ONLINE  -
xvmpool  117.94G  89.5K  117.94G     0%  ONLINE  -


The first result of running this benchmark on UFS



The first result of running this benchmark on ZFS



The results after matching the ZFS record size to the InnoDB block size and limiting the ZFS ARC size:


zfs create -o recordsize=16k xvmpool/mysql


set zfs:zfs_arc_max = 0x10000000   in /etc/system



The results after disabling the ZFS cache flush (we have a battery-backed cache):


set zfs:zfs_nocacheflush = 1   in /etc/system



Conclusion


After ZFS tuning, we were able to achieve the same results as with UFS.


Thus, we can benefit from extra ZFS features like snapshots and clones.


For more information about ZFS and OpenSolaris

Thursday Jul 02, 2009

Storage virtualization with COMSTAR and ZFS


COMSTAR is a software framework that enables you to turn any
OpenSolaris host into a SCSI target that can be accessed over the
network by initiator hosts. COMSTAR breaks down the huge task of
handling a SCSI target subsystem into independent functional modules.
These modules are then glued together by the SCSI Target Mode Framework (STMF).

COMSTAR features include:

    * Extensive LUN masking and mapping functions
    * Multipathing across different transport protocols
    * Multiple parallel transfers per SCSI command
    * Scalable design
    * Compatibility with generic HBAs


COMSTAR is integrated into the latest OpenSolaris releases.

In this entry I will demonstrate the integration between COMSTAR and ZFS


Architecture layout:




You can install all the appropriate COMSTAR packages:

# pkg install storage-server
On a newly installed OpenSolaris system, the STMF service is disabled by default.

You must complete this task to enable the STMF service.
View the existing state of the service
# svcs stmf
 disabled 15:58:17 svc:/system/stmf:default

Enable the stmf service
# svcadm enable stmf
Verify that the service is active.
# svcs stmf
 online 15:59:53 svc:/system/stmf:default

Create a RAID-Z storage pool.
The server has six controllers, each with eight disks, and I have
built the storage pool to spread I/O evenly and to enable me to build 8
RAID-Z stripes of equal length.
# zpool create -f tank \
raidz c0t0d0 c1t0d0 c4t0d0 c6t0d0 c7t0d0 \
raidz c1t1d0 c4t1d0 c5t1d0 c6t1d0 c7t1d0 \
raidz c0t2d0 c4t2d0 c5t2d0 c6t2d0 c7t2d0 \
raidz c0t3d0 c1t3d0 c5t3d0 c6t3d0 c7t3d0 \
raidz c0t4d0 c1t4d0 c4t4d0 c6t4d0 c7t4d0 \
raidz c0t5d0 c1t5d0 c4t5d0 c5t5d0 c7t5d0 \
raidz c0t6d0 c1t6d0 c4t6d0 c5t6d0 c6t6d0 \
raidz c0t7d0 c1t7d0 c4t7d0 c6t7d0 c7t7d0 \
spare c0t1d0 c1t2d0 c4t3d0 c6t5d0 c7t6d0 c5t7d0


After the pool is created, the zfs utility can be used to create a 50 GB ZFS volume.
# zfs create -V 50g tank/comstar-vol1

Create a logical unit using the volume.
# sbdadm create-lu /dev/zvol/rdsk/tank/comstar-vol1
Created the following logical unit:

GUID                              DATA SIZE    SOURCE
--------------------------------  -----------  ----------------
600144f07bb2ca0000004a4c5eda0001  53687025664  /dev/zvol/rdsk/tank/comstar-vol1

Verify the creation of the logical unit and obtain the Global Unique Identification (GUID) number for the logical unit.
# sbdadm list-lu
Found 1 LU(s)
GUID                              DATA SIZE    SOURCE
--------------------------------  -----------  ----------------
600144f07bb2ca0000004a4c5eda0001  53687025664  /dev/zvol/rdsk/tank/comstar-vol1

This procedure makes a logical unit available to all initiator hosts on a storage network.
Add a view for the logical unit.

# stmfadm add-view GUID_number

Identify the host identifier of the initiator host you want to add to your view.
Follow the instructions for each port provider to identify the initiators associated with each
port provider.


You can see that the port mode is Initiator

# fcinfo hba-port
       HBA Port WWN: 210000e08b91facd
        Port Mode: Initiator
        Port ID: 2
        OS Device Name: /dev/cfg/c16
        Manufacturer: QLogic Corp.
        Model: 375-3294-01
        Firmware Version: 04.04.01
        FCode/BIOS Version: BIOS: 1.4; fcode: 1.11; EFI: 1.0;
        Serial Number: 0402R00-0634175788
        Driver Name: qlc
        Driver Version: 20080617-2.30
        Type: L-port
        State: online
        Supported Speeds: 1Gb 2Gb 4Gb
        Current Speed: 4Gb
        Node WWN: 200000e08b91facd
        Max NPIV Ports: 63
        NPIV port list:

Before making changes to the HBA ports, first check the existing port
bindings.
View what is currently bound to the port drivers.
In this example, the current binding is pci1077,2422.
# mdb -k
Loading modules: [ unix genunix specfs dtrace mac cpu.generic
cpu_ms.AuthenticAMD.15 uppc pcplusmp scsi_vhci zfs sd sockfs ip hook
neti sctp arp usba qlc fctl  
fcp cpc random crypto stmf nfs lofs logindmux ptm ufs sppp nsctl ipc ]
> ::devbindings -q qlc
ffffff04e058f560 pci1077,2422, instance #0 (driver name: qlc)
ffffff04e058f2e0 pci1077,2422, instance #1 (driver name: qlc)

Quit mdb.
> $q
Remove the current binding, which in this example is qlc.
In this example, the qlc driver is actively bound to pci1077,2422.
You must remove the existing binding for qlc before you can add that
binding to a new driver.

 Single quotes are required in this syntax.
# update_drv -d -i 'pci1077,2422' qlc
Cannot unload module: qlc
Will be unloaded upon reboot.

This message does not indicate an error.
The configuration files have been updated but the qlc driver remains
bound to the port until reboot.
Establish the new binding to qlt.
Single quotes are required in this syntax.
# update_drv -a -i 'pci1077,2422' qlt
  Warning: Driver (qlt) successfully added to system but failed to
attach

This message does not indicate an error. The qlc driver remains bound
to the port, until reboot.
The qlt driver attaches when the system is rebooted.
Reboot the system to attach the new driver, and then recheck the
bindings.
# reboot
# mdb -k

Loading modules: [ unix genunix specfs dtrace mac cpu.generic
cpu_ms.AuthenticAMD.15 uppc pcplusmp scsi_vhci zfs sd sockfs ip hook
neti sctp arp usba fctl stmf lofs fcip cpc random crypto nfs logindmux
ptm ufs sppp nsctl ipc ]
> ::devbindings -q qlt
ffffff04e058f560 pci1077,2422, instance #0 (driver name: qlt)
ffffff04e058f2e0 pci1077,2422, instance #1 (driver name: qlt)

Quit mdb.
 > $q


You can see that the port mode is Target

# fcinfo hba-port

        HBA Port WWN: 210000e08b91facd
        Port Mode: Target
        Port ID: 2
        OS Device Name: /dev/cfg/c16
        Manufacturer: QLogic Corp.
        Model: 375-3294-01
        Firmware Version: 04.04.01
        FCode/BIOS Version: BIOS: 1.4; fcode: 1.11; EFI: 1.0;
        Serial Number: 0402R00-0634175788
        Driver Name: qlc
        Driver Version: 20080617-2.30
        Type: L-port
        State: online
        Supported Speeds: 1Gb 2Gb 4Gb
        Current Speed: 4Gb
        Node WWN: 200000e08b91facd
        Max NPIV Ports: 63
        NPIV port list:


Verify that the target mode framework has access to the HBA ports.
# stmfadm list-target -v

Target: wwn.210100E08BB1FACD
Operational Status: Online
Provider Name : qlt
Alias : qlt1,0
Sessions : 0
Target: wwn.210000E08B91FACD
Operational Status: Online
Provider Name : qlt
Alias : qlt0,0
Sessions : 1
Initiator: wwn.210000E08B89F077
Alias: -
Logged in since: Thu Jul 2 12:02:59 2009


Now for the client setup :


On the client machine verify that you can see the new logical unit
# cfgadm -al
Ap_Id                          Type         Receptacle   Occupant     Condition
c0                             scsi-bus     connected    configured   unknown
c0::dsk/c0t0d0                 CD-ROM       connected    configured   unknown
c1                             scsi-bus     connected    configured   unknown
c1::dsk/c1t0d0                 disk         connected    configured   unknown
c1::dsk/c1t2d0                 disk         connected    configured   unknown
c1::dsk/c1t3d0                 disk         connected    configured   unknown
c2                             fc-private   connected    configured   unknown
c2::210000e08b91facd           disk         connected    configured   unknown
c3                             fc           connected    unconfigured unknown
usb0/1                         unknown      empty        unconfigured ok
usb0/2                         unknown      empty        unconfigured ok

You might need to rescan the SAN BUS in order to discover the new logical unit
# luxadm -e forcelip /dev/cfg/c2
# format
Searching for disks...
AVAILABLE DISK SELECTIONS:
0. c1t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/sd@0,0
1. c1t2d0 <FUJITSU-MAV2073RCSUN72G-0301-68.37GB>
/pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/sd@2,0
2. c1t3d0 <FUJITSU-MAV2073RCSUN72G-0301-68.37GB>
/pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/sd@3,0
3. c2t210000E08B91FACDd0 <SUN-COMSTAR-1.0 cyl 65533 alt 2 hd 16 sec 100>
/pci@7c0/pci@0/pci@9/SUNW,qlc@0/fp@0,0/ssd@w210000e08b91facd,0
Specify disk (enter its number):

You can see SUN-COMSTAR-1.0 in the disk properties.
Now you can build a storage pool on top of it
# zpool create comstar-pool c2t210000E08B91FACDd0
Verify the pool creation
# zpool list
NAME           SIZE  USED  AVAIL  CAP  HEALTH  ALTROOT
comstar-pool  49.8G  114K  49.7G   0%  ONLINE  -

After the pool is created, the zfs utility can be used to create a ZFS volume.
#  zfs create -V 48g comstar-pool/comstar-vol1


For more information about COMSTAR, please check the COMSTAR project on OpenSolaris.




Sunday Apr 19, 2009

Ldom with ZFS

Logical Domains offers a powerful and consistent methodology for creating virtualized server environments across the entire CoolThreads server range:

   * Create multiple independent virtual machines quickly and easily
     using the hypervisor built into every CoolThreads system.
   * Leverage advanced Solaris technologies such as ZFS cloning and
     snapshots to speed deployment and dramatically reduce disk
     capacity requirements.

In this entry I will demonstrate the integration between Ldom and ZFS

Architecture layout





Downloading Logical Domains Manager and Solaris Security Toolkit

Download the Software

Download the zip file (LDoms_Manager-1_1.zip) from the Sun Software Download site. You can find the software from this web site:

http://www.sun.com/ldoms

 Unzip the zip file.
# unzip LDoms_Manager-1_1.zip
Please read the README file for any prerequisites.
The installation script is part of the SUNWldm package and is in the Install subdirectory.


# cd LDoms_Manager-1_1


Run the install-ldm installation script with no options.
# Install/install-ldm

Select a security profile from this list:

a) Hardened Solaris configuration for LDoms (recommended)
b) Standard Solaris configuration
c) Your custom-defined Solaris security configuration profile

Enter a, b, or c [a]: a


Shut down and reboot your server
# /usr/sbin/shutdown -y -g0 -i6

Use the ldm list command to verify that the Logical Domains Manager is running
# /opt/SUNWldm/bin/ldm list


NAME     STATE   FLAGS   CONS  VCPU  MEMORY  UTIL  UPTIME
primary  active  -n-c--  SP    32    16256M  0.0%  2d 23h 27m

Creating Default Services

You must create the following virtual default services initially to be able to use them later:
vdiskserver – virtual disk server
vswitch – virtual switch service
vconscon – virtual console concentrator service

Create a virtual disk server (vds) to allow importing virtual disks into a logical domain.
# ldm add-vds primary-vds0 primary

Create a virtual console concentrator (vcc) service for use by the virtual network terminal server daemon (vntsd)
# ldm add-vcc port-range=5000-5100 primary-vcc0 primary

Create a virtual switch service
(vsw) to enable networking between virtual network
(vnet) devices in logical domains
# ldm add-vsw net-dev=e1000g0 primary-vsw0 primary

Verify the services have been created by using the list-services subcommand.


# ldm list-services

Set Up the Control Domain

Assign cryptographic resources to the control domain.
# ldm set-mau 1 primary

Assign virtual CPUs to the control domain.
# ldm set-vcpu 4 primary

Assign memory to the control domain.
# ldm set-memory 4G primary

Add a logical domain machine configuration to the system controller (SC).
# ldm add-config initial

Verify that the configuration is ready to be used at the next reboot
# ldm list-config

factory-default
initial [next poweron]

Reboot the server
# shutdown -y -g0 -i6

Enable the virtual network terminal server daemon, vntsd
# svcadm enable vntsd

Create the zpool

# zpool create ldompool c1t2d0 c1t3d0

# zfs create ldompool/goldimage

# zfs create -V 15g ldompool/goldimage/disk_image



Creating and Starting a Guest Domain

Create a logical domain.
# ldm add-domain goldldom

Add CPUs to the guest domain.
# ldm add-vcpu 4 goldldom

Add memory to the guest domain
# ldm add-memory 2G goldldom

Add a virtual network device to the guest domain.
# ldm add-vnet vnet1 primary-vsw0 goldldom

Specify the device to be exported by the virtual disk server as a virtual disk to the guest domain
# ldm add-vdsdev /dev/zvol/dsk/ldompool/goldimage/disk_image vol1@primary-vds0

Add a virtual disk to the guest domain.
# ldm add-vdisk vdisk0 vol1@primary-vds0 goldldom

Set auto-boot and boot-device variables for the guest domain
# ldm set-variable auto-boot\?=false goldldom
# ldm set-var boot-device=vdisk0 goldldom


Bind resources to the guest domain goldldom and then list the domain to verify that it is bound.
# ldm bind-domain goldldom
# ldm list-domain goldldom


NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
primary          active     -n-cv-  SP      4     4G       0.2%  15m
goldldom         bound      ------  5000    4     2G

Start the guest domain
# ldm start-domain goldldom
Connect to the console of a guest domain
# telnet 0 5000
Trying 0.0.0.0...
Connected to 0.
Escape character is '^]'.
Connecting to console "goldldom" in group "goldldom" ....
Press ~? for control options ..

{0} ok

Jump-Start the goldldom

{0} ok boot net - install
We can log in to the new guest and verify that the file system is ZFS:

# zpool list
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
rpool  14.9G  1.72G  13.2G    11%  ONLINE  -
Restore the goldldom configuration to an "as-manufactured" state with the sys-unconfig command


# sys-unconfig
This program will unconfigure your system.  It will cause it
to revert to a "blank" system - it will not have a name or know
about other systems or networks.
This program will also halt the system.
Do you want to continue (y/n) y

Press ~. in order to return to the primary domain

Stop the guest domain
# ldm stop goldldom
Unbind the guest domain

# ldm unbind  goldldom
Snapshot the disk image
# zfs snapshot ldompool/goldimage/disk_image@sysunconfig

Create a new ZFS file system for the new guest
# zfs create ldompool/domain1

Clone the goldldom disk image
# zfs clone ldompool/goldimage/disk_image@sysunconfig ldompool/domain1/disk_image

# zfs list
NAME                                        USED  AVAIL  REFER  MOUNTPOINT
ldompool                                   17.0G   117G    21K  /ldompool
ldompool/domain1                             18K   117G    18K  /ldompool/domain1
ldompool/domain1/disk_image                    0   117G  2.01G  -
ldompool/goldimage                         17.0G   117G    18K  /ldompool/goldimage
ldompool/goldimage/disk_image              17.0G   132G  2.01G  -
ldompool/goldimage/disk_image@sysunconfig      0      -  2.01G  -

Creating and Starting the Second Domain


# ldm add-domain domain1
# ldm add-vcpu 4 domain1
# ldm add-memory 2G domain1
# ldm add-vnet vnet1 primary-vsw0 domain1
# ldm add-vdsdev /dev/zvol/dsk/ldompool/domain1/disk_image vol2@primary-vds0
# ldm add-vdisk vdisk1 vol2@primary-vds0 domain1
# ldm set-var auto-boot\?=false domain1
# ldm set-var boot-device=vdisk1 domain1

# ldm bind-domain domain1
# ldm list-domain domain1
NAME     STATE  FLAGS   CONS  VCPU  MEMORY  UTIL  UPTIME
domain1  bound  ------  5001  8     2G

Start the domain
# ldm start-domain domain1

Connect to the console
# telnet 0 5001
{0} ok boot net -s

Copyright 1983-2008 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
Booting to milestone "milestone/single-user:default".
Configuring devices.
Using RPC Bootparams for network configuration information.
Attempting to configure interface vnet0...
Configured interface vnet0
Requesting System Maintenance Mode
SINGLE USER MODE

# zpool import -f rpool
# zpool export rpool
# reboot


Answer the configuration questions

Login to the new domain and verify that we have zfs file system

# zpool list
NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
rpool  14.9G  1.72G  13.2G  11%  ONLINE  -

Tuesday Mar 03, 2009

OpenSolaris on Amazon EC2


Yesterday Sun hosted the Israeli Association of Grid Technology (IGT) event "Amazon AWS Hands-on Workshop" at the Sun office in Herzeliya. During the event, Simone Brunozzi, Amazon Web Services Evangelist, demonstrated Amazon's EC2 and S3 using the AWS console. There were 40 attendees from a wide breadth of technology firms.

For more information regarding using OpenSolaris on amazon EC2 see http://www.sun.com/third-party/global/amazon/index.jsp.


Monday Jan 12, 2009

Solaris iSCSI Server

This document describes how to build an iSCSI server based on the Solaris platform on a Sun X4500 server.



On the target (server)


The server has six controllers, each with eight disks, and I have built the storage pool to spread I/O evenly and to enable me to build 8 RAID-Z stripes of equal length.


zpool create -f tank \
raidz c0t0d0 c1t0d0 c4t0d0 c6t0d0 c7t0d0 \
raidz c1t1d0 c4t1d0 c5t1d0 c6t1d0 c7t1d0 \
raidz c0t2d0 c4t2d0 c5t2d0 c6t2d0 c7t2d0 \
raidz c0t3d0 c1t3d0 c5t3d0 c6t3d0 c7t3d0 \
raidz c0t4d0 c1t4d0 c4t4d0 c6t4d0 c7t4d0 \
raidz c0t5d0 c1t5d0 c4t5d0 c5t5d0 c7t5d0 \
raidz c0t6d0 c1t6d0 c4t6d0 c5t6d0 c6t6d0 \
raidz c0t7d0 c1t7d0 c4t7d0 c6t7d0 c7t7d0 \
spare c0t1d0 c1t2d0 c4t3d0 c6t5d0 c7t6d0 c5t7d0

After the pool is created, the zfs utility can be used to create a 50 GB ZFS volume.


zfs create -V 50g tank/iscsivol000

Enable the iSCSI target service


svcadm enable iscsitgt

Verify that the service is enabled.


svcs -a | grep iscsitgt


To view the list of commands, iscsitadm can be run without any options:


iscsitadm

Usage: iscsitadm -?,-V,--help
Usage: iscsitadm create [-?] [-?] [
Usage: iscsitadm list [-?] [-?] [
Usage: iscsitadm modify [-?] [-?] [
Usage: iscsitadm delete [-?] [-?] [
Usage: iscsitadm show [-?] [-?] [

For more information, please see iscsitadm(1M)



To begin using the iSCSI target, a base directory needs to be created.


This directory is used to persistently store the target and initiator configuration that is added through the iscsitadm utility.


iscsitadm modify admin -d /etc/iscsi




Once the volumes are created, they need to be exported to an initiator:


iscsitadm create target -b /dev/zvol/rdsk/tank/iscsivol000 target-label


Once the targets are created, iscsitadm's "list" command and "target" subcommand can be used to display the targets and their properties:


iscsitadm list target -v

On the initiator (client)


Install the iSCSI initiator client from http://www.microsoft.com/downloads/details.aspx?FamilyID=12cb3c1a-15d6-4585-b385-befd1319f825&displaylang=en

About

This blog covers cloud computing, big data and virtualization technologies
