Tuesday Sep 28, 2010

IT 2020 technology optimism

Please read the following white paper, "IT 2020: Technology Optimism: An Oracle Scenario," to which I contributed.

The major idea behind the paper is to describe a scenario of technology optimism, in which IT has solved many of the difficult issues we struggle with today and broadens everyone's horizons.

Friday Jun 04, 2010

Increasing Application Availability Using Oracle VM Server for SPARC (LDoms): An Oracle Database Example

This white paper by Orgad Kimchi and Roman Ivanov of Oracle's ISV Engineering Team describes how to use the warm migration feature of Oracle VM Server for SPARC to move a running Oracle database application from one server to another without disruption. It includes explanations of the process and instructions for setting up the source and target servers.

For the full white paper see Increasing Application Availability Using Oracle VM Server for SPARC
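At a high level, the migration is driven from the control domain with the ldm command. A minimal sketch, assuming a domain named oradb and a target control domain named target-host (both names are illustrative, not from the paper, and exact syntax varies across Oracle VM Server for SPARC releases):

```shell
# Illustrative sketch only: "oradb" and "target-host" are made-up names.
# Dry run: check that the target machine can accept the domain.
ldm migrate-domain -n oradb target-host

# Perform the migration: the domain is suspended briefly while its state
# is transferred over the network, then resumed on the target.
ldm migrate-domain oradb target-host
```

The dry-run form is useful because a migration fails if the target lacks the required CPU, memory, or virtual I/O resources.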

Monday May 10, 2010

Oracle VM Server for SPARC (LDoms) Dynamic Resource Management

In this entry, I will demonstrate how to use Dynamic Resource Management (DRM), a new feature of Oracle VM Server for SPARC (previously called Sun Logical Domains, or LDoms) version 1.3, to allocate CPU resources based on workload and predefined policies.

For the full article, see Oracle VM Server for SPARC DRM
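As a flavor of what the article demonstrates, DRM is driven by per-domain policies created with the ldm add-policy subcommand. The sketch below is illustrative only: the domain name, utilization thresholds, and vcpu bounds are made-up values, and the full property list is in the ldm man page:

```shell
# Illustrative DRM policy for a hypothetical domain "ldom1": allow it to
# shrink to 2 vcpus or grow to 8, adding vcpus when utilization exceeds
# 70% and releasing them when it falls below 30%.
ldm add-policy util-upper=70 util-lower=30 \
    vcpu-min=2 vcpu-max=8 attack=1 decay=1 \
    priority=1 enable=yes name=med-policy ldom1

# Policies can later be adjusted or disabled with set-policy.
ldm set-policy enable=no name=med-policy ldom1
```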

Tuesday Dec 15, 2009

VirtualBox Teleporting


In this entry, I will demonstrate how to use the new migration feature
(a.k.a. teleporting) of VirtualBox version 3.1 to move a virtual
machine over a network from one VirtualBox host to another while the
virtual machine is running.

Introduction to VirtualBox

VirtualBox is a general-purpose full virtualizer for x86 hardware.
Targeted at server, desktop, and embedded use, it is now the only
professional-quality virtualization solution that is also open source.

Introduction to Teleporting

Teleporting requires that a machine be currently running on one host,
which is then called the "source". The host to which the virtual
machine will be teleported will then be called the "target". The
machine on the target is then configured to wait for the source to
contact the target. The machine's running state will then be
transferred from the source to the target with minimal downtime.

This works regardless of the host operating system that is running on
the hosts: you can teleport virtual machines between Solaris and Mac
hosts, for example.

Architecture layout:

Prerequisites:

1. The target and source machines should both be running VirtualBox version 3.1 or later.

2. The target machine must be configured with the same amount of memory
(machine and video memory) and the same hardware settings as the source
machine. Otherwise, teleporting will fail with an error message.

3. The two virtual machines on the source and the target must share the same storage: they either use the same iSCSI targets, or both hosts have access to the same storage via NFS or SMB/CIFS.

4. The source and target virtual machines cannot have any snapshots.

5. The hosts must have fairly similar CPUs. Teleporting between hosts with Intel and AMD CPUs will probably fail with an error message.

Preparing the storage environment

For this example, I will use OpenSolaris x86 as a CIFS server in order
to provide shared storage for the source and target machines, but you
can use any iSCSI, NFS, or CIFS server for this task.

Install the packages from the OpenSolaris.org repository:

# pfexec pkg install SUNWsmbs SUNWsmbskr

Reboot the system to activate the SMB server in the kernel.

# pfexec reboot

Enable the CIFS service:

# pfexec svcadm enable -r smb/server

If the following warning is issued, you can ignore it:

svcadm: svc:/milestone/network depends on svc:/network/physical, which has multiple instances

Verify the service:

# pfexec svcs smb/server

STATE          STIME    FMRI

online          8:38:22 svc:/network/smb/server:default

The Solaris CIFS/SMB service uses WORKGROUP as the default workgroup. If
the workgroup needs to be changed, use the following command to change
the workgroup name:

# pfexec smbadm join -w workgroup-name

Next, edit the /etc/pam.conf file to enable encrypted passwords to be used for CIFS.

Add the following line to the end of the file:

other password required pam_smb_passwd.so.1     nowarn

# pfexec echo "other password required pam_smb_passwd.so.1     nowarn" >> /etc/pam.conf

Each user currently in the /etc/passwd file needs to re-encrypt his or her password to be able to use the CIFS service:
# pfexec passwd user-name

Note -
After the PAM module is installed, the passwd command
automatically generates CIFS-suitable passwords for new users. You must
also run the passwd command to generate CIFS-style passwords for
existing users.

Create a mixed-case-sensitivity ZFS file system.

# pfexec zfs create -o casesensitivity=mixed  rpool/vboxstorage

Enable SMB sharing for the ZFS file system.

# pfexec  zfs set sharesmb=on rpool/vboxstorage

Verify how the file system is shared.

# pfexec sharemgr show -vp

Now, you can access the share by connecting to \\solaris-hostname\share-name

Create a new virtual machine. For the virtual hard disk, select "Create new hard disk", then select Next.

Select the Next button.

For the disk location, enter the network drive that you mapped in the previous section, then press the Next button.

Verify the disk settings, and then press the Finish button.

Continue with the installation process. After the installation finishes, shut down the virtual machine in order to avoid any storage locking.
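The same wizard steps can also be scripted. The following VBoxManage sketch is illustrative only: the VM name, OS type, drive letter, and disk size are my assumptions, and the storage subcommands differ slightly between VirtualBox versions:

```shell
# Illustrative command-line equivalent of the wizard steps above.
# Create and register the VM ("opensolaris" is a made-up name).
VBoxManage createvm --name opensolaris --ostype OpenSolaris --register

# Create the virtual disk on the mapped network drive
# (Z: is an assumed drive letter for the CIFS share).
VBoxManage createhd --filename "Z:\opensolaris.vdi" --size 16384

# Attach the disk to an IDE controller on the VM.
VBoxManage storagectl opensolaris --name "IDE Controller" --add ide
VBoxManage storageattach opensolaris --storagectl "IDE Controller" \
    --port 0 --device 0 --type hdd --medium "Z:\opensolaris.vdi"
```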

On the target machine:

Map the same network drive.

Configure a new virtual machine, but instead of selecting "Create new hard drive", select "Use existing hard drive".

In the Virtual Media Manager window, select the Add button and point to the same location as the source machine's hard drive (the network drive).

Don't start the virtual machine yet.

To make the target wait for a teleport request to arrive when it is started, use the following VBoxManage command:

VBoxManage modifyvm <targetvmname> --teleporter on --teleporterport <port>

where <targetvmname> is the name of the virtual machine on the target (in this use case, opensolaris), and <port> is a TCP/IP port number to be used on both the source and the target. In this example, I used port 6000.

C:\Program Files\Sun\VirtualBox>VBoxManage modifyvm opensolaris --teleporter on --teleporterport 6000

Next, start the VM on the target. You will see that instead of actually
running, it shows a progress dialog indicating that it is waiting for a
teleport request to arrive.

You can see that the machine status has changed to Teleporting.

On the source machine: 

Start the Virtual machine

When it is running and you want it to be teleported, issue the following command:

VBoxManage controlvm <sourcevmname> teleport --host <targethost> --port <port>

where <sourcevmname> is the name of the virtual machine on the source (the machine that is currently running), <targethost> is the host name or IP address of the target that has the machine waiting for the teleport request, and <port> must be the same number as specified in the command on the target (in this example, 6000).

C:\Program Files\Sun\VirtualBox>VBoxManage controlvm opensolaris teleport --host target_machine_ip --port 6000
VirtualBox Command Line Management Interface Version 3.1.0
(C) 2005-2009 Sun Microsystems, Inc.
All rights reserved.

You can see that the machine status has changed to Teleported.

The teleporting process took ~5 seconds

For more information about VirtualBox

Wednesday Nov 25, 2009

Solaris Zones migration with ZFS

In this entry I will demonstrate how to migrate a Solaris Zone running
on a T5220 server to a new T5220 server, using ZFS as the file system
for the zone.

Introduction to Solaris Zones

Solaris Zones provide a new isolation primitive for the Solaris OS,
which is secure, flexible, scalable and lightweight. Virtualized OS
services look like different Solaris instances. Together with the
existing Solaris Resource Management framework, Solaris Zones form the
basis of Solaris Containers.

Introduction to ZFS

ZFS is a new kind of file system that provides simple administration,
transactional semantics, end-to-end data integrity, and immense
scalability.

Architecture layout:

Prerequisites:

The Global Zone on the target system must be running the same Solaris release as the original host.

To ensure that the zone will run properly, the target system must have
the same versions of the following required operating system packages
and patches as those installed on the original host.

  • Packages that deliver files under an inherit-pkg-dir resource

  • Packages where SUNW_PKG_ALLZONES=true

Other packages and patches, such as those for third-party products, can be different.

Note for Solaris 10 10/08: If the new host has later versions of the
zone-dependent packages and their associated patches, using zoneadm
attach with the -u option updates those packages within the zone to
match the new host. The update on attach software looks at the zone
that is being migrated and determines which packages must be updated to
match the new host. Only those packages are updated. The rest of the
packages, and their associated patches, can vary from zone to zone.
This option also enables automatic migration between machine classes,
such as from sun4u to sun4v.

Create the ZFS pool for the zone
# zpool create zones c2t5d2
# zpool list

zones   298G    94K   298G     0%  ONLINE  -

Create a ZFS file system for the zone
# zfs create zones/zone1
# zfs list

zones         130K   293G    18K  /zones
zones/zone1    18K   293G    18K  /zones/zone1

Change the file system permission
# chmod 700 /zones/zone1

Configure the zone
# zonecfg -z zone1

zone1: No such zone configured
Use 'create' to begin configuring a new zone.

zonecfg:zone1> create -b
zonecfg:zone1> set autoboot=true
zonecfg:zone1> set zonepath=/zones/zone1
zonecfg:zone1> add net
zonecfg:zone1:net> set address=
zonecfg:zone1:net> set physical=e1000g0
zonecfg:zone1:net> end
zonecfg:zone1> verify
zonecfg:zone1> commit
zonecfg:zone1> exit

Install the new zone
# zoneadm -z zone1 install

Boot the new zone
# zoneadm -z zone1 boot

Login to the zone
# zlogin -C zone1

Answer all the setup questions

How to Validate a Zone Migration Before the Migration Is Performed

Generate the manifest for zone1 on the source host and pipe the output
to a remote command that will immediately validate the target host:
# zoneadm -z zone1 detach -n | ssh targethost zoneadm -z zone1 attach -n -

Start the migration process

Halt the zone to be moved, zone1 in this procedure.
# zoneadm -z zone1 halt

Create a snapshot of this zone in order to preserve its original state:
# zfs snapshot zones/zone1@snap
# zfs list

zones             4.13G   289G    19K  /zones
zones/zone1       4.13G   289G  4.13G  /zones/zone1
zones/zone1@snap      0      -  4.13G  -

Detach the zone.
# zoneadm -z zone1 detach

Export the ZFS pool using the zpool export command:
# zpool export zones

On the target machine:

Connect the storage to the machine and then import the ZFS pool on the target machine.
# zpool import zones
# zpool list

zones   298G  4.13G   294G     1%  ONLINE  -
# zfs list

zones             4.13G   289G    19K  /zones
zones/zone1       4.13G   289G  4.13G  /zones/zone1
zones/zone1@snap  2.94M      -  4.13G  -

On the new host, configure the zone.
# zonecfg -z zone1

You will see the following system message:

zone1: No such zone configured

Use 'create' to begin configuring a new zone.

To create the zone zone1 on the new host, use the zonecfg command with the -a option and the zonepath on the new host.

zonecfg:zone1> create -a /zones/zone1

Commit the configuration and exit.

zonecfg:zone1> commit
zonecfg:zone1> exit

Attach the zone with a validation check.

# zoneadm -z zone1 attach

The system administrator is notified of required actions to be taken if either or both of the following conditions are present:

Required packages and patches are not present on the new machine.

The software levels are different between machines.

Note for Solaris 10 10/08: Attach the zone with a validation check and
update the zone to match a host running later versions of the dependent
packages or having a different machine class upon attach.
# zoneadm -z zone1 attach -u

Solaris 10 5/09 and later: Also use the -b option to back out specified patches, either official or IDR, during the attach.
# zoneadm -z zone1 attach -u -b IDR246802-01 -b 123456-08

Note that you can use the -b option independently of the -u option.

Boot the zone
# zoneadm -z zone1 boot

Login to the new zone
# zlogin -C zone1

[Connected to zone 'zone1' console]

Hostname: zone1

The whole process took approximately five minutes

For more information about Solaris ZFS and Zones

Thursday Nov 12, 2009

Performance Study / Best Practices for running MySQL on Xen Based Hypervisors


This blog entry provides technical insight into a benchmark of the MySQL database in a Xen virtualization environment based on the xVM hypervisor.

Introduction to the xVM Hypervisor

The xVM hypervisor can securely execute multiple virtual machines simultaneously, each running its own operating system, on a single physical system. Each virtual machine instance is called a domain. There are two kinds of domains: the control domain, called domain0 or dom0, and a guest OS, or unprivileged domain, called a domainU or domU. Unlike virtualization using zones, each domain runs a full instance of an operating system.

Introduction to the MySQL database

The MySQL database is the world's most popular open source database because of its fast performance, high reliability, ease of use, and dramatic cost savings.

Tests Objective:

The main objective is to understand how MySQL behaves in a virtualized environment using either the UFS or the ZFS file system.

Tests Description:

We built a test environment using a Sun X4450 server. MySQL 5.4 was installed on OpenSolaris 2009_06 because of the OS's built-in integration with the xVM hypervisor. A separate set of performance tests was run with the MySQL data placed on a SAN disk. The xVM guest OS was OpenSolaris 2009_06.

When running under xVM, the server's resources (CPU, memory) were divided between dom0 and the domU guest OS:

dom0 - 2 vcpus and 2 GB RAM

domU - 4 vcpus and 6 GB RAM

  • We used a paravirtualized domU operating system in order to get the best performance.

  • We chose to analyze the performance behavior of the InnoDB storage engine due to its high popularity.

  • We chose to analyze the performance behavior of two file systems (ZFS and UFS) in order to check which performs better for MySQL.

SysBench was used as the load-generation tool to test baseline performance for each configuration.

The tool is simple to use, modular, cross-platform, and multi-threaded, and it gives a good feel for the performance of a simple database workload.

Hardware configuration:

Server: Sun X4450 with 2 x 2.9 GHz dual-core CPUs, 8 GB RAM, and 2 x 146 GB internal disks.

Storage: StorageTek 6140, configured as RAID 0+1, directly attached to the server.

Software: MySQL 5.4, OpenSolaris 2009_06.

The SysBench script

sysbench --test=oltp --mysql-table-engine=innodb --oltp-table-size=10000000 --mysql-socket=/tmp/mysql.sock --mysql-user=root prepare

sysbench --num-threads=8 --max-time=900 --max-requests=500000 --test=oltp --mysql-user=root --mysql-host=localhost --mysql-port=3306 --mysql-table-engine=innodb --oltp-test-mode=complex --oltp-table-size=80000000 run

We tested with different numbers of threads: 4, 8, 16, and 32 (via the --num-threads option).

The benchmark layout

After creating the OpenSolaris 2009_06 domU, we attached the SAN storage.

Attach the disk to the guest:

xm block-attach para-opensolaris phy:/dev/dsk/c0t600A0B8000267DD400000A8D494DB1A6d0p0 3 w

Verify access to the disk from the guest:

root@para-opensolaris:~# format
Searching for disks...done

     0. c7d3 <DEFAULT cyl 4096 alt 0 hd 128 sec 32>
     1. c7t0d0 <DEFAULT cyl 3915 alt 0 hd 255 sec 63>

zpool create -f xvmpool c7d3

root@para-opensolaris:~# zpool list
rpool    29.8G  10.6G  19.2G    35%  ONLINE  -
xvmpool  117.94G  89.5K  117.94G     0%  ONLINE  -

The first result of running this benchmark on UFS

The first result of running this benchmark on ZFS

The results after matching the ZFS record size to the InnoDB block size and limiting the ZFS ARC size:

zfs create -o recordsize=16k xvmpool/mysql

set zfs:zfs_arc_max = 0x10000000   in /etc/system

The results after disabling the ZFS cache flush (we have a battery-backed cache):

set zfs:zfs_nocacheflush = 1   in /etc/system


After ZFS tuning we were able to achieve the same results as with UFS.

Thus, we can benefit from extra ZFS features such as snapshots and clones.
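For example, a point-in-time copy of the MySQL dataset can be taken almost instantly and cloned for a reporting or test instance. The dataset names below follow the xvmpool/mysql layout used above; quiescing MySQL (flushing and locking tables) before the snapshot is advisable and is omitted here:

```shell
# Snapshot the MySQL dataset, then clone it into a writable test copy.
# "backup1" and "xvmpool/mysql-test" are illustrative names.
zfs snapshot xvmpool/mysql@backup1
zfs clone xvmpool/mysql@backup1 xvmpool/mysql-test
```

The snapshot consumes no extra space initially; the clone shares blocks with it until either side changes.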

For more information about ZFS and OpenSolaris

Sunday Aug 02, 2009

Logical Domains Physical-to-Virtual (P2V) Migration

The Logical Domains P2V migration tool automatically converts an existing physical system
to a virtual system that runs in a logical domain on a chip multithreading (CMT) system.
The source system can be any of the following:

    * Any sun4u SPARC system that runs at least the Solaris 8 Operating System
    * Any sun4v system that runs the Solaris 10 OS, but does not run in a logical domain

In this entry I will demonstrate how to use the Logical Domains P2V migration tool to migrate Solaris running on a V440 server (physical) into a guest domain running on a T5220 server (virtual).

Architecture layout:

Before you can run the Logical Domains P2V Migration Tool, ensure that the following are true:

  •     Target system runs at least Logical Domains 1.1 on one of the following:

      •      Solaris 10 10/08 OS

      •      Solaris 10 5/08 OS with the appropriate Logical Domains 1.1 patches

  •     Guest domains run at least the Solaris 10 5/08 OS

  •     Source system runs at least the Solaris 8 OS


In addition to these prerequisites, configure an NFS file system to be shared by both the source
and target systems. This file system should be writable by root. However, if a shared file system
is not available, use a local file system that is large enough to hold a file system dump of the
source system on both the source and target systems.

Limitations

Version 1.0 of the Logical Domains P2V Migration Tool has the following limitations:

  •      Only UFS file systems are supported.

  •      Each guest domain can have only a single virtual switch and a single virtual disk.

  •      The flash archiving method silently ignores excluded file systems.

The conversion from a physical system to a virtual system is performed in the following phases:

  • Collection phase. Runs on the physical source system. The collect step creates a file system image of the source system based on the configuration information that it collects about the source system.

  • Preparation phase. Runs on the control domain of the target system. The prepare step creates the logical domain on the target system based on the configuration information collected in the collect phase.

  • Conversion phase. Runs on the control domain of the target system. In the convert phase, the created logical domain is converted into a logical domain that runs the Solaris 10 OS by using the standard Solaris upgrade process.

    Collection phase
    On the target machine T5220

    Prepare the NFS server that will hold the file system dump of the source system for both the source and target systems.
    In this use case I will use the target machine (T5220) as the NFS server.
    # mkdir /p2v
    # share -F nfs -o root=v440 /p2v

    Verify the NFS share
    # share
    -  /p2v  root=v440  ""
    Install the Logical Domains P2V Migration Tool.
    Go to the Logical Domains download page at http://www.sun.com/servers/coolthreads/
    Download the P2V software package, SUNWldmp2v
    Use the pkgadd command to install the SUNWldmp2v package
    # pkgadd -d . SUNWldmp2v

    Create the /etc/ldmp2v.conf file; we will use it later.
    # cat /etc/ldmp2v.conf

    On the source machine V440
    Install the Logical Domains P2V Migration Tool.
    # pkgadd -d . SUNWldmp2v
    Mount the NFS share
    # mkdir /p2v
    # mount t5220:/p2v /p2v
    Run the collection command
    # /usr/sbin/ldmp2v collect -d /p2v/v440
    Collecting system configuration ...
    Archiving file systems ...
    DUMP: Date of this level 0 dump: August 2, 2009 4:11:56 PM IDT
    DUMP: Date of last level 0 dump: the epoch
    DUMP: Dumping /dev/rdsk/c1t0d0s0 (mdev440-2:/) to /p2v/v440/ufsdump.0.
    The collection phase took 5 minutes for a 4.6 GB dump file.
    Preparation phase
    On the target machine T5220
    Run the preparation command
    We will keep the source machine's (V440) MAC address.
    # /usr/sbin/ldmp2v prepare -d /p2v/v440 -o keep-mac v440
     Creating vdisks ...
    Creating file systems ...
    Populating file systems ...
    The preparation phase took 26 minutes

    We can see that for each physical CPU on the V440 server, the LDoms P2V tool creates four vcpus in the guest domain, and it assigns the same amount of memory that the physical system has.
    # ldm list -l v440

    v440                   inactive   ------                                   4     8G



        NAME             SERVICE                     DEVICE     MAC               MODE   PVID VID                  MTU
        vnet0            primary-vsw0                           00:03:ba:c4:d2:9d        1

        NAME             VOLUME                      TOUT DEVICE  SERVER         MPGROUP
        disk0            v440-vol0@primary-vds0

    Conversion Phase
    Before starting the conversion phase, shut down the source server (V440) in order to avoid an IP address conflict.
    On the V440 server:
    # poweroff
    On the jumpstart server

    You can use the Custom JumpStart feature to perform a completely hands-off conversion.
    This feature requires that you create and configure the appropriate sysidcfg and profile files for the client on the JumpStart server.
    The profile should consist of the following lines:

    install_type upgrade
    root_device c0d0s0

    The sysidcfg file:

    timeserver=localhost
    timezone=Europe/Amsterdam
    terminal=vt100
    security_policy=NONE
    nfs4_domain=dynamic
    network_interface=PRIMARY {netmask= default_route=none protocol_ipv6=no}
    On the target server T5220
    # ldmp2v convert -j -n vnet0 -d /p2v/v440 v440
    Testing original system status ...
    LDom v440 started
    Waiting for Solaris to come up ...
    Using Custom JumpStart
    Connected to 0.
    Escape character is '^]'.

    Connecting to console "v440" in group "v440" ....
    Press ~? for control options ..
    SunOS Release 5.10 Version Generic_139555-08 64-bit
    Copyright 1983-2009 Sun Microsystems, Inc.  All rights reserved.
    Use is subject to license terms.

    For information about the P2V migration tool, see the ldmp2v(1M) man page.

Thursday Jul 02, 2009

Storage virtualization with COMSTAR and ZFS

COMSTAR is a software framework that enables you to turn any
OpenSolaris host into a SCSI target that can be accessed over the
network by initiator hosts. COMSTAR breaks down the huge task of
handling a SCSI target subsystem into independent functional modules.
These modules are then glued together by the SCSI Target Mode Framework (STMF).

COMSTAR features include:

    * Extensive LUN masking and mapping functions
    * Multipathing across different transport protocols
    * Multiple parallel transfers per SCSI command
    * Scalable design
    * Compatible with generic HBAs

COMSTAR is integrated into the latest OpenSolaris.

In this entry I will demonstrate the integration between COMSTAR and ZFS.

Architecture layout:

You can install all the appropriate COMSTAR packages:

# pkg install storage-server

On a newly installed OpenSolaris system, the STMF service is disabled by default. You must complete this task to enable the STMF service.

View the existing state of the service:

# svcs stmf
 disabled 15:58:17 svc:/system/stmf:default

Enable the stmf service:

# svcadm enable stmf

Verify that the service is active:

# svcs stmf
 online 15:59:53 svc:/system/stmf:default

Create a RAID-Z storage pool. The server has six controllers, each with
eight disks, and I built the storage pool to spread I/O evenly and to
create eight RAID-Z stripes of equal length.
# zpool create -f tank \
raidz c0t0d0 c1t0d0 c4t0d0 c6t0d0 c7t0d0 \
raidz c1t1d0 c4t1d0 c5t1d0 c6t1d0 c7t1d0 \
raidz c0t2d0 c4t2d0 c5t2d0 c6t2d0 c7t2d0 \
raidz c0t3d0 c1t3d0 c5t3d0 c6t3d0 c7t3d0 \
raidz c0t4d0 c1t4d0 c4t4d0 c6t4d0 c7t4d0 \
raidz c0t5d0 c1t5d0 c4t5d0 c5t5d0 c7t5d0 \
raidz c0t6d0 c1t6d0 c4t6d0 c5t6d0 c6t6d0 \
raidz c0t7d0 c1t7d0 c4t7d0 c6t7d0 c7t7d0 \
spare c0t1d0 c1t2d0 c4t3d0 c6t5d0 c7t6d0 c5t7d0

After the pool is created, the zfs utility can be used to create a 50 GB ZFS volume.
# zfs create -V 50g tank/comstar-vol1

Create a logical unit using the volume.
# sbdadm create-lu /dev/zvol/rdsk/tank/comstar-vol1
Created the following logical unit :

GUID                              DATA SIZE            SOURCE
--------------------------------  -------------------  ----------------
600144f07bb2ca0000004a4c5eda0001  53687025664          /dev/zvol/rdsk/tank/comstar-vol1

Verify the creation of the logical unit and obtain the Global Unique Identification (GUID) number for the logical unit.
# sbdadm list-lu
Found 1 LU(s)
GUID                              DATA SIZE            SOURCE
--------------------------------  -------------------  ----------------
600144f07bb2ca0000004a4c5eda0001  53687025664          /dev/zvol/rdsk/tank/comstar-vol1

This procedure makes a logical unit available to all initiator hosts on a storage network.
Add a view for the logical unit.

# stmfadm add-view GUID_number

Identify the host identifier of the initiator host you want to add to your view.
Follow the instructions for each port provider to identify the initiators associated with each
port provider.

You can see that the port mode is Initiator

# fcinfo hba-port
       HBA Port WWN: 210000e08b91facd
        Port Mode: Initiator
        Port ID: 2
        OS Device Name: /dev/cfg/c16
        Manufacturer: QLogic Corp.
        Model: 375-3294-01
        Firmware Version: 04.04.01
        FCode/BIOS Version: BIOS: 1.4; fcode: 1.11; EFI: 1.0;
        Serial Number: 0402R00-0634175788
        Driver Name: qlc
        Driver Version: 20080617-2.30
        Type: L-port
        State: online
        Supported Speeds: 1Gb 2Gb 4Gb
        Current Speed: 4Gb
        Node WWN: 200000e08b91facd
        Max NPIV Ports: 63
        NPIV port list:

Before making changes to the HBA ports, first check the existing port bindings.
View what is currently bound to the port drivers.
In this example, the current binding is pci1077,2422.
# mdb -k
Loading modules: [ unix genunix specfs dtrace mac cpu.generic
cpu_ms.AuthenticAMD.15 uppc pcplusmp scsi_vhci zfs sd sockfs ip hook
neti sctp arp usba qlc fctl  
fcp cpc random crypto stmf nfs lofs logindmux ptm ufs sppp nsctl ipc ]
> ::devbindings -q qlc
ffffff04e058f560 pci1077,2422, instance #0 (driver name: qlc)
ffffff04e058f2e0 pci1077,2422, instance #1 (driver name: qlc)

Quit mdb.
> $q
Remove the current binding, which in this example is qlc: the qlc
driver is actively bound to pci1077,2422. You must remove the existing
binding for qlc before you can add that binding to a new driver.

 Single quotes are required in this syntax.
# update_drv -d -i 'pci1077,2422' qlc
Cannot unload module: qlc
Will be unloaded upon reboot.

This message does not indicate an error.
The configuration files have been updated but the qlc driver remains
bound to the port until reboot.
Establish the new binding to qlt.
Single quotes are required in this syntax.
# update_drv -a -i 'pci1077,2422' qlt
  Warning: Driver (qlt) successfully added to system but failed to

This message does not indicate an error. The qlc driver remains bound
to the port, until reboot.
The qlt driver attaches when the system is rebooted.
Reboot the system to attach the new driver, and then recheck the driver bindings.
# reboot
# mdb -k

Loading modules: [ unix genunix specfs dtrace mac cpu.generic
cpu_ms.AuthenticAMD.15 uppc pcplusmp scsi_vhci zfs sd sockfs ip hook
neti sctp arp usba fctl stmf lofs fcip cpc random crypto nfs logindmux
ptm ufs sppp nsctl ipc ]
> ::devbindings -q qlt
ffffff04e058f560 pci1077,2422, instance #0 (driver name: qlt)
ffffff04e058f2e0 pci1077,2422, instance #1 (driver name: qlt)

Quit mdb.
 > $q

You can see that the port mode is Target

# fcinfo hba-port

        HBA Port WWN: 210000e08b91facd
        Port Mode: Target
        Port ID: 2
        OS Device Name: /dev/cfg/c16
        Manufacturer: QLogic Corp.
        Model: 375-3294-01
        Firmware Version: 04.04.01
        FCode/BIOS Version: BIOS: 1.4; fcode: 1.11; EFI: 1.0;
        Serial Number: 0402R00-0634175788
        Driver Name: qlc
        Driver Version: 20080617-2.30
        Type: L-port
        State: online
        Supported Speeds: 1Gb 2Gb 4Gb
        Current Speed: 4Gb
        Node WWN: 200000e08b91facd
        Max NPIV Ports: 63
        NPIV port list:

Verify that the target mode framework has access to the HBA ports.
# stmfadm list-target -v

Target: wwn.210100E08BB1FACD
Operational Status: Online
Provider Name : qlt
Alias : qlt1,0
Sessions : 0
Target: wwn.210000E08B91FACD
Operational Status: Online
Provider Name : qlt
Alias : qlt0,0
Sessions : 1
Initiator: wwn.210000E08B89F077
Alias: -
Logged in since: Thu Jul 2 12:02:59 2009

Now for the client setup:

On the client machine, verify that you can see the new logical unit:
# cfgadm -al
Ap_Id                          Type         Receptacle   Occupant     Condition
c0                             scsi-bus     connected    configured   unknown
c0::dsk/c0t0d0                 CD-ROM       connected    configured   unknown
c1                             scsi-bus     connected    configured   unknown
c1::dsk/c1t0d0                 disk         connected    configured   unknown
c1::dsk/c1t2d0                 disk         connected    configured   unknown
c1::dsk/c1t3d0                 disk         connected    configured   unknown
c2                             fc-private   connected    configured   unknown
c2::210000e08b91facd           disk         connected    configured   unknown
c3                             fc           connected    unconfigured unknown
usb0/1                         unknown      empty        unconfigured ok
usb0/2                         unknown      empty        unconfigured ok

You might need to rescan the SAN bus in order to discover the new logical unit:
# luxadm -e forcelip /dev/cfg/c2
# format
Searching for disks...
0. c1t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
1. c1t2d0 <FUJITSU-MAV2073RCSUN72G-0301-68.37GB>
2. c1t3d0 <FUJITSU-MAV2073RCSUN72G-0301-68.37GB>
3. c2t210000E08B91FACDd0 <SUN-COMSTAR-1.0 cyl 65533 alt 2 hd 16 sec 100>
Specify disk (enter its number):

You can see SUN-COMSTAR-1.0 in the disk properties.

Now you can build a storage pool on top of it:
# zpool create comstar-pool c2t210000E08B91FACDd0

Verify the pool creation:
comstar-pool 49.8G 114K 49.7G 0% ONLINE -

After the pool is created, the zfs utility can be used to create a ZFS volume.
#  zfs create -V 48g comstar-pool/comstar-vol1

For more information about COMSTAR, please check the COMSTAR project on OpenSolaris.

Sunday Jun 14, 2009

3rd annual Java Technology Day June 22nd, 2009

Lots of cool stuff: JavaFX, virtualization, cloud computing, MySQL.

I will have two sessions at the event:

1. Virtualization from the Desktop to the Enterprise

2. Sun in front of the cloud.

I am looking forward to seeing you.

Tuesday Apr 21, 2009

IGT Cloud Computing WG Meeting

Yesterday Sun hosted the Israeli Association of Grid Technology (IGT) event "IGT Cloud Computing WG Meeting" at the Sun office in Herzeliya. During the event, Nati Shalom (CTO, GigaSpaces), Moshe Kaplan (CEO, RockeTier), and Haim Yadid (performance expert, ScalableJ) presented various cloud computing technologies. There were 50 attendees from a wide breadth of technology firms.

For more information about using Sun's cloud, see http://www.sun.com/solutions/cloudcomputing/index.jsp .

Meeting agenda:

Auto-Scaling Your Existing Web Application
Nati Shalom, CTO, GigaSpaces


In this session, Nati will cover how to take a standard JEE web application and scale it out or down dynamically, without changes to the application code. Since most web applications are over-provisioned to meet infrequent peak loads, this is a dramatic change: it enables growing your application as needed, when needed, without paying for unutilized resources. Nati will discuss the challenges involved in dynamic scaling, such as ensuring the integrity and consistency of the application, keeping the load balancer in sync with the servers' changing locations, and maintaining affinity and high availability of session information with the load balancer. If time permits, Nati will show a live demo of a Web 2.0 app scaling dynamically on the Amazon cloud.


How can your very large databases work in the cloud computing world?
Moshe Kaplan, RockeTier, a performance expert and scale-out architect
Cloud computing is famous for its flexibility, dynamic nature, and capacity for infinite growth. However, infinite growth means very large databases with billions of records. This leads us to a paradox: how can weak servers support very large databases, which usually require several CPUs and dedicated hardware?
The Internet industry proved it can be done. These days many of the Internet giants, processing billions of events every day, are based on cloud computing architectures such as sharding. What is sharding? What kinds of sharding can you implement? What are the best practices?
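The core idea behind sharding can be sketched in a few lines: route each record to one of several smaller databases by hashing its key, so no single server has to hold the full billion-row table. A minimal Python illustration (the shard count and record layout here are made up):

```python
# Hash-based sharding sketch: each record lands on exactly one shard,
# and lookups only touch the shard that owns the key.
NUM_SHARDS = 4

def shard_for(key, num_shards=NUM_SHARDS):
    """Map a record key to a shard index deterministically."""
    return hash(key) % num_shards

# Billions of rows split into smaller per-shard stores:
shards = {i: [] for i in range(NUM_SHARDS)}
for user_id in range(10):
    shards[shard_for(user_id)].append({"user_id": user_id})

# A lookup only visits the one shard that owns the key:
owner = shard_for(7)
assert any(r["user_id"] == 7 for r in shards[owner])
```

Real deployments layer re-balancing, replication, and cross-shard queries on top of this routing step, which is where most of the "best practices" discussion lives.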

Utilizing the cloud for Performance Validation
Haim Yadid, Performance Expert, ScalableJ

Creating a loaded environment is crucial for software performance validation. Setting up such a simulated environment usually requires a great deal of hardware, which is then left unused during most of the development cycle. In this short session I will suggest utilizing cloud computing for performance validation. I will present a case study where the loaded environment used 12 machines on AWS for the duration of the test. This approach gives much more flexibility and reduces TCO dramatically. We will discuss the limitations of this approach and suggest means to address them.


Sunday Apr 19, 2009

LDoms with ZFS

Logical Domains offers a powerful and consistent methodology for creating virtualized server environments across the entire CoolThreads server range:

   * Create multiple independent virtual machines quickly and easily
     using the hypervisor built into every CoolThreads system.
   * Leverage advanced Solaris technologies such as ZFS cloning and
     snapshots to speed deployment and dramatically reduce disk
     capacity requirements.

In this entry I will demonstrate the integration between LDoms and ZFS.

Architecture layout

Downloading Logical Domains Manager and Solaris Security Toolkit

Download the Software

Download the zip file (LDoms_Manager-1_1.zip) from the Sun Software Download site.


Unzip the zip file.
# unzip LDoms_Manager-1_1.zip
Please read the README file for any prerequisites.
The installation script is part of the SUNWldm package and is in the Install subdirectory.

# cd LDoms_Manager-1_1

Run the install-ldm installation script with no options.
# Install/install-ldm

Select a security profile from this list:

a) Hardened Solaris configuration for LDoms (recommended)
b) Standard Solaris configuration
c) Your custom-defined Solaris security configuration profile

Enter a, b, or c [a]: a

Shut down and reboot your server
# /usr/sbin/shutdown -y -g0 -i6

Use the ldm list command to verify that the Logical Domains Manager is running
# /opt/SUNWldm/bin/ldm list

NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
primary          active     -n-c--  SP      32    16256M   0.0%  2d 23h 27m

Creating Default Services

You must create the following virtual default services initially to be able to use them later:
vdiskserver – virtual disk server
vswitch – virtual switch service
vconscon – virtual console concentrator service

Create a virtual disk server (vds) to allow importing virtual disks into a logical domain.
# ldm add-vds primary-vds0 primary

Create a virtual console concentrator (vcc) service for use by the virtual network terminal server daemon (vntsd)
# ldm add-vcc port-range=5000-5100 primary-vcc0 primary

Create a virtual switch service
(vsw) to enable networking between virtual network
(vnet) devices in logical domains
# ldm add-vsw net-dev=e1000g0 primary-vsw0 primary

Verify the services have been created by using the list-services subcommand.

# ldm list-services

Set Up the Control Domain

Assign cryptographic resources to the control domain.
# ldm set-mau 1 primary

Assign virtual CPUs to the control domain.
# ldm set-vcpu 4 primary

Assign memory to the control domain.
# ldm set-memory 4G primary

Add a logical domain machine configuration to the system controller (SC).
# ldm add-config initial

Verify that the configuration is ready to be used at the next reboot
# ldm list-config

initial [next poweron]

Reboot the server
# shutdown -y -g0 -i6

Enable the virtual network terminal server daemon, vntsd
# svcadm enable vntsd

Create the zpool

# zpool create ldompool c1t2d0 c1t3d0

# zfs create ldompool/goldimage

# zfs create -V 15g ldompool/goldimage/disk_image

Creating and Starting a Guest Domain

Create a logical domain.
# ldm add-domain goldldom

Add CPUs to the guest domain.
# ldm add-vcpu 4 goldldom

Add memory to the guest domain
# ldm add-memory 2G goldldom

Add a virtual network device to the guest domain.
# ldm add-vnet vnet1 primary-vsw0 goldldom

Specify the device to be exported by the virtual disk server as a virtual disk to the guest domain
# ldm add-vdsdev /dev/zvol/dsk/ldompool/goldimage/disk_image vol1@primary-vds0

Add a virtual disk to the guest domain.
# ldm add-vdisk vdisk0 vol1@primary-vds0 goldldom

Set auto-boot and boot-device variables for the guest domain
# ldm set-variable auto-boot\?=false goldldom
# ldm set-var boot-device=vdisk0 goldldom

Bind resources to the guest domain goldldom and then list the domain to verify that it is bound.
# ldm bind-domain goldldom
# ldm list-domain goldldom

NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
primary          active     -n-cv-  SP      4     4G       0.2%  15m
goldldom         bound      ------  5000    4     2G

Start the guest domain
# ldm start-domain goldldom
Connect to the console of a guest domain
# telnet 0 5000
Connected to 0.
Escape character is '^]'.
Connecting to console "goldldom" in group "goldldom" ....
Press ~? for control options ..

{0} ok

Jump-Start the goldldom

{0} ok boot net - install
We can log in to the new guest and verify that the root file system is ZFS:

# zpool list
NAME    SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
rpool  14.9G  1.72G  13.2G   11%  ONLINE  -
Restore the goldldom configuration to an "as-manufactured" state with the sys-unconfig command

# sys-unconfig
This program will unconfigure your system.  It will cause it
to revert to a "blank" system - it will not have a name or know
about other systems or networks.
This program will also halt the system.
Do you want to continue (y/n) y

Press ~. in order to return to the primary domain

Stop the guest domain
# ldm stop goldldom
Unbind the guest domain

# ldm unbind goldldom
Snapshot the disk image
# zfs snapshot ldompool/goldimage/disk_image@sysunconfig

Create new zfs file system for the new guest
# zfs create ldompool/domain1

Clone the goldldom disk image
# zfs clone ldompool/goldimage/disk_image@sysunconfig ldompool/domain1/disk_image

# zfs list
NAME                                        USED  AVAIL  REFER  MOUNTPOINT
ldompool                                   17.0G   117G    21K  /ldompool
ldompool/domain1                             18K   117G    18K  /ldompool/domain1
ldompool/domain1/disk_image                    0   117G  2.01G  -
ldompool/goldimage                         17.0G   117G    18K  /ldompool/goldimage
ldompool/goldimage/disk_image              17.0G   132G  2.01G  -
ldompool/goldimage/disk_image@sysunconfig      0      -  2.01G  -

Creating and Starting the Second Domain

# ldm add-domain domain1
# ldm add-vcpu 4 domain1
# ldm add-memory 2G domain1
# ldm add-vnet vnet1 primary-vsw0 domain1
# ldm add-vdsdev /dev/zvol/dsk/ldompool/domain1/disk_image vol2@primary-vds0
# ldm add-vdisk vdisk1 vol2@primary-vds0 domain1
# ldm set-var auto-boot\?=false domain1
# ldm set-var boot-device=vdisk1 domain1

# ldm bind-domain domain1
# ldm list-domain domain1
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
domain1          bound      ------  5001    8     2G

Start the domain
# ldm start-domain domain1

Connect to the console
# telnet 0 5001
{0} ok boot net -s

Copyright 1983-2008 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
Booting to milestone "milestone/single-user:default".
Configuring devices.
Using RPC Bootparams for network configuration information.
Attempting to configure interface vnet0...
Configured interface vnet0
Requesting System Maintenance Mode

# zpool import -f rpool
# zpool export rpool
# reboot

Answer the configuration questions

Log in to the new domain and verify that we have a ZFS file system:

# zpool list
NAME    SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
rpool  14.9G  1.72G  13.2G   11%  ONLINE  -

Monday Mar 16, 2009

Brief Technical Overview and Installation of Ganglia on Solaris

Ganglia is a scalable distributed monitoring system for high-performance computing systems such as clusters and Grids. It is based on a hierarchical design targeted at federations of clusters.

For this setup we will use the following software packages:

1. Ganglia - the core Ganglia package

2. Zlib - zlib compression libraries

3. Libgcc - low-level runtime library

4. Rrdtool - round Robin Database graphing tool

5. Apache web server with php support

You can get packages 1-3 from sunfreeware (depending on your architecture, x86 or SPARC).

Unzip and Install the packages

1. gzip -d ganglia-3.0.7-sol10-sparc-local.gz

pkgadd -d ./ganglia-3.0.7-sol10-sparc-local

2. gzip -d zlib-1.2.3-sol10-sparc-local.gz

pkgadd -d ./zlib-1.2.3-sol10-sparc-local

3. gzip -d libgcc-3.4.6-sol10-sparc-local.gz

pkgadd -d ./libgcc-3.4.6-sol10-sparc-local

4. You will need pkgutil from blastwave in order to install rrdtool software packages

/usr/sfw/bin/wget http://blastwave.network.com/csw/unstable/sparc/5.8/pkgutil-1.2.1,REV=2008.11.28-SunOS5.8-sparc-CSW.pkg.gz

gunzip pkgutil-1.2.1,REV=2008.11.28-SunOS5.8-sparc-CSW.pkg.gz

pkgadd -d pkgutil-1.2.1,REV=2008.11.28-SunOS5.8-sparc-CSW.pkg


Now you can install packages with all required dependencies with a single command:

/opt/csw/bin/pkgutil -i rrdtool
5. You will need to download Apache, PHP, and the core libraries from Cool Stack

Core libraries used by other packages

bzip2 -d CSKruntime_1.3.1_sparc.pkg.bz2

pkgadd -d ./CSKruntime_1.3.1_sparc.pkg

Apache 2.2.9,  PHP 5.2.6

bzip2 -d CSKamp_1.3.1_sparc.pkg.bz2

pkgadd -d ./CSKamp_1.3.1_sparc.pkg

The following packages are available:

1 CSKapache2 Apache httpd

(sparc) 2.2.9

2 CSKmysql32 MySQL 5.1.25 32bit

(sparc) 5.1.25

3 CSKphp5 PHP 5

(sparc) 5.2.6

Select package(s) you wish to process (or 'all' to process

all packages). (default: all) [?,??,q]:1,3

Select options 1 and 3.

Enable the web server service

svcadm enable svc:/network/http:apache22-csk

Verify it is working

svcs svc:/network/http:apache22-csk

STATE          STIME    FMRI

online         17:02:13 svc:/network/http:apache22-csk

Locate the Web server DocumentRoot

grep DocumentRoot /opt/coolstack/apache2/conf/httpd.conf

DocumentRoot "/opt/coolstack/apache2/htdocs"

Copy the Ganglia directory tree

cp -rp /usr/local/doc/ganglia/web  /opt/coolstack/apache2/htdocs/ganglia

Change the rrdtool path in /opt/coolstack/apache2/htdocs/ganglia/conf.php
from /usr/bin/rrdtool to /opt/csw/bin/rrdtool



Start the gmond daemon with the default configuration

/usr/local/sbin/gmond --default_config > /etc/gmond.conf

Edit /etc/gmond.conf and change name = "unspecified" to name = "grid1" (this is our grid name).
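The stanza to edit looks roughly like this (a sketch; the exact set of attributes and defaults varies between Ganglia versions):

```
cluster {
  name = "grid1"          # was "unspecified"
  owner = "unspecified"
  latlong = "unspecified"
  url = "unspecified"
}
```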

Verify that it has started:

ps -ef | grep gmond
nobody 3774 1 0 16:57:41 ? 0:55 /usr/local/sbin/gmond

In order to debug any problem, try:

/usr/local/sbin/gmond --debug=9

Build the directory for the rrd images

mkdir -p /var/lib/ganglia/rrds
chown -R nobody  /var/lib/ganglia/rrds
Add the following line to /etc/gmetad.conf

data_source "grid1"  localhost

Start the gmetad daemon
/usr/local/sbin/gmetad

Verify it:

 ps -ef | grep gmetad

nobody  4350     1   0 17:10:30 ?           0:24 /usr/local/sbin/gmetad

To debug any problem, try:

/usr/local/sbin/gmetad --debug=9

Point your browser to: http://server-name/ganglia

Wednesday Mar 04, 2009

Technical overview: GlassFish 3.0 on the Amazon cloud

The integrated GigaSpaces GlassFish solution is made up of the following components:


   SLA-driven deployment environment:

The SLA-driven deployment environment is responsible for hosting all services in the network. It essentially matches the application's requirements with the availability of resources over the network. It is comprised of the following components:

    • Grid Service Manager (GSM) – responsible for managing the application lifecycle and deployment.

    • Grid Service Container (GSC) – a lightweight container, essentially a wrapper on top of the Java process, that exposes the JVM to the GSM and provides a means to deploy and undeploy services dynamically.

    • Processing Unit (PU) – represents the application deployment unit. A Processing Unit is essentially an extension of the Spring application context that packages specific application components in a single package and uses dependency injection to mesh these components together. The Processing Unit is an atomic deployment artifact, and its composition is determined by the scaling and failover granularity of a given application; it is therefore the unit of scale and failover. There are a number of pre-defined Processing Unit types:

      • Web Processing Unit – responsible for managing Web Container instances and enabling them to run within an SLA-driven container environment. With a Web Processing Unit, one can deploy the Web Container as a group of services and apply SLAs or QoS semantics such as one-per-VM, one-per-machine, etc. In other words, one can easily use the Processing Unit SLA to determine how web containers are provisioned on the network. In our specific case, most of the GlassFish v3 Prelude integration takes place at this level.

      • Data Grid Processing Unit – a processing unit that wraps the GigaSpaces space instances. By wrapping the space instance, it adds the SLA capabilities available with each processing unit. One common SLA is to ensure that primary instances do not run on the same machine as the backup instances. It also determines the deployment topology (partitioned, replicated) as well as the scaling policy. The data grid includes another instance, not shown here, called the Mirror Service. The Mirror Service is responsible for making sure that all updates made on the Data Grid are passed reliably to the underlying database.


  • Load Balancer Agent – responsible for listening for web-container availability, adding instances to the Load Balancer list when a new container comes up, and removing them when a container goes away. The Load Balancer Agent is currently configured to work with the Apache Load Balancer but can easily be set up to work with any external Load Balancer.
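The agent's job can be sketched roughly as follows. This is hypothetical Python: explicit add/remove calls stand in for the GigaSpaces discovery events, and an Apache mod_proxy-style member list stands in for the real load-balancer configuration.

```python
# Sketch of the load-balancer agent pattern: keep the balancer's member
# list in sync with the set of live web containers. All names here are
# illustrative, not the actual GigaSpaces API.

class LoadBalancerAgent:
    def __init__(self):
        self.members = set()   # "host:port" of live web containers
        self._rewrite_config()

    def container_added(self, endpoint):
        self.members.add(endpoint)
        self._rewrite_config()

    def container_removed(self, endpoint):
        self.members.discard(endpoint)
        self._rewrite_config()

    def _rewrite_config(self):
        # Emit an Apache mod_proxy_balancer-style fragment for the
        # current member set; a real agent would write this to disk
        # and reload the load balancer.
        lines = ["<Proxy balancer://webfarm>"]
        for ep in sorted(self.members):
            lines.append("  BalancerMember http://%s" % ep)
        lines.append("</Proxy>")
        self.config = "\n".join(lines)

agent = LoadBalancerAgent()
agent.container_added("10.0.0.5:8080")
agent.container_added("10.0.0.6:8080")
agent.container_removed("10.0.0.5:8080")   # container failed or scaled in
```

The key property is that the member list is derived entirely from discovery events, so no human ever edits the balancer configuration by hand.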

How it works:

The following section provides a high-level description of how all the above components work together to provide high performance and scaling.


  • Deployment - The deployment of the application is done through the GigaSpaces processing-unit deployment command. Assigning a specific SLA as part of the deployment lets the GSM know how we wish to distribute the web instances over the network. For example, one could specify in the SLA that there should be only one instance per machine, and define the minimum number of instances that need to be running. If needed, one can add specific system requirements such as JVM version, OS type, etc. to the SLA. The deployment command points to a specific web application archive (WAR). The WAR file needs to include a configuration attribute in its META-INF configuration that instructs the deployer tool to use GlassFish v3 Prelude as the web container for this specific web application. Upon deployment, the GlassFish processing unit is started on the available GSC containers that match the SLA definitions. The GSC assigns a specific port to that container instance. When GlassFish starts, it loads the WAR file automatically and begins serving HTTP requests to that instance of the web application.

  • Connecting to the Load Balancer - Auto-scaling - The load balancer agent is deployed alongside each instance of the GSM. It listens for the availability of new web containers and ensures that available containers join the load balancer, by continuously updating the load-balancer configuration whenever such a change happens. This happens automatically through the GigaSpaces discovery protocol and does not require any human intervention.

  • Handling failure - Self-healing - If one of the web containers fails, the GSM automatically detects this and starts a new web container on one of the available GSC containers, if one exists. If there are not enough resources available, the GSM waits until such a resource becomes available. In cloud environments, the GSM initiates a new machine instance in such an event by calling the proper service on the cloud infrastructure.

  • Session replication - The HttpSession can be automatically backed up by the GigaSpaces In-Memory Data Grid (IMDG). In this case user applications do not need to change their code. When user data is stored in the HttpSession, that data gets stored in the underlying IMDG. When the HTTP request completes, that data is flushed to the shared data-grid servers.

  • Scaling the database tier - Beyond session-state caching, the web application can get a reference to the GigaSpaces IMDG and use it to store data in memory in order to reduce contention on the database. The GigaSpaces data grid automatically synchronizes updates with the database. For maximum performance, the update to the database is in most cases done asynchronously (write-behind). A built-in Hibernate plug-in handles the mapping between the in-memory data and the relational data model. You can read more on how this model handles failure, consistency, and aggregated queries here.
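The write-behind idea above can be sketched in a few lines of Python. All class and method names here are made up for illustration; the actual GigaSpaces API differs, and the real Mirror Service drains the queue on a background thread rather than on demand.

```python
# Write-behind sketch: writes hit the in-memory grid immediately and
# are queued; a mirror drains the queue to the database off the
# request path.
from collections import deque

class DataGrid:
    def __init__(self, database):
        self.cache = {}            # in-memory copy (fast path)
        self.pending = deque()     # updates not yet in the database
        self.database = database

    def put(self, key, value):
        self.cache[key] = value            # the request returns here
        self.pending.append((key, value))  # persisted later

    def mirror_flush(self):
        # Stand-in for the Mirror Service: push queued updates
        # to the backing database.
        while self.pending:
            key, value = self.pending.popleft()
            self.database[key] = value

db = {}
grid = DataGrid(db)
grid.put("session:42", {"user": "alice"})
assert grid.cache["session:42"]["user"] == "alice"  # visible at once
assert "session:42" not in db                       # database lags behind
grid.mirror_flush()
assert db["session:42"]["user"] == "alice"          # now persisted
```

The trade-off is the usual one for write-behind caches: reads and writes are fast because they never wait on the database, at the cost of a window where the database is slightly behind the grid.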

Tuesday Mar 03, 2009

GlassFish 3.0 on Amazon cloud

Here is how you can run the demo of GlassFish 3.0 on Amazon cloud. 

Where should I start? The best way to get started is to run a demo application and see for yourself how this integration works. To make it even simpler, we offer the demo on our new cloud offering. This will enable you to experience, in one click, how a full production-ready environment works, including full clustering, dynamic scaling, full high availability, and session replication. To run the demo on the cloud, follow these steps:

1. Download the GlassFish web scaling deployment file from here to your local machine.

2. Go to the mycloud page and get your free access code – this will enable you to access the cloud.



3. Select the stock-demo-2.3.0-gf-sun.xml file, then hit the Deploy button (you first need to save the attached file on your local machine). The system will start provisioning the web application on the cloud. This includes a machine for managing the cloud, a machine for the load balancer, and machines for the web and data-grid containers. After approximately 3 minutes the application will be deployed completely. At this point you should see a “running” link on the load-balancer machine. Click on this link to open your web-client application.


4. Test auto-scaling – click multiple times on the “running” link to open more clients. This will enable us to increase the load (requests/sec) on the system. As soon as the requests/sec grow beyond a certain threshold, you’ll see new machines being provisioned. After approximately two minutes the machine will be running, and a new web container will be auto-deployed onto that machine. This new web container will be linked automatically with the load balancer, and the load balancer in turn will spread the load to this new machine as well. This will reduce the load on each of the servers.

5. Test self-healing – you can now kill one of the machines and see how your web client behaves. You should see that even though the machine was killed, the client was hardly affected, and the system scaled itself down automatically.

Seeing what’s going on behind the scenes:

All this may seem like magic to you. If you want to access the machines and watch the web containers, the data-grid instances, and the machines, as well as the real-time statistics, you can click on the Manage button. This will open up a management console that runs on the cloud, accessed through the web. With this tool you can view all the components of the system. You can even query the data using the SQL browser and view the data as it enters the system. In addition, you can add more services or relocate services with a simple mouse click or a drag-and-drop action.


For more information regarding using Glassfish see http://www.sun.com/software/products/glassfish_portfolio/


OpenSolaris on Amazon EC2

Yesterday Sun hosted the Israeli Association of Grid Technology (IGT) event "Amazon AWS Hands-on workshop" at the Sun office in Herzeliya. During the event, Simone Brunozzi, Amazon Web Services Evangelist, demonstrated Amazon's EC2 and S3 using the AWS console. There were 40 attendees from a wide breadth of technology firms.

For more information regarding using OpenSolaris on amazon EC2 see http://www.sun.com/third-party/global/amazon/index.jsp.


This blog covers cloud computing, big data and virtualization technologies

