Thursday Aug 02, 2012

Solaris 11 Firewall


Oracle Solaris 11 includes a software firewall (IP Filter).
In the cloud, this means that the need for expensive network hardware can be reduced, while changes to network configurations can be made quickly and easily.
You can use the following script to manage the Solaris 11 firewall.
The script runs on Solaris 11 (global zone) and on Solaris 11 zones with an exclusive IP stack (the default).

Script usage and examples:

Enable and start the firewall service

# fw.ksh start

This enables and starts the firewall service and loads the firewall rules from /etc/ipf/ipf.conf.
For more firewall rule examples, see the sample rule set below.
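A minimal /etc/ipf/ipf.conf sketch is shown here; the net0 interface name and the allowed port are assumptions, so substitute your own interface and services:

# more /etc/ipf/ipf.conf
# block everything inbound by default
block in all
# allow inbound SSH on net0 and keep state for the session
pass in quick on net0 proto tcp from any to any port = 22 keep state
# allow all outbound traffic and keep state
pass out quick on net0 all keep state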

Disable and stop the firewall service

# fw.ksh stop

Restart the firewall service after modifying the rules in /etc/ipf/ipf.conf.

# fw.ksh restart

Check the firewall status

# fw.ksh status

The script prints the firewall status (online or offline) and the active rules.

The script follows. Copy the content and paste it into a file named fw.ksh (using a text editor such as gedit) on Oracle Solaris 11.

# more fw.ksh


#!/bin/ksh
#
# FILENAME:    fw.ksh
# Manage the Solaris 11 firewall (IP Filter) service
# Usage:
# fw.ksh {start|stop|restart|status}

case "$1" in
 start)
        /usr/sbin/svcadm enable svc:/network/ipfilter:default

        # Wait until the service is online (or in maintenance) before loading the rules
        serviceStatus=`/usr/bin/svcs -H -o STATE svc:/network/ipfilter:default`
        while [[ $serviceStatus != online && $serviceStatus != maintenance ]] ; do
            sleep 5
            serviceStatus=`/usr/bin/svcs -H -o STATE svc:/network/ipfilter:default`
        done

        # Flush any active rules and load the rule set from /etc/ipf/ipf.conf
        /usr/sbin/ipf -Fa -f /etc/ipf/ipf.conf
   ;;
 restart)
        $0 stop
        $0 start
   ;;
 stop)
        /usr/sbin/svcadm disable svc:/network/ipfilter:default
   ;;
 status)
        serviceStatus=`/usr/bin/svcs -H -o STATE svc:/network/ipfilter:default`

        if [[ $serviceStatus != "online" ]] ; then
            /usr/bin/echo "The Firewall service is offline"
        else
            /usr/bin/echo "\nThe Firewall service is online\n"
            # Print the active inbound and outbound rules
            /usr/sbin/ipfstat -io
        fi
   ;;
 *)
        /usr/bin/echo "Usage: $0 {start|stop|restart|status}"
        exit 1
   ;;
esac

exit 0

Thursday Jul 05, 2012

LDoms with Solaris 11


Oracle VM Server for SPARC (LDoms) release 2.2 came out on May 24. You can get the software, see the release notes, reference manual, and admin guide here on the Oracle VM for SPARC page.

Oracle VM Server for SPARC enables you to create multiple virtual systems on a single physical system. Each virtual system is called a logical domain and runs its own instance of Oracle Solaris 10 or Oracle Solaris 11.

The version of the Oracle Solaris OS software that runs on a guest domain is independent of the Oracle Solaris OS version that runs on the primary domain. So, if you run the Oracle Solaris 10 OS in the primary domain, you can still run the Oracle Solaris 11 OS in a guest domain, and if you run the Oracle Solaris 11 OS in the primary domain, you can still run the Oracle Solaris 10 OS in a guest domain.


In addition, starting with the Oracle VM Server for SPARC 2.2 release, you can migrate a guest domain even if the source and target machines have different processor types.




For example, you can migrate a guest domain from a system with UltraSPARC T2+ or SPARC T3 CPUs to a system with a SPARC T4 CPU. To enable cross-CPU migration, the guest domain on both the source and target systems must run Solaris 11, and you need to change the cpu-arch property value of the guest domain on the source system.
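A minimal sketch of changing that property is shown below; the domain name ldg1 is a placeholder, and the domain must be stopped and unbound before the property can be modified:

primary# ldm stop-domain ldg1
primary# ldm unbind-domain ldg1
primary# ldm set-domain cpu-arch=generic ldg1
primary# ldm bind-domain ldg1
primary# ldm start-domain ldg1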


For more information about Oracle VM Server for SPARC (LDoms) with Solaris 11 and cross-CPU migration, refer to the following white paper.



Monday Jun 04, 2012

Oracle Solaris Zones Physical to virtual (P2V)

Introduction
This document describes the process of creating a Solaris 10 image from a physical system and migrating it into a virtualized operating system environment using the Oracle Solaris 10 Zones Physical-to-Virtual (P2V) capability.
Using an example and various scenarios, this paper describes how to combine the
Oracle Solaris 10 Zones Physical-to-Virtual (P2V) capability with other Oracle Solaris features: optimizing performance with Solaris 10 resource management, advanced storage management with Solaris ZFS, and improved operating system visibility with Solaris DTrace.


The most common use for this tool is consolidating existing systems onto virtualization-enabled platforms. The Physical-to-Virtual (P2V) capability can also be used for other tasks, for example backing up physical systems and moving them into a virtualized operating system environment hosted on a Disaster
Recovery (DR) site, or building an Oracle Solaris 10 image repository with various configurations and different software packages in order to reduce provisioning time.

Oracle Solaris Zones
Oracle Solaris Zones is a virtualization and partitioning technology supported on Oracle Sun servers powered by SPARC and Intel processors.
This technology provides an isolated and secure environment for running applications.
A zone is a virtualized operating system environment created within a single instance of the Solaris 10 Operating System.
Each virtual system is called a zone and runs a unique and distinct copy of the Solaris 10 operating system.

Oracle Solaris Zones Physical-to-Virtual (P2V)
Physical-to-Virtual (P2V) is a feature introduced in Solaris 10 9/10. It provides the ability to build a Solaris 10 image from a physical
system and migrate it into a virtualized operating system environment.
There are three main steps when using this tool:

1. Image creation on the source system; this image includes the operating system and, optionally, the software that we want to include in the image.
2. Preparing the target system by configuring a new zone that will host the new image.
3. Image installation on the target system using the image we created on step 1.

The host, where the image is built, is referred to as the source system and the host, where the
image is installed, is referred to as the target system.



Benefits of Oracle Solaris Zones Physical-to-Virtual (P2V)
Here are some benefits of this new feature:


  •  Simple - easy build process using Oracle Solaris 10 built-in commands.

  •  Robust - based on Oracle Solaris Zones, a robust and well-known virtualization technology.

  •  Flexible - supports migration from V-series servers to T-series or M-series systems. For the latest server information, refer to the Sun Servers web page.

    Prerequisites
    The minimum Solaris version on the target system should be Solaris 10 9/10.
    Refer to the latest Administration Guide for Oracle Solaris 10 for a complete procedure on how to
    download and install Oracle Solaris.



  • NOTE: If the source system used to build the image runs an older Solaris version than the target
    system, the operating system will be upgraded to Solaris 10 9/10 during the process
    (update on attach).

    Creating the image used to distribute the software
    We will create the image on the source machine. We can create the image on the local file system and then transfer it to the target machine,
    or build it on an NFS shared storage and
    mount the NFS file system from the target machine.
    Optionally, before creating the image, complete the installation of any software that we want to include in the Solaris 10 image.
    An image is created by using the flarcreate command:
    Source # flarcreate -S -n s10-system -L cpio /var/tmp/solaris_10_up9.flar
    The command does the following:



  •  -S specifies that we skip the disk space check and do not write archive size data to the archive (faster).

  •  -n specifies the image name.

  •  -L specifies the archive format (in this case, cpio).

    Optionally, we can add descriptions to the archive identification section, which can help to identify the archive later.
    Source # flarcreate -S -n s10-system -e "Oracle Solaris with Oracle DB
    10.2.0.4" -a "oracle" -L cpio /var/tmp/solaris_10_up9.flar
    You can see example of the archive identification section in Appendix A: archive identification section.
    We can compress the flar image using the gzip command, or by adding the -c option to the flarcreate command:
    Source # gzip /var/tmp/solaris_10_up9.flar
    An MD5 checksum can be created for the image in order to detect data tampering:
    Source # digest -v -a md5 /var/tmp/solaris_10_up9.flar


    Moving the image to the target system
    If we created the image on the local file system, we need to transfer the flar archive from the source machine to the target machine.

    Source # scp /var/tmp/solaris_10_up9.flar target:/var/tmp
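    After the copy completes, you can optionally re-run the digest command on the target and compare the result with the value computed on the source to confirm the archive arrived intact:

    Target # digest -v -a md5 /var/tmp/solaris_10_up9.flar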
    Configuring the Zone on the target system
    After copying the software to the target machine, we need to configure a new zone to host the new image.
    To install the new zone on the target machine, first we need to configure the zone (for the full zone creation options see the following link: http://docs.oracle.com/cd/E18752_01/html/817-1592/index.html )

    ZFS integration
    A flash archive can be created on a system that is running a UFS or a ZFS root file system.
    NOTE: If you create a Solaris Flash archive of a Solaris 10 system that has a ZFS root, then by
    default, the flar will actually be a ZFS send stream, which can be used to recreate the root pool.
    This image cannot be used to install a zone. You must create the flar with an explicit cpio or pax
    archive when the system has a ZFS root.
    Use the flarcreate command with the -L archiver option, specifying cpio or pax as the
    method to archive the files. (For example, see Step 1 in the previous section).
    Optionally, on the target system you can create the zone root folder on a ZFS file system in
    order to benefit from the ZFS features (clones, snapshots, etc...).

    Target # zpool create zones c2t2d0


    Set the required permissions on the zone root folder:

    Target # chmod 700 /zones
    Target # zonecfg -z solaris10-up9-zone
    solaris10-up9-zone: No such zone configured
    Use 'create' to begin configuring a new zone.
    zonecfg:solaris10-up9-zone> create
    zonecfg:solaris10-up9-zone> set zonepath=/zones
    zonecfg:solaris10-up9-zone> set autoboot=true
    zonecfg:solaris10-up9-zone> add net
    zonecfg:solaris10-up9-zone:net> set address=192.168.0.1
    zonecfg:solaris10-up9-zone:net> set physical=nxge0
    zonecfg:solaris10-up9-zone:net> end
    zonecfg:solaris10-up9-zone> verify
    zonecfg:solaris10-up9-zone> commit
    zonecfg:solaris10-up9-zone> exit

    Installing the Zone on the target system using the image
    Install the configured zone solaris10-up9-zone by using the zoneadm command with the install -a option and the path to the archive.
    The following example installs the archive and removes the system identity of the zone (sys-unconfig):
    Target # zoneadm -z solaris10-up9-zone install -u -a
    /var/tmp/solaris_10_up9.flar
    Log File: /var/tmp/solaris10-up9-zone.install_log.AJaGve
    Installing: This may take several minutes...
    The following example shows how to preserve the system identity instead:
    Target # zoneadm -z solaris10-up9-zone install -p -a /var/tmp/solaris_10_up9.flar
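    Once the zone is installed, one hedged way to benefit from the ZFS features mentioned earlier is to snapshot the freshly installed zone root before any further customization, so it can later be rolled back or cloned; the snapshot name is arbitrary:

    Target # zfs snapshot -r zones@fresh-install
    Target # zfs list -t snapshot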


    Resource management


    Some applications are sensitive to the number of CPUs on the target Zone. You need to
    match the number of CPUs on the Zone using the zonecfg command:
    zonecfg:solaris10-up9-zone> add dedicated-cpu
    zonecfg:solaris10-up9-zone:dedicated-cpu> set ncpus=16
    zonecfg:solaris10-up9-zone:dedicated-cpu> end


    DTrace integration
    Some applications might need to be analyzed using DTrace in the target zone; you can
    add DTrace support to the zone using the zonecfg command:
    zonecfg:solaris10-up9-zone> set limitpriv="default,dtrace_proc,dtrace_user"
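    After the zone is booted later in this procedure, a quick way to confirm the privileges took effect (a sketch, assuming the zone name used above) is to check the privilege sets of a shell inside the zone:

    Target # zlogin solaris10-up9-zone 'ppriv $$ | grep dtrace'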


    Exclusive IP stack

    An Oracle Solaris Container running in Oracle Solaris 10 can have a
    shared IP stack with the global zone, or it can have an exclusive IP
    stack (which was released in Oracle Solaris 10 8/07). An exclusive IP
    stack provides a complete, tunable, manageable and independent
    networking stack to each zone. A zone with an exclusive IP stack can
    configure Scalable TCP (STCP), IP routing, IP multipathing, or IPsec.
    For an example of how to configure an Oracle Solaris zone with an
    exclusive IP stack, see the following:

    zonecfg:solaris10-up9-zone> set ip-type=exclusive
    zonecfg:solaris10-up9-zone> add net
    zonecfg:solaris10-up9-zone:net> set physical=nxge0
    zonecfg:solaris10-up9-zone:net> end

    When the installation completes, use the zoneadm list -i -v options to list the installed
    zones and verify the status.
    Target # zoneadm list -i -v
    See that the new Zone status is installed
      ID NAME                 STATUS     PATH      BRAND    IP
       0 global               running    /         native   shared
       - solaris10-up9-zone   installed  /zones    native   shared
    Now boot the Zone
    Target # zoneadm -z solaris10-up9-zone boot
    We need to log in to the zone in order to complete the zone setup, or insert a sysidcfg file before
    booting the zone for the first time; see the example sysidcfg file in Appendix B: sysidcfg file
    section.
    Target # zlogin -C solaris10-up9-zone

    Troubleshooting
    If an installation fails, review the log file. On success, the log file is in /var/log inside the zone. On
    failure, the log file is in /var/tmp in the global zone.
    If a zone installation is interrupted or fails, the zone is left in the incomplete state. Use uninstall -F
    to reset the zone to the configured state.
    Target # zoneadm -z solaris10-up9-zone uninstall -F
    Target # zonecfg -z solaris10-up9-zone delete -F
    Conclusion
    The Oracle Solaris Zones P2V tool provides the flexibility to build pre-configured
    images with different software configurations for faster deployment and server consolidation.
    In this document, I demonstrated how to build and install images and how to integrate them with other Oracle Solaris features such as ZFS and DTrace.

    Appendix A: archive identification section
    We can use the head -n 20 /var/tmp/solaris_10_up9.flar command in order to access the
    identification section that contains the detailed description.
    Target # head -n 20 /var/tmp/solaris_10_up9.flar
    FlAsH-aRcHiVe-2.0
    section_begin=identification
    archive_id=e4469ee97c3f30699d608b20a36011be
    files_archived_method=cpio
    creation_date=20100901160827
    creation_master=mdet5140-1
    content_name=s10-system
    creation_node=mdet5140-1
    creation_hardware_class=sun4v
    creation_platform=SUNW,T5140
    creation_processor=sparc
    creation_release=5.10
    creation_os_name=SunOS
    creation_os_version=Generic_142909-16
    files_compressed_method=none
    content_architectures=sun4v
    type=FULL
    section_end=identification
    section_begin=predeployment
    begin 755 predeployment.cpio.Z

    Appendix B: sysidcfg file section
    Target # cat sysidcfg
    system_locale=C
    timezone=US/Pacific
    terminal=xterms
    security_policy=NONE
    root_password=HsABA7Dt/0sXX
    timeserver=localhost
    name_service=NONE
    network_interface=primary {hostname=solaris10-up9-zone
    netmask=255.255.255.0
    protocol_ipv6=no
    default_route=192.168.0.1}
    nfs4_domain=dynamic

    We need to copy this file before booting the zone
    Target # cp sysidcfg /zones/solaris10-up9-zone/root/etc/


    Wednesday Nov 25, 2009

    Solaris Zones migration with ZFS

    ABSTRACT
    In this entry I will demonstrate how to migrate a Solaris Zone running
    on a T5220 server to a new T5220 server, using ZFS as the file system for
    this zone.
    Introduction to Solaris Zones

    Solaris Zones provide a new isolation primitive for the Solaris OS,
    which is secure, flexible, scalable and lightweight. Virtualized OS
    services  look like different Solaris instances. Together with the
    existing Solaris Resource management framework, Solaris Zones forms the
    basis of Solaris Containers.

    Introduction to ZFS


    ZFS is a new kind of file system that provides simple administration,
    transactional semantics, end-to-end data integrity, and immense
    scalability.
    Architecture layout :





    Prerequisites :

    The Global Zone on the target system must be running the same Solaris release as the original host.

    To ensure that the zone will run properly, the target system must have
    the same versions of the following required operating system packages
    and patches as those installed on the original host.


    Packages that deliver files under an inherit-pkg-dir resource
    Packages where SUNW_PKG_ALLZONES=true
    Other packages and patches, such as those for third-party products, can be different.

    Note for Solaris 10 10/08: If the new host has later versions of the
    zone-dependent packages and their associated patches, using zoneadm
    attach with the -u option updates those packages within the zone to
    match the new host. The update on attach software looks at the zone
    that is being migrated and determines which packages must be updated to
    match the new host. Only those packages are updated. The rest of the
    packages, and their associated patches, can vary from zone to zone.
    This option also enables automatic migration between machine classes,
    such as from sun4u to sun4v.


    Create the ZFS pool for the zone
    # zpool create zones c2t5d2
    # zpool list

    NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
    zones   298G    94K   298G     0%  ONLINE  -

    Create a ZFS file system for the zone
    # zfs create zones/zone1
    # zfs list

    NAME          USED  AVAIL  REFER  MOUNTPOINT
    zones         130K   293G    18K  /zones
    zones/zone1    18K   293G    18K  /zones/zone1

    Change the file system permission
    # chmod 700 /zones/zone1

    Configure the zone
    # zonecfg -z zone1

    zone1: No such zone configured
    Use 'create' to begin configuring a new zone.

    zonecfg:zone1> create -b
    zonecfg:zone1> set autoboot=true
    zonecfg:zone1> set zonepath=/zones/zone1
    zonecfg:zone1> add net
    zonecfg:zone1:net> set address=192.168.1.1
    zonecfg:zone1:net> set physical=e1000g0
    zonecfg:zone1:net> end
    zonecfg:zone1> verify
    zonecfg:zone1> commit
    zonecfg:zone1> exit
    Install the new Zone
    # zoneadm -z zone1 install

    Boot the new zone
    # zoneadm -z zone1 boot

    Login to the zone
    # zlogin -C zone1

    Answer all the setup questions

    How to Validate a Zone Migration Before the Migration Is Performed

    Generate the manifest on a source host named zone1 and pipe the output
    to a remote command that will immediately validate the target host:
    # zoneadm -z zone1 detach -n | ssh targethost zoneadm -z zone1 attach -n -

    Start the migration process

    Halt the zone to be moved, zone1 in this procedure.
    # zoneadm -z zone1 halt

    Create a snapshot of this zone in order to save its original state
    # zfs snapshot zones/zone1@snap
    # zfs list

    NAME               USED  AVAIL  REFER  MOUNTPOINT
    zones             4.13G   289G    19K  /zones
    zones/zone1       4.13G   289G  4.13G  /zones/zone1
    zones/zone1@snap      0      -  4.13G  -
    Detach the zone.
    # zoneadm -z zone1 detach

    Export the ZFS pool using the zpool export command
    # zpool export zones
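    If anything goes wrong later on the target, the snapshot taken above gives you a hedged back-out path on the source machine (a sketch, assuming the storage is reconnected to the source host):

    # zpool import zones
    # zfs rollback zones/zone1@snap
    # zoneadm -z zone1 attach
    # zoneadm -z zone1 boot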


    On the target machine
     Connect the storage to the machine and then import the ZFS pool on the target machine
    # zpool import zones
    # zpool list

    NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
    zones   298G  4.13G   294G     1%  ONLINE  -
    # zfs list

    NAME               USED  AVAIL  REFER  MOUNTPOINT
    zones             4.13G   289G    19K  /zones
    zones/zone1       4.13G   289G  4.13G  /zones/zone1
    zones/zone1@snap  2.94M      -  4.13G  -

    On the new host, configure the zone.
    # zonecfg -z zone1

    You will see the following system message:

    zone1: No such zone configured

    Use 'create' to begin configuring a new zone.

    To create the zone zone1 on the new host, use the zonecfg command with the -a option and the zonepath on the new host.

    zonecfg:zone1> create -a /zones/zone1
    Commit the configuration and exit.
    zonecfg:zone1> commit
    zonecfg:zone1> exit
    Attach the zone with a validation check.
    # zoneadm -z zone1 attach

    The system administrator is notified of required actions to be taken if either or both of the following conditions are present:

    Required packages and patches are not present on the new machine.

    The software levels are different between machines.

    Note for Solaris 10 10/08: Attach the zone with a validation check and
    update the zone to match a host running later versions of the dependent
    packages or having a different machine class upon attach.
    # zoneadm -z zone1 attach -u

    Solaris 10 5/09 and later: Also use the -b option to back out specified patches, either official or IDR, during the attach.
    # zoneadm -z zone1 attach -u -b IDR246802-01 -b 123456-08

    Note that you can use the -b option independently of the -u option.

    Boot the zone
    # zoneadm -z zone1 boot

    Login to the new zone
    # zlogin -C zone1

    [Connected to zone 'zone1' console]

    Hostname: zone1

    The whole process took approximately five minutes.

    For more information about Solaris ZFS and Zones

    Sunday Aug 02, 2009

    Logical Domains Physical-to-Virtual (P2V) Migration


    The Logical domains P2V migration tool automatically converts an existing physical system
    to a virtual system that runs in a logical domain on a chip multithreading (CMT) system.
    The source system can be any of the following:



        •  Any sun4u SPARC system that runs at least the Solaris 8 Operating System
        •  Any sun4v system that runs the Solaris 10 OS, but does not run in a logical domain


    In this entry I will demonstrate how to use the Logical Domains P2V migration tool to migrate Solaris running on a V440 server (physical)
    into a guest domain running on a T5220 server (virtual).
    Architecture layout :



    Before you can run the Logical Domains P2V Migration Tool, ensure that the following are true:



    •     Target system runs at least Logical Domains 1.1 on one of the following:

    •      Solaris 10 10/08 OS

    •     Solaris 10 5/08 OS with the appropriate Logical Domains 1.1 patches

    •     Guest domains run at least the Solaris 10 5/08 OS

    •     Source system runs at least the Solaris 8 OS


    In addition to these prerequisites, configure an NFS file system to be shared by both the source
    and target systems. This file system should be writable by root. However, if a shared file system
    is not available, use a local file system that is large enough to hold a file system dump of the
    source system on both the source and target systems.

    Limitations
    Version 1.0 of the Logical Domains P2V Migration Tool has the following limitations:



    •      Only UFS file systems are supported.

    •      Each guest domain can have only a single virtual switch and virtual disk

    •      The flash archiving method silently ignores excluded file systems.

    The conversion from a physical system to a virtual system is performed in the following phases:


    • Collection phase. Runs on the physical source system. The collect step creates a file system image of the source system based on the configuration information that it collects about the source system.

    • Preparation phase. Runs on the control domain of the target system. The prepare step creates the logical domain on the target system based on the configuration information collected in the collect phase.

    • Conversion phase. Runs on the control domain of the target system. In the convert phase, the created logical domain is converted into a logical domain that runs the Solaris 10 OS by using the standard Solaris upgrade process.


      Collection phase
      On the target machine T5220

      Prepare the NFS server that will hold the file system dump of the source system for both the source and target systems.
      In this use case I will use the target machine (T5220) as the NFS server
      # mkdir /p2v
      # share -F nfs -o root=v440 /p2v

      Verify the NFS share
      # share
      -  /p2v  root=v440  ""
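      To make the share persist across reboots, you could also add the same share line to /etc/dfs/dfstab (a hedged aside; the one-off share command above is sufficient for a single migration):

      # echo 'share -F nfs -o root=v440 /p2v' >> /etc/dfs/dfstab
      # shareall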
      Install the Logical Domains P2V Migration Tool.
      Go to the Logical Domains download page at http://www.sun.com/servers/coolthreads/ldoms/get.jsp.
      Download the P2V software package, SUNWldmp2v
      Use the pkgadd command to install the SUNWldmp2v package
      # pkgadd -d . SUNWldmp2v

      Create the /etc/ldmp2v.conf file; we will use it later:
      # cat /etc/ldmp2v.conf

      VSW="primary-vsw0"
      VDS="primary-vds0"
      VCC="primary-vcc0"
      BACKEND_PREFIX="/ldoms/disks/"
      BACKEND_TYPE="file"
      BACKEND_SPARSE="no"
      BOOT_TIMEOUT=10
      On the source machine V440
      Install the Logical Domains P2V Migration Tool
      # pkgadd -d . SUNWldmp2v
      Mount the NFS share
      # mkdir /p2v
      # mount t5220:/p2v /p2v
      Run the collection command
      # /usr/sbin/ldmp2v collect -d /p2v/v440
      Collecting system configuration ...
      Archiving file systems ...
      DUMP: Date of this level 0 dump: August 2, 2009 4:11:56 PM IDT
      DUMP: Date of last level 0 dump: the epoch
      DUMP: Dumping /dev/rdsk/c1t0d0s0 (mdev440-2:/) to /p2v/v440/ufsdump.0.
      The collection phase took 5 minutes for a 4.6 GB dump file.
      Preparation phase
      On the target machine T5220
      Run the preparation command
      We will keep the source machine (V440) MAC address.
      # /usr/sbin/ldmp2v prepare -d /p2v/v440 -o keep-mac v440
       Creating vdisks ...
      Creating file systems ...
      Populating file systems ...
      The preparation phase took 26 minutes


      We can see that for each physical CPU on the V440 server, the LDoms P2V tool creates four VCPUs in the guest domain, and it assigns the same amount of memory that the physical system has.
      # ldm list -l v440


      NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
      v440             inactive   ------          4     8G

      CONTROL
          failure-policy=ignore

      DEPENDENCY
          master=

      NETWORK
          NAME             SERVICE                     DEVICE     MAC               MODE   PVID VID                  MTU
          vnet0            primary-vsw0                           00:03:ba:c4:d2:9d        1

      DISK
          NAME             VOLUME                      TOUT DEVICE  SERVER         MPGROUP
          disk0            v440-vol0@primary-vds0

      Conversion Phase
      Before starting the conversion phase, shut down the source server (V440) in order to avoid an IP address conflict.
      On the V440 server
      # poweroff
      On the jumpstart server

      You can use the Custom JumpStart feature to perform a completely hands-off conversion.
      This feature requires that you create and configure the appropriate sysidcfg and profile files for the client on the JumpStart server.
      The profile should consist of the following lines:

      install_type upgrade
      root_device c0d0s0



      The sysidcfg file :

      name_service=NONE
      root_password=uQkoXlMLCsZhI
      system_locale=C
      timeserver=localhost
      timezone=Europe/Amsterdam
      terminal=vt100
      security_policy=NONE
      nfs4_domain=dynamic
      network_interface=PRIMARY {netmask=255.255.255.192
      default_route=none
      protocol_ipv6=no}
      On the target server T5220
      # ldmp2v convert -j -n vnet0 -d /p2v/v440 v440
      Testing original system status ...
      LDom v440 started
      Waiting for Solaris to come up ...
      Using Custom JumpStart
      Trying 0.0.0.0...
      Connected to 0.
      Escape character is '^]'.

      Connecting to console "v440" in group "v440" ....
      Press ~? for control options ..
      SunOS Release 5.10 Version Generic_139555-08 64-bit
      Copyright 1983-2009 Sun Microsystems, Inc.  All rights reserved.
      Use is subject to license terms.



      For information about the P2V migration tool, see the ldmp2v(1M) man page.





    Thursday Jul 02, 2009

    Storage virtualization with COMSTAR and ZFS


    COMSTAR is a software framework that enables you to turn any
    OpenSolaris host into a SCSI target that can be accessed over the
    network by initiator hosts. COMSTAR breaks down the huge task of
    handling a SCSI target subsystem into independent functional modules.
    These modules are then glued together by the SCSI Target Mode Framework (STMF).

    COMSTAR features include:

        •  Extensive LUN Masking and mapping functions
        •  Multipathing across different transport protocols
        •  Multiple parallel transfers per SCSI command
        •  Scalable design
        •  Compatible with generic HBAs


    COMSTAR is integrated into the latest OpenSolaris releases.

    In this entry I will demonstrate the integration between COMSTAR and ZFS


    Architecture layout :




    You can install all the appropriate COMSTAR packages

    # pkg install storage-server
    On a newly installed OpenSolaris system, the STMF service is disabled by default.

    You must complete this task to enable the STMF service.
    View the existing state of the service
    # svcs stmf
     disabled 15:58:17 svc:/system/stmf:default

    Enable the stmf service
    # svcadm enable stmf
    Verify that the service is active.
    # svcs stmf
     online 15:59:53 svc:/system/stmf:default

    Create a RAID-Z storage pool.
    The server has six controllers each with eight disks and I have
    built the storage pool to spread I/O evenly and to enable me to build 8
     RAID-Z stripes of equal length.
    # zpool create -f tank \
    raidz c0t0d0 c1t0d0 c4t0d0 c6t0d0 c7t0d0 \
    raidz c1t1d0 c4t1d0 c5t1d0 c6t1d0 c7t1d0 \
    raidz c0t2d0 c4t2d0 c5t2d0 c6t2d0 c7t2d0 \
    raidz c0t3d0 c1t3d0 c5t3d0 c6t3d0 c7t3d0 \
    raidz c0t4d0 c1t4d0 c4t4d0 c6t4d0 c7t4d0 \
    raidz c0t5d0 c1t5d0 c4t5d0 c5t5d0 c7t5d0 \
    raidz c0t6d0 c1t6d0 c4t6d0 c5t6d0 c6t6d0 \
    raidz c0t7d0 c1t7d0 c4t7d0 c6t7d0 c7t7d0 \
    spare c0t1d0 c1t2d0 c4t3d0 c6t5d0 c7t6d0 c5t7d0


    After the pool is created, the zfs utility can be used to create a 50 GB ZFS volume.
    # zfs create -V 50g tank/comstar-vol1

    Create a logical unit using the volume.
    # sbdadm create-lu /dev/zvol/rdsk/tank/comstar-vol1
    Created the following logical unit :

    GUID                              DATA SIZE            SOURCE
    --------------------------------  -------------------  ----------------
    600144f07bb2ca0000004a4c5eda0001  53687025664          /dev/zvol/rdsk/tank/comstar-vol1

    Verify the creation of the logical unit and obtain the Global Unique Identification (GUID) number for the logical unit.
    # sbdadm list-lu
    Found 1 LU(s)
    GUID                              DATA SIZE            SOURCE
    --------------------------------  -------------------  ----------------
    600144f07bb2ca0000004a4c5eda0001  53687025664          /dev/zvol/rdsk/tank/comstar-vol1

    This procedure makes a logical unit available to all initiator hosts on a storage network.
    Add a view for the logical unit.

    # stmfadm add-view GUID_number
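    For example, using the GUID reported by sbdadm list-lu above:

    # stmfadm add-view 600144f07bb2ca0000004a4c5eda0001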

    Identify the host identifier of the initiator host you want to add to your view.
    Follow the instructions for each port provider to identify the initiators associated with each
    port provider.


    You can see that the port mode is Initiator

    # fcinfo hba-port
           HBA Port WWN: 210000e08b91facd
            Port Mode: Initiator
            Port ID: 2
            OS Device Name: /dev/cfg/c16
            Manufacturer: QLogic Corp.
            Model: 375-3294-01
            Firmware Version: 04.04.01
            FCode/BIOS Version: BIOS: 1.4; fcode: 1.11; EFI: 1.0;
            Serial Number: 0402R00-0634175788
            Driver Name: qlc
            Driver Version: 20080617-2.30
            Type: L-port
            State: online
            Supported Speeds: 1Gb 2Gb 4Gb
            Current Speed: 4Gb
            Node WWN: 200000e08b91facd
            Max NPIV Ports: 63
            NPIV port list:

    Before making changes to the HBA ports, first check the existing port
    bindings.
    View what is currently bound to the port drivers.
    In this example, the current binding is pci1077,2422.
    # mdb -k
    Loading modules: [ unix genunix specfs dtrace mac cpu.generic
    cpu_ms.AuthenticAMD.15 uppc pcplusmp scsi_vhci zfs sd sockfs ip hook
    neti sctp arp usba qlc fctl  
    fcp cpc random crypto stmf nfs lofs logindmux ptm ufs sppp nsctl ipc ]
    > ::devbindings -q qlc
    ffffff04e058f560 pci1077,2422, instance #0 (driver name: qlc)
    ffffff04e058f2e0 pci1077,2422, instance #1 (driver name: qlc)

    Quit mdb.
    > $q
    Remove the current binding, which in this example is qlc.
    In this example, the qlc driver is actively bound to pci1077,2422.
    You must remove the existing binding for qlc before you can add that
    binding to a new driver.

     Single quotes are required in this syntax.
    # update_drv -d -i 'pci1077,2422' qlc
    Cannot unload module: qlc
    Will be unloaded upon reboot.

    This message does not indicate an error.
    The configuration files have been updated but the qlc driver remains
    bound to the port until reboot.
    Establish the new binding to qlt.
    Single quotes are required in this syntax.
    # update_drv -a -i 'pci1077,2422' qlt
      Warning: Driver (qlt) successfully added to system but failed to
    attach

    This message does not indicate an error. The qlc driver remains bound
    to the port, until reboot.
    The qlt driver attaches when the system is rebooted.
    Reboot the system to attach the new driver, and then recheck the
    bindings.
    # reboot
    # mdb -k

    Loading modules: [ unix genunix specfs dtrace mac cpu.generic
    cpu_ms.AuthenticAMD.15 uppc pcplusmp scsi_vhci zfs sd sockfs ip hook
    neti sctp arp usba fctl stmf lofs fcip cpc random crypto nfs logindmux
    ptm ufs sppp nsctl ipc ]
    > ::devbindings -q qlt
    ffffff04e058f560 pci1077,2422, instance #0 (driver name: qlt)
    ffffff04e058f2e0 pci1077,2422, instance #1 (driver name: qlt)

    Quit mdb.
     > $q


    You can see that the port mode is Target

    # fcinfo hba-port

            HBA Port WWN: 210000e08b91facd
            Port Mode: Target
            Port ID: 2
            OS Device Name: /dev/cfg/c16
            Manufacturer: QLogic Corp.
            Model: 375-3294-01
            Firmware Version: 04.04.01
            FCode/BIOS Version: BIOS: 1.4; fcode: 1.11; EFI: 1.0;
            Serial Number: 0402R00-0634175788
            Driver Name: qlc
            Driver Version: 20080617-2.30
            Type: L-port
            State: online
            Supported Speeds: 1Gb 2Gb 4Gb
            Current Speed: 4Gb
            Node WWN: 200000e08b91facd
            Max NPIV Ports: 63
            NPIV port list:


    Verify that the target mode framework has access to the HBA ports.
    # stmfadm list-target -v

    Target: wwn.210100E08BB1FACD
        Operational Status: Online
        Provider Name     : qlt
        Alias             : qlt1,0
        Sessions          : 0
    Target: wwn.210000E08B91FACD
        Operational Status: Online
        Provider Name     : qlt
        Alias             : qlt0,0
        Sessions          : 1
        Initiator: wwn.210000E08B89F077
            Alias: -
            Logged in since: Thu Jul 2 12:02:59 2009


    Now for the client setup :


    On the client machine verify that you can see the new logical unit
    # cfgadm -al
    Ap_Id                          Type         Receptacle   Occupant     Condition
    c0                             scsi-bus     connected    configured   unknown
    c0::dsk/c0t0d0                 CD-ROM       connected    configured   unknown
    c1                             scsi-bus     connected    configured   unknown
    c1::dsk/c1t0d0                 disk         connected    configured   unknown
    c1::dsk/c1t2d0                 disk         connected    configured   unknown
    c1::dsk/c1t3d0                 disk         connected    configured   unknown
    c2                             fc-private   connected    configured   unknown
    c2::210000e08b91facd           disk         connected    configured   unknown
    c3                             fc           connected    unconfigured unknown
    usb0/1                         unknown      empty        unconfigured ok
    usb0/2                         unknown      empty        unconfigured ok

    You might need to rescan the SAN BUS in order to discover the new logical unit
    # luxadm -e forcelip /dev/cfg/c2
    # format
    Searching for disks...
    AVAILABLE DISK SELECTIONS:
    0. c1t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/sd@0,0
    1. c1t2d0 <FUJITSU-MAV2073RCSUN72G-0301-68.37GB>
    /pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/sd@2,0
    2. c1t3d0 <FUJITSU-MAV2073RCSUN72G-0301-68.37GB>
    /pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/sd@3,0
    3. c2t210000E08B91FACDd0 <SUN-COMSTAR-1.0 cyl 65533 alt 2 hd 16 sec 100>
    /pci@7c0/pci@0/pci@9/SUNW,qlc@0/fp@0,0/ssd@w210000e08b91facd,0
    Specify disk (enter its number):

    You can see SUN-COMSTAR-1.0 in the disk properties.
    Now you can build a storage pool on top of it:
    # zpool create comstar-pool c2t210000E08B91FACDd0
    Verify the pool creation
    # zpool list
    NAME           SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
    comstar-pool  49.8G   114K  49.7G     0%  ONLINE  -

    After the pool is created, the zfs utility can be used to create a ZFS volume.
    #  zfs create -V 48g comstar-pool/comstar-vol1


    For more information about COMSTAR, please check the COMSTAR project page on OpenSolaris.




    Sunday Apr 19, 2009

    Ldom with ZFS

    Logical Domains offers a powerful and consistent methodology for creating virtualized server environments across the entire CoolThreads server range:

       •  Create multiple independent virtual machines quickly and easily
          using the hypervisor built into every CoolThreads system.
       •  Leverage advanced Solaris technologies such as ZFS cloning and
          snapshots to speed deployment and dramatically reduce disk
          capacity requirements.

    In this entry I will demonstrate the integration between Ldom and ZFS

    Architecture layout





    Downloading Logical Domains Manager and Solaris Security Toolkit

    Download the Software

    Download the zip file (LDoms_Manager-1_1.zip) from the Sun Software Download site. You can find the software from this web site:

    http://www.sun.com/ldoms

     Unzip the zip file.
    # unzip LDoms_Manager-1_1.zip
    Please read the README file for any prerequisites.
    The installation script is part of the SUNWldm package and is in the Install subdirectory.


    # cd LDoms_Manager-1_1


    Run the install-ldm installation script with no options.
    # Install/install-ldm

    Select a security profile from this list:

    a) Hardened Solaris configuration for LDoms (recommended)
    b) Standard Solaris configuration
    c) Your custom-defined Solaris security configuration profile

    Enter a, b, or c [a]: a


    Shut down and reboot your server
    # /usr/sbin/shutdown -y -g0 -i6

    Use the ldm list command to verify that the Logical Domains Manager is running
    # /opt/SUNWldm/bin/ldm list


    NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
    primary          active     -n-c--  SP      32    16256M   0.0%  2d 23h 27m

    Creating Default Services

    You must create the following virtual default services initially to be able to use them later:
    vdiskserver – virtual disk server
    vswitch – virtual switch service
    vconscon – virtual console concentrator service

    Create a virtual disk server (vds) to allow importing virtual disks into a logical domain.
    # ldm add-vds primary-vds0 primary

    Create a virtual console concentrator (vcc) service for use by the virtual network terminal server daemon (vntsd)
    # ldm add-vcc port-range=5000-5100 primary-vcc0 primary

    Create a virtual switch service
    (vsw) to enable networking between virtual network
    (vnet) devices in logical domains
    # ldm add-vsw net-dev=e1000g0 primary-vsw0 primary

    Verify the services have been created by using the list-services subcommand.


    # ldm list-services

    Set Up the Control Domain

    Assign cryptographic resources to the control domain.
    # ldm set-mau 1 primary

    Assign virtual CPUs to the control domain.
    # ldm set-vcpu 4 primary

    Assign memory to the control domain.
    # ldm set-memory 4G primary

    Add a logical domain machine configuration to the system controller (SC).
    # ldm add-config initial

    Verify that the configuration is ready to be used at the next reboot
    # ldm list-config

    factory-default
    initial [next poweron]

    Reboot the server
    # shutdown -y -g0 -i6

    Enable the virtual network terminal server daemon, vntsd
    # svcadm enable vntsd

    Create the zpool

    # zpool create ldompool c1t2d0 c1t3d0

    # zfs create ldompool/goldimage

    # zfs create -V 15g ldompool/goldimage/disk_image



    Creating and Starting a Guest Domain

    Create a logical domain.
    # ldm add-domain goldldom

    Add CPUs to the guest domain.
    ldm add-vcpu 4 goldldom

    Add memory to the guest domain
    # ldm add-memory 2G goldldom

    Add a virtual network device to the guest domain.
    # ldm add-vnet vnet1 primary-vsw0 goldldom

    Specify the device to be exported by the virtual disk server as a virtual disk to the guest domain
    # ldm add-vdsdev /dev/zvol/dsk/ldompool/goldimage/disk_image vol1@primary-vds0

    Add a virtual disk to the guest domain.
    # ldm add-vdisk vdisk0 vol1@primary-vds0 goldldom

    Set auto-boot and boot-device variables for the guest domain
    # ldm set-variable auto-boot\?=false goldldom
    # ldm set-var boot-device=vdisk0 goldldom


    Bind resources to the guest domain goldldom and then list the domain to verify that it is bound.
    # ldm bind-domain goldldom
    # ldm list-domain goldldom


    NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
    primary          active     -n-cv-  SP      4     4G       0.2%  15m
    goldldom         bound      ------  5000    4     2G

    Start the guest domain
    # ldm start-domain goldldom
    Connect to the console of a guest domain
    # telnet 0 5000
    Trying 0.0.0.0...
    Connected to 0.
    Escape character is '^]'.
    Connecting to console "goldldom" in group "goldldom" ....
    Press ~? for control options ..

    {0} ok

    Jump-Start the goldldom

    {0} ok boot net - install
    We can log in to the new guest and verify that the file system is ZFS:

    # zpool list
    NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
    rpool  14.9G  1.72G  13.2G    11%  ONLINE  -
    Restore the goldldom configuration to an "as-manufactured" state with the sys-unconfig command


    # sys-unconfig
    This program will unconfigure your system.  It will cause it
    to revert to a "blank" system - it will not have a name or know
    about other systems or networks.
    This program will also halt the system.
    Do you want to continue (y/n) y

    Press ~. in order to return to the primary domain

    Stop the guest domain
    # ldm stop goldldom
    Unbind the guest domain

    # ldm unbind  goldldom
    Snap shot the disk image
    # zfs snapshot ldompool/goldimage/disk_image@sysunconfig

    Create new zfs file system for the new guest
    # zfs create ldompool/domain1

    Clone the goldldom disk image
    # zfs clone ldompool/goldimage/disk_image@sysunconfig ldompool/domain1/disk_image
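    As a hedged aside, the clone is created almost instantly and initially consumes no additional space; you can confirm its relationship to the snapshot with the zfs get origin command:

    # zfs get origin ldompool/domain1/disk_image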

    # zfs list
    NAME                                        USED  AVAIL  REFER  MOUNTPOINT
    ldompool                                   17.0G   117G    21K  /ldompool
    ldompool/domain1                             18K   117G    18K  /ldompool/domain1
    ldompool/domain1/disk_image                    0   117G  2.01G  -
    ldompool/goldimage                         17.0G   117G    18K  /ldompool/goldimage
    ldompool/goldimage/disk_image              17.0G   132G  2.01G  -
    ldompool/goldimage/disk_image@sysunconfig      0      -  2.01G  -

    Creating and Starting the Second Domain


    # ldm add-domain domain1
    # ldm add-vcpu 4 domain1
    # ldm add-memory 2G domain1
    # ldm add-vnet vnet1 primary-vsw0 domain1
    # ldm add-vdsdev /dev/zvol/dsk/ldompool/domain1/disk_image vol2@primary-vds0
    # ldm add-vdisk vdisk1 vol2@primary-vds0 domain1
    # ldm set-var auto-boot\?=false domain1
    # ldm set-var boot-device=vdisk1 domain1

    # ldm bind-domain domain1
    # ldm list-domain domain1
    NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
    domain1          bound      ------  5001    8     2G

    Start the domain
    # ldm start-domain domain1

    Connect to the console
    # telnet 0 5001
    {0} ok boot net -s

    Copyright 1983-2008 Sun Microsystems, Inc. All rights reserved.
    Use is subject to license terms.
    Booting to milestone "milestone/single-user:default".
    Configuring devices.
    Using RPC Bootparams for network configuration information.
    Attempting to configure interface vnet0...
    Configured interface vnet0
    Requesting System Maintenance Mode
    SINGLE USER MODE

    # zpool import -f rpool
    # zpool export rpool
    # reboot


    Answer the configuration questions

    Log in to the new domain and verify that we have a ZFS file system

    # zpool list
    NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
    rpool  14.9G  1.72G  13.2G    11%  ONLINE  -

    Monday Mar 16, 2009

    Brief Technical Overview and Installation of Ganglia on Solaris

    Ganglia is a scalable distributed monitoring system for high-performance computing systems such as clusters and Grids. It is based on a hierarchical design targeted at federations of clusters.


    For this setup we will use the following software packages:


    1. Ganglia - the core Ganglia package


    2. Zlib - zlib compression libraries


    3. Libgcc - low-level runtime library


    4. Rrdtool - Round Robin Database graphing tool


    5. Apache web server with php support


    You can get packages 1-3 from sunfreeware (depending on your architecture, x86 or SPARC).


    Unzip and Install the packages



    1. gzip -d ganglia-3.0.7-sol10-sparc-local.gz

    pkgadd -d ./ganglia-3.0.7-sol10-sparc-local

    2. gzip -d zlib-1.2.3-sol10-sparc-local.gz

    pkgadd -d ./zlib-1.2.3-sol10-sparc-local

    3. gzip -d libgcc-3.4.6-sol10-sparc-local.gz

    pkgadd -d ./libgcc-3.4.6-sol10-sparc-local


    4. You will need pkgutil from blastwave in order to install rrdtool software packages



    /usr/sfw/bin/wget http://blastwave.network.com/csw/unstable/sparc/5.8/pkgutil-1.2.1,REV=2008.11.28-SunOS5.8-sparc-CSW.pkg.gz


     
    gunzip pkgutil-1.2.1,REV=2008.11.28-SunOS5.8-sparc-CSW.pkg.gz


     
    pkgadd -d pkgutil-1.2.1,REV=2008.11.28-SunOS5.8-sparc-CSW.pkg

     



    Now you can install packages with all required dependencies with a single command:



    /opt/csw/bin/pkgutil -i rrdtool
    5. You will need to download Apache, PHP, and the core libraries from Cool Stack.

    Core libraries used by other packages:



    bzip2 -d CSKruntime_1.3.1_sparc.pkg.bz2

    pkgadd -d ./CSKruntime_1.3.1_sparc.pkg



    Apache 2.2.9,  PHP 5.2.6



    bzip2 -d CSKamp_1.3.1_sparc.pkg.bz2

    pkgadd -d ./CSKamp_1.3.1_sparc.pkg

    The following packages are available:

      1  CSKapache2     Apache httpd
                        (sparc) 2.2.9
      2  CSKmysql32     MySQL 5.1.25 32bit
                        (sparc) 5.1.25
      3  CSKphp5        PHP 5
                        (sparc) 5.2.6

    Select package(s) you wish to process (or 'all' to process
    all packages). (default: all) [?,??,q]: 1,3

    Select options 1 and 3.


    Enable the web server service



    svcadm enable svc:/network/http:apache22-csk

    Verify it is working



    svcs svc:/network/http:apache22-csk

    STATE          STIME    FMRI

    online         17:02:13 svc:/network/http:apache22-csk


     Locate the Web server  DocumentRoot



    grep DocumentRoot /opt/coolstack/apache2/conf/httpd.conf


    DocumentRoot "/opt/coolstack/apache2/htdocs"

    Copy the Ganglia directory tree



    cp -rp /usr/local/doc/ganglia/web  /opt/coolstack/apache2/htdocs/ganglia

    Change the rrdtool path on  /opt/coolstack/apache2/htdocs/ganglia/conf.php


    from /usr/bin/rrdtool  to /opt/csw/bin/rrdtool




     




     


    Start the gmond daemon with the default configuration



    /usr/local/sbin/gmond --default_config > /etc/gmond.conf

    Edit /etc/gmond.conf and change name = "unspecified" to name = "grid1" (this is our grid name).


    Verify that it has started : 



    ps -ef | grep gmond
    nobody 3774 1 0 16:57:41 ? 0:55 /usr/local/sbin/gmond

    In order to debug any problem, try:



    /usr/local/sbin/gmond --debug=9

    Build the directory for the rrd images



    mkdir -p /var/lib/ganglia/rrds
    chown -R nobody  /var/lib/ganglia/rrds
    Add the following line to /etc/gmetad.conf:

    data_source "grid1"  localhost
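    As a hedged note on the format: the data_source line can also carry an optional polling interval (in seconds) and a list of hosts to poll; the host names below are placeholders:

    data_source "grid1" 60 localhost node1 node2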

    Start the gmetad daemon



    /usr/local/sbin/gmetad

    Verify it -->


     ps -ef | grep gmetad

    nobody  4350     1   0 17:10:30 ?           0:24 /usr/local/sbin/gmetad
     


    To debug any problem



    /usr/local/sbin/gmetad --debug=9

    Point your browser to: http://server-name/ganglia







    Monday Jan 12, 2009

    Solaris iSCSI Server

    This document describes how to build an iSCSI server based on the Solaris platform on a Sun X4500 server.



    On the target (server)


    The server has six controllers each with eight disks and I have built the storage pool to spread I/O evenly and to enable me to build 8 RAID-Z stripes of equal length.


    zpool create -f tank \
    raidz c0t0d0 c1t0d0 c4t0d0 c6t0d0 c7t0d0 \
    raidz c1t1d0 c4t1d0 c5t1d0 c6t1d0 c7t1d0 \
    raidz c0t2d0 c4t2d0 c5t2d0 c6t2d0 c7t2d0 \
    raidz c0t3d0 c1t3d0 c5t3d0 c6t3d0 c7t3d0 \
    raidz c0t4d0 c1t4d0 c4t4d0 c6t4d0 c7t4d0 \
    raidz c0t5d0 c1t5d0 c4t5d0 c5t5d0 c7t5d0 \
    raidz c0t6d0 c1t6d0 c4t6d0 c5t6d0 c6t6d0 \
    raidz c0t7d0 c1t7d0 c4t7d0 c6t7d0 c7t7d0 \
    spare c0t1d0 c1t2d0 c4t3d0 c6t5d0 c7t6d0 c5t7d0

    After the pool is created, the zfs utility can be used to create a 50 GB ZFS volume.


    zfs create -V 50g tank/iscsivol000

    Enable the Iscsi service


    svcadm enable iscsitgt

    Verify that the service is enabled.


    svcs -a | grep iscsitgt


    To view the list of commands, iscsitadm can be run without any options:


    iscsitadm

    Usage: iscsitadm -?,-V,--help
    Usage: iscsitadm create [-?] ...
    Usage: iscsitadm list [-?] ...
    Usage: iscsitadm modify [-?] ...
    Usage: iscsitadm delete [-?] ...
    Usage: iscsitadm show [-?] ...

    For more information, please see iscsitadm(1M)



    To begin using the iSCSI target, a base directory needs to be created.


    This directory is used to persistently store the target and initiator configuration that is added through the iscsitadm utility.


    iscsitadm modify admin -d /etc/iscsi




    Once the volumes are created, they need to be exported to an initiator


    iscsitadm create target -b /dev/zvol/rdsk/tank/iscsivol000 target-label


    Once the targets are created, iscsitadm's "list" command and "target" subcommand can be used to display the targets and their properties:


    iscsitadm list target -v
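    By default the target is visible to any initiator that can reach the host. As a hedged sketch based on the iscsitadm(1M) subcommands (the initiator IQN and the alias are placeholders), access can be restricted by defining an initiator object and adding it to the target's ACL:

    iscsitadm create initiator --iqn iqn.1991-05.com.microsoft:client01 client01
    iscsitadm modify target --acl client01 target-label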

    On the initiator (client)


    Install the iSCSI client from http://www.microsoft.com/downloads/details.aspx?FamilyID=12cb3c1a-15d6-4585-b385-befd1319f825&displaylang=en
