Friday Jun 26, 2009

SGE 6.2u3

\* Sun Grid Engine 6.2 Update 3 is now an ideal building block for private clouds.
\* The Service Domain Manager (SDM) Cloud Adapter provides an interface for managing Amazon Elastic Compute Cloud (EC2) Amazon Machine Images (AMIs). It supports both cloud-only and hybrid (local + cloud) Sun Grid Engine environments.
\* Initial Power Saving uses SDM to recommend powering off hosts in the cloud when demand is low. In this initial stage, powering off is optional, not automatic, and creates a standby pool.
\* A new module, SGE Inspect (Sun Grid Engine 6.2 Update 3 Inspect), provides a graphical monitoring interface for Sun Grid Engine clusters and Service Domain Manager instances, all in a single window.
\* Graphical Installer makes it much easier to install the Sun Grid Engine software and simplifies initial cluster or cloud setup with easy-to-navigate displays.
\* Exclusive Host Scheduling: The cluster can be configured to allow jobs to request exclusive use of a given execution host. An exclusive job will only be scheduled on execution hosts that have no jobs currently running. For a parallel job running on multiple machines, this rule applies to the job's slave tasks as well.
\* Job Submission Verifier (JSV) is an automatic filter that lets the system administrator control, enforce, and adjust job submissions. The basic idea is that on both the client side and the server side, the administrator can configure scripts that read through job submission options and accept, reject, or modify the submission accordingly.
\* Scales up to 63,000 CPU cores
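To make the JSV idea above concrete, here is a toy stdin/stdout filter in the same spirit. It is NOT the real JSV wire protocol (production scripts should use the helper functions shipped with Sun Grid Engine under $SGE_ROOT/util/resources/jsv), and the parameter name here is hypothetical:

```shell
# Toy JSV-style filter: read "PARAM <name> <value>" lines on stdin and
# reject jobs that request more than 64 parallel slots.
check_job() {
  verdict="ACCEPT"
  while read -r key name value; do
    [ "$key" = "PARAM" ] || continue
    if [ "$name" = "pe_max" ] && [ "$value" -gt 64 ]; then
      verdict="REJECT too many slots requested"
    fi
  done
  echo "$verdict"
}

printf 'PARAM pe_max 128\n' | check_job   # -> REJECT too many slots requested
```

A real server-side JSV is registered in the cluster configuration (jsv_url), so every submission passes through it before the scheduler sees the job.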

System Requirements

Operating Systems: Execution Hosts

\* OpenSolaris (all versions)
\* Solaris 10, 9, and 8 Operating Systems (SPARC Platform Edition)
\* Solaris 10 and 9 Operating Systems (x86 Platform Edition)
\* Solaris 10 Operating System (x64 Platform Edition)
\* Apple Mac OS X 10.5 (Leopard), x86 platform
\* Apple Mac OS X 10.4 (Tiger), PPC platform
\* Apple Mac OS X 10.4 (Tiger), x86 platform
\* Hewlett Packard HP-UX 11.00 or higher (including HP-UX on IA64)
\* IBM AIX 5.1, 5.3
\* Linux x86, kernel 2.4, 2.6, glibc >= 2.3.2
\* Linux x64, kernel 2.4, 2.6, glibc >= 2.3.2
\* Linux IA64, kernel 2.4, 2.6, glibc >= 2.3.2
\* Microsoft Windows Server 2003[3, 4]
\* Windows XP Professional with Service Pack 1 or later[3, 4]
\* Windows 2000 Server with Service Pack 3 or later
\* Windows 2000 Professional with Service Pack 3 or later

Operating Systems: Master Hosts

\* Solaris (see details under execution hosts)
\* Linux (x86 and x64, see details under execution hosts)

Master Host minimum hardware configuration

\* 50 MB for each binary platform
\* 80 MB of free memory minimum
\* 100 MB of free disk space minimum

Execution Host minimum hardware configuration

\* 50 MB for each binary platform
\* 20 MB of free memory minimum
\* 50 MB of free disk space minimum

Database Server

\* 50 MB for each binary platform
\* 200 MB to 750 MB of free memory minimum
\* 10 GB of free disk space minimum

Sun Web Console

\* 50 MB for each binary platform
\* 200 MB of free memory minimum
\* 250 MB of free disk space minimum

Databases Supported for ARCo (see note 5)

\* PostgreSQL 8.0 through 8.3
\* MySQL(TM) 5.0
\* Oracle 9i or 10g

Operating Platforms Supported for ARCo

\* Solaris 10, 9 and 8 OS (SPARC Platform Edition)
\* Solaris 10, 9 and 8 OS (x86 Platform Edition)
\* Linux RPM Distribution

Service Domain Manager (SDM) Platforms Supported

\* OpenSolaris
\* Solaris 10, 9, and 8 Operating Systems (SPARC Platform Edition)
\* Solaris 10 and 9 Operating Systems (x86 Platform Edition)
\* Solaris 10 Operating System (x64 Platform Edition)
\* Apple Mac OS X 10.4 (Tiger), x86 platform
\* Apple Mac OS X 10.5 (Leopard), x86 platform
\* Linux x86, kernel 2.4, 2.6, glibc >= 2.3.2
\* Linux x64, kernel 2.4, 2.6, glibc >= 2.3.2

Supported Sun Java Web Console version 3.0.x web browsers

\* Netscape 6.2 and above
\* Mozilla 1.4 and above
\* Internet Explorer 5.5 and above
\* Firefox 1.0 and above

SDM Required Software

\* Sun Grid Engine 6.2
\* Java Runtime Environment (JRE) 6

Download SGE62.u3

Sun Studio 12 U1

The latest production release includes improved binary application performance, full OpenMP 3.0 support, profiling of distributed MPI applications, unified application and system profiling using Solaris DTrace technology (DLight), a new standalone graphical debugger (dbxTool), and much more.
C/C++/Fortran 95 Compilers
The Sun C, C++, and Fortran compilers include advanced features for developing applications on Sun Solaris SPARC and x86/x64 platforms. They utilize a common optimizing backend code generator, and accept standard C, C++, and Fortran with extensions.

The Sun Studio Performance Tools
The Sun Studio performance tools are designed to help answer questions about application performance. This article discusses the kinds of performance questions that users typically ask.

dbx Debugger
Successful program debugging is more an art than a science. dbx is an interactive, source-level, post-mortem and real-time command-line debugging tool, plus much more.

Performance Analyzer
The Sun Studio Performance Analyzer can help you assess the performance of your code, identify potential performance problems, and locate the part of the code where the problems occur. The Performance Analyzer can be used from the command line or from a graphical user interface.

Multithreaded (Parallel) Computing
Two critical forces are shaping the direction of software development. One is the deep adoption of parallel computing. The other is the move toward service-oriented architecture. But how prepared are the software developers and tool vendors for the challenge of working in parallel computing environments?

Performance Tuning and Optimization
Sun Studio C, C++, and Fortran compilers offer a rich set of compile-time options for specifying target hardware and advanced optimization techniques. But knowing which options to pick for a given application can be tricky.

Numerical Computation
The floating-point environment on Sun SPARC and x86/x64 platforms enables you to develop robust, high-performance, portable numerical applications. The floating-point environment can also help investigate unusual behavior of numerical programs written by others.

Sun Studio on Linux
The latest production release of Sun Studio 12 software has an IDE, performance analyzer, debugger, and better support for the GNU Compiler Collection (GCC). And, with the release of Sun Studio 12, Linux developers can now take advantage of Sun's world-class compilers and tools on the Linux platform.

Sun Performance Library
The Sun Performance Library is a set of optimized, high-speed mathematical subroutines for solving linear algebra and other numerically intensive problems. It is based on a collection of public domain applications available from Netlib. These public domain applications have been enhanced and optimized for Sun high-performance platforms.

High Performance Computing
High Performance and Technical Computing (HPTC) applies numerical computation techniques to highly complex scientific and engineering problems. Sun Studio compilers and tools provide a seamless, integrated environment from desktop to TeraFLOPS for both floating-point and data-intensive computing.
Studio 12u1 Download

Sun HPC Cluster Tools 8.2

Sun HPC ClusterTools 8.2 is based on Open MPI 1.3.3 and includes many new features.

Sun HPC ClusterTools 8.2 software is an integrated toolkit that allows developers to create and tune Message Passing Interface (MPI) applications that run on high performance clusters and SMPs. Sun HPC ClusterTools software offers a comprehensive set of capabilities for parallel computing.

\* Infiniband, shared memory, GbE, 10 GbE, and Myrinet MX communication support
\* Sun Grid Engine plug-in
\* Supports third-party parallel debuggers Totalview and Allinea DDT
\* Integration with Sun Studio Analyzer
\* Compiler support: Sun Studio, GNU, PGI, Intel, PathScale
\* Suspend / resume support
\* Improved intra-node shared memory performance and scalability
\* Infiniband QDR support
\* Automatic path migration support
\* Relocatable installation
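As a quick illustration of the toolkit described above, a typical build-and-launch cycle looks like this (program name, host file, and process count are hypothetical):

```shell
# Compile an MPI program with the ClusterTools (Open MPI) wrapper compiler
mpicc -o hello hello.c

# Launch 8 processes across the hosts listed in ./hosts
mpirun -np 8 -hostfile ./hosts ./hello
```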

Supported Platforms: all Sun UltraSPARC III or greater, all Sun Opteron-based platforms, all Sun Intel x86-based platforms.

Supported Operating Systems

\* Solaris 10 11/06 Operating System or later
\* OpenSolaris 2008.11 or later
\* Linux RHEL 5
\* Linux SLES 10
\* Linux CentOS 5

Supported Compilers

\* Studio 10, 11, 12, 12U1
\* PGI (Linux only)
\* Intel (Linux only)
\* Pathscale (Linux only)
\* gnu (Linux only)

HPC ClusterTools 8.2 Download

Sun HPC Software Linux Edition 2.0

Free Sun HPC Software, Linux Edition 2.0 Download
Sun HPC Software, Linux Edition 2.0 is available to download now. Please note that technical support is not included with the software.

What You Get

\* Lustre 1.8
\* perfctr 2.6.38
\* Env-switcher 1.0.13
\* genders 1.11
\* git
\* Heartbeat 2.1.4-2.1
\* Mellanox Firmware tools 2.5.0
\* Modules 3.2.6
\* MVAPICH 1.1
\* MVAPICH2 1.2p1
\* OFED 1.3.1
\* OpenMPI 1.2.6
\* RRDTool 1.2.30
\* HPCC Bench Suite 1.2.0
\* Lustre IOKit

\* IOR 2.10.1
\* LNET self test
\* NetPIPE 3.7.1
\* Slurm 1.3.13
\* MUNGE 0.5.8
\* Ganglia 3.1.01
\* oneSIS 2.0.1
\* Cobbler 1.4.1
\* CFEngine 2.2.6
\* Conman 0.2.3
\* FreeIPMI 0.7.5
\* IPMItool 1.8.10
\* lshw B.02.14
\* OpenSM 3.1.1
\* pdsh 2.18
\* Powerman 2.3.4
HPC stack 2.0 download

Rocks Cluster 5.2 announces Solaris clients

Rocks v5.2 is released for Linux on the i386 and x86_64 CPU architectures and for Solaris on the x86_64 architecture.
1. Solaris support for client nodes

With the new JumpStart Roll, one can now install and configure a Linux-based Rocks frontend to JumpStart Solaris-based back-end machines.
2. Attributes

Attributes can be assigned to nodes at four levels: global, appliance type, OS (e.g., Linux or SunOS), and host. An attribute can be accessed in an XML node file as an entity. For example, if you assign the attribute foo with the value "123" to compute-0-0 (i.e., with the command rocks set host attr compute-0-0 foo 123), then in an XML node file you can access the value of the attribute foo with &foo;.
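A hedged sketch of that attribute flow (host name, attribute name, and file contents are hypothetical):

```shell
# Assign attribute foo=123 to node compute-0-0
rocks set host attr compute-0-0 foo 123

# In an XML node file the value is then available as an entity, e.g.:
#   <file name="/etc/demo.conf">
#   foo=&foo;
#   </file>
```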

There will be a Sun HPC ClusterTools 8.2 roll and a Sun Studio 12u1 roll.

Sunday Jun 07, 2009

Solaris support for Edu customers

A recent announcement introduced the Solaris support volume discount for education:
solaris support
Solaris Subscriptions - Standard Support Tiers for Education. Service Plans are annual.


Solaris Subscriptions - Premium Support Tiers for Education. Service Plans are annual.

Actually, these are monthly subscription prices, and you need to get an annual plan.

Number of Systems Standard Premium
1-100 systems $11,568.00 $13,479.00
101-300 systems $20,283.00 $23,586.00
301-500 systems $28,932.00 $33,696.00
501-1000 systems $43,422.00 $50,544.00
1001-1500 systems $57,846.00 $67,392.00
1501 -2000 systems $71,799.00 $84,240.00
2001- 3000 systems $86,778.00 $101,088.00

Compared to the regular Solaris support for non-education customers, that is a big saving for education customers:
solaris subscription

Premium Service Plan Pricing

x64/x86 on Sun or Non-Sun    1-2 Sockets    3 or More Sockets
1 Year - x64/x86             $1080          $1980
3 Years - x64/x86            $2980.80       $5464.80

                             1-2 Sockets    3-4 Sockets    5-8 Sockets
1 Year - SPARC               $1080          $2160          $4320
3 Years - SPARC              $2980.80       $5961.60       $11,923.20

Standard Service Plan Pricing

x64/x86 on Sun or Non-Sun    1-2 Sockets    3 or More Sockets
1 Year - x64/x86             $720           $1320
3 Years - x64/x86            $1987.20       $3643.20

                             1-2 Sockets    3-4 Sockets    5-8 Sockets
1 Year - SPARC               $720           $1440          $2880
3 Years - SPARC              $1987.20       $3974.40       $7948.80

Basic Service Plan Pricing

x64/x86 on Sun or Non-Sun    1-2 Sockets    More than 2 Sockets
1 Year - x64/x86             $324           NA
3 Years - x64/x86            $894.24        NA

                             1-2 Sockets    3-4 Sockets
1 Year - SPARC               $324           NA
3 Years - SPARC              $894.24        NA
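A hedged observation on the tables above: each 3-year price works out to three times the 1-year price with an apparent 8% multi-year discount (a factor of 2.76):

```shell
# Reproduce three of the 3-year prices from the 1-year prices (factor 3 x 0.92):
# Premium x64 ($1080), Standard x64 ($720), Basic ($324)
awk 'BEGIN {
  f = 3 * 0.92
  printf "%.2f %.2f %.2f\n", 1080*f, 720*f, 324*f
}'
# -> 2980.80 1987.20 894.24
```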

Sunday May 17, 2009

ZFS, /etc/zfs/zpool.cache and suncluster

ZFS is still a very young file system, and every new Solaris update and ZFS patch brings new features.

This blog entry talks about /etc/zfs/zpool.cache and what goes on behind the scenes of zpool.
/etc/zfs/zpool.cache exists to speed up ZFS pool import, but it can impact HA-ZFS with Sun Cluster: the ZFS import happens before Sun Cluster comes into play.
In the beginning, ZFS was designed to panic a system in the event of a catastrophic write failure to a pool; since then, ZFS has introduced the failmode property (PSARC 2007/567):

The default behavior will be to "wait" for manual intervention before
allowing any further I/O attempts. Any I/O that was already queued would
remain in memory until the condition is resolved. This error condition can
be cleared by using the 'zpool clear' subcommand, which will attempt to resume
any queued I/Os.

The "continue" mode returns EIO to any new write request but attempts to
satisfy reads. Any write I/Os that were already in-flight at the time
of the failure will be queued and may be resumed using 'zpool clear'.

Finally, the "panic" mode provides the existing behavior explained above.

The syntax for setting the pool property utilizes the "set" subcommand defined
in PSARC 2006/577:

# zpool set failmode=continue pool
# zpool create -o failmode=continue pool

ZFS and Sun Cluster

Sun Alert Solution 245626: ZFS Pool Corruption May Occur With Sun Cluster 3.2 Running Solaris 10 with patch 137137-09 or 137138-09


This issue is addressed in the following releases:

SPARC Platform:

\* Solaris 10 with patch 139579-02 or later obsoleted by 139555-08

x86 Platform:

\* Solaris 10 with patch 139580-02 or later obsoleted by 139556-08

To avoid this problem, install Solaris 10 patch 139579-02 (for SPARC) or 139580-02 (for x86) immediately after you install 137137-09 or 137138-09 but before you reboot the cluster nodes.

The latest Sun Cluster 3.2u2 patches for Solaris 10:
\* SPARC: 126106-30
\* x86: 126107-30

There are also some IDR patches for ZFS performance.

To boot the system without importing any zpool:
boot -m milestone=none
To return to normal:
svcadm milestone all
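One way the zpool.cache interaction is commonly handled (a sketch; pool name is hypothetical) is to keep a failover pool out of /etc/zfs/zpool.cache entirely via the cachefile pool property, so the host does not auto-import it at boot and the cluster framework stays in control:

```shell
# Don't record this pool in /etc/zfs/zpool.cache
zpool set cachefile=none tank

# Verify the setting
zpool get cachefile tank
```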

PCA- Patch Check advanced

Analyze, download and install patches for Sun Solaris.
By chance, I found this patching tool, PCA.
Created by
Martin Paul
Institute of Scientific Computing
Nordbergstrasse 15/C/3
1090 Wien

It seems to have many followers.
What others say about pca:

Patch Check Advanced (pca) generates lists of installed and missing patches for Sun Solaris systems and optionally downloads patches. It resolves dependencies between patches and installs them in correct order. It can be the only tool you ever need for patch management, be it on a single machine or a complete network. Just one perl script, it doesn't need compilation nor installation, and it doesn't need root permissions to run. It works on all versions of Solaris, both SPARC and x86.


  • Easily understandable and configurable format for the patch report, containing Recommended/Security status and age of a patch.

  • Shows all missing Recommended/Security patches in one concise list. Only patches for packages which are actually installed are listed. Obsolete/Bad patches are ignored. Output can be formatted in HTML, with links to patch READMEs and downloads (Example).

  • It analyzes the patch dependencies, and lists required patches in the correct order for installation.

  • If requested, it downloads patches from Sun's patch server and installs them. One patch, groups of patches, or all missing patches. Start it, let it run, and return to a fully patched system.

  • Set up a local patch server and speed up downloads tremendously.

  • It's fast: Generating a complete patch report takes just a few seconds.

  • It's small: One file, ca. 4000 lines, both code and documentation. Makes understanding and modifying the code for your own needs easy.

  • It can assist in staying informed about firmware and other unbundled patches.

  • All the information about a machine needed for analysis can be read from files, so you can use pca even if it doesn't run on the target machine.

  • There's an auto update mechanism to keep pca itself up-to-date.


Usage of pca is free of charge for private, educational and commercial use. No responsibility is taken for any damage caused by using pca. You may modify pca's source code to fit your local needs. If sharing modified versions of pca with others, keep a reference to the original author and distribution site.

This is a discussion and support list
To subscribe, send an empty message to To leave the list, send an empty message to

To post to the list, send your message to Messages to the list from non-subscribers are allowed, but moderated. If you are not subscribed, include a note that you'd like to receive a Cc: of any reply.

Please try it out


pca doesn't need any complicated compilation, installation or registration procedure, nor root permission. It's just one perl script.

  • You need perl to run pca. If you want to use any of pca's download functions, you need wget (>= v1.7). Both are included in recent versions of Solaris.
  • Download the script:

    1. stable: pca (20090408-01, Changes, Usage)

    2. develop: pca (20090506-01, Changes, Usage)

    and make it executable (chmod +x pca). Move it to a directory in your PATH.

    Alternatively, pca is available as an SVR4 compliant package from Blastwave (maintained by D. Clarke), from OpenCSW (CSWpca, maintained by D. Michelsen) and on Sunfreeware (maintained by S. Christensen).

  • To download patches or patch READMEs from Sun, a Sun Online Account (SOA) is required. If you don't have one yet, get a free SOA and use the askauth or the user and passwd options to feed the SOA data to pca. A free SOA will grant access to security and driver patches only. To access all patches, you need to buy a Sun Service Plan and connect it to your SOA.

  • Run it: pca. There is no need to run pca as root for basic usage.

  • Documentation and release notes are included in the script; view it with pca --man. If you prefer documentation in man page format, get pca.8 and move it to a directory in your MANPATH.

  • If you are forced to use a proxy for web access, make sure that wget is configured to use it: Set http_proxy in /etc/wgetrc or $HOME/.wgetrc or use the wgetproxy option with pca.

    If you do not have wget installed on your system, download the current patch cross-reference file patchdiag.xref and move it to /var/tmp/ before running pca.
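A few typical invocations, sketched from the options mentioned above (see pca --man for the authoritative list):

```shell
pca                        # report missing Recommended/Security patches
pca -d missing             # download all missing patches
pca -i missing             # download and install all missing patches
pca --askauth -i missing   # prompt for Sun Online Account credentials first
```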

xvm Ops Center

Get the Sun xVM Ops Center 2.1

Select the level of management requirements you need from one of the following offerings:

  • xVM Ops Center Controller: provides the ability to access data from all proxies

  • Standard Agent Pack: for basic monitoring and firmware provisioning

  • Premium Agent Pack: for all hardware and operating system management capabilities

Sun xVM Ops Center can be purchased in two packs

  • Standard Agent Pack:
    • Used for Monitoring and firmware provisioning when operating system management is not needed

    • Features: Agent, Bare Metal Discovery, Inventory, Firmware Update, Monitoring

  • Premium Agent Pack:
    • Used for Enterprise automation for customers who want support from Dock to Datacenter

    • Features: Agent, Bare Metal Discovery, Inventory, Firmware Update, Monitoring, Update Operating System, Provisioning Operating System, Profile Based Provisioning

  • Premium Services: 24x7 Support Services

  • xVM Ops Center Controller: available as a separately licensed product, licensed on a per-server basis

  • Proxy: Included with xVM Ops Center Controller Subscription

  • Installation Services: Included with xVM Ops Center Controller Subscription

Operating System support

  • Solaris: Solaris SPARC 8/9/10, Solaris x64/x86 9/10

  • Red Hat Enterprise Linux Advanced Server: Red Hat RHEL AS/ES/WS 3/4/5

  • SUSE: Novell SUSE & SLES 8/9/10

Hardware Supported

  • Sun Fire V125, V210, V215, V240, V245, V440, V445, V490, V890

  • Sun Fire T1000, T2000

  • Sun Fire T5120, T5220

  • Sun Fire V20z, V40z

  • Sun Fire X2100, X2100 M2, X2200 M2

  • Sun Fire X4100, X4100 M2, X4150, X4200, X4200 M2

  • Sun Fire X4450, X4500, X4600, X4600 M2

  • Sun Netra 240, 440

  • Sun Netra X4200 M2

  • Sun Blade 6000, 6048, 8000

  • Sun Blade T6320

  • Sun Blade X6220, X6250

  • Sun Blade X8420, X8440

Standards HW supported: Sun xVM Ops Center supports ALOM, ILOM, ELOM, and RSC service processor enabled systems.

xVM Ops Center Controller Minimum Specification

  • Memory: 4GB available RAM

  • Hard Disk: 1GB free in OS partitions; 10GB per update channel; 3GB per OS image

  • Operating System: Solaris 10 update 3 and above, RHEL5 Server

  • Processor: AMD Opteron and Intel Xeon: 2 sockets; UltraSPARC T1/T2: 1 socket, 2 or more cores; UltraSPARC IV+/IV: 2 sockets; UltraSPARC IIIi: 2 sockets

  • Network Connection: at least one Network Interface Card (NIC)

Proxy Controller Minimum Specification

  • Memory: 2GB available RAM

  • Hard Disk: 1GB free in OS partitions; 10GB per update channel; 3GB per OS image

  • Operating System: Solaris 10 update 3 and above, RHEL5 Server

  • Processor: AMD Opteron and Intel Xeon: 1 socket, 2 or more cores; UltraSPARC T1/T2: 1 socket, 1 or more cores; UltraSPARC IV+/IV: 1 socket; UltraSPARC IIIi: 1 socket

  • Network Connection: at least one Network Interface Card (NIC)

Due to its complexity, xVM Ops Center is one of the few pieces of software that customers cannot just download and try out :-(

xvm-server latest

This is the latest development in xVM-server

posted by duncanha on April 30, 2009

Sun are concentrating on including xVM Server EA as a Beta-level feature of the next Ops Center release and will be pushing towards full support of the xVM hypervisor that's included with OpenSolaris in a future release of Ops Center.

The xVM Server Early Access code in Ops Center can be deployed through Ops Center only, not as a standalone product.

We have closed the separate xVM Server Early Access program because xVM Server EA is included as a feature of Ops Center.

Existing Ops Center customers who wish to upgrade to 2.1 and access the xVM Server functionality should contact their Sun sales or services representative

Sun Storage 7xxx Unified Storage System

Sun announced the Sun Storage 7xxx Unified Storage System; the first product of the family is the 7110. One can find the docs here.

Product Description

The Sun Storage 7000 Unified Storage products provide efficient file and block data services to clients over a network, and a rich set of data services that can be applied to the data stored on the system. The Unified Storage Systems include support for a variety of industry-standard client protocols, including:

\* CIFS
\* NFS
\* HTTP
\* WebDAV
\* iSCSI
\* FTP

Your Unified Storage System also includes new technologies to deliver the best storage price/performance and unprecedented observability of your workloads in production, including:

\* Analytics, a feature for dynamically observing the behavior of your system in real-time and viewing data graphically.
\* The Hybrid Storage Pool, composed of optional Flash-memory devices for acceleration of reads and writes, low-power, high-capacity disks, and DRAM memory, all managed transparently as a single data hierarchy.

To manage the data that you export using these protocols, you can configure your Unified Storage System using our built-in collection of advanced data services, including:

\* RAID-Z (RAID-5 and RAID-6), Mirrored, and Striped
\* Snapshots, unlimited read-only and read-write, with snapshot schedules
\* Built-in Data Compression
\* Remote Replication of data for Disaster Recovery
\* Active-Active Clustering (in the Sun Storage 7410) for High Availability
\* Thin Provisioning of iSCSI LUNs
\* Virus Scanning and Quarantine
\* NDMP Backup and Restore

To maximize the availability of your data in production, the Sun Storage products include a complete end-to-end architecture for data integrity, including redundancies at every level of the stack.

Key features include:

\* Predictive Self-Healing and Diagnosis of all System FRUs: CPUs, DRAM, I/O cards, Disks, Fans, Power Supplies
\* ZFS End-to-End Data Checksums of all Data and Metadata, protecting data throughout the stack
\* RAID-6 (DP) and optional RAID-6 Across JBODs
\* Active-Active Clustering for High Availability
\* Link Aggregations and IP Multipathing for Network Failure Protection
\* I/O Multipathing between the Sun Storage 7410 and JBODs
\* Integrated Software Restart of all System Software Services
\* Phone-Home of Telemetry for all Software and Hardware Issues
\* Lights-out Management of each System for Remote Power Control and Console Access

virtual iron first look


The Virtual Iron solution consists of three components:

  • VI-Center provides a central place to control and automate virtual resources. It streamlines tasks that are normally highly manual and time-intensive and significantly reduces data center costs and complexity. Runs on Linux and Windows Server.

  • Virtual Iron Virtualization Services are deployed automatically on bare-metal, industry-standard servers without requiring software installation or management. These features streamline data center management and reduce operational costs. Supports servers with Intel VT and AMD-V capability. Supports Linux and Windows VMs.

  • Open Source Virtualization (Xen based)

Price $799/socket


Features and Benefits

  1. LiveProvisioning: A "zero touch" automated deployment capability that eliminates the need for physical installation or management of virtualization software on physical servers
  2. Virtual Storage Management: Create virtual storage on the fly from local, fibre channel or iSCSI storage servers
  3. Virtual Network Management: Create virtual switches that allow virtual machines to use standard networks or vLANs transparently
  4. Virtual Machine Creation Wizard: Use the virtual machine creation wizard to simply create virtual infrastructure by cloning existing templates or creating new servers

    • Up to 8 CPUs

    • Up to 32 GB RAM

    • Up to 5 NICs with unique MAC addresses

    • Scheduling priority

    • Resource throttling

  5. LiveMigration: Migrate workloads from one physical server to another, for example to provide additional capacity, without any application downtime.
  6. LiveCapacity: LiveCapacity optimizes virtual machine performance and data center utilization. It accomplishes this by rebalancing nodes by moving virtual machines when the node's CPU utilization is above a threshold.
  7. LiveRecovery: Automates virtual machine recovery from physical hardware failures without the cost and complexity of clustering software. Virtual machines can be automatically restarted on new hardware when physical hardware fails, reducing outage duration and operational costs
  8. LiveMaintenance: LiveMaintenance automates the process of taking a physical server out of service for maintenance, such as adding more physical memory, without impacting the virtual machines running on it. LiveMaintenance uses LiveMigrate to move virtual machines -- without stopping operating systems or applications -- to other nodes in the virtual data center.
  9. LiveSnapshot: LiveSnapshot creates space-efficient point in time images of running or stopped virtual machines. LiveSnapshot makes it easy to try patches and roll back to previous states, or backup a running virtual machine.
  10. LivePower: LivePower optimizes data center power consumption by monitoring resource utilization in the virtual data center. When there is excess CPU capacity, LivePower consolidates virtual machines onto fewer servers and shuts down physical servers that are not running virtual machines, based on pre-defined policies. When virtual machine load increases beyond pre-defined thresholds, LivePower turns on physical servers and LiveMigrates virtual machines to rebalance the virtual data center and ensure that resource requirements and service levels are met.
  11. LiveConvert: An automated software solution powered by PlateSpin that enables customers to easily migrate 32-bit Windows workloads (data, applications, and operating systems) from physical, virtual and image-based infrastructures. LiveConvert provides Virtual Iron users with the ability to quickly migrate workloads between physical servers and virtual machines allowing users to quickly achieve the benefits of large-scale server consolidation.
  12. Jobs and Reports: View historical information on system performance or administrative actions in the data center. Useful for capacity planning, billing, and 404 auditing.
  13. User based access control: Perform user management locally or connect with Microsoft Active Directory or LDAP servers. Prevent unauthorized users from using the system. Monitor changes based on user actions for compliance purposes.

Sun Net Connect to be disabled on May 30, 2009

An email was sent to all registered Net Connect customers on March 11, 2009.

Sun Alert 257308: "On May 30, 2009 Sun Net Connect Service will be disabled. Failure to uninstall Client may cause downtime"

Replacement solutions:

  1. Services Tools Bundle (STB)

  2. Sun Secure File Transport (SFT)

  3. Shared Shell

  4. xVM OpsCenter

  5. Auto Service Request (ASR); ASR requires STB (see the download page)

  6. Sun Inventory Channel

Solaris 10 05/09 (u7) release: what's new

The new features of the 05/09 (u7) release:
what's new

  • Support Added for Using ZFS Clones When Cloning a Zone

    1. If the source and the target zonepaths reside on ZFS and both are in the same pool, a snapshot of
      the source zonepath is taken and the zoneadm clone uses ZFS to clone the zone.

    2. You can specify to copy a ZFS zonepath instead of specifying to clone the ZFS. If neither the
      source nor the target zonepath is on ZFS, or if one is on ZFS and the other is not on ZFS, the
      clone process uses the existing copy technique.

  • iSCSI target update, performance and reliability improvements

    1. Improved TCP/IP timeout recovery

    2. iSCSI initiator invoked SCSI RESETs

    3. Code path and memory leak cleanup

    4. Improved interoperability with Target Port Group Tags (TPGT), unidirectional and
      bidirectional CHAP authentication, and RADIUS server support

    5. Improved Internet Storage Name Service (iSNS) support, including recovery from
      unavailable iSNS servers

    6. Updated SCSI-3 Persistent Reserve functionality that enables the use of the functionality in
      various clustering solutions on both Solaris and other operating systems

  • The Solaris iSCSI Target release now supports a wide variety of iSCSI initiators for the following
    operating systems:

    1. Solaris 10

    2. OpenSolaris

    3. Linux: RedHat Enterprise Linux (RHEL), Suse, and Ubuntu

    4. VMWare ESX

    5. Microsoft Windows (XP, Vista, Server 2003, Server 2008, Windows Cluster Server)

    6. Mac OS X
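The ZFS-clone behavior described under zone cloning above can be sketched as follows (zone names are hypothetical; zone2 must already be configured, e.g. from an exported zone1 configuration):

```shell
# If both zonepaths are in the same ZFS pool, zoneadm snapshots and
# ZFS-clones the source zonepath:
zoneadm -z zone2 clone zone1

# Force the traditional copy technique instead of a ZFS clone:
zoneadm -z zone2 clone -m copy zone1
```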

QDR Magnum infiniband Switch

During SC08, Sun showcased the new QDR Magnum InfiniBand switch:
QDR Magnum InfiniBand Switch

The SB6048 also introduces the QDR NEM (QNEM):

  • 30 ports of 4x QDR InfiniBand and 24 ports of GbE

  • Up to 6:1 cable reduction as compared to competitive systems

  • Scalability up to 5,000+ node clusters

Sun PCIe Dual port QDR IB HCA and DDR FEM

General observations on the QNEM:

  • Functions as a leaf switch

  • Serves as a basis for a 3D torus IB fabric

  • Combined with other core switches, one can build 3-stage, 5-stage, or 7-stage IB fabrics, etc.



