Friday Jun 26, 2009

SGE 6.2u3

\* Sun Grid Engine 6.2 Update 3 is now an ideal building block for private clouds.
\* The Service Domain Manager (SDM) Cloud Adapter provides an interface for managing Amazon Elastic Compute Cloud (EC2) Amazon Machine Images (AMIs). It supports cloud-only and hybrid (local + cloud) Sun Grid Engine environments.
\* Initial Power Saving uses SDM to recommend powering off hosts in the cloud when demand is low. In this initial stage, powering off is optional rather than automatic, and powered-down hosts form a standby pool.
\* A new module, SGE Inspect, provides a graphical monitoring interface for Sun Grid Engine clusters and Service Domain Manager instances, all in a single window.
\* The Graphical Installer makes it much easier to install the Sun Grid Engine software and simplifies initial cluster or cloud setup with easy-to-navigate displays.
\* Exclusive Host Scheduling: the cluster can be configured to allow jobs to request exclusive use of an execution host. An exclusive job will only be scheduled on execution hosts that have no other jobs currently running. For a parallel job running on multiple machines, this rule applies to the job's slave tasks as well.
\* Job Submission Verifier (JSV) is an automatic filter that lets the system administrator control, enforce, and adjust job submissions. The basic idea is that on both the client side and the server side, the administrator can configure scripts that read through the job submission options and accept, reject, or modify the submission accordingly (a rough sketch follows this list).
\* Scales up to 63,000 CPU cores.
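
As an illustration of the JSV feature above, here is a rough, hypothetical sketch of a verifier script. It assumes the line-oriented JSV protocol (START, PARAM, and BEGIN requests answered with STARTED and RESULT lines) described in the sge_jsv documentation; the parameter used here (pe_max) and the 16-slot policy are examples only, not part of the product announcement.

    #!/usr/bin/env python
    # Hypothetical JSV sketch: reject jobs asking for more than 16 parallel slots.
    # Check the sge_jsv man page for the authoritative protocol and parameter names.
    import sys

    def send(line):
        sys.stdout.write(line + "\n")
        sys.stdout.flush()

    params = {}
    for raw in sys.stdin:
        parts = raw.strip().split(" ", 2)
        if parts[0] == "START":
            send("STARTED")                      # acknowledge a new verification round
        elif parts[0] == "PARAM" and len(parts) == 3:
            params[parts[1]] = parts[2]          # collect submission parameters
        elif parts[0] == "BEGIN":
            pe_max = params.get("pe_max", "")
            if pe_max.isdigit() and int(pe_max) > 16:
                send("RESULT STATE REJECT jobs may request at most 16 slots")
            else:
                send("RESULT STATE ACCEPT")
            params = {}                          # reset for the next job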

System Requirements

Operating Systems Execution Hosts

\* OpenSolaris (all versions)
\* Solaris 10, 9, and 8 Operating Systems (SPARC Platform Edition)
\* Solaris 10 and 9 Operating Systems (x86 Platform Edition)
\* Solaris 10 Operating System (x64 Platform Edition)
\* Apple Mac OS X 10.5 (Leopard), x86 platform
\* Apple Mac OS X 10.4 (Tiger), PPC platform
\* Apple Mac OS X 10.4 (Tiger), x86 platform
\* Hewlett Packard HP-UX 11.00 or higher (including HP-UX on IA64)
\* IBM AIX 5.1, 5.3
\* Linux x86, kernel 2.4, 2.6, glibc >= 2.3.2
\* Linux x64, kernel 2.4, 2.6, glibc >= 2.3.2
\* Linux IA64, kernel 2.4, 2.6, glibc >= 2.3.2
\* Microsoft Windows Server 2003 [3, 4]
\* Windows XP Professional with Service Pack 1 or later [3, 4]
\* Windows 2000 Server with Service Pack 3 or later
\* Windows 2000 Professional with Service Pack 3 or later

Operating Systems Master Hosts

\* Solaris (see details under execution hosts)
\* Linux (x86 and x64, see details under execution hosts)

Master Host minimum hardware configuration

\* 50 MB for each binary platform
\* 80 MB of free memory minimum
\* 100 MB of free disk space minimum

Execution Host minimum hardware configuration

\* 50 MB for each binary platform
\* 20 MB of free memory minimum
\* 50 MB of free disk space minimum

Database Server

\* 50 MB for each binary platform
\* 200 MB to 750 MB of free memory minimum
\* 10 GB of free disk space minimum

Sun Web Console

\* 50 MB for each binary platform
\* 200 MB of free memory minimum
\* 250 MB of free disk space minimum

Databases Supported for ARCo (see note 5)

\* PostgreSQL 8.0 through 8.3
\* MySQL(TM) 5.0
\* Oracle 9i or 10g

Operating Platforms Supported for ARCo

\* Solaris 10, 9 and 8 OS (SPARC Platform Edition)
\* Solaris 10, 9 and 8 OS (x86 Platform Edition)
\* Linux RPM Distribution

Service Domain Manager (SDM) Platforms Supported

\* OpenSolaris
\* Solaris 10, 9, and 8 Operating Systems (SPARC Platform Edition)
\* Solaris 10 and 9 Operating Systems (x86 Platform Edition)
\* Solaris 10 Operating System (x64 Platform Edition)
\* Apple Mac OS X 10.4 (Tiger), x86 platform
\* Apple Mac OS X 10.5 (Leopard), x86 platform
\* Linux x86, kernel 2.4, 2.6, glibc >= 2.3.2
\* Linux x64, kernel 2.4, 2.6, glibc >= 2.3.2

Supported web browsers for Sun Java Web Console 3.0.x

\* Netscape 6.2 and above
\* Mozilla 1.4 and above
\* Internet Explorer 5.5 and above
\* Firefox 1.0 and above

SDM Required Software

\* Sun Grid Engine 6.2
\* Java Runtime Environment (JRE) 6

Download SGE 6.2u3

Sun HPC ClusterTools 8.2

Sun HPC ClusterTools 8.2 is based on Open MPI 1.3.3 and includes many new features.

Sun HPC ClusterTools 8.2 software is an integrated toolkit that allows developers to create and tune Message Passing Interface (MPI) applications that run on high-performance clusters and SMPs. Sun HPC ClusterTools software offers a comprehensive set of capabilities for parallel computing.
Features


\* InfiniBand, shared memory, GbE, 10 GbE, and Myrinet MX communication support
\* Sun Grid Engine plug-in
\* Support for the third-party parallel debuggers TotalView and Allinea DDT
\* Integration with Sun Studio Analyzer
\* Compiler support: Sun Studio, GNU, PGI, Intel, PathScale
\* Suspend/resume support
\* Improved intra-node shared memory performance and scalability
\* InfiniBand QDR support
\* Automatic path migration support
\* Relocatable installation

Supported Platforms: all Sun UltraSPARC III or later platforms, all Sun Opteron-based platforms, and all Sun Intel x86-based platforms.

Supported Operating Systems

\* Solaris 10 11/06 Operating System or later
\* OpenSolaris 2008.11 or later
\* Linux RHEL 5
\* Linux SLES 10
\* Linux CentOS 5

Supported Compilers

\* Studio 10, 11, 12, 12U1
\* PGI (Linux only)
\* Intel (Linux only)
\* PathScale (Linux only)
\* GNU (Linux only)

Sun HPC ClusterTools 8.2 download

Sun HPC Software Linux Edition 2.0

Free Sun HPC Software, Linux Edition 2.0 Download
Sun HPC Software, Linux Edition 2.0 is available to download now. Please note that technical support is not included with the software.

What You Get

\* Lustre 1.8
\* perfctr 2.6.38
\* Env-switcher 1.0.13
\* genders 1.11
\* git 1.6.1.3
\* Heartbeat 2.1.4-2.1
\* Mellanox Firmware tools 2.5.0
\* Modules 3.2.6
\* MVAPICH 1.1
\* MVAPICH2 1.2p1
\* OFED 1.3.1
\* OpenMPI 1.2.6
\* RRDTool 1.2.30
\* HPCC Bench Suite 1.2.0
\* Lustre IOKit
\* IOR 2.10.1
\* LNET self test
\* NetPIPE 3.7.1
\* Slurm 1.3.13
\* MUNGE 0.5.8
\* Ganglia 3.1.01
\* oneSIS 2.0.1
\* Cobbler 1.4.1
\* CFEngine 2.2.6
\* Conman 0.2.3
\* FreeIPMI 0.7.5
\* IPMItool 1.8.10
\* lshw B.02.14
\* OpenSM 3.1.1
\* pdsh 2.18
\* Powerman 2.3.4
HPC stack 2.0 download

Tuesday Jul 15, 2008

Flextronics IB switches

Flextronics provides some very inexpensive IB switches.
An interesting article, "InfiniBand for the masses," talks about inexpensive IB for HPC clusters.

3 socket AMD system

Based on today's HyperTransport 1.0, it seems that a 3-socket system of AMD 8xxx CPUs with 3 HT links each would make a very balanced I/O server:
with 3 HT links, each CPU has an HT link to each of the other CPUs and one HT link to an I/O chipset.

Wednesday Jun 18, 2008

sb6048 IB-NEM switch connections without an IB switch

The 3x24 Datacenter switch is a perfect partner for the sb6048 IB-NEM leaf switch.
What connection possibilities exist for the IB-NEM without an IB switch?
Each IB-NEM contains two 24-port 4x switch chips and provides dual connections to the 12 blades in one of the sb6048 chassis.
The remaining 2 x 12 ports per NEM are presented as two sets of four 12x ports for outgoing connections.
The 12 blades in a chassis are connected in a fully non-blocking CLOS network.
Between chassis, the following configurations are possible (see the sketch after this list):
  • 4 12x cables between two chassis: 24 blades, non-blocking CLOS
  • 2 12x cables between 4 chassis: 48 blades, 50% blocking CLOS
  • 1 12x cable between 5 chassis: 60 blades, 25% blocking CLOS
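
A small sketch of the arithmetic behind that list, assuming the blocking percentages describe the fraction of full non-blocking bandwidth available between any pair of chassis (12 blades per chassis need 12 4x links for non-blocking traffic, and each 12x cable carries 3 4x links):

    # Rough chassis-to-chassis cabling math; an interpretation, not vendor data.
    X4_LINKS_PER_12X_CABLE = 3
    BLADES_PER_CHASSIS = 12

    def pair_bandwidth_fraction(cables_between_pair):
        """Fraction of non-blocking bandwidth between one pair of chassis."""
        links = cables_between_pair * X4_LINKS_PER_12X_CABLE
        return links / BLADES_PER_CHASSIS

    for cables, chassis in [(4, 2), (2, 4), (1, 5)]:
        blades = chassis * BLADES_PER_CHASSIS
        print(f"{cables} 12x cable(s) per pair, {chassis} chassis ({blades} blades): "
              f"{pair_bandwidth_fraction(cables):.0%} of non-blocking bandwidth")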

3x24 Datacenter IB switch and sb6048

Sun recently introduced the 3x24 Datacenter IB switch, which is based on three 24-port 4x switch chips.
The key innovation is that it presents the 72 4x ports as 24 12x ports in a 1U form factor.
It is a natural partner for the IB-NEM leaf switch in the Blade 6048.
For a smaller network (<=288 ports), one can use 3x24 Datacenter switches to build a 3-stage IB network (see the sketch after this list):
  • 6 chassis of 12 blades: 1 1/2 sb6048 = 72 blades, requiring a single 3x24 Datacenter switch
  • 12 chassis of 12 blades: 3 sb6048 = 144 blades, requiring two 3x24 Datacenter switches
  • 24 chassis of 12 blades: 6 sb6048 = 288 blades, requiring four 3x24 Datacenter switches
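
The switch counts above follow from a simple assumption: each blade gets one 4x uplink into the core, and each 3x24 Datacenter switch offers 72 4x ports (three 24-port chips). A quick sketch:

    from math import ceil

    PORTS_PER_3X24_SWITCH = 72   # 3 chips x 24 4x ports, presented as 24 12x ports
    BLADES_PER_CHASSIS = 12

    for chassis in (6, 12, 24):
        blades = chassis * BLADES_PER_CHASSIS
        switches = ceil(blades / PORTS_PER_3X24_SWITCH)
        print(f"{chassis} chassis = {blades} blades -> {switches} 3x24 Datacenter switch(es)")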

Thursday May 01, 2008

rocks 5.0 released

Download: http://www.rocksclusters.org/wordpress/?page_id=82. New features:
  • Xen Support:
    You can use the Xen Roll to create VM Containers: physical machines that are used to hold Xen-based virtual machines. The Rocks command line was expanded to help build and maintain VMs (e.g., "rocks create host vm compute-0-0-0" is used to install a VM).
  • Fully-Programmable Partitioning
    The partitioning of client nodes (e.g., compute nodes and tile nodes) has been retooled. You can supply Red Hat partitioning directives to any node by writing a program in the pre section that populates the file /tmp/user_partitioning_info. The program can be as simple as a small bash script that echoes Red Hat partitioning directives, or as complex as a Python program that outputs partitioning info based on the node's name, the node's membership, the number of disks in the node, or the type of disks in the node. See the Base Roll documentation for details (a sketch of such a program follows this list).
  • OS: Based on CentOS release 5/update 1 and all updates as of April 29, 2008
  • Condor: updated to v7.0.1
  • Ganglia: Ganglia Monitor Core updated to v3.0.7
  • Ganglia: phpsysinfo updated to v2.5.4
  • Ganglia: rrdtool updated to v1.2.23
  • HPC: The HPC roll is now optional. You can build a bare-bones cluster by using only the Kernel Roll, OS Rolls (disk 1 and 2) and the Base Roll
  • HPC: MPICH updated to v1.2.7 patch 1
  • HPC: MPICH2 added to the roll (v1.0.6 patch 1)
  • HPC: OpenMPI added to the roll (v1.2.6)
  • SGE: SGE updated to 6.1 update 4
  • SGE: Added tight integration for SGE and OpenMPI
  • Area51: chkrootkit updated to v0.48
  • Area51: tripwire updated to v2.4.1.2
  • Bio: Biopython updated to v1.45
  • Bio: Clustalw updated to v2.0.5
  • Bio: Fasta updated to v35.3.5
  • Bio: NCBI toolbox updated to Mar 2008 version
  • Bio: MpiBlast updated to v1.5.0-pio and is patched against the NCBI toolbox Mar 2008 version
  • Bio: Phylip updated to v3.67
  • Bio: T_coffee updated to v5.65
  • Bio: Gromacs and MrBayes are now MPI Enabled and compiled against rocks-openmpi
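
As a rough illustration of the fully-programmable partitioning feature (not Rocks' own code), here is a hypothetical Python program that a pre section could run to write Red Hat kickstart partitioning directives into /tmp/user_partitioning_info, varying the layout by the number of disks found; see the Base Roll documentation for the real conventions.

    # Hypothetical pre-section helper; the disk probing and sizes are examples only.
    def list_disks():
        """Return whole-disk device names (e.g. sda, sdb) from /proc/partitions."""
        disks = []
        with open("/proc/partitions") as f:
            for line in f.readlines()[2:]:
                if not line.strip():
                    continue
                name = line.split()[-1]
                if name[-1].isalpha():       # skip numbered partitions like sda1
                    disks.append(name)
        return disks

    disks = list_disks() or ["sda"]
    directives = [
        "clearpart --all --initlabel",
        f"part / --size=16000 --ondisk={disks[0]}",
        f"part swap --size=2000 --ondisk={disks[0]}",
    ]
    # Put the scratch partition on a second disk when one exists.
    scratch_disk = disks[1] if len(disks) > 1 else disks[0]
    directives.append(f"part /state/partition1 --size=1 --grow --ondisk={scratch_disk}")

    with open("/tmp/user_partitioning_info", "w") as out:
        out.write("\n".join(directives) + "\n")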

Sunday Apr 27, 2008

IB network design leaf and core switch trade-off

Current IB switches based on the Mellanox InfiniScale III 24-port 4x chip support up to 288 ports in 3 stages and 3456 ports in 5 stages.
Sun recently built a leaf switch into the Blade 6048 rack.
If one uses a 3-stage IB switch together with the sb6048 built-in leaf switch, one creates a 5-stage network; using a 5-stage IB switch creates a 7-stage network.
TACC supports more than 3456 nodes, so it needs two 3456-port switches and uses the IB-NEM to create a 7-stage CLOS network.
For a smaller network (<=288 ports), one can use many 24-port switches as core switches to build a 3-stage IB network (see the sketch after this list):
  • 6 chassis of 12 blades: 1 1/2 sb6048 = 72 blades, requiring three 24-port core switches
  • 8 chassis of 12 blades: 2 sb6048 = 96 blades, requiring four 24-port core switches
  • 12 chassis of 12 blades: 3 sb6048 = 144 blades, requiring six 24-port core switches
  • 24 chassis of 12 blades: 6 sb6048 = 288 blades, requiring twelve 24-port core switches
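
A quick sketch of the core-switch counts above, assuming each blade gets one 4x uplink from the NEM leaf into the core and each core switch is a single 24-port InfiniScale III chip:

    from math import ceil

    CORE_SWITCH_PORTS = 24
    BLADES_PER_CHASSIS = 12

    for chassis in (6, 8, 12, 24):
        blades = chassis * BLADES_PER_CHASSIS
        switches = ceil(blades / CORE_SWITCH_PORTS)
        print(f"{chassis} chassis = {blades} blades -> {switches} x 24-port core switches")
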
A 5-stage network using the NEM IB leaf switch:
  • 6 chassis of 12 blades: 1 1/2 sb6048 = 72 blades and a 72-port core switch
  • 8 chassis of 12 blades: 2 sb6048 = 96 blades and a 96-port core switch
  • 12 chassis of 12 blades: 3 sb6048 = 144 blades and a 144-port core switch
  • 24 chassis of 12 blades: 6 sb6048 = 288 blades and a 288-port core switch
A 3-stage network using the IB PCI-E EM:
  • 6 chassis of 12 blades: 1 1/2 sb6048 = 72 blades and a 72-port core switch
  • 8 chassis of 12 blades: 2 sb6048 = 96 blades and a 96-port core switch
  • 12 chassis of 12 blades: 3 sb6048 = 144 blades and a 144-port core switch
  • 24 chassis of 12 blades: 6 sb6048 = 288 blades and a 288-port core switch
For a much larger network (>288 ports), one can use many 288-port switches as core switches to build a 5-stage IB network. One can also connect dual-port IB EM modules directly to core switches: up to 288 ports for a 3-stage network and up to 3456 ports for a 5-stage network.
Of course, one can use more core switches to construct even larger IB networks.
The same argument applies to racks of rack-mount servers:
there is a design trade-off between one big IB switch and small leaf switches in each rack.
For example, if we use 36 1U servers as the building block and three 24-port leaf switches per rack, then there are two ways to build the IB network:
  • use 3-stage core switches (<=288 ports) to build a 5-stage network
  • use many 24-port switches as core switches for a 3-stage network
The main difference between the 3-stage and 5-stage IB networks is two extra switch hops: roughly 2 x 200 nanoseconds for the 24-port switch chip.
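
A minimal worked example of that difference, assuming roughly 200 ns of cut-through latency per 24-port switch chip hop:

    HOP_NS = 200  # approximate cut-through latency per 24-port switch chip

    def switching_latency_ns(stages):
        return stages * HOP_NS

    for stages in (3, 5):
        print(f"{stages}-stage network: ~{switching_latency_ns(stages)} ns of switch latency")
    print(f"difference: ~{switching_latency_ns(5) - switching_latency_ns(3)} ns (2 x {HOP_NS} ns)")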

Mellanox InfiniScale IV architecture

Mellanox announced the InfiniScale IV architecture at SC2007.
The benefits are:
  • 40Gb/s server and storage interconnect (4x QDR)
  • 120Gb/s switch-to-switch interconnect (12x QDR)
  • 60 nanosecond switch hop latency
  • 36-port switch devices for optimal scalability
  • Adaptive Routing to optimize data traffic flow
  • Congestion control to avoid hot spots
Using the 36-port silicon to build non-blocking CLOS switches in a 3-stage setup, one can get the following (with the 24-port chip, the 3-stage limit is 288 ports; see the sketch after this list):
  • 72 ports: 6 chips, 9 links between chips, 2 cores and 4 leafs
  • 108 ports: 9 chips, 6 links between chips, 3 cores and 6 leafs
  • 216 ports: 18 chips, 3 links between chips, 6 cores and 12 leafs
  • 324 ports: 27 chips, 2 links between chips, 9 cores and 18 leafs
  • 648 ports: 54 chips, 1 link between chips, 18 cores and 36 leafs
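
A small sketch that reproduces the list above, assuming each 36-port leaf chip dedicates half its ports to hosts and half to uplinks, with the uplinks spread evenly over the core chips:

    CHIP_PORTS = 36
    DOWN = CHIP_PORTS // 2                    # 18 host-facing ports per leaf chip

    for leafs in (4, 6, 12, 18, 36):
        ports = leafs * DOWN                  # total host ports of the switch
        cores = leafs * DOWN // CHIP_PORTS    # core chips needed to absorb all uplinks
        links = DOWN // cores                 # parallel links between each leaf and core
        chips = leafs + cores
        print(f"{ports} ports: {chips} chips, {links} links between chips, "
              f"{cores} cores and {leafs} leafs")
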
For a 5-stage non-blocking CLOS network one can get (with the 24-port chip, the 5-stage limit is 3456 ports):
  • 2 x 648 = 1296 ports
  • 3 x 648 = 1944 ports
  • 4 x 648 = 2592 ports
  • 6 x 648 = 3888 ports
  • 9 x 648 = 5832 ports
  • 12 x 648 = 7776 ports
  • 18 x 648 = 11664 ports

As a comparison with InfiniScale IV, we list the InfiniScale III 24-port features here:
  • Twenty-four 10 or 20 Gb/s InfiniBand 4X ports, or eight 30 or 60 Gb/s InfiniBand 12X ports (or any combination)
  • 480 Gb/s (SDR version) or 960 Gb/s (DDR version) of total switching bandwidth
  • Scalable to thousands of ports
  • 96 integrated 2.5 Gb/s (SDR version) or 5 Gb/s (DDR version) SerDes interfaces (physical layer)
  • Auto-negotiation of port link speed
  • Ultra-low-latency cut-through switching (less than 200 nanoseconds)
  • MTU size from 256 to 2K bytes
  • Supports multi-protocol applications for clustering, communication, and storage
  • Integrated Subnet Management Agent (SMA)

Wednesday Apr 16, 2008

x2200M2 QC U.S. Education Essentials Matching Grant

This year's Matching Grant has a very aggressive 60% discount on most products. The following X2200 M2 QC configuration is particularly interesting for HPC customers:
  • A85-FFZ2-H-8GB-JL8
  • Sun Fire X2200 M2 x64 Server: 2x Quad Core AMD Opteron Model 2354 processors (2.2GHz/95W, B3), 4x 2GB registered ECC DDR2-667 memory, no hard drives, no optical drive, 1x PSU, Service Processor, 4x 10/100/1000 Ethernet ports, 6x USB 2.0 ports, 1x I/O riser card with 2x PCI-Express x8 slots, no power cords, order Geo-specific x-option. RoHS-5. Standard Configuration.
  • list $2,905.00
  • MG $1,162.00

Tuesday Jan 01, 2008

why blade?

In a very short time span, Sun has introduced the SB8000, SB8000P, SB6000, and SB6048 chassis, along with the X8400, X8420, and X8440 blades for the SB8000 and SB8000P, and the X6220, X6250, T6300, and T6320 blades for the SB6000 and SB6048.
One should ask: what are the advantages of blades?
  • Serviceability: the CMM and PCIe EMs are hot-pluggable, and the NEM is cold-replaceable without the need to open any blades
  • Reduced cabling: with the CMM's built-in management network, one cable manages 10 or 12 ILOMs; with shared power supplies, a few power cables support 10 or 12 blade servers
  • Reduced power requirements: thanks to consolidated fans, the overall power requirement is less than that of the corresponding 10 or 12 rack servers
  • IB leaf switch NEM for the SB6048: reduces IB cabling with the new iPass+ connectors and cables
  • The SB6000 supports Intel, AMD, and UltraSPARC T1 and T2 processors

Sunday Oct 14, 2007

Solaris x86, Lustre and Rocks clusters

The recent acquisition of Lustre by Sun is just another piece of the Solaris HPC strategy.

Now Sun has

  • Solaris
  • LustreFS, QFS, pNFS
  • Compiler
  • OpenMPI, Cluster Tool
  • SGE
  • Rocks cluster
to support HPC in a Solaris environment.

Plus the hardware:

  • Constellation System
  • Magnum IB switches
Sun is lowering the entry barrier for users to adopt Solaris in HPC.

Solaris x86, MATLAB , xVM and V12N, BrandZ

One of the roadblocks to Solaris x86 adoption in the HPC field is the lack of a MATLAB port.

MATLAB is a widely used tool in computation and simulation.

Solaris 09/07 does add a new feature, BrandZ, that supports 32-bit Red Hat and can run MATLAB in 32-bit mode.

There is work to extend BrandZ support to the 2.6 Linux kernel in OpenSolaris since build 72; very interesting work by the community.

The upcoming Solaris xVM can run 64-bit Red Hat 5 in paravirtualized (PV) mode or 64-bit Red Hat 4 in HVM mode, so one can run 64-bit MATLAB together with other apps on 64-bit Solaris.

Of course, one can also use another OS as the base to run Solaris and Linux:

  • VMware ESX supports Solaris 10, Red Hat, SUSE, and Windows
  • Red Hat and CentOS 5 support Linux, Windows, and Solaris
  • SUSE 10 supports Linux, Windows, and Solaris
  • XenSource supports Linux and Windows
One issue still needs to be worked out: DRM software like SGE, PBS, and LSF needs to deal with this new V12N (virtualization) environment.

Thursday May 25, 2006

sge in rocks cluster 4.1

Even though the current setup of SGE in Rocks seems to work, I have some observations/suggestions on SGE in Rocks:
  • I understand the reason for not mounting $SGE_ROOT/default: Rocks makes a copy of $SGE_ROOT/default on each node. However, this approach does not allow a shadow master to back up the qmaster.
  • Rocks' internal DNS domain name is "local" and should be used, i.e., the default domain should be "local" and not "none". Right now the configuration sets the default domain to "none".
  • act_qmaster should be the hostname of 10.1.1.1, i.e., the frontend's .local name instead of frontend.outsidedomainname. Since the Rocks cluster uses a private network and a private domain name, there is no need to use any external public name for SGE; e.g., /etc/hosts on the compute nodes doesn't require any external hostname for the frontend.
  • Since the frontend has (at least) dual NICs, one wants to set up host_aliases (frontend.local frontend.outsidedomainname); see http://gridengine.sunsource.net/howto/multi_intrfcs.html