Saturday Dec 09, 2006

Cedars-Sinai Medical Center deploys Sun Compute Grid

The Cedars-Sinai Medical Center has deployed the Sun Compute Grid to uncover new ways to treat disease. Look [here]. Researchers at Cedars-Sinai say that it has quadrupled their compute power while saving up to $60,000 in two months. Some quotes from the researchers:

"Sun looked at the tasks and the computational needs we had and was able to provide an optimal solution. They were able to meet our needs at every level," said Jonathan Katz, senior scientist and director of operations, Spielberg Family Center for Applied Proteomics, Cedars-Sinai Medical Center.

"We have a remarkable relationship with Sun. The passion of the employees goes far beyond selling equipment. They offer to come in on weekends to help us. The enthusiasm and dedication is something I haven't experienced with any company -- ever," said Dr. David Agus, director, Spielberg Family Center for Applied Proteomics, Cedars-Sinai Medical Center.



Visit network.com for more information regarding Sun Grid and Utility Computing.

Monday Nov 27, 2006

An amazing idea!

It is true: when you search the whole world for ideas to increase literacy, one is sitting right there in front of you. Please check out PlanetRead, which is supported by Google.org (Google's humanitarian initiative). It aims at increasing literacy by telecasting TV programming with subtitles in local languages. The good thing is that it really works, and it has contributed up to a 15% increase in the viewership ratings of the subtitled programs. The brilliant thing is that it costs $1 to give reading practice to 10,000 people. Kudos to Cornell-educated Dr. Brij Kothari. Larry Brilliant, the executive director of Google.org, has also given an interview. I need to contribute! So do you.

PS: Interestingly, Larry Brilliant was hired by Dev Anand to play an extra in the famous "Dum maro Dum" video!



Wednesday Nov 08, 2006

Sun University initiative India

If you are a university looking to build a strong industry relationship.
If you are a student who would like to learn the latest technologies and be ahead of the rest in the job market.
If you are a student who just loves the technology that is shaping the world today.

Go here and request Sun to come to your campus and show you how to broaden your tech horizons.

I am so happy to tell all you folks about the University Initiative at Sun. What's more, I am part of it.



Tuesday Oct 31, 2006

fastDNAml on Solaris 10 x86 with mpi

I have been using an app called fastDNAml, written by Gary J. Olsen. It is a very useful bio-app that infers maximum-likelihood phylogenetic trees from nucleotide sequences. I am interested in the parallel version of the app, developed at Indiana University. You can download it here.
To make it work on Solaris 10 x86 with MPI 1.2, the only tweaks needed are:
  1. Un-tar the source package.
  2. Switch to the source directory.
  3. You will want to make changes to Makefile.LINUX.
  4. Open Makefile.LINUX and change:
    1. the gcc path,
    2. the MPI_ROOT variable (set it appropriately), and
    3. the MPICC variable (point it to $MPI_ROOT/bin/mpicc). A sketch of these edits appears after these steps.
  5. Build the project with this command: # make -f Makefile.LINUX mpi
  6. Create a file called machines (sketched after these steps). I have done so because I need to run four fastDNAml processes: mpi_foreman, mpi_worker, mpi_fastDNAml, and mpi_fastDNAml_mon.
  7. For mpirun to pick this up, we need a tool that reads the machines file and assigns each of these processes a node to run on.
    • Run this command: fastDNAml_1.2.2p/testdata/mkp4pg fastDNAml_1.2.2p/src/machines 1 mpi_dnaml_mon ~/dev/fastDNAml_1.2.2p/src > myProc.pg
    • This command generates a proc group file (also sketched after these steps).
    • The arguments it takes are:
      • the machines file we generated,
      • the number of processors in each node,
      • the first process to be run, and
      • the absolute path of the directory where all the executables live.
  8. Use this command to run the app; I use a test case already bundled with fastDNAml.
    • mpirun -v -p4pg fastDNAml_1.2.2p/myProc.pg fastDNAml_1.2.2p/src/mpi_dnaml_mon -d4 -nmpi56 -s fastDNAml_1.2.2p/testdata/test56.phy
    • It should take about 12 minutes to complete on an Ultra 20 with 1 GB of RAM.
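For step 4, here is a minimal sketch of what the edited variables in Makefile.LINUX might look like. The compiler variable name (CC) and both paths are assumptions for wherever gcc and MPICH live on your machine:

    CC       = /usr/sfw/bin/gcc        # assumed gcc location on Solaris 10
    MPI_ROOT = /export/home/mpich      # assumed MPICH install root
    MPICC    = $(MPI_ROOT)/bin/mpicc   # as described in step 4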
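The original samples of the machines file and the generated proc group file did not survive here, so the following is only a rough sketch, assuming everything runs on a single host (the name node0 is made up). The machines file lists one hostname per process slot, and the proc group file follows the usual MPICH p4 layout of hostname, process count, and executable path:

    machines:
        node0
        node0
        node0
        node0

    myProc.pg (generated by mkp4pg; paths are placeholders):
        local 0
        node0 1 /export/home/dev/fastDNAml_1.2.2p/src/mpi_foreman
        node0 1 /export/home/dev/fastDNAml_1.2.2p/src/mpi_worker
        node0 1 /export/home/dev/fastDNAml_1.2.2p/src/mpi_fastDNAml

The first line refers to the master process (mpi_dnaml_mon, which mpirun itself starts), and each remaining line launches one of the other three executables.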
The folks at Indiana have written a nice help file that covers all the options; please refer to fastDNAml_1.2.2p/docs/fastDNAml_1.2.2.txt. I have uploaded it here for your benefit.

Prem Kumar L (my colleague and friend) and I compiled these steps together.





Thursday Sep 28, 2006

NAMD and Charm++ - some cool research projects from UIUC

UIUC has a cool research project going called NAMD. NAMD, recipient of a 2002 Gordon Bell Award, is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. Based on Charm++ parallel objects, NAMD scales to hundreds of processors on high-end parallel platforms and tens of processors on commodity clusters using gigabit ethernet. NAMD uses the popular molecular graphics program VMD for simulation setup and trajectory analysis, but is also file-compatible with AMBER, CHARMM, and X-PLOR. NAMD is distributed free of charge with source code. You can build NAMD yourself or download binaries for a wide variety of platforms. The tutorials show you how to use NAMD and VMD for biomolecular modeling.

I know for sure that the Gordon Bell Award is a big deal. I looked around and realised that they had no distribution for the Solaris 10 operating system on the x86 architecture, so I volunteered to do some building. The following is an account of it.



Here is my experience compiling NAMD and Charm++ on Solaris 10 x86.

Machine info: Ultra 20 (AMD Opteron), Solaris 10 x86, with MPICH 1.1.2

Expectation:

  • We will learn how to compile NAMD2 with Tcl on Solaris 10 x86. Compiling NAMD2 with FFTW is still in the works.

Things to download:

  • NAMD2 source code from CVS - Jim Phillips was nice enough to give me access.
  • Charm-5.9 source code from here
  • Tcl sol-x86 binaries from here

Notes:

  • Do take a look at notes.txt under the namd2 directory.
  • Avoid relative paths when setting the environment variables.

Steps

  1. Untar charm-5.9, Tcl, and namd2.
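Assuming the usual gzipped tarballs (the file names below are placeholders for whatever versions you downloaded), the untarring on Solaris might look like this:

    $ gzcat charm-5.9.tar.gz | tar xf -
    $ gzcat namd2.tar.gz | tar xf -
    $ gzcat tcl-sol-x86.tar.gz | tar xf -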

I. Compile charm++ for Solaris 10 x86

  1. cd charm-5.9/src
  2. mkdir arch/mpi-sol-x86
  3. cp arch/mpi-sol/* arch/mpi-sol-x86/
  4. vi arch/mpi-sol-x86/conv-mach.sh (see the sketch after this list);
    1. set CMK_QT to 'generic'
    2. append ' -lrt' to the value of CMK_LIBS
  5. Since I have mpich installed, I give the following command in the charm-5.9 directory:
    1. $ ./build charm++ mpi-sol-x86 gcc --incdir=/export/home/mpich/include --libdir=/export/home/mpich/lib
  6. This command creates a directory called 'mpi-sol-x86-gcc' under the 'charm-5.9' directory.
    1. mpi-sol-x86-gcc is your CHARMARCH
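As a sketch of step 4: conv-mach.sh is plain sourced shell, and since the stock value of CMK_LIBS in your copy of mpi-sol may differ, the safest edit is to change the CMK_QT line and then append -lrt at the end of the file rather than guessing the full library list:

    CMK_QT='generic'
    # at the end of arch/mpi-sol-x86/conv-mach.sh, tack -lrt onto whatever
    # CMK_LIBS already holds:
    CMK_LIBS="$CMK_LIBS -lrt"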

II. Compile NAMD on Solaris 10 x86

  1. cd namd2; chmod +x config
  2. vi Make.charm and set CHARMBASE to the top-level 'charm-5.9' directory.
  3. vi arch/Solaris-i686-g++.arch and set CHARMARCH to mpi-sol-x86-gcc.
  4. vi arch/Solaris-i686.tcl and set TCLDIR appropriately.
  5. ./config tcl Solaris-i686-g++
  6. cd Solaris-i686-g++;
  7. vi Makefile; go to the target 'psfgen' and append '-lsocket -lnsl' to the line starting with '$(CC)'
  8. make;
  9. Please check the compilation by running some examples. I haven't yet had time to run tests; I seem to have messed up my IP, machine, and automount configuration (I am prone to such unfortunate events... sigh!).
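Put together, the NAMD half boils down to a session like the following; the vi lines stand for the edits from steps 2-4 and 7, and all directory paths are placeholders for wherever you untarred things:

    $ cd namd2 ; chmod +x config
    $ vi Make.charm                   # CHARMBASE = /export/home/dev/charm-5.9
    $ vi arch/Solaris-i686-g++.arch   # CHARMARCH = mpi-sol-x86-gcc
    $ vi arch/Solaris-i686.tcl        # TCLDIR = /export/home/dev/tcl
    $ ./config tcl Solaris-i686-g++
    $ cd Solaris-i686-g++
    $ vi Makefile                     # psfgen: append -lsocket -lnsl to the $(CC) line
    $ make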
