Tuesday Apr 14, 2009

using openmp on a 64-thread system

What do you do when you get a 64-thread machine? I mean, other than trying to find the hidden messages in Pi?
Our group recently acquired a T5120 behemoth for builds, and I wanted to see what it was capable of.

|uname -a
SunOS hypernova 5.10 Generic_127127-11 sun4v sparc SUNW,SPARC-Enterprise-T5120
|psrinfo | wc -l
      64

In my case I settled on a slightly less ambitious endeavor. I recently had to implement Gaussian elimination as part of a university course, so I converted it to use OpenMP and compiled it with Sun Studio.

|cat Makefile
gauss: gauss.omp.c
               /opt/SUNWspro/bin/cc -xopenmp=parallel gauss.omp.c -o gauss
|diff -u gauss.single.c gauss.omp.c
--- gauss.single.c      Tue Apr 14 14:32:57 2009
+++ gauss.omp.c Tue Apr 14 14:44:48 2009
@@ -7,6 +7,7 @@
 #include <sys/times.h>
 #include <sys/time.h>
 #include <limits.h>
+#include <omp.h>

 #define MAXN 10000  /* Max value of N */
 int N;  /* Matrix size */
@@ -35,7 +36,7 @@
     char uid[L_cuserid + 2]; /* User name */

     seed = time_seed();
-    procs = 1;
+    procs = omp_get_num_threads();

     /* Read command-line arguments */
     switch(argc) {
@@ -63,7 +64,7 @@
                 exit(0);
             }
     }
-
+    omp_set_num_threads(procs);
     srand(seed);  /* Randomize */
     /* Print parameters */
     printf("Matrix dimension N = %i.\n", N);
@@ -170,6 +171,7 @@

 }

+#define CHUNKSIZE 5
 void gauss() {
     int row, col;  /* Normalization row, and zeroing
                     * element row and col */
@@ -178,7 +180,9 @@

     /* Gaussian elimination */
     for (norm = 0; norm < N - 1; norm++) {
+        #pragma omp parallel shared(A,B) private(multiplier,col, row)
         {
+            #pragma omp for schedule(dynamic, CHUNKSIZE)
             for (row = norm + 1; row < N; row++) {
                 multiplier = A[row][norm] / A[norm][norm];
                 for (col = norm; col < N; col++) {
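
For reference, here is roughly what the complete parallelized gauss() looks like after the change. This is only a sketch reconstructed from the diff above: A, B and N are the globals from the original file, and the float element type plus the two update statements (which the diff truncates) are just the standard forward-elimination step, so treat those as assumptions rather than the exact original code.

#define CHUNKSIZE 5

void gauss() {
    int norm, row, col;   /* Normalization row, and zeroing element row and col */
    float multiplier;

    /* Gaussian elimination */
    for (norm = 0; norm < N - 1; norm++) {
        #pragma omp parallel shared(A,B) private(multiplier,col,row)
        {
            #pragma omp for schedule(dynamic, CHUNKSIZE)
            for (row = norm + 1; row < N; row++) {
                multiplier = A[row][norm] / A[norm][norm];
                for (col = norm; col < N; col++) {
                    A[row][col] -= A[norm][col] * multiplier;   /* assumed update */
                }
                B[row] -= B[norm] * multiplier;                 /* assumed update */
            }
        }   /* implicit barrier: every row is eliminated before the next norm */
    }
}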

As you can see, the changes are very simple and require very little modification to the code. Below are my results, first running it with a single thread and then using all 64 threads.

First, the single-threaded version.

|time ./gauss 10000 1 4
Random seed = 4
Matrix dimension N = 10000.
Number of processors = 1.
Initializing...
Starting clock.
Stopped clock.
Elapsed time = 1.11523e+07 ms.
(CPU times are accurate to the nearest 10 ms)
My total CPU time for parent = 1.11523e+07 ms.
My system CPU time for parent = 1080 ms.
My total CPU time for child processes = 0 ms.
--------------------------------------------
./gauss 10000 1 4  11163.06s user 1.64s system 99% cpu 3:06:04.96 total

And now using all threads.

|time ./gauss 10000 64 4
Random seed = 4
Matrix dimension N = 10000.
Number of processors = 64.
Initializing...
Starting clock.
Stopped clock.
Elapsed time = 254993 ms.
(CPU times are accurate to the nearest 10 ms)
My total CPU time for parent = 1.53976e+07 ms.
My system CPU time for parent = 37960 ms.
My total CPU time for child processes = 0 ms.
--------------------------------------------
./gauss 10000 64 4  15371.53s user 38.51s system 5757% cpu 4:27.65 total
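
That works out to a speedup of roughly 44x (1.11523e+07 ms of elapsed time down to 254993 ms) on the 64 hardware threads, or about 68% parallel efficiency, which is not bad at all for a couple of pragmas.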

Now I am all set to look for my name in Pi. :)

*The Gaussian elimination source is here.

Monday Apr 13, 2009

sun hpc clustertools for openmpi

Having originally migrated from Civil Engineering, I have always been interested in parallel programming. Quite a few (almost all?) problems in that domain can be called embarrassingly parallel, be it Structural Mechanics, Fluid Dynamics, or Virtual Modeling.

Recently I got interested in parallel programming again as part of my studies. While the university has a cluster setup, it is almost always in use and is dead slow because of the number of users. So I tried setting up a simple Open MPI cluster locally on Ubuntu and Solaris.

Setting up MPI on Ubuntu is covered in quite a few places on the web, so I am not listing the steps for that. However, I found that using the ClusterTools from Sun was much easier than messing with the MPICH distribution on Ubuntu.

Here are my notes on getting it to work.

As a prerequisite,

  • A few machines with the same OS and architecture.
  • A common NFS-exported directory (mounted on the same path) on each machine. I used /home/myname as the NFS mount.
  • Passwordless login between the machines, using either ssh or rsh (one way to set this up with ssh is sketched right after this list).
  • The ClusterTools installed on each machine.
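
For the passwordless-login prerequisite, here is one way to do it with ssh. This is a minimal sketch assuming OpenSSH; it relies on the NFS-shared home directory above, so a single authorized_keys file covers every node. (I ended up using rsh instead, as described further down.)

|ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
|cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
|chmod 600 ~/.ssh/authorized_keys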

You can get the ClusterTools from here. Unpack it into a directory and execute the ctinstall binary:

|cat sun-hpc-ct-8.1-SunOS-sparc.tar.gz |gzip -dc | tar -xvpf -
|sun-hpc-ct-8.1-SunOS-sparc/Product/Install_Utilities/bin/ctinstall -l
|...

This will install the necessary packages. You might need to check the default parameters and verify that they are to your satisfaction.

|/opt/SUNWhpc/HPC8.1/sun/bin/ompi_info --param all all

In my setup, I wanted to use rsh, whereas ssh is the default for ClusterTools:

|/opt/SUNWhpc/HPC8.1/sun/bin/ompi_info --param all all | grep ssh
     MCA plm: parameter "plm_rsh_agent" (current value: "ssh : rsh", data source: default value, synonyms: pls_rsh_agent)
              The command used to launch executables on remote nodes (typically either "ssh" or "rsh")
     MCA filem: parameter "filem_rsh_rsh" (current value: "ssh", data source: default value)
|echo 'plm_rsh_agent = rsh' >> /opt/SUNWhpc/HPC8.1/sun/etc/openmpi-mca-params.conf


Once this is done, create your machines file (my machines are named host1, host2, host3, and host4):

|cat > machines.lst
host1
host2
host3
host4
^D

Now you are ready to verify that things work. Try:

|/opt/SUNWhpc/HPC8.1/sun/bin/mpirun -np 4 -machinefile ./machines.lst hostname
host1
host2
host3
host4

This should also work

|/opt/SUNWhpc/HPC8.1/sun/bin/mpirun -np 4 -host host1,host2,host3,host4 hostname
host1
host2
host3
host4

If you get similar output, then you have successfully completed the initial configuration.
If you are unable to modify the /opt/SUNWhpc/HPC8.1/sun/etc/openmpi-mca-params.conf file, you can instead pass the rsh agent on the mpirun command line:

|/opt/SUNWhpc/HPC8.1/sun/bin/mpirun -mca pls_rsh_agent rsh -np 4 -machinefile ./machines.lst hostname
host1
host2
host3
host4

Now, try an example:

|cat hello.c
#include <stdio.h>
#include <mpi.h>
int main(int argc, char **argv) {
     int my_rank;
     MPI_Init( &argc, &argv);
     MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
     printf("Hello world[%d]\\n", my_rank);
     MPI_Finalize();
     return 0;
}

Compile and run it:

|/opt/SUNWhpc/HPC8.1/sun/bin/mpicc -o hello hello.c
|/opt/SUNWhpc/HPC8.1/sun/bin/mpirun -np 4 -machinefile ./machines.lst ./hello
Hello world[2]
Hello world[3]
Hello world[0]
Hello world[1]

Now you are ready to try something larger, for instance a simple scatter and gather of a matrix; my version is attached.
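
The attached source is not reproduced here, but a minimal scatter and gather of a matrix looks roughly like the sketch below. The matrix size N, the row-sum computation, and the assumption that N divides evenly among the processes are placeholders for illustration, not the attached code.

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define N 8                             /* small matrix, just for illustration */

int main(int argc, char **argv) {
     int rank, np, i, j, rows_per_proc;
     double A[N][N], myrows[N][N], mysums[N], sums[N];

     MPI_Init(&argc, &argv);
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
     MPI_Comm_size(MPI_COMM_WORLD, &np);
     rows_per_proc = N / np;            /* assumes np divides N evenly */

     if (rank == 0)                     /* root fills the matrix */
         for (i = 0; i < N; i++)
             for (j = 0; j < N; j++)
                 A[i][j] = (double)rand() / RAND_MAX;

     /* hand each process its own block of rows */
     MPI_Scatter(A, rows_per_proc * N, MPI_DOUBLE,
                 myrows, rows_per_proc * N, MPI_DOUBLE, 0, MPI_COMM_WORLD);

     /* each process works on its rows (here: simple row sums) */
     for (i = 0; i < rows_per_proc; i++) {
         mysums[i] = 0.0;
         for (j = 0; j < N; j++)
             mysums[i] += myrows[i][j];
     }

     /* collect the partial results back on the root */
     MPI_Gather(mysums, rows_per_proc, MPI_DOUBLE,
                sums, rows_per_proc, MPI_DOUBLE, 0, MPI_COMM_WORLD);

     if (rank == 0)
         for (i = 0; i < N; i++)
             printf("row %d sum = %g\n", i, sums[i]);

     MPI_Finalize();
     return 0;
}

It compiles and runs the same way as hello above, for example with mpicc -o scatter scatter.c and mpirun -np 4 -machinefile ./machines.lst ./scatter (the file and binary names here are just whatever you choose).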

*Many thanks to the Sun HPC team for making this setup so easy.


Thursday Apr 09, 2009

A simple extension to the hg (mercurial) forest (fdiff fcommit and fimport)

New commands for the forest extension for Mercurial: fdiff, fcommit, and fimport.
[Read More]

Thursday Aug 30, 2007

The Proxy side

Proxy Servers, Load Balancers, Personal Proxies, Filtering Proxies, and Firewalls
[Read More]

Friday Aug 24, 2007

SUNW -> JAVA

Am I the only one who finds this absolutely brilliant?

Imagine a future sales pitch that goes like this:
Marketing guy from another company, to a customer: We have this absolutely fantastic product that will solve all problems for you ...

Customer: Hmm ok, but how different is it from XXYY ?

Marketing guy: ... and it is in java too so it is cool...

Customer: Wait a minute, but I saw the JAVA ticker going down ...

Now, any company that wants to add that word to their PR will begin to wish for the JAVA ticker to start going up.

I mean, it can produce a class I effect with a class III stimulus.

Wednesday Aug 22, 2007

compile time options for binary distribution of Squid

Squid compilation options poll. [Read More]

Sunday Apr 22, 2007

A thin slice of Haskell

Notes on the Haskell programming language. [Read More]

Thursday Apr 19, 2007

Homoiconic languages

Exploring homoiconic languages. [Read More]

Wednesday Jan 31, 2007

Scripting with servlets [php - using quercus] - part VII (Sun Java System WebServer 7.0)

Scripting with servlets [php - quercus]. (Using the open-sourced JVM implementation of PHP from Resin.) [Read More]

Friday Jan 19, 2007

Scripting with servlets [groovy - using jsr223] - part VI (Sun Java System WebServer 7.0)

Scripting with servlets [using groovy], this time using the JDK 6 and JSR 223 scripting interface. [Read More]

Thursday Jan 18, 2007

Scripting with servlets [jacl] - part V (Sun Java System WebServer 7.0)

Writing servlets with Jacl [tcl over jvm] for Sun Java System WebServer 7.0
[Read More]

Wednesday Jan 17, 2007

Scripting with servlets [sleep] - part IV (Sun Java System WebServer 7.0)

Using sleep (a Perl-like language over the JVM) as a servlet on Sun Java System WebServer 7.0. [Read More]

Tuesday Jan 16, 2007

Scripting with servlets [rhino] - part III (Sun Java System WebServer 7.0)

Writing Rhino servlets on Sun Java System WebServer 7.0. [Read More]

Monday Jan 15, 2007

Scripting with servlets [jscheme] - part II (Sun Java System WebServer 7.0)

Writing Jscheme servlets on Sun Java System WebServer 7.0. [Read More]

Sunday Jan 14, 2007

Scripting with servlets [jruby] - part I (Sun Java System WebServer 7.0)

 

Writing Jruby servlets on Sun Java System WebServer 7.0.

[Read More]