Tuesday Jul 31, 2012

Leaving Oracle | Long Live Sun!

Last week I tendered my resignation at Oracle.

Friday next week (8/10) will be my last day.

It's been an interesting 17 years, and some of the best years were when we were Sun.

I'm leaving to pursue new and exciting opportunities as an independent tech writing and web design contractor.

Carry on.

Tuesday Jul 17, 2012

It Was 50 Years Ago This Month...

IBM 1401

I just realized that it was 50 years ago this month that I wrote my first computer program.

It was for the IBM 1401, in Autocoder. I was 18 (July 1962, do the math). Ouch!

I took a class during the summer after my sophomore year in college. My regular summer job fell through and my Dad knew someone, etc. I took the class on a lark, but it did interrupt my plan to read all of Aldous Huxley's novels on the beach. I guess the rest is history. I started my junior year with a part-time job in the college computer center (IBM 650) writing FORTRAN. In '65 I had my first full-time job as a programmer at the NYU computer center in Greenwich Village. You never know where a lark will lead, I guess.

Wednesday May 30, 2012

Remote Development With Solaris Studio

A new technical article has been published on OTN:

How to Develop Code from a Remote Desktop with Oracle Solaris Studio

by Igor Nikiforov

This article describes the remote desktop feature of the Oracle Solaris Studio IDE, and how to use it to compile, run, debug, and profile your code running on remote servers.

Published May 2012

Introducing the IDE Desktop Distribution
Determining Whether You Need the Desktop Distribution
Creating the Desktop Distribution
Using the Desktop Distribution
See Also
About the Author

Introducing the IDE Desktop Distribution

Sun Studio 12 Update 1 introduced a unique remote development feature that allows you to run just one instance of the IDE while working with multiple servers and platforms. For example, you could run the IDE on an x86-based laptop or desktop running Oracle Linux, and use a SPARC-based server running Oracle Solaris 10 to compile, run, debug, and profile your code. The IDE works seamlessly just as if you had the Oracle Solaris operating system on your laptop or desktop.


Wednesday Dec 14, 2011

Oracle Solaris Studio 12.3 Released Today!

Oracle Solaris Studio 12.3 was released today with updated compilers and tools.

You can read about what's new in this release at:

What's New in The Oracle Solaris Studio 12.3 Release


Thursday Nov 17, 2011

How John Got 15x Improvement Without Really Trying

The following article was published on a Sun Microsystems website a number of years ago by John Feo. It is still useful and worth preserving. So I'm republishing it here. 

How I Got 15x Improvement Without Really Trying

John Feo, Sun Microsystems

Taking ten "personal" program codes used in scientific and engineering research, the author easily obtained performance improvements of 2x to 15x by applying some simple, general optimization techniques.


Scientific research based on computer simulation depends on the simulation for advancement. The research can advance only as fast as the computational codes can execute. The codes' efficiency determines both the rate and quality of results. In the same amount of time, a faster program can generate more results and can carry out a more detailed simulation of physical phenomena than a slower program. Highly optimized programs help science advance quickly and ensure that monies supporting scientific research are used as effectively as possible.

Scientific computer codes divide into three broad categories: ISV, community, and personal. ISV codes are large, mature production codes developed and sold commercially. The codes improve slowly over time both in methods and capabilities, and they are well tuned for most vendor platforms. Since the codes are mature and complex, there are few opportunities to improve their performance solely through code optimization. Improvements of 10% to 15% are typical. Examples of ISV codes are DYNA3D, Gaussian, and Nastran.

Community codes are non-commercial production codes used by a particular research field. Generally, they are developed and distributed by a single academic or research institution with assistance from the community. Most users just run the codes, but some develop new methods and extensions that feed back into the general release. The codes are available on most vendor platforms. Since these codes are younger than ISV codes, there are more opportunities to optimize the source code. Improvements of 50% are not unusual. Examples of community codes are AMBER, CHARM, BLAST, and FASTA.

Personal codes are those written by single users or small research groups for their own use. These codes are not distributed, but may be passed from professor-to-student or student-to-student over several years. They form the primordial ocean of applications from which community and ISV codes emerge. Government research grants pay for the development of most personal codes. This paper reports on the nature and performance of this class of codes.

Over the last year, I have looked at over two dozen personal codes from more than a dozen research institutions. The codes cover a variety of scientific fields, including astronomy, atmospheric sciences, bioinformatics, biology, chemistry, geology, and physics. The sources range from a few hundred lines to more than ten thousand lines, and are written in Fortran, Fortran 90, C, and C++. For the most part, the codes are modular, documented, and written in a clear, straightforward manner. They do not use complex language features, advanced data structures, programming tricks, or libraries. I had little trouble understanding what the codes did or how data structures were used. Most came with a makefile.

Surprisingly, only one of the applications is parallel. All developers have access to parallel machines, so availability is not an issue. Several tried to parallelize their applications, but stopped after encountering difficulties. Lack of education and a perception that parallelism is difficult prevented most from trying. I parallelized several of the codes using OpenMP, and did not judge any of the codes as difficult to parallelize.

Even more surprising than the lack of parallelism is the inefficiency of the codes. I was able to get large improvements in performance in a matter of a few days applying simple optimization techniques. Table 1 lists ten representative codes [names and affiliation are omitted to preserve anonymity]. Improvements on one processor range from 2x to 15.5x with a simple average of 4.75x. I did not use sophisticated performance tools or drill deep into the program's execution character as one would do when tuning ISV or community codes. Using only a profiler and source line timers, I identified inefficient sections of code and improved their performance by inspection. The changes were at a high level. I am sure there is another factor of 2 or 3 in each code, and more if the codes are parallelized. The study’s results show that personal scientific codes are running many times slower than they should and that the problem is pervasive.

Computational scientists are not sloppy programmers; however, few are trained in the art of computer programming or code optimization. I found that most have a working knowledge of some programming language and standard software engineering practices; but they do not know, or think about, how to make their programs run faster. They simply do not know the standard techniques used to make codes run faster. In fact, they do not even perceive that such techniques exist. The case studies described in this paper show that applying simple, well known techniques can significantly increase the performance of personal codes. It is important that the scientific community and the Government agencies that support scientific research find ways to better educate academic scientific programmers. The inefficiency of their codes is so bad that it is retarding both the quality and progress of scientific research.

Table 1 — Area of improvement and performance gains of 10 codes

The remainder of the paper is organized as follows: the next three sections discuss the three most common sources of inefficiency in the codes studied. These are cache performance, redundant operations, and loop structures. Each section includes several examples. The last section summarizes the work and suggests a possible solution to the issues raised.

Optimizing cache performance

Commodity microprocessor systems use caches to increase memory bandwidth and reduce memory latencies. Typical latencies from processor to L1, L2, local, and remote memory are 3, 10, 50, and 200 cycles, respectively. Moreover, bandwidth falls off dramatically as memory distances increase. Programs that do not use cache effectively run many times slower than programs that do.

When optimizing for cache, the biggest performance gains are achieved by accessing data in cache order and reusing data to amortize the overhead of cache misses. Secondary considerations are prefetching, associativity, and replacement; however, the understanding and analysis required to optimize for the latter are probably beyond the capabilities of the non-expert. Much can be gained simply by accessing data in the correct order and maximizing data reuse. 6 out of the 10 codes studied here benefited from such high level optimizations.

Array Accesses

The most important cache optimization is the most basic: accessing Fortran array elements in column order and C array elements in row order. Four of the ten codes—1, 2, 4, and 10—got it wrong. Compilers will restructure nested loops to optimize cache performance, but may not do so if the loop structure is too complex, or the loop body includes conditionals, complex addressing, or function calls. In code 1, the compiler failed to invert a key loop because of complex addressing:

      do I = 0, 1010, delta_x
        IM = I - delta_x
        IP = I + delta_x
        do J = 5, 995, delta_x
          JM = J - delta_x
          JP = J + delta_x
          T1 = CA1(IP, J) + CA1(I, JP)
          T2 = CA1(IM, J) + CA1(I, JM)
          S1 = T1 + T2 - 4 * CA1(I, J)
          CA(I, J) = CA1(I, J) + D * S1
        end do
      end do

In code 2, the culprit is conditionals:

      do I = 1, N
        do J = 1, N
          if (IFLAG(I,J) .EQ. 0) then
            T1 = Value(I, J-1)
            T2 = Value(I-1, J)
            T3 = Value(I, J)
            T4 = Value(I+1, J)
            T5 = Value(I, J+1)
            Value(I,J) = 0.25 * (T1 + T2 + T5 + T4)
            Delta = ABS(T3 - Value(I,J))
            if (Delta .GT. MaxDelta) MaxDelta = Delta
          end if
        end do
      end do

I fixed both programs by inverting the loops by hand.

Code 10 has three-dimensional arrays and triply nested loops. The structure of the most computationally intensive loops is too complex to invert automatically or by hand. The only practical solution is to transpose the arrays so that the dimension accessed by the innermost loop is in cache order. The arrays can be transposed at construction or prior to entering a computationally intensive section of code. The former requires all array references to be modified, while the latter is cost effective only if the cost of the transpose is amortized over many accesses. I used the second approach to optimize code 10.

Code 5 has four-dimensional arrays and loops are nested four deep. For all of the reasons cited above, the compiler is not able to restructure three key loops. Assume C arrays and let the four dimensions of the arrays be i, j, k, and l. In the original code, the index structure of the three loops is

    L1: for i   L2: for i   L3: for i
      for l       for l       for j
      for k       for j       for k
      for j       for k       for l

So only L3 accesses array elements in cache order. L1 is a very complex loop—much too complex to invert. I brought the loop into cache alignment by transposing the second and fourth dimensions of the arrays. Since the code uses a macro to compute all array indexes, I effected the transpose at construction and changed the macro appropriately. The dimensions of the new arrays are now: i, l, k, and j. L3 is a simple loop and easily inverted. L2 has a loop-carried scalar dependence in k. By promoting the scalar name that carries the dependence to an array, I was able to invert the third and fourth subloops, aligning the loop with cache.

Code 5 is by far the most difficult of the four codes to optimize for array accesses; but the knowledge required to fix the problems is no more than that required for the other codes. I would judge this code at the limits of, but not beyond, the capabilities of appropriately trained computational scientists.

Array Strides

When a cache miss occurs, a line (64 bytes) rather than just one word is loaded into the cache. If data is accessed stride 1, then the cost of the miss is amortized over 8 words. Any stride other than one reduces the cost savings. Two of the ten codes studied suffered from non-unit strides. The codes represent two important classes of "strided" codes.

Code 1 employs a multi-grid algorithm to reduce time to convergence. The grids are every tenth, fifth, second, and unit element. Since time to convergence is inversely proportional to the distance between elements, coarse grids converge quickly providing good starting values for finer grids. The better starting values further reduce the time to convergence. The downside is that grids of every nth element, n > 1, introduce non-unit strides into the computation. In the original code, much of the savings of the multi-grid algorithm were lost due to this problem. I eliminated the problem by compressing (copying) coarse grids into contiguous memory, and rewriting the computation as a function of the compressed grid. On convergence, I copied the final values of the compressed grid back to the original grid. The savings gained from unit stride access of the compressed grid more than paid for the cost of copying. Using compressed grids, the loop from code 1 included in the previous section becomes

      do j = 1, GZ
        do i = 1, GZ
          T1 = CA1(i+0, j-1) + CA1(i-1, j+0)
          T4 = CA1(i+1, j+0) + CA1(i+0, j+1)
          S1 = T1 + T4 - 4 * CA1(i+0, j+0)
          CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1
        end do
      end do

where CA and CA1 are compressed arrays of size GZ.

Code 7 traverses a list of objects selecting objects for later processing. The labels of the selected objects are stored in an array. The selection step has unit stride, but the processing steps have irregular stride. A fix is to save the parameters of the selected objects in temporary arrays as they are selected, and pass the temporary arrays to the processing functions. The fix is practical if the same parameters are used in selection as in processing, or if processing comprises a series of distinct steps which use overlapping subsets of the parameters. Both conditions are true for code 7, so I achieved significant improvement by copying parameters to temporary arrays during selection.

Data reuse

In the previous sections, we optimized for spatial locality. It is also important to optimize for temporal locality. Once read, a datum should be used as much as possible before it is forced from cache. Loop fusion and loop unrolling are two techniques that increase temporal locality. Unfortunately, both techniques increase register pressure—as loop bodies become larger, the number of registers required to hold temporary values grows. Once register spilling occurs, any gains evaporate quickly. For microprocessors with small register sets or small caches, the sweet spot can be very small. In the ten codes presented here, I found no opportunities for loop fusion and only two opportunities for loop unrolling (codes 1 and 3).

In code 1, unrolling the outer and inner loop one iteration increases the number of result values computed by the loop body from 1 to 4,

      do j = 1, GZ-2, 2
        do i = 1, GZ-2, 2
          T1 = CA1(i+0, j-1) + CA1(i-1, j+0)
          T2 = CA1(i+1, j-1) + CA1(i+0, j+0)
          T3 = CA1(i+0, j+0) + CA1(i-1, j+1)
          T4 = CA1(i+1, j+0) + CA1(i+0, j+1)
          T5 = CA1(i+2, j+0) + CA1(i+1, j+1)
          T6 = CA1(i+1, j+1) + CA1(i+0, j+2)
          T7 = CA1(i+2, j+1) + CA1(i+1, j+2)
          S1 = T1 + T4 - 4 * CA1(i+0, j+0)
          S2 = T2 + T5 - 4 * CA1(i+1, j+0)
          S3 = T3 + T6 - 4 * CA1(i+0, j+1)
          S4 = T4 + T7 - 4 * CA1(i+1, j+1)
          CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1
          CA(i+1, j+0) = CA1(i+1, j+0) + DD * S2
          CA(i+0, j+1) = CA1(i+0, j+1) + DD * S3
          CA(i+1, j+1) = CA1(i+1, j+1) + DD * S4
        end do
      end do

The loop body executes 12 reads, whereas the rolled loop shown in the previous section executes 20 reads to compute the same four values.

In code 3, two loops are unrolled 8 times and one loop is unrolled 4 times. Here is the before

   for (k = 0; k < NK[u]; k++) {
     sum = 0.0;
     for (y = 0; y < NY; y++) {
       sum += W[y][u][k] * delta[y];
     }
     backprop[k] = sum;
   }

and after code

   for (k = 0; k < KK - 8; k+=8) {
      sum0 = 0.0;
      sum1 = 0.0;
      sum2 = 0.0;
      sum3 = 0.0;
      sum4 = 0.0;
      sum5 = 0.0;
      sum6 = 0.0;
      sum7 = 0.0;
      for (y = 0; y < NY; y++) {
         sum0 += W[y][0][k+0] * delta[y];
         sum1 += W[y][0][k+1] * delta[y];
         sum2 += W[y][0][k+2] * delta[y];
         sum3 += W[y][0][k+3] * delta[y];
         sum4 += W[y][0][k+4] * delta[y];
         sum5 += W[y][0][k+5] * delta[y];
         sum6 += W[y][0][k+6] * delta[y];
         sum7 += W[y][0][k+7] * delta[y];
      }
      backprop[k+0] = sum0;
      backprop[k+1] = sum1;
      backprop[k+2] = sum2;
      backprop[k+3] = sum3;
      backprop[k+4] = sum4;
      backprop[k+5] = sum5;
      backprop[k+6] = sum6;
      backprop[k+7] = sum7;
   }

for one of the loops unrolled 8 times.

Optimizing for temporal locality is the most difficult optimization considered in this paper. The concepts are not difficult, but the sweet spot is small. Identifying where the program can benefit from loop unrolling or loop fusion is not trivial. Moreover, it takes some effort to get it right. Still, educating scientific programmers about temporal locality and teaching them how to optimize for it will pay dividends.

Reducing instruction count

Execution time is a function of instruction count. Reduce the count and you usually reduce the time. The best solution is to use a more efficient algorithm; that is, an algorithm whose order of complexity is smaller, that converges quicker, or is more accurate. Optimizing source code without changing the algorithm yields smaller, but still significant, gains. This paper considers only the latter because the intent is to study how much better codes can run if written by programmers schooled in basic code optimization techniques.

The ten codes studied benefited from three types of "instruction reducing" optimizations. The two most prevalent were hoisting invariant memory and data operations out of inner loops. The third was eliminating unnecessary data copying. The nature of these inefficiencies is language dependent.

Memory operations

The semantics of C make it difficult for the compiler to determine all the invariant memory operations in a loop. The problem is particularly acute for loops in functions since the compiler may not know the values of the function's parameters at every call site when compiling the function. Most compilers support pragmas to help resolve ambiguities; however, these pragmas are not comprehensive and there is no standard syntax. To guarantee that invariant memory operations are not executed repetitively, the user has little choice but to hoist the operations by hand. The problem is not as severe in Fortran programs because in the absence of equivalence statements, it is a violation of the language's semantics for two names to share memory.

Codes 3 and 5 are C programs. In both cases, the compiler did not hoist all invariant memory operations from inner loops. Consider the following loop from code 3

   for (y = 0; y < NY; y++) {
      i = 0;
      for (u = 0; u < NU; u++) {
         for (k = 0; k < NK[u]; k++) {
            dW[y][u][k] += delta[y] * I1[i++];

Since dW[y][u] can point to the same memory space as delta for one or more values of y and u, assignment to dW[y][u][k] may change the value of delta[y]. In reality, dW and delta do not overlap in memory, so I rewrote the loop as

   for (y = 0; y < NY; y++) {
      i = 0;
      Dy = delta[y];
      for (u = 0; u < NU; u++) {
         for (k = 0; k < NK[u]; k++) {
            dW[y][u][k] += Dy * I1[i++];

Failure to hoist invariant memory operations may be due to complex address calculations. If the compiler cannot determine that the address calculation is invariant, then it can hoist neither the calculation nor the associated memory operations. As noted above, code 5 uses a macro to address four-dimensional arrays

  #define MAT4D(a,q,i,j,k) (double *)((a)->data + (q)*(a)->strides[0] \
                          + (i)*(a)->strides[3] + (j)*(a)->strides[2] \
                          + (k)*(a)->strides[1])

The macro is too complex for the compiler to understand, and so it does not identify any subexpressions as loop invariant. The simplest way to eliminate the address calculation from the innermost loop (over i) is to define

  a0 = MAT4D(a,q,0,j,k)

before the loop and then replace all instances of

  MAT4D(a,q,i,j,k)

in the loop with the corresponding offset from a0, leaving only the i term to be computed inside the loop.
A similar problem appears in code 6, a Fortran program. The key loop in this program is

    do n1 = 1, nh
      nx1 = (n1 - 1) / nz + 1
      nz1 = n1 - nz * (nx1 - 1)
      do n2 = 1, nh
        nx2 = (n2 - 1) / nz + 1
        nz2 = n2 - nz * (nx2 - 1)
        ndx = nx2 - nx1
        ndy = nz2 - nz1
        gxx = grn(1,ndx,ndy)
        gyy = grn(2,ndx,ndy)
        gxy = grn(3,ndx,ndy)
        balance(n1,1) = balance(n1,1) + &
          (force(n2,1) * gxx + force(n2,2) * gxy) * h1
        balance(n1,2) = balance(n1,2) + &
          (force(n2,1) * gxy + force(n2,2) * gyy) * h1
      end do
    end do

The programmer has written this loop well—there are no loop invariant operations with respect to n1 and n2. However, the loop resides within an iterative loop over time and the index calculations are independent with respect to time. Trading space for time, I precomputed the index values prior to entering the time loop and stored the values in two arrays. I then replaced the index calculations with reads of the arrays.

Data operations

Ways to reduce data operations can appear in many forms. Implementing a more efficient algorithm produces the biggest gains. The closest I came to an algorithm change was in code 4. This code computes the inner product of K-vectors A(i) and B(j), 0 ≤ i < N, 0 ≤ j < M, for most values of i and j. Since the program computes most of the NM possible inner products, it is more efficient to compute all the inner products in one triply-nested loop rather than one at a time when needed. The savings accrue from reading A(i) once for all B(j) vectors and from loop unrolling.

   for (i = 0; i < N; i+=8) {
      for (j = 0; j < M; j++) {
         sum0 = 0.0;
         sum1 = 0.0;
         sum2 = 0.0;
         sum3 = 0.0;
         sum4 = 0.0;
         sum5 = 0.0;
         sum6 = 0.0;
         sum7 = 0.0;
         for (k = 0; k < K; k++) {
            sum0 += A[i+0][k] * B[j][k];
            sum1 += A[i+1][k] * B[j][k];
            sum2 += A[i+2][k] * B[j][k];
            sum3 += A[i+3][k] * B[j][k];
            sum4 += A[i+4][k] * B[j][k];
            sum5 += A[i+5][k] * B[j][k];
            sum6 += A[i+6][k] * B[j][k];
            sum7 += A[i+7][k] * B[j][k];
         }
         C[i+0][j] = sum0;
         C[i+1][j] = sum1;
         C[i+2][j] = sum2;
         C[i+3][j] = sum3;
         C[i+4][j] = sum4;
         C[i+5][j] = sum5;
         C[i+6][j] = sum6;
         C[i+7][j] = sum7;
      }
   }

This change requires knowledge of a typical run; i.e., that most inner products are computed. The reasons for the change, however, derive from basic optimization concepts. It is the type of change easily made at development time by a knowledgeable programmer.

In code 5, we have the data version of the index optimization in code 6. Here a very expensive computation is a function of the loop indices and so cannot be hoisted out of the loop; however, the computation is invariant with respect to an outer iterative loop over time. We can compute its value for each iteration of the computation loop prior to entering the time loop and save the values in an array. The increase in memory required to store the values is small in comparison to the large savings in time.

The main loop in code 8 is doubly nested. The inner loop includes a series of guarded computations; some are a function of the inner loop index but not the outer loop index, while others are a function of the outer loop index but not the inner loop index:

   for (j = 0; j < N; j++) {
      for (i = 0; i < M; i++) {
         r = i * hrmax;
         R = A[j];
         temp = (PRM[3] == 0.0) ? 1.0 : pow(r, PRM[3]);
         high = temp * kcoeff * B[j] * PRM[2] * PRM[4];
         low = high * PRM[6] * PRM[6] /
               (1.0 + pow(PRM[4] * PRM[6], 2.0));
         kap = (R > PRM[6]) ?
               high * R * R / (1.0 + pow(PRM[4] * r, 2.0)) :
               low * pow(R / PRM[6], PRM[5]);
      < rest of loop omitted >

Note that the value of temp is invariant to j. Thus, we can hoist the computation for temp out of the loop and save its values in an array.

   for (i = 0; i < M; i++) {
      r = i * hrmax;
      TEMP[i] = pow(r, PRM[3]);
   }

[N.B. – the case for PRM[3] = 0 is omitted and will be reintroduced later.] We now hoist out of the inner loop the computations invariant to i. Since the conditional guarding the value of kap is invariant to i, it behooves us to hoist the computation out of the inner loop, thereby executing the guard once rather than M times. The final version of the code is

   for (j = 0; j < N; j++) {
      R = rig[j] / 1000.;
      tmp1 = kcoeff * par[2] * beta[j] * par[4];
      tmp2 = 1.0 + (par[4] * par[4] * par[6] * par[6]);
      tmp3 = 1.0 + (par[4] * par[4] * R * R);
      tmp4 = par[6] * par[6] / tmp2;
      tmp5 = R * R / tmp3;
      tmp6 = pow(R / par[6], par[5]);
      if ((par[3] == 0.0) && (R > par[6])) {
         for (i = 1; i <= imax1; i++)
            KAP[i] = tmp1 * tmp5;
      } else if ((par[3] == 0.0) && (R <= par[6])) {
         for (i = 1; i <= imax1; i++)
            KAP[i] = tmp1 * tmp4 * tmp6;
      } else if ((par[3] != 0.0) && (R > par[6])) {
         for (i = 1; i <= imax1; i++)
            KAP[i] = tmp1 * TEMP[i] * tmp5;
      } else if ((par[3] != 0.0) && (R <= par[6])) {
         for (i = 1; i <= imax1; i++)
            KAP[i] = tmp1 * TEMP[i] * tmp4 * tmp6;
      }

      for (i = 0; i < M; i++) {
         kap = KAP[i];
         r = i * hrmax;
         < rest of loop omitted >

Maybe not the prettiest piece of code, but certainly much more efficient than the original loop.

Copy operations

Several programs unnecessarily copy data from one data structure to another. This problem occurs in both Fortran and C programs, although it manifests itself differently in the two languages.

Code 1 declares two arrays—one for old values and one for new values. At the end of each iteration, the array of new values is copied to the array of old values to reset the data structures for the next iteration. This problem also occurs in Fortran programs not included in this study, and in both Fortran 77 and Fortran 90 code.

Introducing pointers to the arrays and swapping pointer values is an obvious way to eliminate the copying; but pointers are not a feature that many Fortran programmers know well or are comfortable using. An easy solution not involving pointers is to extend the dimension of the value array by 1 and use the last dimension to differentiate between arrays at different times. For example, if the data space is N x N, declare the array (N, N, 2). Then store the problem’s initial values in (_, _, 2) and define the scalar names new = 2 and old = 1. At the start of each iteration, swap old and new to reset the arrays.

The old–new copy problem did not appear in any C program. In programs that had new and old values, the code swapped pointers to reset data structures. Where unnecessary copying did occur was in structure assignment and parameter passing. Structures in C are handled much like scalars. Assignment causes the data space of the right-hand name to be copied to the data space of the left-hand name. Similarly, when a structure is passed to a function, the data space of the actual parameter is copied to the data space of the formal parameter. If the structure is large and the assignment or function call is in an inner loop, then copying costs can grow quite large. While none of the ten programs considered here manifested this problem, it did occur in programs not included in the study. A simple fix is always to refer to structures via pointers.

Optimizing loop structures

Since scientific programs spend almost all their time in loops, efficient loops are the key to good performance. Conditionals, function calls, little instruction level parallelism, and large numbers of temporary values make it difficult for the compiler to generate tightly packed, highly efficient code. Conditionals and function calls introduce jumps that disrupt code flow. Users should eliminate or isolate conditionals to their own loops as much as possible. Often logical expressions can be substituted for if-then-else statements. For example, code 2 includes the following snippet

      MaxDelta = 0.0
      do J = 1, N
        do I = 1, M
          < code omitted >
          Delta = abs(OldValue - NewValue)
          if (Delta > MaxDelta) MaxDelta = Delta
        end do
      end do

      if (MaxDelta .gt. 0.001) goto 200

Since the only use of MaxDelta is to control the jump to 200 and all that matters is whether or not it is greater than 0.001, I made MaxDelta a boolean and rewrote the snippet as

      MaxDelta = .false.
      do J = 1, N
        do I = 1, M
          < code omitted >
          Delta = abs(OldValue - NewValue)
          MaxDelta = MaxDelta .or. (Delta .gt. 0.001)
        end do
      end do

      if (MaxDelta) goto 200

thereby eliminating the conditional expression from the inner loop.

A microprocessor can execute many instructions per instruction cycle. Typically, it can execute one or more memory, floating point, integer, and jump operations. To be executed simultaneously, the operations must be independent. Thick loops tend to have more instruction level parallelism than thin loops. Moreover, they reduce memory traffic by maximizing data reuse. Loop unrolling and loop fusion are two techniques to increase the size of loop bodies. Several of the codes studied benefited from loop unrolling, but none benefited from loop fusion. This observation is not too surprising since it is the general tendency of programmers to write thick loops.

As loops become thicker, the number of temporary values grows, increasing register pressure. If registers spill, then memory traffic increases and code flow is disrupted. A thick loop with many temporary values may execute slower than an equivalent series of thin loops. The biggest gain will be achieved if the thick loop can be split into a series of independent loops eliminating the need to write and read temporary arrays. I found such an occasion in code 10 where I split the loop

      do i = 1, n
        do j = 1, m
          A24(j,i)= S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i)
          B24(j,i)= S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i)
          A25(j,i)= S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i)
          B25(j,i)= S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i)
          C24(j,i)= S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i)
          D24(j,i)= S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i)
          C25(j,i)= S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i)
          D25(j,i)= S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i)
        end do
      end do

into two disjoint loops

      do i = 1, n
        do j = 1, m
          A24(j,i)= S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i)
          B24(j,i)= S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i)
          A25(j,i)= S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i)
          B25(j,i)= S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i)
        end do
      end do
      do i = 1, n
        do j = 1, m
          C24(j,i)= S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i)
          D24(j,i)= S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i)
          C25(j,i)= S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i)
          D25(j,i)= S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i)
        end do
      end do


Over the course of the last year, I have had the opportunity to work with over two dozen academic scientific programmers at leading research universities. Their research interests span a broad range of scientific fields. Except for two programs that relied almost exclusively on library routines (matrix multiply and fast Fourier transform), I was able to significantly improve the single-processor performance of all the codes. Improvements ranged from 2x to 15.5x, with a simple average of 4.75x. Changes to the source code were at a very high level. I did not use sophisticated techniques or programming tools to discover inefficiencies or effect the changes. Only one code was parallel, despite the availability of parallel systems to all developers.

Clearly, we have a problem—personal scientific research codes are highly inefficient and do not run in parallel. The developers are unaware of simple optimization techniques to make programs run faster. They lack education in the art of code optimization and parallel programming. I do not believe we can fix the problem by publishing additional books or training manuals. To date, the developers in question have not studied the books or manuals already available, and they are unlikely to do so in the future.

Short courses are a possible solution, but I believe they are too concentrated to be of much use. The general concepts can be taught in a three- or four-day course, but that is not enough time for students to practice what they learn and acquire the experience needed to apply and extend the concepts to their own codes. Practice is the key to becoming proficient at optimization.

I recommend that graduate students be required to take a semester length course in optimization and parallel programming. We would never give someone access to state-of-the-art scientific equipment costing hundreds of thousands of dollars without first requiring them to demonstrate that they know how to use the equipment. Yet the criterion for time on state-of-the-art supercomputers is at most an interesting project. Requestors are never asked to demonstrate that they know how to use the system, or can use the system effectively. A semester course would teach them the required skills. Government agencies that fund academic scientific research pay for most of the computer systems supporting scientific research as well as the development of most personal scientific codes. These agencies should require graduate schools to offer a course in optimization and parallel programming as a requirement for funding.

About the Author

John Feo received his Ph.D. in Computer Science from The University of Texas at Austin in 1986. After graduate school, Dr. Feo worked at Lawrence Livermore National Laboratory where he was the Group Leader of the Computer Research Group and principal investigator of the Sisal Language Project. In 1997, Dr. Feo joined Tera Computer Company where he was project manager for the MTA, and oversaw the programming and evaluation of the MTA at the San Diego Supercomputer Center. In 2000, Dr. Feo joined Sun Microsystems as an HPC application specialist. He works with university research groups to optimize and parallelize scientific codes. Dr. Feo has published over two dozen research articles in the areas of parallel programming, parallel programming languages, and application performance.

Friday Oct 21, 2011

Supercomputing 2011 - November 12-18 - Seattle, WA


Anyone going to Supercomputing 2011 (SC'11)?

I'll be hanging around the OpenMP.org booth

Stop by and say hello.

Meantime, read this

Why Fortran Still Matters

This is from HPCWire:

Steve Lionel, commonly known as “Doctor Fortran,” made a convincing argument this week for why the 54-year-old language is still relevant—and why it just doesn’t get the respect it deserves.

To counter the myth that Fortran is the Latin of the programming world, Lionel points to a few new applications that have been written in Fortran, including hurricane weather forecasting applications like the Weather Research and Forecasting Model (WRF), which is written mostly in the venerable language. He also points to PAM-CRASH, an auto crash simulator, as a prime example that stands out, claiming that in HPC there are many valid, fresh uses for Fortran.

He admits that, indeed, there are not a large number of applications in Fortran, and that 20 years ago there were far more uses for it. Still, he says it isn’t fading completely, even though there is “a lot of C and C++ that is more appropriate for certain things than Fortran is, like string processing.”

That aside, he says, “if you’re doing number crunching, working with a lot of floating-point data, or doing parallel processing, it’s an excellent choice. Its strengths in array operations -- its wide variety of routines -- make it attractive, and there is a huge library of freely available high-performance routines written over 40 years that still work together.”

Lionel looks to the strengths of Fortran in comparison to other languages, noting that Fortran 2008 has built-in parallel programming capabilities that no other languages have. He says, “Other languages have parallel programming features, but Fortran incorporated modern parallel programming in ways that none of the other languages have.” He points to “an incredible body of well-written and well-debugged routines” in Fortran that are still open for reuse.

According to Lionel, just because the language is venerable, it doesn’t mean that it hasn’t changed over time. He points to a series of updates, including one just last year, claiming that new capabilities are being added constantly in response to the desires of programmers looking for vendor extensions and other features that became popular in other languages.

Full story at Intelligence in Software

Wednesday Jul 20, 2011

OpenMP 3.1 Specs Released

The OpenMP Architecture Review Board has released an updated version of the OpenMP shared memory parallelization specifications. Version 3.1 contains some new features, but it's mainly a clarification of the 3.0 specs.

The 3.1 version is a minor release that does not break existing, correct OpenMP applications. However, it does include several new features, most notably the addition of predefined min and max reduction operators for C and C++, and extensions to the atomic construct that allow the value of the shared variable that the construct updates to be captured or written without being read. Also, extensions have been added to bind threads to a processor, and to support optimization of applications that use the OpenMP tasking model.

“Version 3.1 represents a significant effort on the part of the OpenMP Language Committee that lays the groundwork for future extensions to better support emerging hardware directions,” stated Language Committee Chair Bronis R. de Supinski. “We have added extensions that handle some of the most frequent user requests while also working to make the specification and its associated examples clearer. We expect these extensions will improve usability and performance.”

“Concurrent to our work on version 3.1, we have also been making progress on several significant enhancements to the specification that we expect to serve as the basis for version 4.0,” de Supinski continued. “Topics under consideration include support for accelerators such as GPUs, major enhancements to the tasking model, mechanisms to support error handling, and user-defined reductions. I welcome inquiries from anyone interested in contributing to these directions.”

The complete 3.1 specification in PDF can be found on the OpenMP.org Specifications page.

A new forum to discuss the 3.1 specification is also now available.

Tuesday May 10, 2011

International Workshop on OpenMP (IWOMP) June 13-15, 2011 Chicago

The 7th annual International Workshop on OpenMP (IWOMP) is dedicated to the promotion and advancement of all aspects of parallel programming with the OpenMP API. It is the premier forum to present and discuss issues, trends, recent research ideas and results related to OpenMP parallel programming. The international workshop affords an opportunity for OpenMP users as well as developers to come together for discussions and sharing new ideas and information on this topic.

IWOMP 2011 will be a three-day event. The first day will consist of tutorials focusing on topics of interest to current and prospective OpenMP developers, suitable for both beginners as well as those interested in learning of recent developments in the evolving OpenMP standard. The second and third days will consist of technical papers and panel sessions during which research ideas and results will be presented and discussed.

A complete list of tutorials at IWOMP11: Tutorials

A complete list of activities during IWOMP11: Workshop program

Registration for IWOMP 2011 is now open.

Confused Over Parallel Programming Terminology?

Ever been confused over the difference between terms like multiprogramming, multiprocessing, multithreading, etc.?

Then read Making Sense of Parallel Programming Terms, which has just been revised and updated.

It's one of many articles on programming techniques and best practices using the Solaris Studio compilers on the Studio portal.

Thursday Apr 14, 2011


The Solaris Studio C++ FAQ has been updated here.

Wednesday Mar 16, 2011

We've Got Articles

Just a reminder that we've got a bunch of technical articles over at the Oracle Technical Network (OTN) regarding Oracle Solaris Studio. These are deep dives into the technology of compilers and application development:

Recently Published

Stability of the C++ ABI: Evolution of a Programming Language (revised March 2011)
As C++ evolved over the years, the Application Binary Interface (ABI) used by a compiler often needed changes to support new or evolving language features. This paper addresses these issues in Oracle Solaris Studio C++, and what you can expect when you develop programs using Oracle Solaris Studio C++.

Mixing C and C++ Code in the Same Program (revised February 2011)

Profiling MPI Applications (Updated January 2011)
Profiling of Message Passing Interface (MPI) applications with the Oracle Solaris Studio Performance Tools.

Oracle Solaris Studio Performance Tools
This article describes the kinds of performance questions users typically ask, and then it describes the Oracle Solaris Studio performance tools and shows examples of what the tools can do.

Taking Advantage of OpenMP 3.0 Tasking with Oracle Solaris Studio
A technical white paper that shows how to use Oracle Solaris Studio 12.2 to implement, profile, and debug an example OpenMP program.

Oracle Solaris Studio FORTRAN Runtime Checking Options Whitepaper

Translating gcc/g++/gfortran Options to Oracle Solaris Studio Compiler Options Technical Article

Examine MPI Applications with the Oracle Solaris Studio Performance Analyzer How to Guide

Handling Memory Ordering in Multithreaded Applications with Oracle Solaris Studio 12 Update 2: Part 1, Compiler Barriers Technical Article

Handling Memory Ordering in Multithreaded Applications with Oracle Solaris Studio 12 Update 2: Part 2, Memory Barriers and Memory Fences Technical Article

Developing Enterprise Applications with Oracle Solaris Studio Whitepaper

Developing Parallel Programs — A Discussion of Popular Models Whitepaper

See the complete list

Tuesday Mar 08, 2011

Where is C++ Going?

An article on the C++ ABI and the Solaris Studio C++ compiler has been updated on the Oracle Technical Network:

Stability of the C++ ABI: Evolution of a Programming Language (revised March 2011)
As C++ evolved over the years, the Application Binary Interface (ABI) used by a compiler often needed changes to support new or evolving language features. This paper addresses these issues in Oracle Solaris Studio C++, and what you can expect when you develop programs using Oracle Solaris Studio C++.

Wednesday Feb 16, 2011

65 Things About Solaris Studio

Oracle Solaris Studio isn't just one thing. It's 65 "things". Check out the man pages. These are all the command-line tools in Solaris Studio: 

    • CC - C++ compiler
    • CCadmin - clean the templates database; provide information from and updates to the database.
    • analyzer - GUI for analyzing a program performance experiment
    • bcheck - batch utility for Runtime Checking (RTC)
    • binopt - Solaris Binary Optimizer
    • bw - command used to measure system-wide bandwidth consumption
    • c++filt - c++ name demangler
    • c89 - compile standard C programs
    • c99 - compile standard C programs
    • cb - C program beautifier
    • cc - C compiler
    • cflow - generate C flowgraph
    • collect - command used to collect program performance data
    • collector - subcommands of dbx used for performance data collection
    • cscope - interactively examine a C program
    • ctrace - C program debugger
    • cxref - generate C program cross-reference
    • dbx - source-level debugging tool
    • dbxtool - source-level debugger GUI
    • dem - demangle a C++ name
    • discover - Sun Memory Error Discovery Tool
    • dmake - Distributed Make
    • dumpstabs - utility for dumping out debug information
    • dwarfdump - dumps DWARF debug information of an ELF object
    • er_archive - construct function and module lists for a performance experiment
    • er_bit - generates an experiment from data collected on a bit-instrumented program (Solaris only)
    • er_cp - copy a performance experiment
    • er_export - dump raw data from a performance experiment
    • er_generic - command used to generate an experiment from text files containing profile information
    • er_html - generate an HTML file from an experiment for browsing the data
    • er_kernel - generate an Analyzer experiment on the Solaris kernel
    • er_mpipp - command used to preprocess MPI VampirTrace data from an experiment
    • er_mv - move a performance experiment
    • er_otfdump - command to dump OTF trace data
    • er_print - print an ASCII report from one or more performance experiments
    • er_rm - remove performance experiments.
    • er_src - print source or disassembly with index lines and interleaved compiler commentary
    • er_vtunify - command to process raw MPI VampirTrace data into OTF format
    • f95 - Fortran 95 compiler
    • fbe - assembler
    • fdumpmod - utility for displaying Fortran 95 module information
    • fpp - the Fortran language preprocessor for FORTRAN 77 and Fortran 95.
    • fpr - convert FORTRAN carriage-control output to printable form
    • fpversion - print information about the system CPU and FPU
    • fsplit - split a multi-routine FORTRAN 90 or FORTRAN 77 source file into individual files.
    • indent - indent and format a C program source file
    • inline - in-line procedure call expander
    • intro - introduction to Oracle Solaris Studio command-line manual pages
    • lint - a C program checker
    • lock_lint - verify use of locks in multi-threaded programs
    • ptclean - clean up the parameterized types database
    • register_solstudio - Oracle Solaris Studio registration utility
    • ripc - collect performance counter information from an application
    • rtc_patch_area - patch area utility for Runtime Checking
    • solstudio - Oracle Solaris Studio 12.2 integrated development environment
    • spot - run a tool chain on an executable, and generate a website for browsing the data
    • spot_diff - compare the output of two or more spot runs and write results to an HTML file to be viewed in a browser.
    • ss_attach - start a debugging session in the Sun Studio IDE attached to a specified process
    • tcov - source code test coverage analysis and source line profile
    • tha - GUI for analyzing a Thread Analyzer experiment
    • traps - command used to measure system-wide traps
    • uncover - Code Coverage Tool
    • version - display version identification of object file or binary
    • xprof_atob - ASCII/binary profile data conversion
    • xprof_btoa - ASCII/binary profile data conversion

    Monday Feb 14, 2011

    Darryl on Multicore

    Darryl Gove interview

    Nice discussion on Informit.com between Jim Mauro and Darryl Gove about multicore/multithreaded programming and Darryl's new book. Follow the link above to go there.

    Discover and Uncover

    There are two new tools in the 12.2 release of Oracle Solaris Studio. Discover detects memory leaks, and Uncover measures code coverage in an application:


    Memory-related errors in programs are notoriously difficult to find. Discover allows you to find such errors easily by pointing out the exact place where the problem exists in the source code. For example, if your program allocates an array and does not initialize it, then tries to read from one of the array locations, the program will probably behave erratically. Discover can catch this problem when you run the program in the normal way.

    Other errors detected by Discover include:

    • Reading from and writing to unallocated memory
    • Accessing memory beyond allocated array bounds
    • Incorrect use of freed memory
    • Freeing the wrong memory blocks
    • Memory leaks

    Discover is simple to use. Any binary (even a fully optimized binary) that has been prepared by the compiler can be instrumented with a single command, then run in the normal way. During the run, Discover produces a report of the memory anomalies, which you can view as a text file, or as HTML in a web browser.


    Uncover is a simple, easy-to-use command-line tool for measuring code coverage of applications. Code coverage is an important part of software testing. It gives you information on which areas of your code are exercised in testing and which are not, enabling you to improve your test suites to test more of your code. The coverage information reported by Uncover can be at a function, statement, basic block, or instruction level.

    Uncover provides a unique feature called uncoverage, which allows you to quickly find major functional areas that are not being tested. Other advantages of Uncover code coverage over other types of instrumentation are:

    • The slowdown relative to uninstrumented code is fairly small.
    • Since Uncover operates on binaries, it can work with any optimized binary.
    • Measurements can be done by instrumenting the shipping binary. The application does not have to be built differently for coverage testing.
    • Uncover provides a simple procedure for instrumenting the binary, running tests, and displaying the results.
    • Uncover is multithread safe and multiprocess safe.

    Complete documentation on Discover and Uncover is here.

    Friday Feb 11, 2011

    On the Spot!

    One of the new tools in the latest 12.2 release of Oracle Solaris Studio is SPOT, the Simple Performance Optimization Tool.

    SPOT simplifies the process of performance analysis by running an application under a common set of tools and producing HTML reports of its findings, which provides the following benefits:

    • By creating reports in HTML format, SPOT lets you place the reports on a server that can be accessed by an entire development team. For example, a SPOT report can be examined by remote colleagues, or referred to during a meeting. You could even email a URL of a particular line of source code, or disassembly, to a colleague for further review.

    • The SPOT report archives the compiler build commands as well as the profile for the active parts of the application. By comparing the current application profile with an older profile, you can easily check for changed code or changed compiler build flags.

    • SPOT can also profile the application according to the most frequently occurring hardware events, indicating which routines are encountering which problems.

    Complete documentation on SPOT is here.

    Tuesday Feb 08, 2011

    Overview of Oracle Solaris Studio Compilers & Tools

    There's a great overview of the components and features of Oracle Solaris Studio compilers and tools now available in HTML and PDF:

    Oracle Solaris Studio Overview - HTML - PDF

    Oracle Solaris Studio provides everything you need to develop C, C++, and Fortran applications to run in Oracle Solaris 10 on SPARC or x86 and x64 platforms, or in Oracle Linux on x86 and x64 platforms. The compilers and tools are engineered to make your applications run optimally on Oracle Sun systems.

    In particular, Oracle Solaris Studio tools are designed to leverage the capabilities of multicore CPUs including the Sun SPARC T3, UltraSPARC T2, and UltraSPARC T2 Plus processors, and the Intel® Xeon® and AMD Opteron processors. The tools allow you to more easily create parallel and concurrent software applications for these platforms.

    The components of Oracle Solaris Studio include:

    • IDE for application development in a graphical environment. The Oracle Solaris Studio IDE integrates several other Oracle Solaris Studio tools and uses Oracle Solaris technologies such as DTrace.

    • C, C++, and Fortran compilers for compiling your code at the command line or through the IDE. The compilers are engineered to work well with the Oracle Solaris Studio debugger (dbx), and include the ability to optimize your code by specifying compiler options.

    • Libraries to add advanced performance and multithreading capabilities to your applications.

    • Make utility (dmake) for building your code in distributed computing environments at the command line or through the IDE.

    • Debugger (dbx) for finding bugs in your code at the command line, or through the IDE, or through an independent graphical interface (dbxtool).

    • Performance tools that employ Oracle Solaris technologies such as DTrace can be used at the command line or through independent graphical interfaces to find trouble spots in your code that you cannot detect through debugging.

    These tools together enable you to build, debug, and tune your applications for high performance on Oracle Solaris running on Oracle Sun systems. Each component is described in greater detail later in this document.

    Monday Feb 07, 2011

    OpenMP 3.1 Draft Spec for Public Comment

    The OpenMP Architecture Review Board has announced the release of a draft of the OpenMP specification, version 3.1, for comment by the community.

    This draft will serve as the basis of an official 3.1 update of the specification, which is expected to be ratified in time for the International Workshop on OpenMP (IWOMP) 2011 in Chicago.

    All interested users and implementers are invited to review the comment draft and to provide feedback through the Draft 3.1 Public Comment OpenMP Forum.

    The 3.1 version is intended as a minor release that will not break existing, correct OpenMP applications. However, it does include several new features, most notably the addition of predefined min and max operators for C and C++, and extensions to the atomic construct that allow the value of the shared variable that the construct updates to be captured or written without being read. It also includes extensions to the OpenMP tasking model that support optimization of its use.

    Thursday Feb 03, 2011

    Updated Article: Mixing C and C++

    Steve Clamage's article on mixing C and C++ in a single program has been updated for Oracle Solaris Studio 12.2.

    The C++ language provides mechanisms for mixing code that is compiled by compatible C and C++ compilers in the same program. You can experience varying degrees of success as you port such code to different platforms and compilers. This article shows how to solve common problems that arise when you mix C and C++ code, and highlights the areas where you might run into portability issues. In all cases we show what is needed when using Oracle Solaris Studio C and C++ compilers.

    Mixing C and C++ Code in the Same Program

    Wednesday Feb 02, 2011

    Where's the Books?!

    Looking for a manual that used to be on docs.sun.com but can't find it any more?

    Best way to find it is to use a site-directed Google search. All the docs at Oracle are under download.oracle.com.

    So say you're looking for the Solaris Linkers & Libraries manual. Put this into a Google search box:

    site:download.oracle.com  "Linkers and Libraries"

    and here's what you get.

    (Fixed links.)

    Where To Find Oracle Solaris Studio Resources

    Here's where to find information and discussions for the latest Oracle Solaris Studio compilers and tools at its new home on the Oracle Technical Network (OTN):

    There are also pages focused on primary topics regarding Solaris Studio compilers and tools:

    Oracle Solaris Studio C, C++, and Fortran compilers include advanced features for building applications on Oracle Solaris SPARC and x86/x64 platforms.

    Successful program debugging is more an art than a science. dbx is an interactive, source-level, post-mortem and real-time command-line debugging tool, plus much more.

    Performance analysis is the first step toward program optimization. Oracle Solaris Studio Performance Analyzer can help you assess the performance of your code, identify potential performance problems, and locate the part of the code where the problems occur.

    Oracle Solaris Studio C, C++, and Fortran compilers offer a rich set of compile-time options for specifying target hardware and advanced optimization techniques. 

    Multicore/Parallel Programming
    High Performance and Technical Computing (HPTC) applies numerical computation techniques to highly complex scientific and engineering problems. Oracle Solaris Studio compilers and tools provide a seamless, integrated environment from desktop to TeraFLOPS for both floating-point and data-intensive computing.

    The floating-point environment on Oracle Sun SPARC and x86/x64 platforms enables you to develop robust, high-performance, portable numerical applications. The floating-point environment can also help investigate unusual behavior of numerical programs written by others. The Sun Performance Library provides highly optimized versions of many advanced math function routines.

    Still under development, there's more to do. Open for suggestions.

    Tuesday Jan 11, 2011

    CALL FOR PAPERS: International OpenMP Workshop

    7th International Workshop on OpenMP IWOMP 2011
    June 13 – 15, 2011   Chicago, IL

    Submission deadline January 31, 2011

    The 2011 International Workshop on OpenMP (IWOMP 2011) will be held in Chicago, IL. It is the premier forum to present and discuss issues, trends, recent research ideas and results related to parallel programming with OpenMP. The international workshop affords an opportunity for OpenMP users as well as developers to come together for discussions and sharing new ideas and information on this topic. IWOMP 2011 will be a three-day event. The first day will consist of tutorials focusing on topics of interest to current and prospective OpenMP developers, suitable for both beginners as well as those interested in learning of recent developments in the evolving OpenMP standard. The second and third days will consist of technical papers and panel session(s) during which research ideas and results will be presented and discussed.

    We solicit submissions of unpublished technical papers detailing innovative, original research and development related to OpenMP. All topics related to OpenMP are of interest, including OpenMP applications in any domain (e.g., scientific computation, video games, computer graphics, multimedia, information retrieval, optimization, text processing, data mining, finance, signal and image processing and numerical solvers), OpenMP performance analysis and modeling, OpenMP performance and correctness tools and proposed OpenMP extensions.

    Advances in technologies, such as multi-core processors and accelerators (e.g., GPGPU, FPGA), the use of OpenMP in very large-scale parallel systems, and recent developments in OpenMP itself (e.g., tasking) present new opportunities and challenges for software and hardware developers. IWOMP 2011 solicits submissions that highlight OpenMP work on these fronts.

    Submitted papers for review should be limited to 12 pages and follow LNCS guidelines. Submission deadline is Jan. 31, 2011. Submit your paper to: http://www.easychair.org/conferences/?conf=iwomp11. Authors of accepted papers will be asked to prepare a final paper of up to 15 pages.

    Important Dates:

    • Paper submission deadline: January 31, 2011
    • Notification of acceptance: February 28, 2011
    • Camera-ready version of paper due: March 21, 2011
    • Tutorial and Workshop in Chicago: June 13-15, 2011

    Oracle Solaris Studio 12 Documentation Library

    The migration from sun.com to oracle.com includes the product documentation on docs.sun.com.

    You can now find all the Solaris Studio 12 product documentation in the Oracle Solaris Studio 12 Documentation Library, which includes the 12, 12 update 1, and current 12.2 releases.

    Thursday Jun 03, 2010

    We're Back! With a New Release!

    Oracle Solaris Studio Express 6/10 release is now available. Follow this link.

    Lots of updates and new compiler features, like optimizations for the latest SPARC and x86/x64 platforms, new dbx commands for debugging OpenMP programs, and new performance analyzer features, such as the ability to compare two runtime experiments.

    Glad to be back! Stay tuned.


    Deep thoughts on compiling C, C++, and Fortran codes with Oracle Solaris Studio compilers, especially optimization and parallelization, from the Solaris Studio documentation lead, Richard Friedman. Email him at
    Richard dot Friedman at Oracle dot com

    When Run Was A Compiler

