Friday Oct 16, 2009

Fortress: Parallel by Default

I gave a short talk about Fortress, a new parallel language, at the Sun HPC Workshop in Regensburg, Germany, and thought I'd post the slides here with commentary. Since I'm certainly not a Fortress expert by any stretch of the imagination, my intent was to give the audience a feel for the language and its origins rather than attempt a deep dive in any particular area. Christine Flood from the SunLabs Programming Languages Research Group helped me with the slides. I also stole liberally from presentations and other materials created by other Fortress team members.

The unofficial Fortress tag line was inspired by Fortress's emphasis on programmer productivity. With Fortress, programmer/scientists express their algorithms in a mathematical notation that is much closer to their domain of expertise than the syntax of a typical programming language. We'll see numerous examples in the following slides.

At the highest level, there are two things to know about Fortress: first, that it started as a SunLabs research project, and, second, that the work is being done in the open as the Project Fortress Community, whose website is here. Source code downloads, documentation, code samples, etc., are all available on the site.

Fortress was conceived as part of Sun's involvement in a DARPA program called High Productivity Computing Systems (HPCS), which was designed to encourage the development of hardware and software approaches that would significantly increase the productivity of the developers and users of High Performance Computing systems. Each of the three companies selected to continue past the introductory phase of the program proposed a language designed to meet these requirements: IBM chose essentially to extend Java for HPC, while both Cray and Sun proposed new object-oriented languages. Michèle Weiland at the University of Edinburgh has written a short technical report comparing the three language approaches. It is available in PDF format here.

I've mentioned productivity but not defined it. For more insight, I recommend visiting Michael Van De Vanter's publications page. Michael was a member of the Sun HPCS team who, with several colleagues, focused on the issue of productivity in an HPCS context. His considerable publication list is here.

Because I don't believe Sun's HPCS proposal has ever been made public, I won't comment further on the specific scalability goals set for Fortress other than to say they were chosen to complement the proposed hardware approach. Since Sun was not selected to proceed to the final phase of the HPCS program, we have not built the proposed system. We have, however, continued the Fortress project and several other initiatives that we believe are of continuing value.

Growability was a philosophical decision made by the Fortress designers, and we'll talk about that later. For now, note that Fortress is implemented as a small core language with an extensive and growing set of capabilities provided by libraries.

As mentioned earlier, Fortress is designed to accommodate the programmer/scientist by allowing algorithms to be expressed directly in familiar mathematical notation. It is also important to note that Fortress constructs are parallel by default, unlike those of many other languages, which require an explicit declaration to create parallelism. To be more precise, Fortress is "potentially parallel" by default: if parallelism can be found, it will be exploited.
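For example, a simple loop needs no annotation at all to be a candidate for parallel execution. A minimal sketch in ASCII Fortress (println is the standard library output function):

    for i <- 1:10 do
      println i   (* iterations may run in parallel, in any order *)
    end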

Finally, some code from the slides themselves. We will look at several versions of a factorial function over the next few slides to illustrate some features of Fortress. (For additional illustrative Fibonacci examples, go here.) The first version of the function is shown here beneath a concise mathematical definition of factorial, for reference.

The red underlines highlight two Fortressisms. First, the test in the first conditional is written naturally as a single chained range rather than as the more conventional (and convoluted) two-clause condition. Second, the recursive step shows that juxtaposition can be used to imply multiplication, as is common in written mathematics.
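In ASCII form, the gist of that first version is something like the following sketch (reconstructed from the description above, so treat the details as approximate):

    factorial(n: ZZ64): ZZ64 =
      if 0 <= n <= 1 then 1           (* one chained range test, not two clauses *)
      else n factorial(n - 1) end     (* juxtaposition implies multiplication *)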

This version defines a new operator, the "!" factorial operator, and then uses that operator in the recursive step. The code has also been run through the Fortress pretty printer that converts it from ASCII form to a more mathematically formatted representation. As you can see, the core logic of the code now closely mimics the mathematical definition of factorial.
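The ASCII source of the operator version looks roughly like this (a sketch patterned on the factorial example in the Fortress specification):

    opr (n: ZZ64)! : ZZ64 =
      if n = 0 then 1
      else n (n - 1)! end   (* the new "!" operator used recursively *)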

This non-recursive version of the operator definition uses a loop to compute the factorial.

Since Fortress is parallel by default, all iterations of this loop could theoretically be executed in parallel, depending on the underlying platform. The "atomic" keyword guarantees that each update of the variable result happens atomically, keeping the computation correct even when iterations run concurrently.
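A sketch of what the loop-based definition might look like in ASCII form (again approximate; note that result i is a juxtaposition, i.e., a multiplication):

    opr (n: ZZ64)! : ZZ64 = do
      result: ZZ64 := 1
      for i <- 1:n do
        atomic do result := result i end   (* updates may occur in any order *)
      end
      result
    end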

This slide shows an example of how Fortress code is written with a standard keyboard and what the code looks like after it is formatted with Fortify, the Fortress pretty printer. Several common mathematical operators are shown at the bottom of the slide along with their ASCII equivalents.
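A few of the common ASCII-to-glyph mappings, as an illustrative sample based on the specification's naming conventions (not an exhaustive list):

    ASCII     Rendered
    <=        ≤
    >=        ≥
    =/=       ≠
    SUM       Σ
    PROD      Π
    SQRT      √
    pi        π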

A few examples of Fortress operator precedence. Perhaps the most interesting point is that white space matters to the Fortress parser. Since the spacing in the second negative example implies a precedence different from the actual precedence, the statement is rejected by Fortress on the theory that executing it would not compute the result the programmer intended.
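To illustrate with a hypothetical pair of statements (my example, not the slide's):

    y = a x^2 + b x + c     (* accepted: spacing is consistent with precedence *)
    y = a x^2 + b x+c       (* rejected: the tight "x+c" reads as if + binds
                               more tightly than the surrounding juxtapositions *)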

Don't go overboard with juxtaposition as a multiplication operator -- there is clearly still a role for parentheses in Fortress, especially when dealing with complex expressions. While these two statements are supposed to be equivalent, the first statement actually contains a typo and will be rejected by Fortress. Can you spot the error? The problem is the "3n", which is not a valid Fortress number; this illustrates one case in which a juxtaposition that is routine in everyday math is not accepted by the language. Put a space between the "3" and the "n" to fix the problem.

Here is a larger example of Fortress code. On the left is the algorithmic description of the conjugate gradient (CG) component of the NAS parallel benchmarks, taken directly from the original 1994 technical report. On the right is the Fortress code. Or do I have that backwards? :-)

More Fortress code is available on the Fortress by Example page at the community site.

Several ways to express ranges in Fortress.
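Two of the basic forms, sketched in ASCII (the specification defines further variations, including strided ranges):

    1:100     (* the integers 1 through 100, inclusive *)
    0#100     (* 100 consecutive integers starting at 0, i.e., 0 through 99 *)

A range is most often used as a generator, as in for i <- 1:100 do ... end.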

The first static array definition creates a one-dimensional array of 1000 32-bit integers. The second creates a one-dimensional array of length size, initialized to zero. I know it looks like a 2D array, but the second instance of ZZ32 in the Array construct specifies the type of the index rather than a second array dimension.

The last array subexpression is interesting because it is only partially specified: it extracts a 20x10 subarray from array b, starting at the origin.

Tuple components can be evaluated in parallel, including arguments to functions. As with for loops, do clauses execute in parallel.
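For example (a sketch; f, g, a, and b are hypothetical):

    (x, y) = (f(a), g(b))   (* f(a) and g(b) may be evaluated in parallel *)

    do
      f(a)
    also do
      g(b)
    end   (* the "also do" form runs the two blocks in parallel *)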

In Fortress, generators control how loops are run, and generators generally run computations in an arbitrary order, often in parallel. As an example, the sum reduction over X and Y is controlled by a generator that will cause the summation of the products to occur in parallel, or at least in a non-deterministic order when running on a single-processor machine.
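In ASCII form, that reduction is simply the following (a sketch assuming X and Y are arrays of length n):

    SUM[i <- 1:n] X[i] Y[i]   (* the generator 1:n may deliver index values
                                 in parallel and in any order *)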

In Fortress, when parallel work is generated, its execution is handled using a work-stealing strategy similar to that used by Cilk. Essentially, when a compute resource finishes executing its own tasks, it pulls work items from other processors' work queues, keeping compute resources busy by load-balancing the available work across all processors.

Essentially a restatement of an earlier point: in Fortress, generators play the role that iterators play in other languages. Once the details of how the index space is traversed are delegated to the generator, it is natural to also let the generator control how the enclosed processing steps are executed. A generator might execute computations serially or in parallel.

A generator could conceivably also control whether computations are done locally on a single system or distributed across a cluster, though the Fortress interpreter currently only executes within a single node. To me, the generator concept is one of the nicer aspects of Fortress.

Guy Steele, who is the Fortress Principal Investigator along with Eric Allen, has been working in the programming languages area long enough to know the wisdom of these statements. Watch him live the reality of growing a language in his keynote at the 1998 ACM OOPSLA conference. Be amazed at the cleverness, but listen to the message as well.

The latest version of the Fortress interpreter (source and binary) is available here. If you would like to browse the source code online, do so here.

Some informational pointers. Christine also tells me that the team is working on an overview talk like this one, except I expect it will be a lot better. :-) Though I've only scratched the surface, I hope this brief overview has given you at least a flavor of what Project Fortress is about.


Wednesday Jul 30, 2008

Fresh Bits: Attention all OpenMP and MPI Programmers!

The latest preview release of Sun's compiler and tools suite for C, C++, and Fortran users is now available for free download. Called Sun Studio Express 07/08, this release of Sun Studio marks an important advance for HPC customers and for any customer interested in extracting high performance from today's multi-threaded and multi-core processors. In addition to numerous compiler performance enhancements, the release includes beta-level support for the latest OpenMP standard, OpenMP 3.0. It also includes some nice Performance Analyzer enhancements that support simple and intuitive performance analysis of MPI jobs. More detail on both of these below.

As the industry-standard approach to achieving parallel performance on multi-CPU systems, OpenMP has long been a mainstay of the HPC developer community. Version 3.0, which is supported in this new Sun Studio preview release, is a major enhancement to the standard. Most notably, it adds support for tasking, a major new feature that can help programmers achieve better performance and scalability with less effort than previous approaches based on nested parallelism. There are a host of other enhancements as well. The OpenMP expert will find the latest specification useful. For those new to parallelism who have stumbled into a maze of twisty passages all alike, you may find Using OpenMP: Portable Shared Memory Parallel Programming to be a useful introduction to parallelism and OpenMP.


A parallel quicksort example, written using the new OpenMP tasking feature supported in Sun Studio Express 07/08
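The original entry showed that example as an image; since the image isn't reproduced here, below is a minimal C sketch of the same idea (my reconstruction, not the slide's exact code). Each recursive call is spawned as an OpenMP task, and a single thread seeds the recursion inside a parallel region:

    #include <stdio.h>

    /* Partition a[lo..hi] around a pivot; return the pivot's final index. */
    static int partition(int a[], int lo, int hi)
    {
        int pivot = a[hi];
        int i = lo - 1;
        for (int j = lo; j < hi; j++) {
            if (a[j] <= pivot) {
                i++;
                int t = a[i]; a[i] = a[j]; a[j] = t;
            }
        }
        int t = a[i + 1]; a[i + 1] = a[hi]; a[hi] = t;
        return i + 1;
    }

    static void quicksort(int a[], int lo, int hi)
    {
        if (lo >= hi) return;
        int p = partition(a, lo, hi);

        /* Each half becomes a task that any thread in the team may execute. */
        #pragma omp task firstprivate(a, lo, p)
        quicksort(a, lo, p - 1);

        #pragma omp task firstprivate(a, p, hi)
        quicksort(a, p + 1, hi);

        #pragma omp taskwait   /* wait for both halves before returning */
    }

    int main(void)
    {
        int a[] = { 9, 4, 7, 1, 8, 2, 6, 3, 5, 0 };
        int n = (int)(sizeof a / sizeof a[0]);

        /* One thread seeds the recursion; the tasks it creates are
           executed by the entire team. */
        #pragma omp parallel
        #pragma omp single nowait
        quicksort(a, 0, n - 1);

        for (int i = 0; i < n; i++) printf("%d ", a[i]);
        printf("\n");
        return 0;
    }

Compile with cc -xopenmp under Sun Studio (or the equivalent OpenMP flag of your compiler); without OpenMP enabled, the pragmas are ignored and the sort simply runs serially.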

Sun Studio Express 07/08 also includes enhancements for programmers of parallel, distributed applications who use MPI. With this release we have introduced tighter integration with Sun's MPI library (Sun HPC ClusterTools). Sun's Performance Analyzer can now examine the performance of MPI jobs by displaying information about message transfers and messaging performance using a variety of visualization methods, extending Analyzer's already-sophisticated on-node performance analysis capabilities. The screenshots below give an idea of the kinds of information that can be viewed. Note in particular the idea of viewing "MPI states" (e.g., MPI Wait and MPI Work) to get a high-level view of the MPI portion of an application: understanding how much time is spent doing actual work versus sitting in a wait state can yield useful insights into the performance of these parallel, distributed codes.

A source code viewer window augmented with several MPI-specific capabilities, one of which is illustrated here: the ability to quickly see how much work (or waiting) is performed within a function.

In addition to supporting direct viewing of specific MPI performance issues within an application, Analyzer now also offers a range of visualization tools for understanding the messaging portion of an MPI code. Zoomable timelines with MPI events are supported, as is the ability to map various metrics onto the X and Y axes of a plotting area to display interesting characteristics of the MPI run, as shown below.

Just one example of Sun Studio's new MPI charting capabilities: shown here is the volume of messages transferred between communicating pairs of MPI processes during an application run.

This blog entry has barely scratched the surface of the new OpenMP and MPI capabilities available in this release. If you are a Solaris or Linux HPC programmer, please take these new capabilities for a test drive and let us know what you think. I know the engineering teams are excited by what they've accomplished and I hope you will share their enthusiasm once you've tried these new capabilities.

Sun Studio Express 07/08 is available for Solaris 9 & 10, OpenSolaris 2008.05, and Linux (SLES 9, RHEL 4) and can be downloaded here.

