Tuesday Apr 22, 2008

Parallel Programming Patterns: Part 5

This is the last (but not the least!) design pattern in our series of parallel programming design patterns: the ring pattern.

This pattern applies to problems that can be modeled as a ring of processes communicating with each other in a circular fashion. The requirement in such applications is that a set of data be repeatedly operated upon by a fixed set of operations. The pattern can be considered an extension of the Pipeline pattern in which the output of the last process goes back as input to the first process, so the data keeps rotating through the same set of processes.

Now that we have seen five parallel programming design patterns in the previous posts, it is time to get our hands dirty with MPI code. Next time we will see an example MPI program that performs matrix-vector multiplication (for very large matrices).

Wednesday Apr 02, 2008

Parallel Programming Patterns: Part 4

As promised, we will now see the Divide and Conquer pattern implemented in the MPI Plugin for NetBeans.

This pattern is employed in solving many problems that can be split into a number of smaller subproblems, each of which can be solved independently; the intermediate solutions are then merged to obtain the final solution. The subproblems are generally independent of each other. If program correctness does not depend on whether the subproblems are solved sequentially or concurrently, a hybrid system can be designed that sometimes solves them sequentially and sometimes concurrently, based on which approach is likely to be more efficient. With this strategy, a subproblem can either be solved directly (the base case) or be solved using the same divide-and-conquer strategy, leading to an overall recursive program structure. In summary, a program following this pattern needs recursive process creation, a mechanism for solving the base case, and a step that merges the results. The recursion depth and the base-case problem size may need to be tuned.

The final pattern of our series will be the Ring pattern; for more information, please see the MPI Plugin design patterns.

Tuesday Apr 01, 2008

Parallel Programming Patterns: Part 3

The Pipeline pattern implemented in the MPI Plugin applies to problems that can be modeled as a set of data flowing through multiple sets of computations.

The computations are ordered and independent, and can be seen as a series of time-step operations. In a sequential execution scenario, the output of the first computation step serves as input to the second step, and so on through all the sets of computation. Parallelism is introduced by overlapping the operations across time steps. The first-step component starts operating as soon as input is available, and its output is passed to the second-step component. During the next time unit, the first-step component is free to accept more input, and it does so while making its previous output available to the second-step component in the next iteration. In that iteration the second-step component passes its output on to the third-step component while accepting the output produced by the first-step component. This cycle continues until all input is exhausted and all sets of operations have been completely applied over the ordered data. Note also that each computation step must be of comparable size so that the time steps are of equal length; only then can substantial parallelism be achieved.

The next pattern we will see in this series is the Divide and Conquer pattern. Stay tuned!

Monday Mar 31, 2008

Parallel Programming Patterns: Part 2

This is the second pattern implemented in the MPI Plugin: the Master-Worker pattern.

This pattern is used to solve the class of problems that require performing the same set of operations over multiple data sets. The operations are generally independent of each other and can be performed concurrently. Parallelism is achieved by dividing the computations among the available processes. There is generally a master process (also called a managerial component) which is responsible for distributing the work among the worker processes and then collating the results as the computation completes. The distribution of data among the worker processes can generally be done in any order, but it is important to preserve the order of the processed data. The responsibility of each worker is to repeatedly perform the computation on the data sets given to it by the master process. The decisive factors for choosing this pattern include (but are not limited to) load balancing, data integrity, and data distribution.

In the next post, we will have a look at the Pipeline pattern.

Thursday Mar 06, 2008

Parallel Programming Patterns: Part 1

Recently we released an MPI development environment for the NetBeans IDE, and this series is a consolidated summary of the parallel programming patterns implemented in the plugin. The first pattern we will see is the SPMD (Single Program, Multiple Data) pattern. This is a technique used to achieve data-level parallelism, and it is one of the dominant styles of parallel programming: all processors run the same program, though each has its own data. The SPMD pattern exploits data parallelism in applications where a large mass of data of a uniform type needs the same instruction performed on it. The data is divided among the processes to be operated upon independently. The example provided in the MPI NetBeans plugin shows the following:
  1. An array of elements is created on the main process and then distributed amongst the other processes.
  2. All processes independently process the data sent to them.
  3. If it wants, the main process can collect the data back from the other processes for some final processing.
For more details please refer to MPI Plugin Download page and its Development guide. This is the link to Parallel Programming Patterns documentation.

Tuesday Aug 14, 2007

MPI Development environment for Netbeans IDE

Recently we released an MPI plugin for NetBeans. The purpose of this plugin is to allow application developers to use the NetBeans platform to develop, test, and debug MPI applications for the Sun Grid Compute Utility. This plugin includes an early access version of the new MPI Development Plugin for NetBeans(tm) IDE, which is targeted at C/C++ developers working with MPI applications that can be modeled as a set of independent, compute-bound tasks. The software is published under the GNU General Public License.

The MPI Development Plugin for NetBeans(tm) IDE project offers the following in its current early access state:

  • An MPI programming model to simplify the design and development of C/C++ MPI applications.
  • Built-in NetBeans IDE framework features enhanced to support the efficient execution of C/C++ MPI applications on the Sun Grid Compute Utility.
  • An MPI testing plug-in for the NetBeans IDE to ease local development and testing of C/C++ MPI applications.
  • A pre-built collection of sample MPI applications illustrating the effective use of parallel programming patterns to build C/C++ MPI applications for Sun Grid.


In the next posts, look out for Parallel Programming Patterns and related examples for this plugin, which we have developed.

Hardik Dave

