Tuesday Apr 22, 2008

Parallel Programming Patterns: Part 5

This is the last (but not least!) pattern in our series of parallel programming design patterns: the ring pattern.

This pattern can be applied to problems that can be modeled as a ring of processes communicating with each other in a circular fashion. The requirement in such applications is that a set of data is repeatedly operated upon by a fixed set of operations. The pattern can be considered an extension of the pipeline pattern in which the output of the last process goes back as input to the first process, so the data keeps rotating through the same set of processes.
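
As an illustration, here is a minimal sketch of the ring pattern in MPI (C). The per-hop operation, each process adding its rank to a token, is just an assumption for the example and not the plugin's generated code; the essential part is the next/previous neighbour arithmetic and the circular send and receive. Run it with at least two processes.

```c
/* Minimal ring-pattern sketch in MPI C; the "add my rank" operation is a
 * hypothetical per-hop computation chosen only for illustration. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int next = (rank + 1) % size;          /* neighbour to send to   */
    int prev = (rank - 1 + size) % size;   /* neighbour to recv from */
    int token = 0;

    if (rank == 0) {
        /* Process 0 injects the data into the ring... */
        MPI_Send(&token, 1, MPI_INT, next, 0, MPI_COMM_WORLD);
        /* ...and receives it back after one full revolution. */
        MPI_Recv(&token, 1, MPI_INT, prev, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Token after one trip around the ring: %d\n", token);
    } else {
        MPI_Recv(&token, 1, MPI_INT, prev, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        token += rank;                     /* each process applies its operation */
        MPI_Send(&token, 1, MPI_INT, next, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```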

Now that we have seen five parallel programming design patterns in the previous posts, it is time to get our hands dirty with MPI code. Next time we will see an example MPI program that performs matrix-vector multiplication (for very large matrices).

Wednesday Apr 02, 2008

Parallel Programming Patterns: Part 4

As promised, we will see the divide and conquer pattern implemented in the MPI Plugin for NetBeans.

This pattern is employed in solving many problems where a problem can be split into a number of smaller problems that can be solved independently; the intermediate solutions are then merged to obtain the final solution. The subproblems are generally independent of each other, and the smallest ones (the base cases) cannot usefully be subdivided any further and are solved directly. If the correctness of the program does not depend on whether the subproblems are solved sequentially or concurrently, a hybrid system can be designed that sometimes solves them sequentially and sometimes concurrently, based on which approach is likely to be more efficient. With this strategy a subproblem can either be solved directly, or in turn be solved using the same divide-and-conquer strategy, leading to an overall recursive program structure. In summary, a program following this pattern needs recursive process creation, a mechanism for solving the base cases, and a step that merges the results; the right recursion depth and base-case problem size may need to be tuned. A small sketch of this structure is shown below.
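
Here is a minimal sketch of that structure in plain C, assuming a hypothetical task of summing an array; the CUTOFF constant illustrates the hybrid idea of solving a subproblem directly once it falls below a certain size. A parallel version would hand the subproblems to separate MPI processes instead of recursing within one process.

```c
/* Minimal divide-and-conquer sketch: recursive split, base case, merge.
 * Summing an array and the CUTOFF value are assumptions for illustration. */
#include <stdio.h>

#define CUTOFF 4   /* below this size, solve the base case directly */

static long sum(const int *a, int n)
{
    if (n <= CUTOFF) {                    /* base case: solve sequentially */
        long s = 0;
        for (int i = 0; i < n; i++)
            s += a[i];
        return s;
    }
    /* divide: split the problem into two independent subproblems */
    long left  = sum(a, n / 2);
    long right = sum(a + n / 2, n - n / 2);
    /* merge: combine the partial results into the final solution */
    return left + right;
}

int main(void)
{
    int data[10] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
    printf("sum = %ld\n", sum(data, 10));   /* prints 55 */
    return 0;
}
```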

The final pattern of our series will be the ring pattern; for more information, please see the MPI Plugin design patterns.

Tuesday Apr 01, 2008

Parallel Programming Patterns: Part 3

The pipeline pattern, implemented in the MPI Plugin, can be applied to problems that can be modeled as a set of data flowing through multiple sets of computations.

The computations are ordered and independent and can also be seen as a series of time-step operations. In a sequential execution scenario, the output of the first step of computation serves as input to the second step of computation, and so on for all the sets of computation. Parallelism is introduced by overlapping the operations across different time steps. The first-step component starts operating as soon as its input is available, and its output is passed to the second-step component. During the next time unit, the first-step component is free to accept more input, and it does so while making its previous output available to the second-step component in the next iteration. In that iteration the second-step component passes its output on to the third-step component and accepts the output produced by the first-step component. This cycle continues until all input is exhausted and every set of operations has been applied over the ordered data. Note also that each computation step must be of comparable size so that the time steps are of equal length; only then can substantial parallelism be achieved.
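
The following is a minimal sketch of the pipeline pattern in MPI (C), assuming each rank is one stage and the per-stage computation is simply adding one to each item; it is an illustrative example, not the code generated by the plugin.

```c
/* Minimal pipeline sketch in MPI C: rank 0 produces items, each rank applies
 * its (hypothetical) computation and passes the item to the next rank, and
 * the last rank consumes the results. */
#include <mpi.h>
#include <stdio.h>

#define NITEMS 5   /* number of data items pushed through the pipeline */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    for (int i = 0; i < NITEMS; i++) {
        int item;
        if (rank == 0) {
            item = i * 10;                               /* first stage produces input */
        } else {
            MPI_Recv(&item, 1, MPI_INT, rank - 1, 0,     /* receive from previous stage */
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }
        item += 1;                                       /* this stage's computation    */
        if (rank < size - 1) {
            MPI_Send(&item, 1, MPI_INT, rank + 1, 0,     /* pass result to next stage   */
                     MPI_COMM_WORLD);
        } else {
            printf("item %d leaves the pipeline as %d\n", i, item);
        }
    }

    MPI_Finalize();
    return 0;
}
```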

The next pattern we will see in this series is the Divide and Conquer pattern. Stay tuned!!

Monday Mar 31, 2008

Parallel Programming Patterns: Part 2

This is the second pattern implemented in the MPI Plugin: the Master-Worker pattern.

This pattern is used to solve the class of problems where the same set of operations must be performed over multiple data sets. The operations are generally independent of each other and can be performed concurrently. Parallelism is achieved by dividing the computations amongst the available processes, each of which performs the identical set of operations on its share of the data. There is generally a master process (also called a managerial component) which is responsible for distributing the work amongst the worker processes and then collating the results as the computation completes. The data can generally be handed out to the workers in any order, but it is important to preserve the order of the processed data. The responsibility of each worker is to perform the computation repeatedly on the sets of data given to it by the master process. The decisive factors for choosing this pattern include (but are not limited to) load balancing, data integrity, and data distribution.
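
Below is a minimal sketch of the master-worker pattern in MPI (C), assuming one task per worker and a hypothetical "square the number" computation; a real application would typically hand out tasks dynamically until the work list is empty.

```c
/* Minimal master-worker sketch in MPI C: rank 0 is the master, all other
 * ranks are workers; the task values and the squaring computation are
 * assumptions made only for this example. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        /* Master: distribute one work item to each worker... */
        for (int w = 1; w < size; w++) {
            int task = w * 10;
            MPI_Send(&task, 1, MPI_INT, w, 0, MPI_COMM_WORLD);
        }
        /* ...then collate the results in worker order, preserving ordering. */
        for (int w = 1; w < size; w++) {
            int result;
            MPI_Recv(&result, 1, MPI_INT, w, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("result from worker %d: %d\n", w, result);
        }
    } else {
        /* Worker: receive a task, apply the computation, return the result. */
        int task, result;
        MPI_Recv(&task, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        result = task * task;
        MPI_Send(&result, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```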

In the next post, we will have a look at the Pipeline pattern.