The second pattern implemented in the MPI Plugin is the Master-Worker pattern.
This pattern suits the class of problems that require performing the same set of operations over multiple data sets. The operations are generally independent of one another and can therefore be performed concurrently. Parallelism is achieved by dividing the computations among the available processes, with each process applying the same operations to its share of the data. A master process (also called a managerial component) is responsible for distributing the work among the worker processes and then collating the results as the computation completes. The data can generally be handed to the workers in any order, but it is important to preserve the order of the processed data when the results are collated. The responsibility of each worker is to perform the computation repeatedly on the sets of data given to it by the master. The decisive factors for choosing this pattern include (but are not limited to) load balancing, data integrity, and data distribution.
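To make the flow concrete, here is a minimal, hypothetical sketch of the pattern using only Python's standard library (this is not the MPI Plugin's API; in an MPI program the master would typically distribute chunks with `MPI_Send` and collect results with `MPI_Recv`, but the division of labor is the same):

```python
# Minimal master-worker sketch (a stand-in for the MPI version):
# the master splits the data into chunks, the workers each apply
# the same operation to their chunk, and the master collates the
# results in the original order.
from multiprocessing import Pool

def square(chunk):
    # The "same set of operations" applied to every data set:
    # here, simply squaring each element of the chunk.
    return [x * x for x in chunk]

def master(chunks, workers=4):
    # Pool.map hands one chunk to each available worker process and
    # returns the results in input order, mirroring the requirement
    # to preserve the order of the processed data.
    with Pool(workers) as pool:
        return pool.map(square, chunks)

if __name__ == "__main__":
    chunks = [[1, 2], [3, 4], [5, 6]]
    print(master(chunks))  # results come back in the same order as the chunks
```

Note that `Pool.map` also handles the load-balancing concern mentioned above: idle workers pick up remaining chunks, so no process sits idle while work remains.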
In the next post, we will have a look at the Pipeline pattern.