For a while now Open MPI
has had heterogeneity support in the source base. In the current version of Sun HPC ClusterTools 7.1,
which is based on Open MPI, heterogeneity testing did not fare so well based on that revision of the source base. Big thanks to George and Brian for the work they put into getting the code working, and I am sure to many more along the way for additional support. Fast forward to the current source base, which I built and installed on some nodes in the lab, Solaris/SPARC and Solaris/x64, and I kicked off the heterogeneity testing again. I am happy to say that the limited set of MPI jobs I kicked off all passed, so I was seeing MPI communication on the mixed-architecture cluster in the lab. From a ClusterTools perspective this is not officially qualified, but it looks like something we might see in a future release. So what's next? Back to the lab to test out some other configurations, like Solaris/SPARC, Solaris/x64, and Linux/x64.
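For anyone who wants to try this on their own mixed-architecture cluster, here is a rough sketch of what the build and launch can look like. The hostnames, slot counts, install prefix, and the `ring_test` program are all made up for illustration; `--enable-heterogeneous` is Open MPI's configure option for this support, and each node needs a binary built natively for its own architecture.

```shell
# Build Open MPI with heterogeneity support enabled
# (hypothetical install prefix).
./configure --enable-heterogeneous --prefix=/opt/openmpi
make all install

# Hypothetical hostfile mixing SPARC and x64 nodes.
cat > hosts <<EOF
sparc-node1 slots=4
x64-node1 slots=4
EOF

# Launch one MPI job spanning both architectures; each node
# runs its own natively built copy of the (made-up) ring_test
# binary, resolved on that node.
mpirun -np 8 --hostfile hosts ./ring_test
```

The key point is that the application code does not change: Open MPI's heterogeneity support handles the byte-order differences between the big-endian SPARC nodes and the little-endian x64 nodes underneath the MPI calls.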
So why does any of this matter? Think about sites that have several "clusters" sitting around that they have accumulated over the years; each time they built a new cluster it was on a different OS or a different architecture. Now they can start using the entirety of those resources with a version of Open MPI that lets them kick off large-scale MPI jobs across all of them. Why waste idle CPU cycles when you can put them to use?
For what it's worth.