Posted on November 4, 2011 at 11:05 AM
I first learnt about parallelization when I was still in Aarhus. Where else? I still have the impression I spent most of my time in Aarhus learning... this or that thing. We had a brief introduction to MPI and OpenMP and, like many other things I learnt, I set it aside for 'some time'. That time turned out to be a couple of years... After a year in Poland, it was pretty obvious that I had to revisit those notions if I wanted to perform highly accurate calculations on harmonium.
After browsing the net for tutorials (there is a ton of information about parallel computing) I started teaching myself some basics of MPI. A few weeks later I had parallelized a simple matrix multiplication program, and after a few months I got an MPI version of the FCI code we were working with. Right after that, I parallelized a joint version of this FCI code with an external interface, so that we could use this software at the Barcelona Supercomputing Center (BSC, aka Mare Nostrum) with up to 64 cores. Thereafter, I converted many different versions of this code to MPI.
A few months passed and I left Poland, leaving many ongoing projects behind. One of those needed very expensive four-component CCSD(T) frequency calculations. We already had the software for that (thanks to Pedro Salvador), but it seemed it would take ages to obtain the results we needed. So, once again, I undertook to parallelize the software. Last summer I finally fulfilled my promise and we got a twelve-fold parallel version of Pedro's code. I still have to parallelize the geometry optimization in the same code, but like many other things that are not urgent... it's on the to-do list.
Recently, we started working with Jerzy and Krzysztof Strasburger on three-electron harmonium for small values of the confinement parameter. These calculations are even tougher than those I ran at BSC, so we had to use Krzysztof's code with explicitly correlated Gaussians to obtain meaningful results. His code produced very accurate energies and wavefunctions that then had to be processed by a program Jerzy developed in order to obtain natural orbitals and their occupancies. Soon enough, it became clear that we would need to parallelize Jerzy's code to get those results asap. This time we achieved a successful parallelization on up to 256 cores [personal best ;-)] with BSC resources. I have the feeling the code is good enough for 1024-fold parallelization, but we did not need that much.
Two weeks ago I set myself a new challenge: to parallelize a fairly large code that performs calculations using a natural-orbital-based energy functional (NOF theory, NOFT). The code is known as PNOFID and was developed by Mario Piris. This is the first parallelization of such a code (to the best of my knowledge), and its success could open the way for NOFT calculations on 'big' molecules. If we get something parallel-worthy I will post something here...
Obviously, parallelization can become very tricky, especially when we want to parallelize different tasks inside the same code, or when the problems we are dealing with are not embarrassingly parallel, or almost embarrassingly parallel. Then, parallelizing a code can become a real pain. However, after these short experiences with parallelization I have drawn some conclusions:
Almost anything is parallelizable. In many cases, there is a single task that is consuming most of the time, and often enough, this task is simple. We just need to detect the bottleneck and apply the divide-and-conquer strategy that fits it. Most of the tasks we routinely perform in computing are trivial and thus can be parallelized.
Parallelization can be achieved easily. Sometimes really easily. Once you learn the MPI commands and gain some experience, you realize that the changes needed to obtain a parallel code are relatively few and nearly identical from code to code.
So, sometimes it is just a matter of getting some basic training in MPI (which I guess may take up to a month) and trying to parallelize your favorite code. It may turn out to be easier than you thought.