It's come to my attention (one helpful email, plus some snarky subtweets) that the --cpu flag of HMMER3 search programs may have a bad interaction with some cluster management software.
The --cpu n argument is documented as the "number of parallel CPU workers to use for multithreads". Typically, you want to set n to the number of CPUs you want to use on average. But it is not the total number of threads that HMMER creates, because HMMER may also create additional threads that aren't CPU-intensive. The number of threads that most HMMER3 programs create is currently n+1, I believe; the HMMER4 prototype currently uses n+3.
The reason for the +1 is that we have a master/worker parallelization scheme, with one master and n workers. The master is disk-intensive (responsible for input/output), and the workers are CPU-intensive.
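To make that concrete, here's a stripped-down sketch of the scheme in C with POSIX threads. This is illustrative, not HMMER's actual code, and all the names in it are invented; the point is just that asking for n workers gives you n+1 threads, with the extra one doing i/o rather than computation.

```c
/* Illustrative sketch only, not HMMER's actual code: one master thread
 * doing input/output plus n CPU-intensive workers, n+1 threads total.
 * All names here are invented for the example. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define N_WORKERS 4                     /* the n in "--cpu n" */

static void *worker(void *arg)
{
    int id = (int) (long) arg;
    /* The CPU-intensive part goes here: pull a work unit from a shared
     * queue, score it, hand results back to the master. */
    printf("worker %d: crunching\n", id);
    return NULL;
}

int main(void)
{
    pthread_t tid[N_WORKERS];

    for (long i = 0; i < N_WORKERS; i++)
        if (pthread_create(&tid[i], NULL, worker, (void *) i) != 0) {
            perror("pthread_create");
            exit(1);
        }

    /* The main thread is the master: it reads sequences from disk and
     * feeds the workers, so it's busy with i/o, not computation. */

    for (int i = 0; i < N_WORKERS; i++)
        pthread_join(tid[i], NULL);
    return 0;
}
```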
The reason for the +3 is that we are making more and more use of a technique called asynchronous threaded input to accelerate reading data from disk. In addition to the master, we fork off a thread dedicated to reading input, which reads ahead while other work is happening; another thread, in our current design, handles decompression if the input file requires it. The master, the reader, and the decompressor account for the three extra threads.
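Here's a bare-bones sketch of what asynchronous threaded input can look like, again illustrative rather than HMMER's implementation: a dedicated reader thread fills one buffer from disk while the consumer processes the other (simple double buffering), so read-ahead overlaps with computation.

```c
/* Illustrative sketch of asynchronous threaded input, not HMMER's
 * implementation: a dedicated reader thread fills one buffer while the
 * consumer processes the other (double buffering). */
#include <pthread.h>
#include <stdio.h>
#include <string.h>

#define BUFSIZE 65536

typedef struct {
    char            buf[2][BUFSIZE];
    size_t          n[2];        /* bytes valid in each buffer        */
    int             ready[2];    /* 1 = holds fresh data for consumer */
    FILE           *fp;
    pthread_mutex_t mtx;
    pthread_cond_t  cv;
} aio_t;

static void *reader(void *arg)          /* the dedicated input thread */
{
    aio_t *io = arg;
    int which = 0;
    for (;;) {
        pthread_mutex_lock(&io->mtx);   /* wait until the consumer    */
        while (io->ready[which])        /* has drained this buffer    */
            pthread_cond_wait(&io->cv, &io->mtx);
        pthread_mutex_unlock(&io->mtx);

        size_t n = fread(io->buf[which], 1, BUFSIZE, io->fp); /* read ahead */

        pthread_mutex_lock(&io->mtx);
        io->n[which]     = n;
        io->ready[which] = 1;
        pthread_cond_broadcast(&io->cv);
        pthread_mutex_unlock(&io->mtx);

        if (n == 0) return NULL;        /* EOF (or read error) */
        which = !which;
    }
}

int main(void)
{
    aio_t io;
    memset(&io, 0, sizeof io);
    pthread_mutex_init(&io.mtx, NULL);
    pthread_cond_init(&io.cv, NULL);
    io.fp = stdin;                      /* e.g. a piped sequence file */

    pthread_t rt;
    pthread_create(&rt, NULL, reader, &io);

    size_t total = 0;
    int which = 0;
    for (;;) {
        pthread_mutex_lock(&io.mtx);
        while (!io.ready[which])        /* wait for the next block */
            pthread_cond_wait(&io.cv, &io.mtx);
        size_t n = io.n[which];
        pthread_mutex_unlock(&io.mtx);
        if (n == 0) break;              /* reader hit EOF */

        total += n;                     /* "process" this block while the
                                           reader fills the other one */
        pthread_mutex_lock(&io.mtx);
        io.ready[which] = 0;            /* hand buffer back to the reader */
        pthread_cond_broadcast(&io.cv);
        pthread_mutex_unlock(&io.mtx);
        which = !which;
    }
    pthread_join(rt, NULL);
    fprintf(stderr, "read %zu bytes\n", total);
    return 0;
}
```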
Apparently some cluster management software requires that you state the maximum number of CPUs your job will use, and if the job ever uses more than that, it is halted or killed. So if HMMER starts n+1 threads, and the +1 thread (however CPU-nonintensive it may be) gets allocated to a free CPU outside your allotment of n, then your job is halted or killed. Which is understandably annoying.
The workaround with HMMER3 is to tell your cluster management software that you need a maximum of n+1 CPUs when you tell HMMER --cpu n. You won't use all n+1 CPUs efficiently (at best, you'll only use n of them on average), but then, HMMER3 is typically i/o-bound on standard filesystems, so it doesn't scale well past 2-4 CPUs anyway.
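For example, if your cluster runs SLURM (an assumption on my part; your scheduler and its flags will differ), a job script running hmmsearch with --cpu 4 would reserve five cores. The profile and sequence database names here are placeholders:

```bash
#!/bin/bash
#SBATCH --cpus-per-task=5    # n+1: 4 workers plus the master/input thread

# Profile and sequence database names are placeholders.
hmmsearch --cpu 4 globins4.hmm target_seqs.fasta
```

The same arithmetic applies to any scheduler: reserve one CPU more than the n you pass to --cpu (or n+3 for the HMMER4 prototype, when it arrives).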
I find it hard to believe that cluster management tools aren't able to deal smartly with multithreaded software that combines CPU-intensive and i/o-intensive threads. I presume that there's a good reason for these policies, and/or ways for cluster managers to configure or tune appropriately. I'm open to suggestions and pointers.