
The effective number of parallel jobs decreases during the computation when the total number of jobs is larger than the number of cores #644

Hello. If I understand your description of the problem correctly, it sounds like a so-called "load balancing" issue, where you end up with parallel workers sitting idle toward the end of the run.

The default behavior for doFuture, as for its siblings future.apply and furrr, is to take all N iterations and chunk them up into W equally sized portions, where W is the number of workers. Each worker processes one chunk. In your case, with N = 1000 and W = 64, each worker processes 15-16 Stan models. It sounds like some chunks finish much sooner than others, so at the end only a few workers are actually using the CPU.

There are ways to configure which chunking strategy is used. See Section 'Load …
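
For example, here is a minimal sketch of requesting smaller chunks via the `chunk.size` option. It assumes doFuture >= 1.0.0, where the `%dofuture%` operator is available (with older versions, `registerDoFuture()` plus `%dopar%` accepts the same `.options.future` argument); `fit_stan_model()` is a hypothetical stand-in for the actual model-fitting code:

```r
library(doFuture)
plan(multisession, workers = 64)

## Hypothetical stand-in for the model-fitting step; real code
## would call Stan here
fit_stan_model <- function(i) {
  Sys.sleep(runif(1, max = 2))  # simulate uneven run times
  i
}

## Default scheduling: 1000 iterations are split into 64 chunks of
## ~15-16 iterations, one chunk per worker. A worker whose chunk
## happens to contain slow models keeps running while the others
## sit idle.
res <- foreach(i = 1:1000) %dofuture% {
  fit_stan_model(i)
}

## chunk.size = 1 creates one future per iteration instead, so a
## worker that finishes early immediately picks up the next
## iteration (better load balancing, at the cost of more
## scheduling overhead).
res <- foreach(i = 1:1000,
               .options.future = list(chunk.size = 1)) %dofuture% {
  fit_stan_model(i)
}
```

future.apply exposes the same control through its `future.chunk.size` and `future.scheduling` arguments, and furrr through `furrr_options(chunk_size = ...)` and `furrr_options(scheduling = ...)`.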
