```r
> nworkers <- 4
> table(unlist(parallel::mclapply(1:100, FUN=function(i) Sys.getpid(), mc.preschedule=TRUE, mc.cores=nworkers)))
5089 5090 5091 5092
  25   25   25   25
> future::plan("multicore", workers=nworkers)
> table(unlist(future.apply::future_lapply(1:100, FUN=function(i) Sys.getpid(), future.scheduling=TRUE)))
5093 5094 5095 5096
  25   25   25   25
```

So, "one-to-one".
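For contrast, a minimal sketch of the non-prescheduled case, reusing `nworkers` from above: with `mc.preschedule=FALSE`, `mclapply` forks a fresh child for each element (at most `mc.cores` at a time), so the table should show many distinct PIDs with small counts rather than exactly four.

```r
## mc.preschedule = FALSE: one forked child per element, throttled to
## mc.cores concurrent children; expect many distinct PIDs in the result.
table(unlist(parallel::mclapply(1:100, FUN=function(i) Sys.getpid(),
                                mc.preschedule=FALSE, mc.cores=nworkers)))
```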

---

In my own application, I'm getting different behavior. Here's a relevant section of code, inside a function that accepts `...`:

```r
prog <- progressor(along = ppe)
## Select a backend: any mc.*-prefixed argument in `...` means the caller
## wants parallel::mclapply; otherwise use future.apply::future_lapply.
## (if/else rather than the vectorized ifelse(), which is meant for vectors,
## not for choosing between closures.)
use_mc <- any(grepl("^mc\\.", names(list(...))))
parallelize <- if (use_mc) mclapply else future_lapply
cpe_parts <- parallelize(ppe, function(ppe_) {
  prog()
  path_m <- matrix(ppe_, nrow = 2, dimnames = list(c("D", "T")))
  ## Let's be sure to skip the stopped paths, tho!
  if (any(path_m["D", ] <= 0, na.rm = TRUE))
    return(list(ppe_))
  level <- factor(path_m["D", 1:unroll], levels = 1:self$max_dose())
  tox <- path_m["T", 1:unroll]
  enr <- cohort_sizes[1:unroll]
  n <- as.vector(xtabs(enr ~ level))
  x <- as.vector(xtabs(tox ~ level))
  paths.(n, x, unroll + 1, path_m, cohort_sizes)
}, ...)
cpe <- do.call(c, cpe_parts)
```
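The dispatch idea can be shown in isolation. A minimal self-contained sketch (the wrapper name `run_either` and the toy workload are mine, not part of the real function): any `mc.*` argument supplied via `...` routes the call to `mclapply`; otherwise it falls through to `future_lapply`.

```r
library(parallel)
library(future.apply)

## Hypothetical wrapper demonstrating the same ...-based dispatch.
run_either <- function(X, FUN, ...) {
  use_mc <- any(grepl("^mc\\.", names(list(...))))
  parallelize <- if (use_mc) mclapply else future_lapply
  parallelize(X, FUN, ...)
}

future::plan("multicore", workers = 2)
str(run_either(1:6, function(x) x^2, mc.cores = 2))  # dispatches to mclapply
str(run_either(1:6, function(x) x^2))                # dispatches to future_lapply
```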
Here's what happens when I run under each parallelizer:

```r
> parallax(C=13, mc.cores=4)
   C     J elapsed
1 13 11571   3.036
      pid cached evals skips calc.ms us/calc
 1: 58143   2478  2478  6969    1008     407
 2: 58144   1021  1021  3089     430     422
 3: 58145    437   437  1099     182     418
 4: 58146    365   365   496     153     421
 5: 58143   2694  2694  7278    1077     400
 6: 58144   1096  1096  3170     459     419
 7: 58145    542   542  1159     240     444
 8: 58146    407   407   511     169     417
 9: 58143   2702  2702  7282    1079     399
10: 58144   1227  1227  3411     524     427
11: 58145    557   557  1165     246     443
12: 58146    426   426   513     177     417
13: 58143   2717  2717  7288    1081     398
14: 58144   1246  1246  3413     533     428
15: 58145    565   565  1169     248     440
16: 58146    429   429   513     178     417
17: 58143   2720  2720  7288    1082     398
18: 58144   1249  1249  3413     534     428
19: 58146    532   532   620     233     439
20: 58143   2739  2739  7290    1092     399
21: 58144   1252  1252  3413     535     428
      pid cached evals skips calc.ms us/calc
> parallax(C=13) # with empty ..., will default to future.apply::future_lapply
   C     J elapsed
1 13 11571   6.385
      pid cached evals skips calc.ms us/calc
 1: 58147   2478  2478  6969    1052     424
 2: 58147   3427  3427 10055    1392     406
 3: 58147   3761  3761 11182    1509     401
 4: 58147   3996  3996 11733    1603     401
 5: 58147   4061  4061 12193    1625     400
 6: 58147   4124  4124 12286    1644     398
 7: 58147   4187  4187 12388    1664     397
 8: 58147   4196  4196 12436    1669     397
 9: 58147   4204  4204 12440    1673     397
10: 58147   4299  4299 12717    1708     397
11: 58147   4314  4314 12723    1714     397
12: 58147   4330  4330 12728    1719     396
13: 58147   4330  4330 12749    1719     396
14: 58148     73    73    23      16     219
15: 58148     81    81    27      19     234
16: 58148     84    84    27      20     238
17: 58148     85    85    29      20     235
18: 58148     86    86    31      21     244
19: 58149    157   157   128      68     433
20: 58149    161   161   145      69     428
21: 58149    164   164   145      70     426
      pid cached evals skips calc.ms us/calc
```
Of course, the progress bars don't work as desired under `mclapply`; this is what motivated me to investigate the futureverse.
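For reference, the kind of progress reporting I'm after looks like this under the futureverse (a minimal sketch; the `txtprogressbar` handler is an arbitrary choice): `progressr` relays each `p()` signal from the forked workers back to the main session, which `mclapply` has no channel for.

```r
library(progressr)
library(future.apply)
future::plan("multicore", workers = 4)

handlers("txtprogressbar")  # arbitrary handler choice
with_progress({
  p <- progressor(steps = 100)
  y <- future_lapply(1:100, function(i) { p(); sqrt(i) })
})
```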

---

The relationship between the scheduling described in `parallel::mclapply` and that of `future.apply::future_lapply` isn't entirely clear to me. Do the concepts map 1-to-1, or does the futureverse introduce a higher level of abstraction? How might I replicate the `mc.preschedule=TRUE` behavior in the futureverse? The `future.scheduling` and `future.chunk.size` parameters seem not to give me the same level of control over the number of worker processes forked under `plan(multicore)`, nor over the way they are assigned to the jobs.
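For reference, the two extremes I've been experimenting with (a minimal sketch, with illustrative values): `future.scheduling = 1` makes one chunk per worker, which looks like `mc.preschedule=TRUE`, while `future.chunk.size = 1` makes one future per element, which looks more like `mc.preschedule=FALSE`.

```r
library(future.apply)
future::plan("multicore", workers = 4)

## One chunk per worker (cf. mc.preschedule = TRUE):
table(unlist(future_lapply(1:100, function(i) Sys.getpid(),
                           future.scheduling = 1)))

## One future per element (cf. mc.preschedule = FALSE):
table(unlist(future_lapply(1:100, function(i) Sys.getpid(),
                           future.chunk.size = 1)))
```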