Currently, this package's exported functions let the user set up and run one simulation with one set of parameters. In most use cases, the user would be running N simulations for each of M different sets of parameters. I think it would make much more sense to parallelize by splitting either the N simulations or the M parameter sets across workers/threads. That parallelism would live outside the package: users would write a script that sets up the different parameter sets and runs the simulations (by calling this package), and that is where the parallel execution should go.
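A minimal sketch of what such a user script could look like, written in Python with concurrent.futures purely for illustration (the package's actual language and entry point may well differ); run_simulation, run_replicates and the parameter field names are hypothetical stand-ins, not part of this package:

```python
# Hypothetical user-side script: parallelism goes across parameter sets /
# replicates, not inside a single simulation. `run_simulation` is a
# stand-in for whatever single-run function this package actually exports.
from concurrent.futures import ProcessPoolExecutor

def run_simulation(params, seed):
    # Placeholder for the package's single-run entry point; it just echoes
    # its inputs so the sketch is runnable end to end.
    return {"params": params, "seed": seed}

def run_replicates(params, n_reps):
    # One worker runs all N replicates for a single parameter set.
    return [run_simulation(params, seed=i) for i in range(n_reps)]

if __name__ == "__main__":
    # M parameter sets, defined by the user (field names are made up here).
    parameter_sets = [
        {"r0": 2.5, "tracing_prob": 0.4},
        {"r0": 2.5, "tracing_prob": 0.8},
    ]
    n_reps = 100  # N simulations per parameter set

    with ProcessPoolExecutor() as pool:
        results = list(pool.map(run_replicates,
                                parameter_sets,
                                [n_reps] * len(parameter_sets)))
    # results[m] now holds the N simulation outputs for parameter set m.
```

Because each simulation is independent, this is embarrassingly parallel and needs no support from the package itself beyond a callable single-run function.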
If we try to parallelize within one simulation (i.e. within this package), the best place I can think of is to split the active cases across threads and "evaluate" them (generate new infections, promote them, etc.). But by design this simulation model is meant for a small number of cases, so I think this is unlikely to be very effective once the overheads are accounted for. In addition, we would have to split and merge the tasks at every time step if we want the branches to know about each other. And if we don't need that (i.e. there is no need to know the total number of cases, or the total number of new cases requiring contact tracing, which could affect tracing effectiveness), then it would be simpler to just run N independent simulations of the same parameter set and combine the results at the end.
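For that last alternative, a short continuation of the sketch above (reusing the hypothetical run_simulation): N independent runs of one parameter set, combined only after all runs finish.

```python
# Continuing the sketch above: N independent runs of one parameter set.
# Because the runs never need to see each other's case counts, the "merge"
# is just collecting their outputs and summarising afterwards.
from concurrent.futures import ProcessPoolExecutor

params = {"r0": 2.5, "tracing_prob": 0.4}  # one parameter set (made-up fields)
n_reps = 100

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        runs = list(pool.map(run_simulation,     # hypothetical, defined above
                             [params] * n_reps,
                             range(n_reps)))     # one distinct seed per run
    # Summaries (e.g. total cases per run) are then computed over `runs`.
```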