
Scalability question #11

Open
Helveg opened this issue Jul 29, 2019 · 1 comment
Helveg commented Jul 29, 2019

Can this generator handle 100,000 or a million particles? What do you estimate the runtime would be on supercomputers? Or what are the O(...) runtimes for the LS or FBA algorithms?

@VasiliBaranov (Owner)

Hi Helveg,

the running time varies with the contraction rate (which determines your final density); see Fig. 2a and 2c in https://pubs.rsc.org/en/content/articlehtml/2014/sm/c3sm52959b. If I recall correctly, the running time scales roughly inversely with the contraction/compression rate: for the LS algorithm with 10,000 particles, a compression rate of 1e-2 takes ~2 minutes, while 1e-5 takes a couple of days (which roughly fits that scaling). The running time is also roughly linear in the number of particles, i.e. at a compression rate of 1e-2, one million particles would need ~200 minutes (a rough guess, assuming my recollections are correct). I have run the algorithm with up to 300,000 particles and it worked fine. A colleague of mine ran his own implementation of the Jodrey–Tory algorithm with one million particles (though that algorithm produces much lower densities for a comparable running time). So I expect the program to run fine for one million particles.

The algorithms here are not well parallelizable, so on a supercomputer they also run on a single CPU thread; generation will be roughly the same speed as, or even slower than, on your PC or laptop (depending on the supercomputer's CPUs). You can benefit from a supercomputer if you need to generate thousands or millions of packings: then you can run all the generations in parallel. A supercomputer may also have compilers better suited to its hardware (IBM or Intel), but I don't remember whether I tested the program with them. GCC is for sure a safe option.
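The "many independent generations in parallel" approach can be sketched as follows. This is a hypothetical illustration: `run_one_packing` is a stand-in worker; a real version would launch one generator run per seed (e.g. via `subprocess` with a per-packing config directory), and on a cluster you would more likely submit one job per packing through the scheduler.

```python
# Sketch of running many independent packing generations in parallel.
# Each generation is a separate single-threaded process, so throughput
# scales with the number of available cores.
from multiprocessing import Pool

def run_one_packing(seed):
    # Hypothetical stand-in: a real worker would invoke the packing
    # generator here with its own config and random seed, then return
    # the path to the finished packing.
    return f"packing-{seed} done"

if __name__ == "__main__":
    with Pool(processes=2) as pool:
        results = pool.map(run_one_packing, range(4))
    print(results)
```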

Best,
Vasili
