Support for Elemental built against system MPI #37
Elemental absolutely supports every modern MPI implementation. I assume that you mean Elemental.jl?
@poulson Oh, yes, I meant Elemental.jl. Thank you for the correction.
For some reason, I'd unwatched my package here, so I've only seen this issue now. Soon, we'll change …
We no longer build the sources as part of the package installation like we used to, so the only options are to use what ships with BinaryBuilder or to provide your own custom build. Note that while MPI.jl allows a system MPI to be used, Elemental.jl needs an update to allow a system build (one that lets it opt out of the BB-provided binaries).
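For reference, MPI.jl's newer opt-out mechanism looks roughly like the sketch below. MPIPreferences is MPI.jl's documented API; Elemental.jl has no equivalent yet, which is exactly the gap described above.

```julia
# Sketch of MPI.jl's system-MPI opt-out via MPIPreferences (MPI.jl >= 0.20).
# Elemental.jl would need an analogous switch for libEl.
using MPIPreferences

# Records a preference in LocalPreferences.toml; on the next Julia session
# MPI.jl loads the system libmpi instead of the JLL artifact.
MPIPreferences.use_system_binary()
```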
Hi, is there a way we can speed up this update? I volunteer my time. At NERSC we need to build against the system MPI, so I am happy to help out if it means we can deploy Elemental.jl on our systems sooner. I am still a bit new to BB, so can someone give me some guidance on how to "opt out of" the BB-provided binaries?
Btw @ViralBShah, in Julia 1.6.0 we get: …
As a workaround I tried: …
but that doesn't fix it. TBH, I can only see one place where … Then again, I don't know much about BinaryBuilder, but it looks like …
Follow-up: is there a way to drop the …
MPI.jl has a mechanism that allows for using a system MPI instead of the BB-provided MPI. However, it's really not clear to me how that can work here unless we reintroduce the code for building Elemental as part of this package. We could try to mimic the MPI.jl code and link against a system-provided libelemental, but historically it was important to keep a tight connection between the version of the wrappers here and the version of libelemental, since the API was evolving. If you already have a build of libelemental that links against your MPI, then you can try to remove …
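For anyone trying that route: one generic way to substitute a local build for a JLL's artifact is Pkg's `Overrides.toml`. This is the general Pkg artifact mechanism, not something Elemental.jl documents; the UUID and install prefix below are placeholders.

```julia
# Sketch: point Elemental_jll at a local libEl built against the system MPI
# by writing ~/.julia/artifacts/Overrides.toml. UUID and prefix are placeholders.
using TOML

overrides = Dict(
    # Elemental_jll's package UUID, from its Project.toml (placeholder below).
    "00000000-0000-0000-0000-000000000000" => Dict(
        "Elemental" => "/opt/elemental-system-mpi",  # install prefix containing lib/libEl.so
    ),
)

# Note: this overwrites any existing overrides; merge by hand if you have some.
path = joinpath(first(DEPOT_PATH), "artifacts", "Overrides.toml")
mkpath(dirname(path))
open(io -> TOML.print(io, overrides), path, "w")
```

The overridden directory must mirror the layout of the BB artifact (a `lib/` directory with `libEl`), otherwise the JLL's library product will fail to resolve.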
@andreasnoack, if you can share with me the BB code that was used to generate …
It's here: https://github.com/JuliaPackaging/Yggdrasil/blob/master/E/Elemental/build_tarballs.jl, but I don't see why it would be easier to modify …
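For orientation, a Yggdrasil recipe has roughly the shape sketched below. The version, commit hash, CMake flags, and MPI dependency are placeholders, not the contents of the linked file.

```julia
# Rough skeleton of a BinaryBuilder recipe in the style of build_tarballs.jl.
using BinaryBuilder

name = "Elemental"
version = v"0.87.7"  # placeholder

sources = [
    GitSource("https://github.com/elemental/Elemental.git",
              "0000000000000000000000000000000000000000"),  # placeholder commit
]

# Shell script executed inside the BB cross-compilation sandbox.
script = raw"""
cd ${WORKSPACE}/srcdir/Elemental
mkdir build && cd build
cmake -DCMAKE_INSTALL_PREFIX=${prefix} \
      -DCMAKE_TOOLCHAIN_FILE=${CMAKE_TARGET_TOOLCHAIN} \
      -DCMAKE_BUILD_TYPE=Release \
      ..
make -j${nproc}
make install
"""

platforms = supported_platforms()
products = [LibraryProduct("libEl", :libEl)]
dependencies = [Dependency("MPICH_jll")]  # placeholder; the MPI backend is the crux here

build_tarballs(ARGS, name, version, sources, script, platforms, products, dependencies)
```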
Thanks @andreasnoack, I'll take a stab at this and let you know. This is easier because I don't need Elemental, but several of our users do, so I could explain to each of them how to modify …
Another question, @andreasnoack: why do you use the deprecated Elemental repo instead of https://github.com/LLNL/Elemental? The LLNL version comes with CUDA support.
I think the last time I checked, I could not get the build to work with the new repo. We are not yet ready to enable CUDA support in BinaryBuilder because we don't have infrastructure to distribute CUDA-built binaries; @maleadt may be able to say when we can expect that. In the meanwhile, we can certainly switch to the new upstream repo for building Elemental_jll. @… would you be able to submit a PR?
What I don't understand is why it would be easier to put …
@andreasnoack, a user might want a specific version or to make their own changes. This seems to be more maintainable: unless the … Also: we don't have an Elemental module, so I would have to write a build script anyway.
What we should really do is update Elemental_jll to build from the new repo, then update Elemental.jl to use those new binaries, and put whatever features are needed to use a system Elemental behind an environment variable. We are happy to update the upstream package to allow whatever local configuration is necessary.
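A minimal sketch of what such an environment-variable switch could look like; the variable name `JULIA_ELEMENTAL_BINARY` and the structure are hypothetical, not an existing Elemental.jl interface.

```julia
# Hypothetical sketch only: the env var and structure are illustrative.
using Libdl

const libEl = if get(ENV, "JULIA_ELEMENTAL_BINARY", "") == "system"
    # Resolve a system libEl through the normal dynamic-linker search paths.
    Libdl.find_library(["libEl"])
else
    # Default: the BinaryBuilder artifact (Elemental_jll.libEl in the real package).
    "libEl"  # placeholder for Elemental_jll.libEl
end

# ccall sites then use the selected library, e.g.
# ccall((:ElInitialize, libEl), ...)
```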
Here are some of the problems I ran into trying to build the Elemental from LLNL: …
Are there any new developments concerning this issue? I wanted to test …
To make the … With #64 extended to a few more strings, that might be sufficient for a reasonably wide range of uses.
If someone wants to take on JuliaPackaging/Yggdrasil#4776, that would be great. Then we can provide binaries for OpenMPI and MPICH as well as our portability layer MPItrampoline.
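Once such variant binaries exist, users could in principle select the backend through MPIPreferences. `use_jll_binary` is a documented MPIPreferences function; pairing it with Elemental_jll variants is the aspiration here, not current behavior.

```julia
using MPIPreferences

# Select which MPI JLL the BB-built stack should load.
MPIPreferences.use_jll_binary("MPItrampoline_jll")  # or "OpenMPI_jll" / "MPICH_jll"
```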
I see there has been some progress here (JuliaPackaging/Yggdrasil#5130), but I can't tell what, if anything, still needs to happen. Any advice?
Is there any plan to support MVAPICH2 and Intel MPI? Intel MPI (which has binary compatibility with MVAPICH2) is also one of the most widely used MPI implementations. It would be nice for Elemental to support MVAPICH2!
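Worth noting: Intel MPI and MVAPICH2 both follow the MPICH ABI, so on the MPI.jl side they can already be selected as a system binary with the MPICH ABI declared; whether Elemental picks that up depends on the system-build support discussed above. A sketch using MPIPreferences' documented keywords (the library name is typical but site-specific):

```julia
using MPIPreferences

# Point MPI.jl at a system Intel MPI / MVAPICH2 install, declaring the MPICH ABI.
MPIPreferences.use_system_binary(; library_names = ["libmpi"], abi = "MPICH")
```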