v4-CMake #430
base: v4
Conversation
Wew, sounds good! I'm happy to cop the Windows testing. I'll set up CI too once the build is runnable (and probably a linter, since I've already slopped in some spaghetti). I agree it's clumsy/overbearing/amateurish to build the user's executable, though I suspect a good chunk of QuEST's user base has a physics background and lacks the requisite skills to make their own build, or to integrate QuEST into a bigger stack. I want to make compilation as easy as possible for that group somehow. I'm open to better ideas!
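For context, a rough sketch of what the user-side build could look like if we only shipped a library target (the target and file names below are illustrative, not the PR's actual API):

```cmake
# Hypothetical user-side CMakeLists.txt: the user builds their own executable
# against QuEST, rather than QuEST's build producing it for them.
cmake_minimum_required(VERSION 3.14)
project(my_simulation LANGUAGES C CXX)

# Pull in QuEST as a subproject (a vendored checkout is assumed here; an
# installed package would use find_package instead).
add_subdirectory(QuEST)

add_executable(my_simulation main.cpp)
target_link_libraries(my_simulation PRIVATE QuEST)
```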
Can confirm that I can build and run a (very simple) example code using the available new API with and without multithreading enabled (though it doesn't deploy with MT at one qubit...), so now I can get started on the fun stuff. Notes:
Bump to CMake 3.14 incoming for policy CMP0082.
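For reference, CMP0082 (which interleaves install() rules from add_subdirectory() with the caller's, in declaration order) is new in 3.14, so the bump itself is just:

```cmake
# Requiring 3.14 makes CMP0082 available and defaults it to NEW, so install()
# rules added via add_subdirectory() run in declaration order.
cmake_minimum_required(VERSION 3.14)
```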
Ahh, I'm sorry for the rebasing and conflicting the PR! I hadn't realised the source files were being modified too. I'm intending to rename some files (…). Btw, are the relative-path includes changed in b87b7f5 essential for CMake building? I was following Kolpackov's project structure (but preserving …), so a source file's includes all looked like:

```cpp
// API
#include "quest/include/modes.h"
#include "quest/include/environment.h"
#include "quest/include/qureg.h"
#include "quest/include/structures.h"

// core
#include "quest/src/core/errors.hpp"
#include "quest/src/core/bitwise.hpp"
#include "quest/src/core/memory.hpp"
#include "quest/src/core/utilities.hpp"
#include "quest/src/comm/comm_config.hpp"
#include "quest/src/cpu/cpu_config.hpp"
#include "quest/src/gpu/gpu_config.hpp"

// external library
#include <iostream>
#include <cstdlib>
#include <vector>
#include <map>
```

It was also consistent in that all paths were relative to the directory containing `quest/`. If I understand right, with the new includes some are just bare filenames (e.g. `#include "types.h"`) and the rest are relative to the source file's own location (e.g. `#include "../core/errors.hpp"`). This seems less elegant and consistent; is it necessary for compilation, or does it offer another benefit I've missed?
No problem! Any conflicts should be pretty trivial to resolve, as I wouldn't expect to touch anything more than includes, so I'm happy to just keep merging as and when. Apologies, I'd assumed it was just an artifact of the make build 😅. The advantages of the way I've done it:
However, through CMake, all things are possible 😉
So I'm very happy to change it all back; it's just a find/replace job! It's your project, so let me know which you'd prefer.
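For what it's worth, the repo-root-relative style can be kept under CMake by putting the repository root on the include path; a minimal sketch (the `QuEST` target name is assumed):

```cmake
# With the repository root on the include path, every file can keep
# repo-root-relative includes like "quest/include/qureg.h" and
# "quest/src/core/errors.hpp", wherever the including file lives.
target_include_directories(QuEST
    PUBLIC
        $<BUILD_INTERFACE:${PROJECT_SOURCE_DIR}>   # in-tree builds
        $<INSTALL_INTERFACE:include>               # installed headers
)
```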
Library renaming is now available: the user can just provide a custom string to produce …. GPU acceleration and MPI distribution are now also available, but there are two issues:
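For illustration, the renaming could be wired up roughly as below (the cache-variable, target, and source-list names are hypothetical, not necessarily those used on the branch):

```cmake
# Let the user choose the built library's name at configure time, e.g.
#   cmake -DLIB_NAME=quest_gpu ..
set(LIB_NAME "QuEST" CACHE STRING "Base name of the compiled QuEST library")

add_library(QuEST ${QUEST_SOURCES})
set_target_properties(QuEST PROPERTIES OUTPUT_NAME ${LIB_NAME})
```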
@TysonRayJones is the plan to support HIP in the same way (i.e. by setting …)?
Oh, and what's the minimum required CUDA version?
Ahh, annoying - it's a little unexpected that they provide GNU make examples, but it makes me feel less out of touch :^). Maybe they're expecting most users to build cuStateVec via CUDA Quantum (which we should expressly not do).
Yea! But you can choose any name that makes sense / is consistent, like …. Note that the GPU code (even when not compiling …) makes use of Thrust.
This requires that HIP can wrap/map the Thrust functors and calls to rocThrust, but I don't know if that's always included with ROCm. I've just been hoping and praying! In any case, feel free to defer HIP support for the moment in case it causes a problem, and we can re-enable HIP in a later patch.
It will be build-configuration dependent:
QuEST v3's default CC was 3.0, so let's just use that as a placeholder when not compiling cuStateVec ¯\_(ツ)_/¯
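Something like the following sketch would express that placeholder (CC 3.0 corresponds to CMake architecture 30; the guard is an assumption about how the default gets applied, and very new toolkits may have dropped sm_30 entirely):

```cmake
# Default to QuEST v3's old compute capability unless the user (or a
# cuStateVec-enabled configuration) overrides it on the command line.
if(NOT DEFINED CMAKE_CUDA_ARCHITECTURES)
    set(CMAKE_CUDA_ARCHITECTURES 30)
endif()
```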
Almost certainly! I will probably just relent and write a simple findCuQuantum. This works fine as long as standard installations of it have a consistent directory structure.
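A minimal sketch of what such a find module could look like, assuming a conventional include/ + lib/ layout under a user-supplied root (the header, library, and variable names are assumptions, not verified against real cuQuantum installs):

```cmake
# FindCuQuantum.cmake (sketch): locate the cuStateVec header and library
# under a user-supplied CUQUANTUM_ROOT or the usual system prefixes.
find_path(CUQUANTUM_INCLUDE_DIR custatevec.h
    HINTS ${CUQUANTUM_ROOT} ENV CUQUANTUM_ROOT
    PATH_SUFFIXES include)

find_library(CUQUANTUM_LIBRARY custatevec
    HINTS ${CUQUANTUM_ROOT} ENV CUQUANTUM_ROOT
    PATH_SUFFIXES lib lib64)

include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(CuQuantum
    REQUIRED_VARS CUQUANTUM_INCLUDE_DIR CUQUANTUM_LIBRARY)

if(CuQuantum_FOUND AND NOT TARGET CuQuantum::custatevec)
    add_library(CuQuantum::custatevec UNKNOWN IMPORTED)
    set_target_properties(CuQuantum::custatevec PROPERTIES
        IMPORTED_LOCATION "${CUQUANTUM_LIBRARY}"
        INTERFACE_INCLUDE_DIRECTORIES "${CUQUANTUM_INCLUDE_DIR}")
endif()
```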
For the compiler flags
Good to know! Seems like both Thrust and rocThrust export CMake targets, so should at least be able to make this an explicit dependency. I'll look into it (no point adding it if it breaks builds that actually should work...).
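For reference, declaring Thrust explicitly via its exported config package might look like the sketch below (recent Thrust releases ship `thrust-config.cmake`; whether a given CUDA install exposes it is exactly the open question, and the `QuEST` target name is assumed):

```cmake
# Prefer Thrust's own CMake package when it is visible; otherwise silently
# fall back to the copy bundled with the CUDA toolkit.
find_package(Thrust CONFIG QUIET)
if(Thrust_FOUND)
    thrust_create_target(QuESTThrust)                # header-only interface target
    target_link_libraries(QuEST PRIVATE QuESTThrust)
endif()
```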
I actually looked into it already on the ARCHER2 GPU testbed, and enabling HIP support is super easy with CMake 3.21+ (for context, the latest is 3.30, so we're still well behind the curve). So it shouldn't be an issue, give or take the potential Thrust issues. I can at least get HIP support in and building for now, and then see if it actually runs in the testing/benchmarking phase. Perfect, thanks for the versions, I'll add them in!
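Roughly, the CMake 3.21+ HIP enablement looks like this sketch (the GPU source-list variable is hypothetical):

```cmake
# HIP became a first-class CMake language in 3.21.
cmake_minimum_required(VERSION 3.21)

include(CheckLanguage)
check_language(HIP)
if(CMAKE_HIP_COMPILER)
    enable_language(HIP)
    # Compile the GPU sources as HIP rather than CUDA.
    set_source_files_properties(${QUEST_GPU_SOURCES} PROPERTIES LANGUAGE HIP)
else()
    message(FATAL_ERROR "HIP support requested, but no HIP compiler was found")
endif()
```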
Note: Currently …
Eurgh - so although Thrust has a CMake target, for some reason CUDAToolkit doesn't expose it 🙄. The best option for now to make this an explicit dependency is probably to figure out which version of CUDAToolkit bundled the minimum version of Thrust, and then set that as the minimum version of CUDAToolkit... We could insist on explicitly linking Thrust, but I feel it's not worth getting the user to dig through NVIDIA install directories when the code would actually link fine.
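That version-gating approach might look like the sketch below (11.0 is purely a placeholder for whichever toolkit first bundled the required Thrust, and the `QuEST` target name is assumed):

```cmake
# Gate on the CUDA toolkit version rather than linking Thrust explicitly:
# the toolkit release pins the bundled Thrust version.
find_package(CUDAToolkit 11.0 REQUIRED)   # placeholder minimum; see discussion above
target_link_libraries(QuEST PRIVATE CUDA::cudart)
```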
Hi @TysonRayJones,
Creating a PR to discuss and critique my work on the new build system :) I'll use this as a place to keep notes and thoughts too.
I've been through and identified the following open issues which I'll try to address, as well as my own biases:
Notes: