WG application draft #15
base: master
Conversation
Looks good. That's a lot of questions that they want answered!
In general, I'd suggest keeping the answers as short as possible. We have a public GitHub, Zulip, etc., so everything that we have done or discussed is available for everyone to see. We also have a fairly realistic roadmap, so I think it might be simpler to just stick to it in the answers. We can hyperlink it everywhere, and then in "What are your short-term goals" just copy&paste it, maybe?
I'd also rather avoid any language that could raise the question "Why not SPIR-V?" or similar (e.g. why not a "generic GPGPU language feature" that can target all GPGPU targets, why not something like SYCL or C++AMP, etc.).
Also, if there is a question where you feel that the answer does not add any new information, for example because it's already answered somewhere else, then I think it is better to just skip the question. I have the feeling that they ask the same thing multiple times.
documents/wg-application.md (Outdated)

> ## What value do you want to bring to the project?
>
> The Working Group is an attempt to combine efforts to bring reliable support of writing safe GPGPU CUDA code with Rust.
> The work that will be done is supposed to clear the path for other GPGPU platforms in future.
I think I'd just leave this out. You are right that some work here could be reused by other GPGPU Rust targets, but it feels quite speculative to say anything about that.
Sorry, I don't quite agree here. One of the main, and probably the biggest, topics we will discuss is safety in SIMT code. So far, without digging too much into the details, it seems this will be nearly completely platform-agnostic.
All the compilation warnings and lints we define could be applied to any other "coming next" GPGPU platform.
> One of the main, and probably the biggest, topics we will discuss is safety in SIMT code.
This feels like something that ought to be done by the Unsafe Code Guidelines WG / Lang team, and only tangentially here.
> is the safety in SIMT code. So far, without digging too much into the details, it seems this will be nearly completely platform-agnostic.
I don't know. Previous implementations of SIMT in Rust, like this extension, look quite different from what we will be able to do for the time being in Rust without significantly extending the language. At the same time, this is all different from OpenMP, rayon, ISPC, and the dozens of other approaches to this.
Tackling this problem changes the roadmap of the WG from "getting CUDA support kind-of-working" to "language extensions for SIMT".
While this is something that could be tackled later, I think it is unrealistic to try to tackle any of this at this point in time.
> All the compilation warnings and lints we define could be applied to any other "coming next" GPGPU platform.
Guaranteeing that safe Rust code does not invoke undefined behavior cannot be done with a lint or a warning. Unsound code has to be impossible to write 100% of the time, i.e., rejected by the type system.
I am not saying that these issues are not important. Maybe it would be worth addressing them in a different question, e.g., about which kind of support we need from other teams, or with which other teams we expect to collaborate. We could add there that we expect to collaborate with the UCG WG and the Lang team to make sure that the SIMT programming model exposed by CUDA kernels remains sound, or something like that.
> Tackling this problem changes the roadmap of the WG from "getting CUDA support kind-of-working" to "language extensions for SIMT".
Okay, thanks, I see the point!
I forgot to say: thank you for doing this! I never thought it was going to be this much work!
Co-Authored-By: gnzlbg <[email protected]>
@gnzlbg thanks for reviewing and for the valuable suggestions! I'm going to address the remaining comments in the coming days.
I think we are ready to go with the application, except for the decision about leadership. Pinging WG members for the final discussion. Any last thoughts or suggestions? @bheisler @gnzlbg @termoshtt @vadixidav @Dylan-DPC @DiamondLovesYou @peterhj @AndrewGaspar
So I talked with @skade from the community team, and they clarified that this application process is for "domain" working groups (e.g. game development, etc.). In particular, something like "allocators" isn't really the kind of domain that they have in mind, which is why WG-Allocator ended up just being part of the libs team. Thinking about domains, the domain that we would belong to would probably be "heterogeneous computing" or something like that, and such a domain WG would attack the problem as a whole, as opposed to just focusing on CUDA. For this reason, it probably makes no sense for us to apply as a domain WG, but rather, e.g., as a t-compiler/wg-cuda team or something like that. Sadly, this wasn't really clear to any of us from the announcement on internals. I've asked the compiler team if they would take us as a sub-team.
I would like to add that I would still send this document to the relevant groups for inspection; it helps them a lot in assessing what you want, and they can come back to it every time they forget :).
Uh-oh! On the bright side, now we have one more document about our mission. @gnzlbg let's merge the PR then?
WebAssembly is a domain WG, as it also has an ample amount of work in building the wider ecosystem. The whole thing isn't precisely defined...
An early draft of the CUDA WG application. Source: internals.
Nothing written here is carved in stone; we can discuss alternative goals or wording. I would prefer that we are on the same page about the things mentioned here.
If anybody has experience leading or participating in WGs, I would suggest that person as a leader. Otherwise, I'd be happy to give leadership a try, but I've never participated in a WG. Does anybody else want to put forward a candidacy? By the way, as mentioned in the application form, we'd preferably have several leaders.
Also, it's not yet complete, so please let me know if you have suggestions about missing parts.