Issues with single threaded SPIR-V test #735

Closed
reeselevine opened this issue Sep 11, 2024 · 2 comments

@reeselevine

I'm investigating some properties of the outcomes of data races in GPU memory models, and I'm interested in seeing what the Vulkan models currently do. Starting with some simple tests, I'm seeing outcomes that seem buggy. I recently opened an issue against the Vulkan Alloy model about this: KhronosGroup/Vulkan-MemoryModel#39

It looks like I'm seeing some of the same behaviors when using Dartagnan. For example, given the following test:

```
Vulkan test
{
P0:r0=0;
x=0;
}
 P0@sg 0, wg 0, qf 0 ;
 st.av.dv.sc0 x, 1 ;
 ld.vis.dv.sc0 r0, x ;
filter
(P0:r0 == 0)
```

Running this against the default checks (program_spec, cat_spec, liveness), Dartagnan reports PASS. However, it seems wrong that r0 should be able to read 0, since x is set to 1 earlier in the same thread.

I realize these are not typical litmus tests, in that they only consider one thread, but I wanted to get some thoughts on these behaviors, as they affect how I can model multi-threaded tests with data races. Or, if I'm doing something wrong in specifying the test, I would appreciate some guidance!

@hernanponcedeleon
Owner

Any particular reason why you are using filter rather than exists?

The former tells Dartagnan to check the property only in those executions that satisfy the filtering condition. Since (as you said) no execution can satisfy this condition, the program is trivially correct.

If you use the latter, Dartagnan will report FAIL/No. Keep in mind that PASS/FAIL can be counterintuitive depending on whether the condition is (~)exists/forall. For litmus tests it is easier to check the No/Ok output (similar to herd7).
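
For reference, a minimal sketch of the same test rewritten with exists instead of filter; the only change is the keyword before the final condition, and the exact syntax is assumed to mirror the filter form shown above:

```
Vulkan test
{
P0:r0=0;
x=0;
}
 P0@sg 0, wg 0, qf 0 ;
 st.av.dv.sc0 x, 1 ;
 ld.vis.dv.sc0 r0, x ;
exists
(P0:r0 == 0)
```

In this form Dartagnan checks whether any execution can satisfy P0:r0 == 0; since none can, it reports FAIL/No, which is the expected outcome for this test.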

@reeselevine
Author

Ah, I was just pattern matching from the mp-filter test. Using exists, it fails; thanks for explaining the difference!
