Add HLSLcc flag to let users disable bit-cast temp registers #23
I have a strong preference for fixing the current code rather than adding this flag. The old method of managing temp variables was not correct, even though there are shaders that will never hit the problematic cases. The current code is new, and it would not surprise me if some instructions don't have the correct casts applied. If you don't mind sharing the bad GLSL shader, I can take a look. Otherwise you could try to spot the problem in the shader yourself, and I could eyeball the recent compiler changes for anything missing or suspicious.
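For readers following along, here is a minimal sketch of the two temp-register strategies under discussion; the register names and instructions are illustrative, not HLSLcc's exact output:

```glsl
#version 150
#extension GL_ARB_shader_bit_encoding : require
out vec4 fragColor;

void main() {
    // Old strategy: a separate temp array per type, so no bit-casts are
    // needed, e.g.  vec4 Temp[1];  ivec4 Temp_int[1];
    // New strategy: a single typed temp array, where each instruction
    // bit-casts components to whatever type it needs:
    ivec4 Temp_int[1];
    Temp_int[0].x = floatBitsToInt(0.25 + 1.0);       // float add stored into an int register
    Temp_int[0].y = Temp_int[0].x & 0x7FFFFFFF;       // integer op stays in the int domain
    fragColor = vec4(intBitsToFloat(Temp_int[0].x));  // read the result back as float
}
```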
Here are example GLSL shaders generated before and after bit-cast temp registers were implemented in HLSLcc:
BEFORE (correct output):
AFTER (corrupted output):
I am thinking that the problem is that HLSLcc is casting the 0xFFFFFFFF uint to an int on this line, whereas it should really be casting to uint in this case (and was doing that correctly before):
" Temp_int[0].y = floatBitsToInt(((intBitsToFloat(Temp_int[0]).y)< (intBitsToFloat(Temp_int[0]).z)) ? int(0xFFFFFFFF) : 0);\n"
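For comparison, a sketch of what a corrected emission could look like, assuming the all-ones mask stays in the uint domain (as the pre-change output did) before it lands in the int register; this is my guess, not HLSLcc's actual fix:

```glsl
// Hypothetical corrected form: 0xFFFFFFFF is kept as a uint literal so it
// remains a valid constant, and the uint-to-int conversion preserves the
// bit pattern when it is stored into the int-typed temp register.
Temp_int[0].y = int(((intBitsToFloat(Temp_int[0]).y) < (intBitsToFloat(Temp_int[0]).z)) ? 0xFFFFFFFFu : 0u);
```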
That is a bad instruction. When the comparison passes it would evaluate to
Using the latest version from main, the bit-cast register change is generating invalid GLSL on another shader. Before bit-cast registers were added, the GLSL for this particular shader contained this instruction:
And now, HLSLcc is generating this:
I am now getting these 2 GLSL compiler errors on this line:
Error: Failed to compile GLSL shader
I think it is invalid because "g_c0_12" and "g_c0_13" are scalars. (They are floats stored in a UBO.)
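Not knowing the full error text, here is a guess at the failure mode as a sketch; only the member names g_c0_12 and g_c0_13 come from the report, the block layout and usage are assumed:

```glsl
#version 150
#extension GL_ARB_shader_bit_encoding : require
// Assumed layout: the two constants are scalar floats in a UBO.
uniform cb0 {
    float g_c0_12;
    float g_c0_13;
};
out vec4 fragColor;

void main() {
    // Scalars cannot be swizzled, so vector-style access generated for a
    // vec4 register (e.g. "g_c0_12.x", or bit-casting and then swizzling)
    // fails to compile. Using them as the scalars they are works fine:
    float a = intBitsToFloat(floatBitsToInt(g_c0_12));
    fragColor = vec4(a, g_c0_13, 0.0, 1.0);
}
```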
Has anyone been looking into this? I'm having issues using this to target WebGL at the moment, since there is no GL_ARB_shader_bit_encoding to work with. For targeting GLSL ES 1.00 I'm using the 4_0_level_9_3 targets; do the same SM4+ typeless register rules apply here, or should level_9_x perhaps be handled differently when casting int to float? There also seem to be a few other oddities with it:
For the time being I may just drop back to the revision prior to this change, and fix up anything else by carefully writing the source HLSL to avoid the other issues I've spotted, which are understandable given the input HLSL bytecode (bit-shifting instead of multiplying, for example, which is unsupported in GLSL ES 1.00); see the sketch below.
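To illustrate the kind of hand-rewrite I mean (my own sketch, not HLSLcc output; the uniform name is made up): an ES 1.00 fragment shader that stays entirely in the float domain, with the shift expressed as a multiply:

```glsl
// GLSL ES 1.00: no bitwise shifts and no GL_ARB_shader_bit_encoding,
// so everything is kept as float.
precision mediump float;
uniform float u_index; // hypothetical value the SM4 bytecode would shift

void main() {
    // "u_index << 2" rewritten as a multiply by 4.0.
    float scaled = u_index * 4.0;
    gl_FragColor = vec4(scaled / 255.0, 0.0, 0.0, 1.0);
}
```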
This commit has introduced corruption in multiple GLSL 150 fragment shaders that I have been generating with HLSLcc:
"Bit-cast temp registers - issues #8, #20 and #21"
fa593f4
The previous HLSLcc versions worked fine for me, using SM5 pixel shaders as input and "HLSLcc.exe -lang=150 -flags=1 ...".
Would it be possible for you to add a new HLSLcc flag to let users disable the bit-cast temp register strategy for specific shaders (reverting to the old strategy that uses a separate array of registers for each type)?
This would also have the advantage of removing the dependency on GL_ARB_shader_bit_encoding, and would hopefully fix the regression on my end.