Normalise angle fix #103
Conversation
Thanks for this iteration on the code (which, IIRC, I put in)... The diff itself looks fine to me, but I have a few questions about the intention of this code change:
The above are my concerns, but they aren't a change request, as this is not my repo :)
I didn't actually test that. Might be worth trying though; it would be simple enough to write something that calls normalise_angle in a loop and time it.
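For illustration, a hypothetical timing harness along the lines described (the `normalise_angle(double, double)` signature and the random-input range are assumptions, not part of this PR):

```cpp
// Hypothetical timing harness: call normalise_angle in a loop and time it.
#include <chrono>
#include <cstdio>
#include <random>

double normalise_angle(double a, double minimum);  // assumed to be provided elsewhere

int main() {
    std::mt19937_64 rng(42);
    std::uniform_real_distribution<double> dist(-720.0, 720.0);

    constexpr int n = 10'000'000;
    double sink = 0.0;  // accumulate results so the calls cannot be optimised away

    const auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < n; ++i) {
        sink += normalise_angle(dist(rng), 0.0);
    }
    const auto t1 = std::chrono::steady_clock::now();

    const std::chrono::duration<double, std::nano> per_call = (t1 - t0) / n;
    std::printf("avg %.1f ns per call (sink=%g)\n", per_call.count(), sink);
    return 0;
}
```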
The magnitude of the input angle is a limiting factor for having useful precision here, but I expect this kind of precision loss would have happened with the previous implementation too, since a little precision would be lost with each addition or subtraction of 360. In the example I gave (an input of 1e36), there is too much precision loss for the output to be usefully precise. However, with an input of, say, 1e14, there is enough precision remaining to be potentially useful, and the time to run the loop is already very large at that point.
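To make the precision argument concrete: the gap between adjacent doubles grows with magnitude, and once that gap exceeds 360 the angle information is lost before normalisation even starts. A standard-library-only sketch, using the 1e14 and 1e36 examples from the comment above:

```cpp
// Spacing (ULP) of double at different magnitudes: once the gap between
// adjacent representable values exceeds 360, the angle is already lost.
#include <cmath>
#include <cstdio>
#include <limits>

int main() {
    for (double x : {360.0, 1e14, 1e36}) {
        const double gap = std::nextafter(x, std::numeric_limits<double>::infinity()) - x;
        std::printf("x = %-6g  gap to next double = %g\n", x, gap);
    }
    return 0;
}
```

Near 1e14 the spacing is about 0.016 degrees (still potentially useful); near 1e36 it is about 1.5e20, far larger than a full revolution.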
I just wrote a small test that calls normalise_angle 10,000,000 times with random inputs between -720 and 720. Both implementations average about 1.68-1.73 ms per call. I'd have to do some more careful testing to know for sure if there is a performance change for "small" inputs, but it looks like the impact is on the order of 3% or less.
I considered adding something like this, but it would have required making an arbitrary decision on how large an angle should be allowed to be before the exception is triggered, and that arbitrary threshold would then be imposed on client code. Not checking this effectively leaves the decision of how much precision loss is acceptable up to the calling code, which IMO is the more appropriate place, since the calling code has visibility into how the output will be used.
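For concreteness, a hypothetical version of the check that was considered and rejected (the function name and the threshold value are invented for illustration; the point of the comment above is precisely that any such threshold is arbitrary):

```cpp
// Hypothetical guard of the kind considered and rejected: any threshold
// chosen here would be an arbitrary limit imposed on client code.
#include <cmath>
#include <stdexcept>

double normalise_angle_checked(double a, double minimum,
                               double max_magnitude = 1e9) {  // arbitrary cut-off
    if (std::abs(a) > max_magnitude) {
        throw std::domain_error("normalise_angle: input too large for useful precision");
    }
    double r = std::fmod(a - minimum, 360.0);
    if (r < 0.0) {
        r += 360.0;
    }
    return r + minimum;
}
```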
Thank you for confirming this!
I agree with everything you've written. Producing a low-precision output (as in this PR) for "large" inputs is definitely an improvement over hanging (as in develop), so this PR sounds like a good improvement to me. We'll see what the ECMWF team prefers regarding precision-loss vs erroring...
Not saying you should, but a hybrid solution could also be created?
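A sketch of what such a hybrid might look like (names and the cut-off are invented for illustration, not actual eckit code):

```cpp
// Hypothetical hybrid: use the original +/-360 loop for inputs near the
// target interval, and fmod only for far-away inputs.
#include <cmath>

double normalise_angle_hybrid(double a, double minimum) {
    if (std::abs(a - minimum) < 4 * 360.0) {
        // Typical inputs: at most a few exact +/-360 steps.
        while (a < minimum)          { a += 360.0; }
        while (a >= minimum + 360.0) { a -= 360.0; }
        return a;
    }
    // Large inputs: a single fmod avoids a potentially enormous loop.
    double r = std::fmod(a - minimum, 360.0);
    if (r < 0.0) {
        r += 360.0;
    }
    return r + minimum;
}
```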
@wdeconinck, implementing a switch between the two methods as you describe would be straightforward to do, but I'm not sure there's a benefit to keeping both since the modulo method produces identical output in comparable or faster time for every case I've tested.
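One way to probe the "identical output" claim is a bit-for-bit comparison over random inputs; a self-contained sketch, with invented helper names standing in for the old and new implementations:

```cpp
// Compare loop-based and fmod-based normalisation bit for bit over random
// inputs (helper names are invented for this sketch).
#include <cmath>
#include <cstdio>
#include <cstring>
#include <random>

double normalise_loop(double a) {
    while (a < 0.0)    { a += 360.0; }
    while (a >= 360.0) { a -= 360.0; }
    return a;
}

double normalise_fmod(double a) {
    const double r = std::fmod(a, 360.0);
    return r < 0.0 ? r + 360.0 : r;
}

int main() {
    std::mt19937_64 rng(1);
    std::uniform_real_distribution<double> dist(-7200.0, 7200.0);
    int mismatches = 0;
    for (int i = 0; i < 1'000'000; ++i) {
        const double a = dist(rng);
        const double x = normalise_loop(a);
        const double y = normalise_fmod(a);
        if (std::memcmp(&x, &y, sizeof x) != 0) {  // exact bit comparison
            ++mismatches;
        }
    }
    std::printf("%d mismatches\n", mismatches);
    return 0;
}
```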
@jhaiduce if the output is bit identical, then I am OK with this change.
@wdeconinck when very large numbers are passed to the function, the output will have fewer significant figures than the input value. This is true for both methods. If the input is around 3,600 degrees, for example, the output will have one fewer significant digit than the input for both methods, but the new method will return faster. If the input is 36,000 degrees, the output will have two fewer significant digits than the input did; this is again true for both methods, but the new method gives a result roughly 100x faster. This is limited by the floating-point precision of the `double` type.
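The digit counting above can be checked directly by comparing the spacing of doubles at the input magnitude against the spacing available near 360; a standard-library-only sketch (the specific inputs are chosen to mirror the examples in the comment):

```cpp
// Decimal digits lost when reducing a large angle into [0, 360): the result
// inherits the absolute spacing of doubles at the *input* magnitude.
#include <cmath>
#include <cstdio>

int main() {
    const double base = std::nextafter(360.0, 720.0) - 360.0;  // spacing near 360
    for (double a : {3600.0, 36000.0, 1e14}) {
        const double gap = std::nextafter(a, 2.0 * a) - a;     // spacing at the input
        std::printf("input %g: ~%.1f decimal digits lost\n", a, std::log10(gap / base));
    }
    return 0;
}
```

This prints roughly one digit lost at 3,600 and roughly two at 36,000, matching the claim above.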
Hi @jhaiduce, @wdeconinck, maybe there is, in the future, room to pursue a solution removing integer multiples of 360 so as not to have any loss of precision whatsoever (as argued), but the solution found is quite elegant already. I'm of the opinion that the performance motivation isn't the best, though, as values wildly different from the typical ones are probably symptomatic of other (likely bigger) issues -- but that doesn't detract from this being an improvement. The only final thing I would add is to run clang-format; my eyes are now quite attuned to that :-)
I suspect that in the extreme cases (where the last significant digit is already larger than 360), the only way to improve the precision is to change the type of the input arguments.
Agreed; if one passes in values much larger than 360, it probably reflects the function being used for something other than its intended purpose, perhaps passing in other coordinate types where an angle was expected. (Incidentally, the way I noticed that an improvement could be made here was by accidentally passing in uninitialized data.)
Just ran clang-format and committed the change.
Thanks @jhaiduce for your contribution! Looks perfect now.
Co-authored-by: John Haiducek <[email protected]>
Co-authored-by: Willem Deconinck <[email protected]>
My statement that they were identical was based on passing the tests built into eckit, but those only covered a few cases. Apparently the downstream tests are more thorough.
This modifies `eckit::geometry::normalise_angle` so that it uses a modulo operation to find the correct output value, rather than repeatedly adding or subtracting 360 from the input. This makes the function orders of magnitude faster for very large input values.
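A minimal sketch of the modulo approach (not the exact eckit source; the signature and the wrap-into-`[minimum, minimum + 360)` convention are assumptions):

```cpp
// Minimal sketch of modulo-based normalisation into [minimum, minimum + 360);
// not the actual eckit implementation.
#include <cmath>

double normalise_angle(double a, double minimum) {
    double r = std::fmod(a - minimum, 360.0);  // exact remainder in (-360, 360)
    if (r < 0.0) {
        r += 360.0;  // shift negative remainders into [0, 360)
    }
    if (r >= 360.0) {
        r = 0.0;     // guard: r + 360 can round up to exactly 360 for tiny negative r
    }
    return r + minimum;
}
```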
Also added a few additional checks to test_coordinate_helpers to verify that `normalise_angle` works correctly on negative inputs and very large (positive or negative) inputs. Fixes #102.