Is your feature request related to a problem? Please describe.
Our application spends a lot of time applying the 'nonlinear' adjustment to the interpolation weights during interpolation. I believe this presents a significant opportunity for a performance improvement.
Describe the solution you'd like
I imagine we could speed this up by using an approach similar to that of other systems: cache the weights with the land mask already applied at every level. The downside is that the memory footprint of the weights grows by a factor of the number of levels. The upside is that we could then apply the weights with standard sparse matrix multiplication, rather than re-adjusting them for every field and every level on each application, as currently happens in atlas/src/atlas/interpolation/method/Method.cc (line 125 at 5dc85c9):

// We cannot apply the same matrix to full columns as e.g. missing values could be present in only certain parts.
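The idea could be sketched roughly as follows, assuming a simple CSR layout; `CsrMatrix`, `LevelCache`, and `interpolate` are hypothetical names for illustration, not Atlas APIs. A land-masked matrix is built once per level, and interpolation then reduces to a plain sparse matrix-vector product per field and level:

```cpp
#include <cstddef>
#include <vector>

// Minimal CSR sparse matrix, one per vertical level, with the land/sea
// mask already folded into the weights (hypothetical sketch, not the
// actual Atlas data structures).
struct CsrMatrix {
    std::vector<std::size_t> row_ptr;   // size n_rows + 1
    std::vector<std::size_t> col_idx;   // column index per nonzero
    std::vector<double> weight;         // masked weight per nonzero

    // Standard sparse matrix-vector product: no per-application
    // re-adjustment of the weights is needed.
    std::vector<double> apply(const std::vector<double>& src) const {
        std::vector<double> tgt(row_ptr.size() - 1, 0.0);
        for (std::size_t r = 0; r + 1 < row_ptr.size(); ++r)
            for (std::size_t k = row_ptr[r]; k < row_ptr[r + 1]; ++k)
                tgt[r] += weight[k] * src[col_idx[k]];
        return tgt;
    }
};

// Cache one masked matrix per level; built once, reused for every field.
struct LevelCache {
    std::vector<CsrMatrix> per_level;
};

// Apply the cached per-level weights to one multi-level field: the inner
// loop is a plain SpMV with precomputed weights.
std::vector<std::vector<double>>
interpolate(const LevelCache& cache,
            const std::vector<std::vector<double>>& src_levels) {
    std::vector<std::vector<double>> out;
    for (std::size_t l = 0; l < src_levels.size(); ++l)
        out.push_back(cache.per_level[l].apply(src_levels[l]));
    return out;
}
```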
I think this would be equivalent to adding 4 extra fields to a fieldset in terms of memory cost, but would yield an O(n_levels * n_fields) speed-up in applying the interpolation to multi-field, multi-level fieldsets.
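To make the memory estimate concrete (a back-of-the-envelope sketch, assuming a 4-point stencil such as bilinear interpolation; the sizes are purely illustrative): the cache holds one weight per stencil point, per target point, per level, while a single 3D field holds one value per target point per level, so the overhead is roughly `stencil` extra fields' worth of data:

```cpp
#include <cstddef>

// Rough memory estimate for caching land-masked weights at every level.
// Cached nonzeros: n_target * stencil * n_levels.
// One 3D field:    n_target * n_levels values.
// Ratio: the cache costs about `stencil` extra fields of memory.
std::size_t extra_fields_equivalent(std::size_t n_target,
                                    std::size_t n_levels,
                                    std::size_t stencil) {
    const std::size_t cached_nonzeros = n_target * stencil * n_levels;
    return cached_nonzeros / (n_target * n_levels);
}
```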
Describe alternatives you've considered
There are other potential improvements to interpolation with missing values, such as reducing the number of copies. One suggestion in a code comment is to copy only on write. I think my suggestion would be considerably more performant than anything else I can think of, but I am by all means open to other suggestions.
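For reference, the copy-on-write alternative could look something like this generic sketch (`CowField` is a hypothetical illustration, not Atlas's Field API): reads share the underlying storage, and the first write to a shared buffer makes a private copy, so untouched fields are never copied at all:

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// Copy-on-write field buffer (illustrative only): copies of a CowField
// share storage until one of them is written to.
class CowField {
    std::shared_ptr<std::vector<double>> data_;
public:
    explicit CowField(std::vector<double> v)
        : data_(std::make_shared<std::vector<double>>(std::move(v))) {}

    double read(std::size_t i) const { return (*data_)[i]; }

    void write(std::size_t i, double v) {
        if (data_.use_count() > 1)   // storage is shared: detach first
            data_ = std::make_shared<std::vector<double>>(*data_);
        (*data_)[i] = v;
    }

    bool shares_storage_with(const CowField& o) const {
        return data_ == o.data_;
    }
};
```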
Additional context
No response
Organisation
Met Office