It seems that every Python tool that computes Voronoi indices must take the full set of data points and compute the Voronoi cells for all of them at once. For more than roughly 250,000 points, this causes a memory error.
A way to circumvent this issue is to compute Voronoi indices for only a subset of the points. I am wondering whether this option could be added.
What I have tried:
I tried to compute the Voronoi index of a single point out of the full data set. Given how pyvoro currently works, I have to find the neighboring points of this single point, combine those neighbors with the point itself into a small point set, and feed that set into pyvoro to compute the Voronoi cells for all of them. I then keep only the Voronoi cell of the single point, as sketched below.
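A minimal sketch of that attempt (the cutoff, box size, and function name are illustrative, and `scipy.spatial.cKDTree` stands in for whatever spherical neighbor search is used):

```python
import numpy as np
import pyvoro
from scipy.spatial import cKDTree

def cell_of_single_point(points, index, cutoff, box_length):
    """Attempted workaround: build a spherical neighborhood around
    points[index], run pyvoro on that subset, and keep one cell."""
    tree = cKDTree(points)
    # spherical neighbor search -- this selection shape is what fails
    neighbor_ids = tree.query_ball_point(points[index], r=cutoff)
    subset = points[neighbor_ids]
    cells = pyvoro.compute_voronoi(
        subset.tolist(),
        [[0.0, box_length]] * 3,  # domain limits per axis
        2.0,                      # voro++ block length scale
    )
    # keep only the cell belonging to the original point
    return cells[neighbor_ids.index(index)]
```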
However, I get the following errors:
```
missing cells X X
Error sending result: 'VoronoiPlusPlusError('number of cells found was not equal to the number of particles.',)'. Reason: 'PicklingError("Can't pickle <class 'pyvoro.voroplusplus.VoronoiPlusPlusError'>: it's not found as pyvoro.voroplusplus.VoronoiPlusPlusError",)'
```
I assumed this error was caused by the spherical shape of the selected neighborhood, and that turned out to be the true cause: choosing the points surrounding the point of interest with a regular nearest-neighbor search over a sphere triggers the "missing cells" error quoted above.
In my package, I have implemented the following:
For each point of interest, find its neighbors within a cube surrounding the point, taking the periodic boundary conditions into account. Then compute the Voronoi cells of this small sample and output only the Voronoi index of the central point. This approach not only avoids the memory error on large samples but also speeds up the Voronoi calculation substantially, since only a subset of the data points is needed to get correct Voronoi results for the point of interest.
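A minimal sketch of this cube-based selection, assuming a cubic periodic box of edge `box` and the common (n3, n4, n5, n6) Voronoi-index convention (the function name, `half_width` parameter, and block size are illustrative, not my package's exact API):

```python
import numpy as np
import pyvoro

def voronoi_index_of_point(points, index, half_width, box):
    """Cube-based approach: select neighbors inside an axis-aligned cube
    around points[index] under periodic boundary conditions, run pyvoro
    on that small subset, and return only the central point's Voronoi
    index. half_width (half the cube edge) must be large enough that
    real neighbors fully enclose the central cell."""
    # minimum-image displacement of every point from the center
    delta = points - points[index]
    delta -= box * np.round(delta / box)
    # cubic (per-axis) cut instead of a spherical one
    mask = np.all(np.abs(delta) <= half_width, axis=1)
    subset = delta[mask] + half_width            # shift into [0, 2*half_width]^3
    local = int(np.count_nonzero(mask[:index]))  # central point's row in subset
    cells = pyvoro.compute_voronoi(
        subset.tolist(),
        [[0.0, 2.0 * half_width]] * 3,  # local box enclosing the cube
        2.0,                            # voro++ block length scale
    )
    # Voronoi index: number of faces with 3, 4, 5 and 6 edges
    edge_counts = [len(face['vertices']) for face in cells[local]['faces']]
    return tuple(edge_counts.count(n) for n in (3, 4, 5, 6))
```

Looping this over only the points of interest keeps memory bounded by the neighborhood size rather than by the full sample.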
This successfully circumvents the intrinsic limitation of pyvoro and tess by avoiding computing Voronoi cells for the whole sample.