We've got several reports of severely increased memory usage, where the node's RAM consumption slowly grows to about 30 GB, at which point it is killed and restarted.
While investigating this, we were able to diagnose one aspect of the increased memory usage, caused by inefficiencies in our networking code. That particular issue manifested as the node consuming up to 8 GB of memory shortly after startup, while trying to sync. We've fixed this networking issue, which should help a bit, but we weren't able to reproduce the originally reported bug where the node slowly grows its memory usage to dozens of gigabytes. So far we've had no luck reproducing it: all the nodes we are observing sit at about 10 GB.
So, if you have any information which would help us pinpoint why the node sometimes uses far more than 10 GB, please post it here. Logs would be especially useful.
If you can reproduce the problem, the absolute best thing to do would be to capture a flamegraph of live allocations using the bytehound memory profiler.
To capture that:
- build bytehound from source, following the instructions in its README
- set MEMORY_PROFILER_CULL_TEMPORARY_ALLOCATIONS=1
- run neard with LD_PRELOAD=./path/to/libbytehound.so
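Roughly, that would look like the following (a minimal sketch; the path to libbytehound.so is a placeholder, and the exact neard invocation depends on how you normally start your node):

    # assumes bytehound was already built per its README and produced libbytehound.so
    export MEMORY_PROFILER_CULL_TEMPORARY_ALLOCATIONS=1
    LD_PRELOAD=./path/to/libbytehound.so neard run

The profiler should write its dump (a memory-profiling_*.dat file, if I recall the bytehound README correctly) into the working directory; attaching that file here, or the flamegraph rendered from it in the bytehound viewer, would be ideal.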