Investigating increased memory usage #134

Open

matklad opened this issue Aug 29, 2022 · 0 comments

matklad commented Aug 29, 2022

We've got several reports of severely increased memory usage, where the node's RAM consumption slowly grows to about 30 GB, after which the node is killed and restarted.

While investigating this issue, we were able to diagnose one aspect of the increased memory usage: inefficiencies in our networking code. That particular issue manifested itself as the node consuming up to 8 GB of memory shortly after startup, while trying to sync. We've fixed this networking issue, which should help a bit, but we haven't been able to reproduce the originally reported bug where the node slowly grows its memory usage to dozens of gigabytes: all the nodes we are observing sit at about 10 GB.

So, if you have any information which would help us pinpoint why the node sometimes uses far more than 10 GB, please post it here. Logs would be especially useful.

If you can reproduce the problem, the absolute best thing to do would be to capture a flamegraph of live allocations using the bytehound memory profiler.

To capture that:

  • build bytehound from source, using the instructions from its readme
  • set MEMORY_PROFILER_CULL_TEMPORARY_ALLOCATIONS=1
  • run neard with LD_PRELOAD=./path/to/libbytehound.so (see the sketch after this list)
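
Put together, a session might look roughly like the sketch below. This is an illustration, not a verified recipe: the exact build steps come from the bytehound readme, and the library path, neard invocation, and --home directory are placeholders you'd adjust for your own setup.

```sh
# Build bytehound from source; see its readme for prerequisites
# (the exact cargo invocation there may differ from this sketch).
git clone https://github.com/koute/bytehound.git
cd bytehound
cargo build --release

# Drop short-lived allocations from the profile to keep it manageable.
export MEMORY_PROFILER_CULL_TEMPORARY_ALLOCATIONS=1

# Run the node with the profiler preloaded; adjust the library path and
# the node home directory (~/.near is just a placeholder) to your setup.
LD_PRELOAD=/path/to/bytehound/target/release/libbytehound.so neard --home ~/.near run
```

If this works as in the readme, bytehound should write a memory-profiling_*.dat file in the working directory, which its CLI/web UI can then open to show the live-allocations flamegraph; attaching that file or the flamegraph here is exactly what we're after.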