ovh1.janitor.technology took too long to respond #342

Open
jankeromnes opened this issue Jun 18, 2018 · 1 comment

Comments

@jankeromnes
Member

These days, our Docker server OVH1 frequently becomes unresponsive for 1-2 hours at a time (all containers unusable, proxy not working, no SSH access to troubleshoot, no metrics reported to Datadog).

We suspect the cause is very high RAM usage, which makes the system swap heavily (and swapping is extremely slow on a rotational disk).
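
Next time the server slows down (assuming we can still get a shell at all), we could confirm the swap-thrash hypothesis with standard Linux tools; a minimal sketch:

  # How much swap is in use, and how much RAM is actually free?
  free -h

  # Watch the swap-in (si) and swap-out (so) columns; sustained non-zero
  # values every interval mean the box is actively thrashing.
  vmstat 5

  # Rough list of the biggest swap users (per-process VmSwap from /proc).
  grep VmSwap /proc/*/status 2>/dev/null | sort -k2 -n -r | head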

The mitigations we've used before are:

  • wait for the server to finish its RAM-heavy workload (e.g. a Chrome build) and for responsiveness to come back (this can take a few hours)
  • just reboot the server

Looking for better solutions to this problem, I found https://superuser.com/a/1142197, which suggests disabling swap (or shrinking it so the OOM killer triggers sooner) and/or preventing some critical processes from swapping.

Maybe we could try this:

  • disable swap (since we have a rotational disk, swap is too slow for our needs)
  • prevent node, docker and ssh from swapping, to guarantee their continued responsiveness (this only works well if they don't use much memory to begin with; see the sketch after this list)
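
Concretely, that could look something like the shell sketch below (the fstab pattern, the service names, and the systemd MemorySwapMax= option are assumptions; the latter needs the unified cgroup hierarchy, so adjust for whatever the host actually runs):

  # Turn off swap immediately (lasts until reboot); with no swap, the
  # OOM killer steps in instead of the machine grinding to a halt.
  sudo swapoff -a

  # Make it permanent by commenting out the swap line in /etc/fstab
  # (GNU sed; keeps a .bak backup of the original file).
  sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab

  # Keep a critical service out of swap via a systemd drop-in; repeat for
  # ssh and for whatever unit runs our node process.
  sudo systemctl edit docker.service
  #   [Service]
  #   MemorySwapMax=0

If we would rather keep a small safety margin, a much smaller swap partition plus a low vm.swappiness would still let the OOM killer fire sooner without giving up swap entirely.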

@ishitatsuyuki @beaufortfrancois @etiennewan do you agree with my diagnosis? Would trying the ideas above really help? Do you have other ideas to guarantee consistent responsiveness of our service?

@jankeromnes
Member Author

From IRC:

15:42:25 ishitatsuyuki> janx: disabling swap is a good way, although SSD doesn't help much
15:42:57 ishitatsuyuki> In our case, we probably can just remove that trivial 1GB and disable overcommit
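
In sysctl terms, that suggestion would look roughly like the sketch below (check swapon --show first to confirm the 1 GB figure; the overcommit_ratio value is an assumption):

  # Drop the ~1 GB swap entirely.
  sudo swapoff -a

  # Disable overcommit: with strict accounting, allocations fail up front
  # instead of the box thrashing or the OOM killer firing later.
  sudo sysctl vm.overcommit_memory=2
  sudo sysctl vm.overcommit_ratio=100   # commit limit = 100% of RAM (no swap left)

  # Persist both settings in /etc/sysctl.conf (or a file under
  # /etc/sysctl.d/) so they survive a reboot.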
