How to constrain client connections? #15
Hello, @eprothro.

First of all, while I greatly value theoretical optimizations, they are no replacement for experimental evidence. Have you tried benchmarking a Unicorn/NGINX/Heroku stack with various permutations of the backlogs? If so, care to share your results? It has been a while since I did this type of experimentation, and the last time I did, I neglected to publish my results. Perhaps we can collaborate on such a project.

Secondly, what is the problem that you are trying to solve? Do you have latency problems? Are you trying to trim down your 99th-percentile latency? Do you have connection timeout issues? Is the Heroku router returning H12s?

Finally, let me take a stab at addressing some of your direct questions:
Correct. Since NGINX is in reverse-proxy mode, with worker_connections set to 1024 and 4 worker processes, the maximum number of clients we can handle is 1024.
Absolutely. This is one of the great advantages of using NGINX in the way we have it configured. If you are working with slow clients, NGINX will use one of its worker connections to handle the byte transfer. This transfer happens before the request occupies a slot in the Unicorn backlog (i.e., the listen backlog on the socket).
From the docs, it appears that an HTTP 503 response is issued. I would like to test this locally to make sure.
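For concreteness, here is a rough sketch of the kind of nginx configuration described above (4 worker processes, 1024 worker_connections, proxying to Unicorn over a local Unix socket). The paths and values are illustrative, not necessarily the buildpack's exact config:

```nginx
worker_processes 4;

events {
  # Per-worker cap on simultaneous connections; note this counts
  # upstream (proxied) connections as well as client connections.
  worker_connections 1024;
}

http {
  upstream app_server {
    # Unicorn listening on a local Unix socket behind nginx.
    server unix:/tmp/nginx.socket fail_timeout=0;
  }

  server {
    listen 8080;

    location / {
      # nginx absorbs the byte transfer from slow clients here, so a
      # request only reaches Unicorn's listen backlog once it is ready.
      proxy_pass http://app_server;
    }
  }
}
```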
Hi @ryandotsmith, thanks very much for your response. My sincere apologies for just now getting back to you. I had to revert to not using NGINX and move on from this problem, and then I also just recently had a new kiddo. However, I at least owe you a response.

To answer your question directly: no, I didn't have time to benchmark tweaking the backlog with NGINX in the stack. I had already spent a few days (that I didn't have to begin with) benchmarking potential correlations between client bandwidth and request queueing. I'd be more than happy to collaborate with you as time allows, but I also know that we're both very busy ;).

Ultimately, the problem I'm trying to solve is 'zombie' dynos (100% of requests sent to a dyno result in an H12 for anywhere from a few seconds to a few minutes). I've inherited a couple of different poorly performing applications over the years that suffered from this problem. Recently, for a slew of reasons, I started to wonder if client bandwidth could be a factor. I hesitate to mention this as the problem I'm trying to solve because the implications, and the discussion/troubleshooting around them, are so application-specific and well outside the scope of this issue. Moreover, the 'correct' answer here is 'make the app's slow actions faster'; however, accomplishing that in a timely manner isn't necessarily possible, for a lot of reasons not worth going into.

However, I don't feel limiting the backlog is only a theoretical optimization, given Heroku's random routing algorithm and my experience. With either of the above applications, when I limit the backlog depth (using Unicorn's configuration), I've seen (experimentally, with real traffic) that the frequency and impact (number of timeouts before the H12s stop) are both reduced with a very small backlog (like 2N, where N = unicorn concurrency). This makes perfect sense to me statistically, and I'd be happy to discuss it if desired.

However, it sounds like trying to achieve this (configuring the stack to use backlog depth to achieve load balancing) with NGINX in the stack would be a problem: when the max backlog is reached, instead of refusing the connection from the router mesh, it responds to the client with a 503.

Hope this helps with some context.
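A rough sketch of the small-backlog Unicorn configuration described above (the values and environment variables are only illustrative, not the actual app's config):

```ruby
# config/unicorn.rb -- illustrative sketch of the small-backlog approach
worker_processes Integer(ENV.fetch("WEB_CONCURRENCY", "4"))

# Backlog of roughly 2N, where N is the unicorn concurrency. Once every
# worker is busy and the backlog is full, further connections are refused,
# which signals the routing layer that this dyno can't take more work.
listen ENV.fetch("PORT", "3000"), backlog: 2 * Integer(ENV.fetch("WEB_CONCURRENCY", "4"))
```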
nginx solo mode
I seriously hesitate to post this here, but figure others may have the same question, and it is so specific to this particular implementation on Heroku... Please just let me know if there's a better place to have this answered.
I agree with others: most will be using this with Unicorn. Those experienced with configuring that server have probably discovered and tuned the backlog parameter. Example:
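(A minimal sketch with illustrative values, using Unicorn's standard listen option:)

```ruby
# config/unicorn.rb
worker_processes 4

# :backlog caps how many connections can queue on the listen socket
# before new ones are refused.
listen 8080, backlog: 16
```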
The advantages are described in the comment, but in short, this most importantly allows the server/dyno to feed back to the routing layer that it can't handle more.
Short version of question: What is the best way to maintain these benefits with NGINX in the stack?
Longer version:
With NGINX in as a reverse proxy, we lose much of this benefit from tuning the backlog.
I assume events.worker_connections is the key config param here (see the NGINX doc for worker_connections).