Make reader and writer buffers configurable #2014

Merged
merged 2 commits into from
Dec 14, 2024

Conversation

tony2001
Contributor

Replace global sync pools with per-DB pools and make the reader and writer buffers configurable.
Default buffer sizes: 1 MB for the reader buffer, 64 KB for the writer buffer.

Context:
DB connections are pooled, and usually there are not many of them available, so connections are a resource goroutines have to wait their turn for: get a connection, use it, and return it to the pool.

Before getting into the waiting line, every goroutine allocates a read buffer and a write buffer from the sync.Pool of buffers.
The reader buffer size is currently hardcoded to 1 MB, so when 1000 goroutines wait in the queue, you get 1000 MB of buffers pre-allocated. When an application unexpectedly gets a spike of traffic and all the database connections are in use, we don't get request timeouts as one might expect; instead, the application is OOM-killed.

The patch addresses this by trading some extra allocations and (probably, though my benchmarks don't really show it) some latency for the ability to serve more simultaneous connections.

@elliotcourant elliotcourant self-requested a review December 13, 2024 02:47
@elliotcourant
Collaborator

At a glance this looks fine to me, let me pull it down locally this afternoon and poke around a bit with it.


@elliotcourant elliotcourant left a comment


I think this makes sense, and it also cleans up a global variable, which I think is good. But I'm wondering whether initializing the pool per DB instance will incur any additional cost, e.g. when running tests where a DB instance might be created per test.

Overall, though, I think the tradeoff is worth it even if that does have a negative effect (which I'm not sure it would).

@elliotcourant elliotcourant merged commit e55fd63 into go-pg:v10 Dec 14, 2024
3 checks passed