Batching with array #108
base: new-index
Conversation
Should we limit the number of requests per batch?
Do we have limits for "new-line separated" batch requests? Is it different here? How much do you propose as the limit here?
We don't, but it is different here. Newline-separated requests are streamed, and individual responses are sent as they become available. We only hold up to 10 pending requests in the queue, then block and stop reading from the TCP socket until a pending request gets processed to free room (a bounded queue). With array batching, all the requests and all the responses have to be buffered in memory at once, which could use significant resources and be a DoS vector if there are no limits in place.
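To make the difference concrete, here is a minimal, self-contained sketch (not electrs' actual code) of the bounded-queue behavior for newline-separated batching; `serve_newline_batch` and `process_request` are made-up names, and the capacity of 10 mirrors the limit mentioned above:

```rust
use std::io::{BufRead, BufReader, Write};
use std::net::TcpStream;
use std::sync::mpsc::sync_channel;
use std::thread;

// Newline-separated requests are streamed through a *bounded* queue, so at
// most 10 requests are ever buffered; the reader blocks instead of buffering
// a whole batch in memory.
fn serve_newline_batch(stream: TcpStream) -> std::io::Result<()> {
    let (tx, rx) = sync_channel::<String>(10); // bounded: sender blocks when full

    let mut writer = stream.try_clone()?;
    let worker = thread::spawn(move || {
        for line in rx {
            // process_request stands in for the real per-request handler.
            let response = process_request(&line);
            let _ = writeln!(writer, "{}", response);
        }
    });

    for line in BufReader::new(stream).lines() {
        // send() blocks once 10 requests are pending, which stops us from
        // reading more from the TCP socket until the worker frees a slot.
        if tx.send(line?).is_err() {
            break;
        }
    }
    drop(tx);
    worker.join().expect("worker panicked");
    Ok(())
}

fn process_request(request: &str) -> String {
    // Placeholder "handler" that just echoes the request.
    format!(r#"{{"echo": {:?}}}"#, request)
}
```

An array batch, by contrast, cannot be streamed this way: the whole array must be parsed, and the whole array of responses assembled, before anything is written back.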
Do you think we can afford 20 as the limit for batch requests? I think it's a common number because it is used as the gap limit...
20 seems fine, yes 👍 However it seems the tests are failing. The failure isn't directly related to the batching but to the TestRunner used to test it.
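For concreteness, one possible shape for that limit check — a sketch only, where `MAX_BATCH_SIZE` and `check_batch_size` are hypothetical names and not necessarily what this PR uses:

```rust
use serde_json::{json, Value};

// Hypothetical constant for the upper bound discussed above.
const MAX_BATCH_SIZE: usize = 20;

// Reject oversized JSON-array batches up front, before anything is
// dispatched, and answer with a single JSON-RPC error object instead.
fn check_batch_size(request: &Value) -> Result<(), Value> {
    if let Some(batch) = request.as_array() {
        if batch.len() > MAX_BATCH_SIZE {
            return Err(json!({
                "jsonrpc": "2.0",
                "id": null,
                "error": {
                    "code": -32600,
                    "message": format!("batch size exceeds {} requests", MAX_BATCH_SIZE),
                }
            }));
        }
    }
    Ok(())
}
```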
force-pushed from 0bd3299 to d08cf38
I didn't notice locally because I ran the tests individually; I don't think we support running multiple tests concurrently.
At the moment we cannot run tests concurrently because the servers conflict with each other. This marks the test_electrum_raw test as ignored by default and adds a CI job specifically for this test.
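For readers unfamiliar with the mechanism, "ignored by default, run separately in CI" looks roughly like this in Rust — a sketch only, with a placeholder test body:

```rust
#[test]
#[ignore] // skipped by a plain `cargo test`; run explicitly in CI
fn test_electrum_raw() {
    // ... spawn bitcoind + electrs and exercise the raw Electrum protocol ...
}
```

CI can then invoke it in isolation with `cargo test test_electrum_raw -- --ignored`, so it never races against the other tests for the shared servers.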
force-pushed from c2458ae to 250fb9b
Rebased on "Update cargo setup method and enable arm64 builds again".
I tracked down the cause. It's not specific to the tests; it fails whenever multiple electrs instances run in the same process. This appears to happen because the threads sending parallel RPC requests to bitcoind are being reused across different electrs instances. Each RPC thread has a thread-local `Daemon` with its own TCP connection, initialized on first use.

If you apply this patch:

```diff
diff --git a/src/daemon.rs b/src/daemon.rs
index 6808907..48498c7 100644
--- a/src/daemon.rs
+++ b/src/daemon.rs
@@ -493,9 +493,9 @@ impl Daemon {
             // Store a local per-thread Daemon, each with its own TCP connection. These will
             // get initialized as necessary for the `rpc_threads` pool thread managed by rayon.
             thread_local!(static DAEMON_INSTANCE: OnceCell<Daemon> = OnceCell::new());
             DAEMON_INSTANCE.with(|daemon| {
-                daemon
-                    .get_or_init(|| self.retry_reconnect())
-                    .retry_request(&method, &params)
+                let d = daemon.get_or_init(|| self.retry_reconnect());
+                debug!("rpc thread {:?} of {} got {}", std::thread::current().id(), self.daemon_dir.display(), d.daemon_dir.display());
+                d.retry_request(&method, &params)
             })
         })
     })
```

And run:
You'll see:
Note how the second time it's trying to get a `Daemon` that was initialized for the other instance's daemon_dir. I was expecting the threads not to be reused across electrs instances, because each one uses its own thread pool.
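To illustrate the effect in isolation (this is not electrs code, and it uses the global rayon pool rather than per-Daemon pools): a `thread_local!` value initialized by whichever "instance" first touched a worker thread sticks around and is handed to work submitted later by another instance. `handle_request` and the instance labels are invented:

```rust
use rayon::prelude::*;
use std::cell::OnceCell;

thread_local! {
    // Stand-in for the per-thread Daemon: remembers which instance
    // first initialized it on this worker thread.
    static INSTANCE: OnceCell<String> = OnceCell::new();
}

fn handle_request(instance: &str) -> String {
    INSTANCE.with(|cell| {
        // get_or_init only runs the closure the first time this thread is used,
        // so a recycled worker thread keeps whatever instance initialized it.
        cell.get_or_init(|| instance.to_string()).clone()
    })
}

fn main() {
    for instance in ["electrs-A", "electrs-B"] {
        let served_by: Vec<String> = (0..4)
            .into_par_iter()
            .map(|_| handle_request(instance))
            .collect();
        // For "electrs-B" this will typically print "electrs-A", because the
        // shared rayon pool reuses the same worker threads.
        println!("{instance} requests served by daemons of: {served_by:?}");
    }
}
```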
Currently electrs supports batching by separating requests with newlines; however, other implementations also support batching via a JSON array, as explained in the cited issue. This adds support for batching requests via a JSON array.
close Blockstream/esplora#516
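For orientation, a rough sketch of how array batching can sit on top of an existing single-request handler — `handle_single` and `handle_line` are placeholder names, and the actual implementation in this PR may differ:

```rust
use serde_json::{json, Value};

// Placeholder for the existing per-request handler.
fn handle_single(request: &Value) -> Value {
    json!({
        "jsonrpc": "2.0",
        "id": request.get("id").cloned().unwrap_or(Value::Null),
        "result": null
    })
}

// If the incoming line parses to a JSON array, treat it as a batch: handle
// each element and reply with an array of responses; otherwise fall back to
// the single-request path.
fn handle_line(line: &str) -> Result<Value, serde_json::Error> {
    let parsed: Value = serde_json::from_str(line)?;
    Ok(match parsed {
        Value::Array(requests) => Value::Array(requests.iter().map(handle_single).collect()),
        single => handle_single(&single),
    })
}
```

The size check from the earlier discussion (the 20-request limit) would slot in right after parsing, before any element is dispatched.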