server: synchronize ws ping/pong messages #1437
Conversation
server/src/transport/ws.rs (outdated)
async fn ping_pong_task(mut rx: mpsc::Receiver<KeepAlive>, max_inactive_limit: Duration, max_inactive: usize) {
    let mut polling_interval = IntervalStream::new(interval(max_inactive_limit));
    let mut pending_pings: VecDeque<Instant> = VecDeque::new();
The only annoyance with this approach is that this VecDeque may become big if the max_inactive_limit is way bigger than the ping interval. However, that's configurable and I don't think it's a real issue, as we also regard received messages as pongs.
Yeah, I guess it has an upper bound!
I do feel like if there are a few pings that haven't been replied to yet, then any pong or data received afterwards is probably good enough to clear them all, since it means the connection is still alive after all of those pings :)
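For reference, a minimal sketch of that "clear all pending pings on any inbound message" idea. The KeepAlive variants, the select! loop, and the inactivity check are assumptions for illustration, not the PR's actual implementation:

```rust
use std::collections::VecDeque;
use std::time::{Duration, Instant};
use tokio::sync::mpsc;
use tokio::time::interval;

/// Events fed into the keep-alive task (the exact variants are an assumption).
enum KeepAlive {
    /// A ping was sent at this instant.
    Ping(Instant),
    /// A pong or any other inbound message was received.
    Pong,
}

async fn ping_pong_task(mut rx: mpsc::Receiver<KeepAlive>, max_inactive_limit: Duration) {
    let mut check = interval(max_inactive_limit);
    let mut pending_pings: VecDeque<Instant> = VecDeque::new();

    loop {
        tokio::select! {
            msg = rx.recv() => match msg {
                Some(KeepAlive::Ping(sent_at)) => pending_pings.push_back(sent_at),
                // Any pong/data message proves the connection was alive after
                // every ping sent so far, so all outstanding pings can be
                // cleared at once; this also bounds how big the VecDeque gets.
                Some(KeepAlive::Pong) => pending_pings.clear(),
                // The sender side was dropped; the connection is gone.
                None => break,
            },
            _ = check.tick() => {
                // If the oldest unanswered ping is older than the limit,
                // treat the connection as inactive and stop.
                if let Some(oldest) = pending_pings.front() {
                    if oldest.elapsed() >= max_inactive_limit {
                        break;
                    }
                }
            }
        }
    }
}
```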
tokio::spawn(async move {
    ping_tx.send(KeepAlive::Ping(Instant::now())).await.ok();
});
If there is no ping_rx (i.e. because there is no ping config, so it's dropped, e.g. at https://github.dev/paritytech/jsonrpsee/blob/705148f0c05dca1bf945e2387a866561a58beac5/server/src/transport/ws.rs#L92), I guess we are just doing this work for no gain. Is it worth putting an if around the task spawning to avoid it in this case, or does that overcomplicate things?
Good catch, I can make it an Option instead.
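A sketch of the Option-based variant being suggested (the record_ping helper and the exact types are hypothetical; only the spawn/send pattern comes from the diff above):

```rust
use std::time::Instant;
use tokio::sync::mpsc;

// Reusing the hypothetical KeepAlive enum from the earlier sketch.
enum KeepAlive {
    Ping(Instant),
}

/// Hypothetical helper: `ping_tx` is `None` when no ping config was set,
/// so the bookkeeping task is only spawned when something will consume it.
fn record_ping(ping_tx: Option<mpsc::Sender<KeepAlive>>) {
    if let Some(tx) = ping_tx {
        tokio::spawn(async move {
            tx.send(KeepAlive::Ping(Instant::now())).await.ok();
        });
    }
}
```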
server/src/transport/ws.rs (outdated)
let elapsed = start.elapsed().saturating_sub(end.elapsed());

if elapsed >= max_inactive_limit {
    missed_pings += 1;
What decrements this again? If somebody misses a few pings but then is fine and replies to the next 100 pings after that, I'd expect this counter to be reset to 0 again, otherwise it might gradually accumulate until it hits the limit.
I'd probably reset it each time a successful ping/pong happens, e.g. with an else on this?
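A sketch of the suggested reset, with an else branch clearing the counter on a timely reply (the helper and its signature are hypothetical; only missed_pings and max_inactive_limit come from the diff):

```rust
use std::time::{Duration, Instant};

/// Hypothetical helper showing the suggested bookkeeping: a ping answered
/// within the limit resets the counter instead of letting misses accumulate.
fn update_missed_pings(missed_pings: &mut usize, ping_sent_at: Instant, max_inactive_limit: Duration) {
    if ping_sent_at.elapsed() >= max_inactive_limit {
        *missed_pings += 1;
    } else {
        // A timely reply means the connection is healthy again.
        *missed_pings = 0;
    }
}
```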
@@ -38,7 +39,6 @@ pub(crate) async fn send_message(sender: &mut Sender, response: String) -> Resul
}

pub(crate) async fn send_ping(sender: &mut Sender) -> Result<(), SokettoError> {
    tracing::debug!(target: LOG_TARGET, "Send ping");
I removed this; it's not very useful, and it's possible to get it by enabling soketto logs.
Close #1433