JsonRpcConnection: Don't drop client from cache prematurely #10210
base: master
Conversation
Do we actually need two warnings?
Actually, no. However, I don't know which log level to use for which of these. So, if you have any better ideas, then please suggest!
Force-pushed: 99be923 to d51d6a7
I have now downgraded the first log to info.
Now that I'm thinking about it: in monitoring in general, you use WARNING as an early indicator (before CRITICAL) that something may not be right. But let's also let Julian weigh in...
I'm not really sure how well that comparison works. But yes, the question for choosing the log severity should also be "does it require attention?". So during a normal reload/config deployment, there ideally shouldn't be any warnings (I do know that we aren't there yet). For a user, having both messages doesn't sound too helpful: like it says pretty much the same thing twice within at most 10 seconds, so I'd go even further and log the first one at notice.
That makes it sound like something that should only be necessary to debug very specific issues, not something that would be useful logging for every user.
Force-pushed: d51d6a7 to b7dfdd3
lib/remote/jsonrpcconnection.cpp (outdated)

```cpp
	ApiListener::GetInstance()->RemoveAnonymousClient(this);
}
Log(LogNotice, "JsonRpcConnection")
	<< "Disconnecting API client for identity '" << m_Identity << "'";
```
💡 What about logging this...
```cpp
@@ -208,18 +208,8 @@ void JsonRpcConnection::Disconnect()
	JsonRpcConnection::Ptr keepAlive (this);
```
... ASAP?
I don't think that would be useful. We're only concerned with whether the I/O threads are executing the coroutines correctly.
And that's why I think we should, while on it, log this even before the coroutine spawns.
PR #7445 incorrectly assumed that a peer that had already disconnected and never reconnected was due to the endpoint client being dropped after a successful socket shutdown. However, the issue at that time was that there was no timeout guard that could cancel the `async_shutdown` call, potentially blocking indefinitely. Although removing the client from cache early might have allowed the endpoint to reconnect, it did not resolve the underlying problem. Now that we have a proper cancellation timeout, we can wait until the currently used socket is fully closed before dropping the client from our cache. When our socket termination works reliably, the `ApiListener` reconnect timer should attempt to reconnect this endpoint after the next tick. Additionally, we now log both before and after socket termination, which may help identify if it is hanging somewhere in between.
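The ordering the commit message argues for (bound the shutdown with a timeout, and only afterwards drop the client) can be sketched roughly as follows. This is a hypothetical illustration using `std::async`/`std::future`, not Icinga 2's actual Asio coroutine code; `ShutdownWithTimeout` and `ShutdownResult` are invented names:

```cpp
#include <cassert>
#include <chrono>
#include <future>
#include <thread>

enum class ShutdownResult { Graceful, ForcedAfterTimeout };

// Run a potentially blocking shutdown routine and give it a bounded amount of
// time to finish. Only after this function returns would the client be
// removed from the cache, mirroring the order this PR restores.
template <typename ShutdownFn>
ShutdownResult ShutdownWithTimeout(ShutdownFn shutdown, std::chrono::milliseconds timeout)
{
	// Run the (possibly hanging) shutdown on another thread.
	auto done = std::async(std::launch::async, shutdown);

	// Wait at most `timeout`. On expiry, the real code would forcibly close
	// the socket, which cancels the pending async_shutdown.
	if (done.wait_for(timeout) == std::future_status::ready)
		return ShutdownResult::Graceful;

	// Caveat of this sketch: the std::future destructor still joins the
	// worker thread, whereas Asio cancels the operation so the coroutine
	// returns promptly.
	return ShutdownResult::ForcedAfterTimeout;
}
```

The point is only the sequencing: the "disconnected" state is published after the bounded shutdown attempt, not before it.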
Force-pushed: b7dfdd3 to ecc4b43
👍
Neither the PR description nor the commit message really tells the purpose of moving the
Here, underlying problem refers to the possibly blocking TLS shutdown without a timeout, i.e. something that's fixed already and not supposed to be fixed by this PR?
Why do we want to wait? Like the PR claims that currently it's premature but not why that's the case and what's improved by changing this.
Something that should have been fixed already!
Firstly, you don't want to mark an endpoint as disconnected if it actually isn't. Before #7445, the shutdown flow was as it should be, i.e. first the socket is completely shut down, then the endpoint is marked as disconnected. However, this was changed with #7445 due to an incorrect assumption that disconnected clients never being reconnected again was due to this flow, when in fact it might've been stuck somewhere in
In contrast, do you want to keep treating an endpoint as connected after the code decided that the connection is dead (for example in the "no messages received" case)? It's more in some in-between state of cleaning up the connection.
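That in-between state can be made concrete with a small sketch. The states and the two predicates below are hypothetical (the real code does not model them all explicitly); they only illustrate the behavioral difference under discussion, namely in which state a reconnect attempt becomes possible:

```cpp
#include <cassert>

// Hypothetical lifecycle states for a peer connection; invented names for
// illustration, not types from the Icinga 2 code base.
enum class PeerState { Disconnected, Connecting, Connected, Disconnecting };

// Current master: the client is dropped from the cache at the start of the
// disconnect, so a reconnect can already be attempted while still
// Disconnecting (cleaning up the old socket).
inline bool MayReconnectOnMaster(PeerState s)
{
	return s == PeerState::Disconnected || s == PeerState::Disconnecting;
}

// With this PR: the client stays cached until the socket is fully shut down,
// so a reconnect is only attempted once fully Disconnected.
inline bool MayReconnectWithThisPR(PeerState s)
{
	return s == PeerState::Disconnected;
}
```

Neither predicate is obviously wrong; the trade-off is between reconnecting promptly and never having two half-alive connections to the same endpoint.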
In itself, that doesn't sound bad. Like once the connection is declared dead, it should be fine to establish a new one. It's just problematic if that would turn into a resource leak. Is that the main change here that if the code had a problem in
Though that's not related to moving the

Interestingly, there seems to be quite a connection to #10005, so this might be yet another reason to revive that PR; quoting from that PR's description:
Now thinking about this in context, my intuition says yes. If we consider a connection failed, why should we bother attempting a clean TLS shutdown instead of just killing it with fire? That would then also change things here, as a (forceful) disconnect should be pretty much instant.
Yes, generally I would treat an endpoint as connected as long as its socket is not fully shut down, but the
Yes. If you just drop the client before anything else and log that it's disconnected when in fact it's not, we won't be able to tell for sure if the shutdown is complete afterwards. Unless there's something that we really messed up, the endpoint should be kept as connected for a maximum of
When looking just at the log entry, then yes, but why should the endpoint be allowed to initiate another connection at all if the current one is not completely closed? I'm just trying to recreate the (at least for me) logical flow as it should be, i.e. first either gracefully or forcibly close the current connection before marking the endpoint as disconnected.
As I have already talked to you lately about that PR, I'm perfectly happy with it regardless of this one, and forcibly closing such a dead connection doesn't sound like a bad idea either.
What exactly would change here otherwise? For me, the referenced PR is just something I would include on top of this, but I don't see how those exclude one another.
That's what I meant with saying it's in some in-between state. The rest of the code also makes a difference between connected and connecting; likewise, there's a difference between disconnecting and disconnected (though I don't think that distinction is made explicitly in the code). Now the question is whether a new connection should already be attempted while in that disconnecting state. Currently it is; with this PR you suggest to wait until fully disconnected. I think neither is wrong and would probably tend towards the suggested change. You just use a quite high level of "should" here, and whether the change is a good idea boils down to how sure we are that this doesn't delay the
Ideally, that forceful disconnect would only be a
Given this quote of yours, does anything speak against not doing anything and letting the stream destructor do its thing?
If the destructor does just the right thing, that could be the "Asio needs to be aware of this, so it has to be done using Asio." part.
Upon closer inspection, there still seem to be things that could block the disconnect. This PR also moves the

> icinga2/lib/remote/jsonrpcconnection.cpp, line 216 in ecc4b43
That basically waits for the

> icinga2/lib/remote/jsonrpcconnection.cpp, lines 98 to 132 in ecc4b43
That means that this PR changes the behavior so that a new connection is only attempted after that.

There is #10216, which in combination with this PR would reduce it down to waiting for sending a single message or waiting for the
See #10210 (comment) (I should have submitted that as a review in the first place)
Yes, but I'm wondering whether a
I only used a high level of "should" because I can't tell you for sure that it will always be that way in every production system, because I can't and neither can you. I'm just saying that if there's no other hidden bug waiting to pop up, it definitely should take a maximum of

To prove that PR #7445 did not solve the real problem, I just did a test where

```
09:03:39: The SSL stream's async_shutdown method is hanging, and I didn't interrupt it
...
10:03:22 Exception thrown in coroutine - Thread stopped
Server::~Server()
Session::~Session()
```

So, merge or close it, it's entirely up to you!
Sorry! I didn't see your intermediate comments while submitting my previous comment.
And how long is the "too long" you are referring to? Why do you expect an endpoint to be instantly flagged as disconnected as soon as someone has requested a disconnect? If the current connection is still alive, we obviously need to flush its buffers first before closing it, and to me that's just the natural flow. But what advantages do you hope to gain from immediately cancelling the connection? To initiate a new connection while the current one is still sending data? And for what purpose? I don't get that. I've already said in my previous comments that the
That's the thing, there's no explicit timeout set. So however long it takes until the kernel decides itself that the socket is dead (which can take hours).
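To make the "kernel decides" point concrete: on Linux, the retransmission limits that govern giving up on a dead peer can be capped per socket with `TCP_USER_TIMEOUT`. This is a hypothetical, Linux-specific sketch of such a cap, not something Icinga 2 currently sets; `SetTcpUserTimeout` is an invented helper name:

```cpp
#include <cassert>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
#include <unistd.h>

// Cap how long transmitted data may remain unacknowledged before the kernel
// forcibly closes the connection (instead of retrying for a long time).
// Returns 0 on success, -1 on error, like setsockopt itself.
int SetTcpUserTimeout(int fd, unsigned int timeoutMillis)
{
	return setsockopt(fd, IPPROTO_TCP, TCP_USER_TIMEOUT,
		&timeoutMillis, sizeof(timeoutMillis));
}
```

With such a cap in place, the "can take hours" worst case would shrink to the configured bound, which is exactly the kind of explicit timeout this comment notes is missing.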
I don't want it, I'm just saying there's a high chance that this PR actually reintroduces the problem described in #7444 (hence the "request changes" review, so that it doesn't get merged before that is investigated, as it was already approved).
Not every disconnect is for this reason, but if there is one for this reason, reconnecting afterwards should work reliably.
Indeed it does. I managed to trigger it with this PR by breaking a TCP connection by dropping all of its packets in a firewall (

Both nodes then logged that no messages were received in the last 60 seconds but didn't attempt to reconnect until 15 minutes later (not sure which timeout caused it to unblock; I'm not aware of related 15-minute timeouts in Icinga 2, so it probably was some kernel timeout).
So yes, it does exactly that, see also #10005 (comment) (describes the resource leak that happens when doing the same on the current master) 😅
Indeed. Too many connections are bad. But none at all, "just" due to a possibly little resource leak, is worse IMAO. Let's stall this.