Add new static_primary config option #1
base: master
Conversation
Force-pushed from 1308387 to 27fe9b9, from 27fe9b9 to 9023f56, and from 9023f56 to a7d1a8e.
Timescale folks, I have a fair amount of logging updates to the code in my local tree. Should we go ahead and add additional logging to leadership transition code points? Not sure what upstream folks will think of that, but it might be useful.
Force-pushed from 85c747c to 5713908.
# If the cluster has been configured with a static primary, and we are not that primary, then do not proceed.
if self.is_static_primary_configured() and not self.is_static_primary():
    return 'patroni cluster is configured with a static primary, and this node is not the primary, refusing to start'
@alexeyklyukin and @feikesteenbergen: I need to do some additional testing with the operator to get it to work nicely with the updated config while also being able to add a replica. That said:
- this should allow replicas to be added to the cluster, while also having Patroni recognize that a new node may not be the static primary (if one is configured) and then abort the control loop pass;
- this statement will be logged from the program entrypoint;
- if the config is updated in the DCS or locally, then on the next control loop pass that change will be observed and the node will be able to boot.
Thoughts?
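For reference, here is a rough, self-contained sketch of how the two predicates used in the check above could behave; the standalone-function shape and parameter names are assumptions for illustration only, not the actual patch, where the predicates live on the HA handler.

```python
# Hypothetical sketch only; the real predicates are methods on Patroni's HA handler.
def is_static_primary_configured(config: dict) -> bool:
    # A static primary is configured when the key carries a non-empty value.
    return bool(config.get('static_primary'))

def is_static_primary(config: dict, node_name: str) -> bool:
    # This node is the static primary only when its member name matches the configured value.
    return config.get('static_primary') == node_name
```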
This behavior has now been tested and verified.
I think this is wrong, as Patroni doesn't necessarily start PostgreSQL: it can either attach to a running instance, or the static primary flag could be set in the running system configuration with a subsequent reload. In that case, there will be a couple of issues:
- the error message doesn't reflect the actual state (it is running);
- the error message is shown even when the node is a primary, as is_static_primary doesn't check that;
- exiting the HA loop prematurely would prevent the running primary with a static_primary from ever being demoted. This would prevent any other member from being promoted, including a member whose name matches the static_primary, unless the running primary becomes unreachable, in which case another member will promote itself and we will get a split-brain.
We need to stop the instance, and also demote it if it was a primary.
Perhaps use something like this (I didn't test it much though):
# If the cluster has been configured with a static primary,
# and we are not that primary, then do not proceed.
if self.is_static_primary_configured() and not self.is_static_primary():
    actions = []
    if self.state_handler.is_running():
        stop_mode = 'fast'
        if self.has_lock():
            self._delete_leader()
            actions.append("demoted")
            stop_mode = 'immediate'
        self.state_handler.stop(mode=stop_mode)
        actions.append("stopped")
    msg = 'refusing to continue because this cluster is configured with a different static_primary'
    if len(actions) > 0:
        msg = '{}({})'.format(msg, ' and '.join(actions))
    return msg
We likely want to move it a bit down, perhaps after the touch_member call.
We probably want to disable the watchdog as well for such an instance; otherwise, if it is enabled, the instance will be killed because of no response to the watchdog requests.
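A hedged sketch of what that could look like in the refusal branch; note that `self.watchdog.disable()` is an assumed interface based on this comment and should be verified against the actual watchdog implementation:

```python
# Sketch only: the watchdog call below is an assumption, not a verified API.
if self.is_static_primary_configured() and not self.is_static_primary():
    # Stop watchdog keepalives before bailing out of the HA loop, so the node
    # is not reset for failing to answer watchdog requests.
    self.watchdog.disable()
    ...
```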
Hey boss, thanks for the review. Responses:
> the error message doesn't reflect the actual state (it is running)
True. I had not considered that because it is not the way we run things. Will update.
> the error message is shown even when the node is a primary, as is_static_primary doesn't check that.
Hmm, that has not happened to me at all during my testing, and testing it now on prod us-west-2, it is not showing that message incorrectly. This message should only be shown on a replica when a static primary is configured and the host node is not that primary. The boolean logic appears to be correct, and it currently appears to be working correctly.
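To make the claim about the boolean logic concrete, here is a tiny self-contained check of the condition's truth table (the `should_refuse` helper is hypothetical and only mirrors the "configured and not mine" expression from the diff):

```python
def should_refuse(static_primary, node_name):
    # Mirrors `is_static_primary_configured() and not is_static_primary()`.
    return bool(static_primary) and static_primary != node_name

assert should_refuse('node-a', 'node-b') is True    # replica in a static-primary cluster: refuse
assert should_refuse('node-a', 'node-a') is False   # the configured static primary itself: proceed
assert should_refuse(None, 'node-b') is False       # no static primary configured: proceed
```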
On the last point ... I suppose there is a misunderstanding here. The point of this configuration option, as stated in the docs, is that it be used for a single-node cluster.
- If a user has an existing cluster with multiple nodes, then this type of misconfiguration is on par with the user configuring a different DCS for each member.
- There are still areas where a user can shoot themselves in the foot with misconfiguration, and I don't think we can reasonably prevent every single case.
That said, the docs for the setting say: "This is useful for cases where a cluster is running as a single-node cluster."
- Perhaps we need to word this more strongly.
- Perhaps we need to directly warn the user that using this with a live multi-node HA cluster, or adding additional members while it is configured, is incorrect.
- We can include details in the docs that Patroni will do its best to protect itself in the case of misconfiguration, specifically in the case of adding new nodes without removing the static primary config, but that Patroni cannot account for all forms of misconfiguration.
- Personally, I would prefer to keep this feature as minimal as possible. Blocking demotion of the static primary and preventing startup of new nodes (or at least preventing Patroni from managing them, which is technically what this code does) seems to be a reasonable trade-off of complexity.
Thoughts?
I think we must at least shut down the node that detects it isn't supposed to be running as a primary.
@feikesteenbergen agreed. The changes here assume a pristine replica; however, that is not necessarily a safe or accurate assumption. I will update the code to shut down the PG instance if such a case is detected.
Ok, I've added a simple call to shutdown() in these cases. Will do some testing.
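As a rough illustration of that minimal variant (untested sketch; the more complete suggestion earlier in this thread also handles demotion, and the exact call used in the patch may differ):

```python
# Untested sketch of the minimal "shut down, then refuse" behavior described above.
if self.is_static_primary_configured() and not self.is_static_primary():
    if self.state_handler.is_running():
        # Don't leave a stray postmaster running on a node that must never act as primary.
        self.state_handler.stop(mode='fast')
    return 'patroni cluster is configured with a static primary, and this node is not the primary, refusing to start'
```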
Ok folks. This is now the default across all dev clusters. Let me know if you have any additional thoughts on this front. Otherwise, I think it might be time to squash this code, re-tag, and open a PR for upstream.
Force-pushed from ffc5aaa to 71f470d, from e3cf3a6 to dd8b57b, and from e32dd64 to 62b8791.
In essence, this configuration option ensures that a Patroni cluster running with a static primary will not demote the master unnecessarily: transient failures to update the leader lock in the DCS, and DCS exceptions while running as leader under normal circumstances, will not cause a demotion when running with `static_primary=thisNode`. Even if replicas are added to the Patroni cluster, Patroni can protect itself from entering unsafe states by checking the value of static_primary: if the configured static_primary is not the host node, the replica will refuse to progress to postmaster boot.
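As an illustrative-only sketch of the "no demotion on transient DCS failure" behavior described above (the hook name, messages, and helper are hypothetical and simply show the shape of the idea, not the actual patch):

```python
# Hypothetical sketch; names and messages are illustrative only.
def on_failed_leader_lock_update(self):
    if self.is_static_primary_configured() and self.is_static_primary():
        # With a static primary there is no other member that may legitimately take
        # over, so a transient DCS failure should not trigger a demotion; keep running
        # and retry the leader lock on the next HA loop pass.
        return 'failed to update leader lock, continuing to run as static primary'
    # Without a static primary, the safe default is to demote so another member
    # can acquire the lock and be promoted.
    return self.demote_and_report()  # hypothetical helper standing in for the normal demotion path
```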
Force-pushed from 62b8791 to 0d7b7ba, from 6854f7e to 0e2c15a, and from 0e2c15a to 07c7ff8.
This work has been squashed and opened for upstream merge at Patroni/pull/2318. We will keep this PR open here for reference and any other internal coordination needed.