
Add new static_primary config option #1

Open · wants to merge 4 commits into master from single-node-optimizations

Conversation

@thedodd (Member) commented Mar 2, 2022

In essence, this configuration option ensures that a Patroni cluster running with a static primary will not demote the primary unnecessarily.

Transient failures to update the leader lock in the DCS will not cause a demotion when running with a static primary. When running as leader under normal circumstances, DCS exceptions will not cause a demotion when running with `static_primary=thisNode`.

Even if replicas are added to the Patroni cluster, Patroni will be able to protect itself from entering unsafe states by checking the value of `static_primary`. If the configured `static_primary` is not the host node, then the replica will refuse to progress to postmaster boot.
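
As a point of reference, here is a minimal standalone sketch of the two guard predicates used in this branch. The helper names is_static_primary_configured() and is_static_primary() come from the diff; the dict-based config lookup and the node names are hypothetical, for illustration only:

# Hypothetical standalone illustration of the two checks; only the function
# names are taken from this PR, everything else is assumed.
def is_static_primary_configured(config: dict) -> bool:
    # A static primary is configured whenever the static_primary key is set at all.
    return bool(config.get('static_primary'))

def is_static_primary(config: dict, node_name: str) -> bool:
    # This node is the static primary when its member name matches the configured value.
    return config.get('static_primary') == node_name

# Example: a configured static primary that is not this node refuses to proceed.
config = {'static_primary': 'pg-node-0'}
assert is_static_primary_configured(config) and not is_static_primary(config, 'pg-node-1')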

todo

  • add tests for correct parsing and usage of env config values.
  • fix unit tests.
  • create a new branch with squashed changes; open upstream PR; start praying 🙏

This work has been squashed and opened for upstream merge at Patroni/pull/2318. We will keep this PR open here for reference and any other internal coordination needed.

@thedodd force-pushed the single-node-optimizations branch 3 times, most recently from 1308387 to 27fe9b9 on March 22, 2022 at 17:13
@thedodd changed the title from "Do not demote upon DCSError when leader & single-node" to "Add new static_single_node config option" on Mar 22, 2022
@thedodd self-assigned this on Apr 4, 2022
@thedodd added the enhancement label on Apr 4, 2022
@thedodd (Member, Author) commented Apr 5, 2022

Timescale folks, I have a fair amount of logging updates to the code in my local tree. Should we go ahead and add additional logging to leadership transition code points? Not sure what upstream folks will think of that, but it might be useful.

patroni/ha.py (outdated review thread, resolved)
patroni/ha.py:
# If the cluster has been configured with a static primary, and we are not that primary, then do not proceed.
if self.is_static_primary_configured() and not self.is_static_primary():
    return 'patroni cluster is configured with a static primary, and this node is not the primary, refusing to start'

@thedodd (Member, Author):

@alexeyklyukin && @feikesteenbergen I need to do some additional testing with the operator to get it to work nicely with the updated config, while also being able to add a replica. That said:

  • this should allow replicas to be added to the cluster, while Patroni recognizes that a new node is not the static primary (if one is configured) and aborts that control loop pass;
  • this statement will be logged from the program entrypoint;
  • if the config is updated in the DCS or locally, then on the next control loop pass that change will be observed and the node will then be able to boot (see the sketch after this list).
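
A rough placement sketch of that re-check; the surrounding control flow and method names here are assumed for illustration and are not copied from this diff:

# Placement sketch only; the structure around the guard is assumed, not the actual patroni/ha.py code.
def _run_cycle(self):
    # Every HA loop pass re-reads cluster state and effective configuration, so
    # dropping or changing static_primary takes effect on the next pass without
    # restarting Patroni.
    self.load_cluster_from_dcs()
    if self.is_static_primary_configured() and not self.is_static_primary():
        return ('patroni cluster is configured with a static primary, '
                'and this node is not the primary, refusing to start')
    # ... normal leader / replica handling continues here ...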

Thoughts?

@thedodd (Member, Author):

This behavior has now been tested and verified.

Member:

I think this is wrong, as Patroni doesn't necessarily start PostgreSQL: it can be attached to an already running instance, or the static primary flag could be set in the running system configuration with a subsequent reload. In that case, there will be a couple of issues:

  • the error message doesn't reflect the actual state (the instance is running);
  • the error message is shown even when the node is a primary, as is_static_primary doesn't check that;
  • exiting the HA loop prematurely would prevent a running primary with static_primary set from ever being demoted. This would prevent any other member from being promoted, including a member whose name matches static_primary. Unless the running primary becomes unreachable, in which case another member will promote itself and we will get a split-brain.

Member:

We need to stop the instance, and also demote it if it was a primary.
Perhaps use something like this (I didn't test it much though):

# If the cluster has been configured with a static primary,
# and we are not that primary, then do not proceed.
if self.is_static_primary_configured() and not self.is_static_primary():
    actions = []
    if self.state_handler.is_running():
        stop_mode = 'fast'
        if self.has_lock():
            self._delete_leader()
            actions.append("demoted")
            stop_mode = 'immediate'
        self.state_handler.stop(mode=stop_mode)
        actions.append("stopped")
    msg = 'refusing to continue because this cluster is configured with a different static_primary'
    if len(actions) > 0:
        msg = '{}({})'.format(msg, ' and '.join(actions))
    return msg

We likely want to move it a bit down, perhaps after the touch_member call.

Member:

We probably want to disable the watchdog as well for such an instance; otherwise, if it is enabled, the instance will be killed because it will not respond to watchdog requests.
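
For illustration, a hedged sketch of how the bail-out path suggested above could incorporate this; the self.watchdog.disable() call is an assumption about the watchdog API on this branch rather than verified code:

# Sketch only; assumes the HA object holds a watchdog exposing disable().
if self.is_static_primary_configured() and not self.is_static_primary():
    # Stop watchdog keepalives first so the host is not reset while we shut down.
    self.watchdog.disable()
    if self.state_handler.is_running():
        self.state_handler.stop(mode='fast')
    return 'refusing to continue because this cluster is configured with a different static_primary'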

@thedodd (Member, Author):

Hey boss, thanks for the review. Responses:

> the error message doesn't reflect the actual state (it is running)

True. I had not considered that because it is not the way we run things. Will update.

> the error message is shown even when the node is a primary, as is_static_primary doesn't check that.

Hmm, that has not happened to me at all during my testing, and testing it now on prod us-west-2, it is not showing that message incorrectly. This message should only be shown on a replica when a static primary is configured and this node is not that primary. The boolean logic appears to be correct and is currently working as expected.

On the last point ... I suppose there is a misunderstanding here. The point of this configuration option, as stated in the docs, is that it be used for a single-node cluster.

  • If a user has an existing cluster with multiple nodes, then this type of misconfiguration is on par with the user configuring a different DCS for each member.
  • There are still areas where a user can shoot themselves in the foot with misconfig, and I don't think we can reasonably prevent every single case.

That said, the docs for the setting say: "This is useful for cases where a cluster is running as a single-node cluster."

  • Perhaps we need to word this more strongly.
  • Perhaps we need to directly warn the user that using this with a live multi-node HA cluster, or adding additional members while this is configured, is incorrect.
  • We can include details in the docs that Patroni will do its best to protect itself in the case of misconfiguration, specifically in the case of adding new nodes without removing the static primary config, but that Patroni cannot account for all forms of misconfiguration.
  • Personally, I would prefer to keep this feature as minimal as possible. So blocking demoting of the static primary, and preventing startup of new nodes (or at least preventing Patroni from managing them, which is technically what this code does), seems to be a reasonable trade-off of complexity.

Thoughts?

Member:

I think we must at least shutdown the node that is detecting it isn't supposed to be running as a primary.

@thedodd (Member, Author) commented May 16, 2022:

@feikesteenbergen agreed. The changes here assume a pristine replica; however, that is not necessarily a safe or accurate assumption. I will update the code to shut down the PG instance if such a case is detected.

@thedodd (Member, Author):

Ok, I've added a simple call to shutdown() in these cases. Will do some testing.

@thedodd (Member, Author):

Ok folks. This is now the default across all dev clusters. Let me know if you have any additional thoughts on this front. Otherwise, I think it might be time to squash this code, re-tag, and open a PR for upstream.

@thedodd force-pushed the single-node-optimizations branch 3 times, most recently from ffc5aaa to 71f470d on April 22, 2022 at 16:41
@thedodd changed the title from "Add new static_single_node config option" to "Add new static_primary config option" on Apr 22, 2022
@thedodd requested review from alexeyklyukin and feikesteenbergen, and removed the request for alexeyklyukin and feikesteenbergen, on April 22, 2022 at 16:43
.github/workflows/tests.yaml (outdated review thread, resolved)
@thedodd force-pushed the single-node-optimizations branch 2 times, most recently from e3cf3a6 to dd8b57b on April 22, 2022 at 16:55
@thedodd force-pushed the single-node-optimizations branch 7 times, most recently from e32dd64 to 62b8791 on April 22, 2022 at 17:41
docs/SETTINGS.rst (outdated review thread, resolved)
@thedodd force-pushed the single-node-optimizations branch 5 times, most recently from 6854f7e to 0e2c15a on April 28, 2022 at 15:43
