
Ensuring Command Sequencing Integrity during Horizontal Scaling with Sequent on Kubernetes #408

eduscrakozabrus opened this issue Feb 29, 2024 · 5 comments

@eduscrakozabrus

Hello,

I am the administrator of a Kubernetes-based system comprising multiple Pods that run Rails instances with the Sequent framework. A pressing issue I've encountered involves sequentially processing commands for a single aggregate. These commands are received from diverse sources like webhooks and user interactions. It is paramount to ensure the strict sequencing of these commands, even in the face of failed attempts, with a robust mechanism for retries.

My inquiry pertains to the behavior of Sequent in handling commands destined for the same aggregate when those commands are dispatched concurrently from various Pods. Specifically, my concerns are:

  1. Will commands for a single aggregate, when triggered from different Pods at the same time, be canceled or will they wait until the sequential processing is available?
  2. How can I maintain sequencing integrity and ensure that commands awaiting processing aren't lost during scale-out operations?
  3. What strategies are recommended for keeping users informed of the command processing status and incorporating this feedback into an alerting or notification system?

I am seeking guidance on how to best uphold sequential command integrity and implement horizontal scaling while using Sequent in a distributed environment. Any insights on the best practices for this scenario would be greatly appreciated.

Thank you for taking the time to address this matter and for your valuable suggestions.

Best regards, Pavel

@lvonk
Member

lvonk commented Feb 29, 2024

Hi Pavel,

Sequent does not have out-of-the-box support for sequential processing of commands outside the scope of Sequent.command_service.execute_commands.

Reading your use case, I wonder why strict processing of these commands is needed? If it is that strict, I'd recommend only dispatching any subsequent command once the previous one has succeeded.

  1. Currently, if two "transactions" execute a command on the same Aggregate, Sequent will fail one of the "transactions". The first transaction (Sequent.command_service.execute_commands) that finishes "wins".
  2. If you really need to have multiple pods and want to ensure that commands run sequentially you should put some fifo queue in between and then process them one by one (and only process the next one if the previous one succeeded).
  3. This queue can then be used for some sort of command processing status.
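The "first transaction wins" behaviour in point 1 can be illustrated with a small self-contained sketch. Note this is a simulation of the optimistic-locking pattern event-sourced frameworks like Sequent rely on, not Sequent's actual classes: the `EventStore` here and its version check are hypothetical stand-ins.

```ruby
# Simulation of optimistic concurrency control: each aggregate's event
# stream has a version, and a "transaction" only commits if the version
# it loaded is still the current one when it finishes.
class EventStore
  def initialize
    @versions = Hash.new(0) # aggregate_id => current stream version
  end

  def current_version(aggregate_id)
    @versions[aggregate_id]
  end

  # Commit against an expected version; fails when another
  # transaction committed to the same aggregate first.
  def commit(aggregate_id, expected_version)
    unless @versions[aggregate_id] == expected_version
      raise "optimistic lock failed for #{aggregate_id}"
    end
    @versions[aggregate_id] += 1
  end
end

store = EventStore.new
v = store.current_version("agg-1") # both "pods" load version 0

store.commit("agg-1", v)           # first transaction wins
begin
  store.commit("agg-1", v)         # second transaction has a stale version
rescue RuntimeError => e
  puts "second transaction rejected: #{e.message}"
end
```

In a real deployment the losing pod would catch the error and either retry (reloading the aggregate first) or push the command back onto a queue, which is exactly what point 2 suggests.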

Does this help?

@eduscrakozabrus
Author

Thank you for your input. As for my implementation plan to ensure command sequencing and retries upon failure, it goes as follows:

  • CommandBuffer Table Implementation: I will create a CommandBuffer table in the database to queue commands for sequential processing. This will serve as a centralized hub for all incoming commands pending execution.

  • Queueing Commands: Upon receiving a command, it will first be logged into the CommandBuffer before being processed. This will ensure an ordered execution managed by a background process. Fields in the table will include the aggregate identifier, command type, command data, among other metadata relevant for queue management.

  • Deferred Execution via Sidekiq Scheduler: Utilizing Sidekiq Scheduler, I will set up a regular job that periodically checks the CommandBuffer for new commands to process. This will ensure that commands are executed in the strict order they are received.

  • Handling Failures and Retries: In case of failure during command execution, the worker will increment a retry counter in the corresponding CommandBuffer entry and reschedule the command for a later attempt, allowing for recovery from transient issues.

  • User Notifications: Additional fields will be integrated into the CommandBuffer to link with user notifications, enabling feedback on the execution status of the commands.
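The buffer-and-worker design above can be sketched as follows. This is an in-memory illustration only: the class and field names are invented for this sketch, and a real implementation would back it with a database table and claim rows transactionally (e.g. with `SELECT ... FOR UPDATE SKIP LOCKED` in Postgres) so multiple pods don't drain the same entry.

```ruby
# In-memory sketch of the CommandBuffer: commands are queued per
# aggregate, and a worker drains each queue strictly in arrival order,
# incrementing a retry counter and re-queueing on failure.
CommandEntry = Struct.new(:aggregate_id, :command_type, :data, :retries)

class CommandBuffer
  MAX_RETRIES = 3

  def initialize
    @queues = Hash.new { |h, k| h[k] = [] } # aggregate_id => FIFO queue
  end

  def enqueue(aggregate_id, command_type, data)
    @queues[aggregate_id] << CommandEntry.new(aggregate_id, command_type, data, 0)
  end

  # Process the oldest pending command for each aggregate. On failure,
  # bump the retry counter and leave the entry at the head of its queue
  # so later commands for that aggregate cannot overtake it.
  def drain(&handler)
    @queues.each_value do |queue|
      until queue.empty?
        entry = queue.first
        begin
          handler.call(entry)
          queue.shift                           # success: remove from buffer
        rescue StandardError
          entry.retries += 1
          break if entry.retries < MAX_RETRIES  # retry on a later drain run
          queue.shift                           # give up after MAX_RETRIES
        end
      end
    end
  end
end
```

A Sidekiq job scheduled as described above would call `drain`, passing a handler that hands each entry to Sequent's command service; blocking the queue on failure is what preserves per-aggregate ordering across retries.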

Implementing such a robust and scalable solution will allow for horizontal scaling while maintaining the integrity of command sequencing.

Moreover, incorporating this functionality directly into the Sequent framework as an out-of-the-box feature would be a marvelous addition. Built-in support for ordered command processing in horizontally scaled environments would be a significant enhancement, directly addressing the vital needs for fault tolerance and scalability in distributed systems. This kind of innovation would not just alleviate the complexity of managing distributed command processing in large-scale applications, but also provide Sequent framework users with a powerful tool for building robust and flexible systems.

The integration of such a component "out of the box" would be highly beneficial for the community using Sequent, potentially becoming a highlighted feature that underscores the framework's readiness for modern technical challenges and enterprise demands for scaling and real-time data management. It would allow developers to focus on business logic without being bogged down by the intricacies of coordinating task sequencing in a microservices architecture.

@lvonk lvonk added the question label Mar 1, 2024
@lvonk
Member

lvonk commented Mar 1, 2024

Hi, we have no plans to add this as part of Sequent out of the box in the near future. We do have a somewhat similar setup ourselves for running commands (and workflows) in the background, using a homegrown Postgres table (like your command buffer) in combination with SQS. We might add this at some point, but currently we are focusing on scaling the event store itself using Postgres partitioning.

@eduscrakozabrus
Author

Hi. For large databases, have you considered a solution such as "trimming to the last year", for example running a procedure once a year that creates a new data set computed from the events since the beginning of the year?

@lvonk
Member

lvonk commented Mar 5, 2024

You mean as in a sort of roll up of an aggregate? Or as partitioning key for the database?
