Runbook: clarify MimirIngesterReachingSeriesLimit errors and retries #9410

Open · wants to merge 1 commit into base: main
docs/sources/mimir/manage/mimir-runbooks/_index.md (10 changes: 9 additions & 1 deletion)
@@ -41,7 +41,15 @@ If nothing obvious from the above, check for increased load:

### MimirIngesterReachingSeriesLimit

- This alert fires when the `max_series` per ingester instance limit is enabled and the actual number of in-memory series in an ingester is reaching the limit. Once the limit is reached, writes to the ingester will fail (5xx) for new series, while appending samples to existing ones will continue to succeed.
+ This alert fires when the `max_series` per ingester instance limit is enabled and the actual number of in-memory series in an ingester is reaching the limit.
Suggested change (Contributor):
- This alert fires when the `max_series` per ingester instance limit is enabled and the actual number of in-memory series in an ingester is reaching the limit.
+ This alert fires when the `max_series` per ingester instance limit is enabled and the actual number of in-memory series in an ingester reaches that limit.
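
For context, the sketch below shows roughly where a per-ingester series limit of this kind is configured. The `ingester.instance_limits.max_series` key, the equivalent `-ingester.instance-limits.max-series` flag, and the value shown are illustrative assumptions and are not part of this change.

```yaml
# Hedged sketch, not part of this PR: enabling a per-ingester cap on in-memory
# series. The key names and the value below are illustrative assumptions.
ingester:
  instance_limits:
    # Maximum number of in-memory series a single ingester holds; 0 disables the limit.
    max_series: 1500000
```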

+ The threshold is set at 80%, to give some chance to react before the limit is reached.
Suggested change (Contributor):
- The threshold is set at 80%, to give some chance to react before the limit is reached.
+ The threshold is set at 80% to give the chance to react before the limit is reached.
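
As a rough illustration of the 80% threshold, an alerting rule could compare each ingester's in-memory series count against its configured limit. This is a hedged sketch: the metric and label names (`cortex_ingester_memory_series`, `cortex_ingester_instance_limits{limit="max_series"}`) are assumed from Mimir's `cortex_`-prefixed ingester metrics, and the rule is not the exact expression shipped in the Mimir mixin.

```yaml
# Hedged sketch of an alerting rule encoding the 80% threshold described above.
# Metric and label names are assumptions; this is not the rule from the mimir-mixin.
groups:
  - name: mimir-ingester-series-limit
    rules:
      - alert: MimirIngesterReachingSeriesLimit
        # Ratio of in-memory series to the configured max_series instance limit,
        # restricted to ingesters where the limit is actually enabled (> 0).
        expr: |
          (
            cortex_ingester_memory_series
              / ignoring(limit)
            cortex_ingester_instance_limits{limit="max_series"}
          ) > 0.8
          and ignoring(limit)
          (cortex_ingester_instance_limits{limit="max_series"} > 0)
        labels:
          severity: warning
```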

+ Once the limit is reached, writes to the ingester will fail for new series. Appending samples to existing ones will continue to succeed.
Suggested change (Contributor):
- Once the limit is reached, writes to the ingester will fail for new series. Appending samples to existing ones will continue to succeed.
+ After the limit is reached, writes to the ingester fail for new series. Appending samples to existing ones continue to succeed.

Comment (Contributor):
Can we avoid using "writes" as a noun? Could we say "write requests" or something else, if it's more accurate? We could then also say "Appending samples to existing requests continue to succeed" for greater clarity.

Comment (Contributor):
We avoid using "will" in the docs.


+ Note that the error responses sent back to the sender are classed as "server error" (5xx), which should result in a retry by the sender.
Comment (Contributor):
classed = classified?
Also, could we say "server errors" for agreement?

+ While this situation continues, these retries will stall the flow of data, and newer data will queue up on the sender.
Suggested change (Contributor):
- While this situation continues, these retries will stall the flow of data, and newer data will queue up on the sender.
+ While this situation continues, these retries stall the flow of data, and newer data queues up on the sender.

+ If the condition is cleared in a short time, service can be restored with no data loss.
Suggested change (Contributor):
- If the condition is cleared in a short time, service can be restored with no data loss.
+ If the condition is cleared in a short time, service is restored with no data loss.


+ This is different to what happens when the `max_global_series_per_user` is exceeded, which is considered a "client error" (4xx) where excess data is discarded.
Suggested change (Contributor):
- This is different to what happens when the `max_global_series_per_user` is exceeded, which is considered a "client error" (4xx) where excess data is discarded.
+ This is different to what happens when the `max_global_series_per_user` limit is exceeded, which is considered a "client error" (4xx). In this case, excess data is discarded.
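
To make the contrast concrete, the per-tenant `max_global_series_per_user` limit mentioned above is configured separately from the per-ingester instance limit, either as a default in the `limits` block or per tenant through runtime overrides. The layout, tenant name, and values below are a hedged sketch, not part of this change.

```yaml
# Hedged sketch, not part of this PR: the per-tenant limit contrasted above.
# Exceeding it is treated as a client error (4xx) and the excess data is
# discarded rather than retried.
limits:
  # Default per-tenant cap on active series across the whole cluster.
  max_global_series_per_user: 150000
---
# Per-tenant override in the runtime configuration file (tenant name is illustrative).
overrides:
  tenant-a:
    max_global_series_per_user: 300000
```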


In case of **emergency**:
