The Logstash documentation does not explain how the internal memory queue is sized, although the number of in-flight events is mentioned in the pipeline configuration and performance documentation.
Adding this information, along with the fact that these settings are per-pipeline, may help answer user questions and help users size their pipelines appropriately.
I recently had reason to write this up again, so perhaps we have a good starting point:
The memory queue has an upper bound of `pipeline.workers` (default: number of CPUs) times `pipeline.batch.size` (default: 125) events, so its capacity depends on how Logstash is tuned. Doubling the number of workers *or* doubling the batch size will effectively double the memory queue's capacity (and memory usage), while doubling both will *quadruple* the capacity (and usage).
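As a rough illustration, here is a hypothetical `pipelines.yml` sketch showing how the per-pipeline settings combine (pipeline IDs and values are examples only, not recommendations):

```yaml
# pipelines.yml -- these settings are per pipeline, so each entry gets its own queue.
- pipeline.id: main
  pipeline.workers: 8        # default: number of CPU cores
  pipeline.batch.size: 125   # default: 125
  # in-flight (memory queue) capacity is roughly 8 * 125 = 1000 events

- pipeline.id: high-throughput
  pipeline.workers: 8
  pipeline.batch.size: 250
  # doubling only the batch size doubles the capacity: 8 * 250 = 2000 events
```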
If you need to absorb "bursty" traffic, the persistent queue (PQ) can be configured instead, and the queue is then bounded by the capacity allocated to it on disk.
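A minimal sketch of enabling the persistent queue (the path shown is an example; by default the PQ lives under `path.data`):

```yaml
# logstash.yml (or per pipeline in pipelines.yml) -- switch the queue to disk
queue.type: persisted                  # default is "memory"
queue.max_bytes: 4gb                   # upper bound on disk usage for the queue
path.queue: /var/lib/logstash/queue    # example location for the queue pages
```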