Commit b66ae42
First draft to describe how to size the native memory used by Beats input to decode a batch of events.
andsel committed Sep 25, 2023
1 parent 339913c commit b66ae42
Showing 1 changed file with 22 additions and 0 deletions: docs/index.asciidoc
@@ -100,6 +100,28 @@ Setting direct memory too low decreases the performance of ingestion.

NOTE: Be sure that heap and direct memory combined do not exceed the total memory available on the server, to avoid an OutOfDirectMemoryError.
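Both limits are configured through JVM flags in Logstash's `config/jvm.options` file. The fragment below is an illustrative sketch only (the 8 GB host and the 4g/2g split are assumptions, not recommendations); `-Xmx` caps the heap and `-XX:MaxDirectMemorySize` caps the direct memory used by Netty's buffers:

```text
## config/jvm.options -- illustrative values, assuming an 8 GB host
## heap
-Xms4g
-Xmx4g
## direct memory available to Netty buffers
-XX:MaxDirectMemorySize=2g
## heap (4g) + direct (2g) leaves headroom below the 8 GB total
```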

[id="plugins-{type}s-{plugin}-memory-sizing"]
===== How to size the direct memory used

To size the direct memory needed to sustain the flow of incoming Beats connections, two values must be known: the average size of the transmitted
log lines and the batch size used by Beats (2048 by default). Of all the open connections, only a
subset is actively processed in parallel by Netty, corresponding to the number of worker threads, which equals the
number of available CPU cores. For each channel being processed a batch of events is read, and, due to the way
the decompression and decoding steps work, two copies of the batch are kept in memory.
The expression used to estimate the maximum direct memory usage is:
["source","text"]
-----
event size * batch size * 2 * netty workers
-----

Assuming a 1 KB event size plus an overhead of ~500 bytes of transferred metadata, on a 12-core CPU the memory
consumption can be estimated as:
["source","text"]
-----
1.5 KB * 2048 * 2 * 12
-----
This totals to about 72 MB. So if you have data about the average size of the events to be processed, you can size
the direct memory accordingly and avoid Out-Of-Memory errors on the direct memory space in a production environment.
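The sizing formula above can be sketched as a small helper; this is an illustration of the arithmetic, not part of the plugin, and the function name and defaults are hypothetical:

```python
# Hedged sketch: estimate the direct memory needed by the Beats input
# using the formula  event size * batch size * 2 * netty workers.

def estimate_direct_memory_bytes(event_size_bytes, batch_size=2048, netty_workers=12):
    """Upper bound on direct memory used to decode in-flight Beats batches.

    The factor of 2 accounts for the two copies of each batch kept in
    memory during decompression and decoding.
    """
    return event_size_bytes * batch_size * 2 * netty_workers

# Worked example from the text: 1 KB events + ~500 B metadata (1.5 KB),
# default batch size of 2048, 12 Netty workers on a 12-core CPU.
estimate = estimate_direct_memory_bytes(event_size_bytes=1536)
print(f"{estimate / (1024 * 1024):.0f} MB")  # prints "72 MB"
```

Re-running the helper with your own measured average event size gives the figure to compare against the configured direct memory limit.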

//Content for Beats
ifeval::["{plugin}"=="beats"]
