Replies: 2 comments 2 replies
-
If you enable cached metrics, there will be one call from each scaler every 30s (pollingInterval), that's all. Based on that you can calculate the maximum time until the system gets updated metrics.
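To make that concrete, a rough upper bound (my own simplified model, not something spelled out in the KEDA docs) is the cache refresh interval plus the HPA's own sync period:

$$
t_{\max} \approx \text{pollingInterval} + \text{horizontal-pod-autoscaler-sync-period}
$$

This ignores request latency and the HPA's own decision time, so the real worst case can be slightly longer.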
-
Hi, first of all, thanks for the amazing work and the responses here. 'useCachedMetrics' does indeed have some explanation in the docs, but I find it rather vague and missing some crucial details that are hard to understand without a deep dive into the code.
My current understanding after having a brief look at the code:
The main reason for asking is that we are seeing HPAs scale slowly due to high latency to the KEDA metrics server, which in turn is caused by slow external metrics. We are hoping that caching metrics will (at least partially) solve this problem, and we are trying to calculate/understand the impact of this change. The exact implementation of the caching logic has a pretty big impact on the end result, so I'm hoping to shed some more light on it with these questions.
-
I would like to understand how the useCachedMetrics parameter works.
Let's say I have 36 scalers, a pollingInterval of 30s, and a --horizontal-pod-autoscaler-sync-period of 15s.
How many calls will be made to Datadog? How frequently? In case of a metric change, what is the maximum amount of time that will elapse before the system takes it into account?
And how can I calculate all this myself?
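For what it's worth, here is a back-of-the-envelope sketch of that calculation, assuming the model described in the answer above (each scaler queries Datadog only from its polling loop once useCachedMetrics is enabled, with one trigger per scaler); the exact numbers depend on implementation details I have not verified in the code:

```python
# Rough estimate only; assumes one Datadog query per scaler per pollingInterval
# when useCachedMetrics is enabled, as described in the answer above.

NUM_SCALERS = 36          # scalers in this scenario
POLLING_INTERVAL_S = 30   # KEDA pollingInterval
HPA_SYNC_PERIOD_S = 15    # --horizontal-pod-autoscaler-sync-period

# Calls to Datadog: one per scaler per polling interval.
calls_per_minute = NUM_SCALERS * (60 / POLLING_INTERVAL_S)
print(f"Datadog calls per minute: {calls_per_minute:.0f}")  # 72

# Worst case before a metric change is acted on: up to one polling interval
# for the cache to refresh, plus up to one HPA sync period for the HPA to
# read the refreshed value (ignoring request latency and scale-up time).
max_staleness_s = POLLING_INTERVAL_S + HPA_SYNC_PERIOD_S
print(f"Worst-case staleness: ~{max_staleness_s}s")  # ~45s
```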