Kafka Scaler Issue in Combination with CPU and Memory Scalers #5658
ganesh-kr asked this question in Q&A / Need Help
We are encountering an issue with our KEDA ScaledObject trigger configuration. When there is lag on the configured Kafka topics, the metric value is displayed as "673m" and then "3122m" as the lag increases. Ideally it should represent the number of messages, but it is shown in this format instead.
This happens when the Kafka scaler is used together with CPU and memory triggers. When the Kafka scaler is used on its own, it works as expected and reports the value as a number of messages.
We need help understanding why the values are represented in this manner, and whether it is possible to use a combination of CPU, memory, and Kafka scalers, or whether the configuration needs to be modified.
Below is my ScaledObject trigger configuration:
triggers:
  - metadata:
      bootstrapServers: ip:port
      consumerGroup: group_name
      lagThreshold: '10'
      offsetResetPolicy: latest
      topic: topic_name_1
    type: kafka
  - metadata:
      bootstrapServers: ip:port
      consumerGroup: group_name
      lagThreshold: '10'
      offsetResetPolicy: latest
      topic: topic_name_2
    type: kafka
  - metadata:
      containerName: xyz
      value: '60'
    metricType: Utilization
    type: cpu
  - metadata:
      containerName: xyz
      value: '60'
    metricType: Utilization
    type: memory
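For context, here is a minimal sketch of the surrounding ScaledObject these triggers live in; the object name, namespace, target Deployment, and replica bounds are placeholders rather than our real values:

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: example-scaledobject        # placeholder name
  namespace: default                # placeholder namespace
spec:
  scaleTargetRef:
    name: example-deployment        # placeholder Deployment running container "xyz"
  minReplicaCount: 1                # placeholder replica bounds
  maxReplicaCount: 20
  triggers:
    - metadata:
        bootstrapServers: ip:port
        consumerGroup: group_name
        lagThreshold: '10'
        offsetResetPolicy: latest
        topic: topic_name_1
      type: kafka
    # ...the second kafka trigger and the cpu/memory triggers listed above follow unchanged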
HPA metrics output:
Metrics: ( current / target )
resource cpu of container "xyz" on pods (as a percentage of request): 4% (573m) / 60%
resource memory of container "xyz" on pods (as a percentage of request): 58% (16434180096) / 60%
"s0-kafka-topic_name_1" (target average value): 1522m / 10 ( exact lag is around 32 messages )
"s1-kafka-topic_name_2" (target average value): 672m / 10