Pods spawned by KEDA are immediately terminated even whilst their scaling threshold is crossed #5350
-
Hi there, I've set up KEDA to consume from a Kafka message queue. The Deployment seems to work correctly and the HPA is created. When I send messages to the Kafka topic, KEDA scales the pods up correctly. Unfortunately, as soon as KEDA starts the pods, they are almost immediately terminated, even while the threshold is still crossed. I'm unsure why this would be happening. I've attached the ScaledObject manifest here for reference:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: test-scaled-object
  namespace: backend
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: test
  pollingInterval: 5
  cooldownPeriod: 1200
  maxReplicaCount: 3
  triggers:
    - type: kafka
      metricType: AverageValue
      metadata:
        bootstrapServers: <to be patched by infrastructure overlay>
        consumerGroup: <to be patched by infrastructure overlay>
        topic: <to be patched by infrastructure overlay>
        lagThreshold: '5'
        activationLagThreshold: '1'
        allowIdleConsumers: "true"
        offsetResetPolicy: latest
        scaleToZeroOnInvalidOffset: 'false'
        excludePersistentLag: 'false'
        version: 2.6.1
```

I'm unsure how to debug this correctly. Are there logs anywhere that show what KEDA is doing and why these pods might be being terminated? I can see that the HPA's metric stays above the activation threshold even while the pods are being terminated. I've attached the logs from the operator. The total lag stays high, but during this time the pods that are created are terminated immediately after being created. Why?
Replies: 1 comment
-
Ah, so I figured out the issue. In the Deployment manifest, you need to omit the `spec.replicas` field, otherwise it tries to override the HPA's replica settings. This issue helped: #3578