Scaler crashing frequently when using with a deployment with pods on fargate instances. #576
Unanswered
prakhar-ti
asked this question in
Q&A
Replies: 0 comments
Hi, I am trying to use the KEDA HTTP scaler on an EKS cluster. The deployment I am trying to scale down to zero runs its pods on Fargate. When the deployment is scaled to zero and a request arrives to scale it back up, the keda-add-ons-http-external-scaler pod crashes repeatedly. There is an inherent problem here: the Fargate instances are released when the deployment is scaled down to zero, so scaling back up requires reprovisioning them, which takes a considerable amount of time. On top of that, I believe the crashing HTTP scaler pod adds further delay, so scaling from 0 to 1 pod takes a very long time and often results in no response in browsers (probably because the request exceeds their response timeout).
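If the Fargate cold start is the dominant cost, one common workaround (a mitigation, not a fix for the crash itself) is to keep at least one replica warm so the HTTP add-on never has to route to a deployment with zero pods. A minimal sketch, assuming the `HTTPScaledObject` schema from the KEDA HTTP add-on; the name `myapp`, the hostname, and the port are placeholders, not values from this issue:

```yaml
apiVersion: http.keda.sh/v1alpha1
kind: HTTPScaledObject
metadata:
  name: myapp                     # placeholder name
spec:
  host: myapp.example.com         # placeholder hostname routed by the interceptor
  scaleTargetRef:
    deployment: myapp             # placeholder deployment name
    service: myapp                # placeholder service name
    port: 8080                    # placeholder service port
  replicas:
    min: 1    # keep one pod warm so Fargate never cold-starts from zero
    max: 10
```

Setting `replicas.min` to 1 trades the cost of one always-on Fargate pod for avoiding the multi-minute reprovisioning delay on the first request.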
I would like to understand what is happening inside the scaler that makes it crash, but I haven't been able to find anything.
I have tried kubectl logs, but it doesn't give any information about the crashes.
I have tried kubectl describe, but it only states that the pod crashed without giving a reason.
Is there any way to find out the reason for these crashes, and is there a possible way to fix them?
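For anyone debugging similar crashes: when a container restarts, `kubectl logs` shows the new container, so the crash output usually lives in the previous container's logs and in the last termination state on the pod status. A sketch of the usual commands, assuming the add-on runs in a `keda` namespace with an `app=keda-add-ons-http-external-scaler` label (both are assumptions; adjust to your install):

```shell
# Logs from the previous (crashed) container instance, not the current one
kubectl logs -n keda deploy/keda-add-ons-http-external-scaler --previous

# Exit code and termination reason (e.g. OOMKilled, Error) of the last crash
kubectl get pod -n keda -l app=keda-add-ons-http-external-scaler \
  -o jsonpath='{.items[0].status.containerStatuses[0].lastState.terminated}'

# Recent cluster events, which surface probe failures and OOM kills
kubectl get events -n keda --sort-by=.lastTimestamp
```

If `lastState.terminated.reason` is `OOMKilled`, raising the scaler's memory limit is the first thing to try; an `Error` with a non-zero exit code points back at the `--previous` logs for a stack trace.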