
MinReplicaCount for ScaledObject can't be set to 0 #5570

Closed
yorik opened this issue Mar 4, 2024 · 13 comments · May be fixed by #5571
Labels: bug (Something isn't working), stale (All issues that are marked as stale due to inactivity)

Comments

yorik (Contributor) commented Mar 4, 2024

Report

KEDA reports the error

MinReplicaCount=1 must be less than MaxReplicaCount=0

when MinReplicaCount=0 and MaxReplicaCount=0.

Expected Behavior

No error is reported, and the object is scaled to 0.

Actual Behavior

KEDA reports an error and does nothing.

Steps to Reproduce the Problem

  1. Create a deployment with a ScaledObject, set MinReplicaCount=0 and MaxReplicaCount=10, and apply it (see the sketch below).
  2. Change MaxReplicaCount to 0 and apply again.
  3. Check the logs for errors.
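
For reference, a minimal ScaledObject sketch for step 1; the object names, Prometheus address, query, and threshold are illustrative, not taken from the report:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: example-scaledobject        # illustrative name
spec:
  scaleTargetRef:
    name: example-deployment        # illustrative target Deployment
  minReplicaCount: 0
  maxReplicaCount: 10               # step 2: change this to 0 and re-apply
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.monitoring.svc:9090
        query: sum(rate(http_requests_total[2m]))
        threshold: "10"
```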

Logs from KEDA operator

MinReplicaCount=1 must be less than MaxReplicaCount=0

KEDA Version

2.13.0

Kubernetes Version

1.29

Platform

Google Cloud

Scaler Details

Prometheus

Anything else?

No response

yorik added the bug label Mar 4, 2024
yorik changed the title from "MinReplicaCount for ScaleObject can't be set to 0" to "MinReplicaCount for ScaledObject can't be set to 0" Mar 4, 2024
JorTurFer (Member) commented:

Hello,
I know that you've already opened a PR with a fix, but I'm interested in why you need to set MaxReplicaCount: 0.
Is it for pausing the autoscaling? If so, KEDA already supports a better approach.

yorik (Contributor, Author) commented Mar 26, 2024

Hi @JorTurFer,
there is a huge difference between pausing KEDA and setting MaxReplicaCount to 0:
pausing leaves all the running pods running;
setting MaxReplicaCount to 0 shuts all the pods down.

In some cases we need to scale the service down completely. Currently we have to add the pause annotation and run kubectl scale --replicas=0 deployment .... This is inconvenient: it requires people to run manual, unreviewed commands (there is always a chance of a mistake), and it leaves records in different places, which makes debugging during incidents harder. A sketch of the current workaround follows.
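
A sketch of the workaround as described, assuming the pause annotation that KEDA documents (the object name is illustrative, and the spec is elided since it is unchanged):

```yaml
# Step 1: pause autoscaling by annotating the existing ScaledObject.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: example-scaledobject             # illustrative name
  annotations:
    autoscaling.keda.sh/paused: "true"   # freezes scaling at the current count
# spec omitted: unchanged from the existing ScaledObject
#
# Step 2 is the manual, unreviewed part being criticized:
#   kubectl scale --replicas=0 deployment <name>
```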

JorTurFer (Member) commented Mar 26, 2024

Yeah, I totally understand the requirement of scaling to 0 in some scenarios, but that's already supported too xD

If you annotate the ScaledObject with autoscaling.keda.sh/paused: "true", you are right: KEDA locks the current instance count as the desired count, leaving all the pods running. But if you annotate the ScaledObject with autoscaling.keda.sh/paused-replicas: "0", KEDA instantly scales the workload to 0, killing all the pods and locking out any scaling.

In fact, you can use autoscaling.keda.sh/paused-replicas: "YOUR_DESIRED_VALUE" to enforce an arbitrary number of pods while the ScaledObject is annotated. When you remove the annotation, scaling resumes from wherever the workload is at the moment of removal. A minimal sketch is below.
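
A minimal sketch of this approach, reusing the illustrative object from the reproduction steps; only the annotation is new:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: example-scaledobject           # illustrative name
  annotations:
    # "0" scales the workload to 0 and locks scaling; any other value
    # enforces that replica count while the annotation is present.
    autoscaling.keda.sh/paused-replicas: "0"
spec:
  scaleTargetRef:
    name: example-deployment            # illustrative target Deployment
  minReplicaCount: 0
  maxReplicaCount: 10
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.monitoring.svc:9090   # illustrative
        query: sum(rate(http_requests_total[2m]))
        threshold: "10"
```

Deleting the annotation (for example with kubectl annotate scaledobject example-scaledobject autoscaling.keda.sh/paused-replicas-) hands control back to the scalers.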


yorik (Contributor, Author) commented Mar 26, 2024

I missed that part somehow, thanks!

We are migrating from https://github.com/jthomperoo/custom-pod-autoscaler and we used to scale down by setting maxReplicas to 0.

I still think that setting MaxReplicaCount to 0 is simpler for this task: there is no need to add custom annotations (it's easy to make a small mistake in them and spend a lot of time trying to understand what's going on), and we won't need to re-educate people in our company. Also, my PR would fix the very strange error MinReplicaCount=1 must be less than MaxReplicaCount=0 for everyone trying to set MaxReplicaCount to 0. I spent hours debugging what was going on.

JorTurFer (Member) commented:

Yeah, definitely, I don't have anything against the PR; I just suggested it as an already available option to speed up your processes xD

zroubalik (Member) commented:

autoscaling.keda.sh/paused-replicas: "0" solves this, and it also removes the HPA. I am not sure we should allow max == 0. Fixing the error would be nice, though.

yorik (Contributor, Author) commented Mar 27, 2024

Removing the HPA is undesired in our case: if the HPA stayed in place, it would make sure the deployment stays scaled down, even if someone scaled it up by mistake.

yorik (Contributor, Author) commented Mar 28, 2024

BTW, there are more people expecting this behaviour; see #3013 and #3014.

What would be a downside of supporting it?

stale bot commented May 27, 2024

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.

stale bot added the stale label May 27, 2024
yorik (Contributor, Author) commented May 28, 2024

This issue can be fixed by #5571; is there any reason not to merge it?

stale bot removed the stale label May 28, 2024
stale bot commented Jul 27, 2024

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.

stale bot added the stale label Jul 27, 2024
yorik (Contributor, Author) commented Jul 29, 2024

This issue can be fixed by #5571; is there any reason not to merge it?

stale bot commented Aug 5, 2024

This issue has been automatically closed due to inactivity.

stale bot closed this as completed Aug 5, 2024