Avoid restarting pods when Deployment doesn't change #2772
@grosser am I wrong in my analysis? Is this way of tracking a leftover from the pre-Deployment days?
I'd recommend having a 1-off stage that can be used to deploy to these (possibly changing the deploy-group as needed). It would be possible to do this Deployment -> ReplicaSet -> Pod lookup, but then it also needs to work for Pod/DaemonSet/StatefulSet/Job ... so it smells like a lot of work.
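For context, a rough sketch of why that lookup is kind-specific: each built-in controller stamps a different revision label on the pods it owns, so the code would differ per kind (namespace and workload names below are made up):

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Deployments: pods carry pod-template-hash, but the value has to be read off
# the Deployment's *current ReplicaSet* first (an extra API hop).
# StatefulSets / DaemonSets: pods carry controller-revision-hash.
# Jobs: pods carry job-name (and controller-uid).
statefulset_pods = core.list_namespaced_pod(
    "default", label_selector="controller-revision-hash=web-5d8c9b7f6d"
)
job_pods = core.list_namespaced_pod(
    "default", label_selector="job-name=backfill-2018-06"
)
```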
The problem is that we'd need a lot more knowledge on the part of operators in order to safely make resource/replication changes while under load. Causing a full restart of existing processes each time is not great if we're already running up against our capacity, and this will be true of any service.
In practice, we'd resort to mutating the Deployment directly through kubectl, which would have unforeseen consequences wrt Samson.
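For illustration only, this is roughly what such a direct mutation looks like via the Python client (the deployment and container names are placeholders, not our actual setup):

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Bump resources on one container of one Deployment, bypassing the deploy tool.
# Samson's template no longer matches the cluster afterwards, so a later
# deploy can silently revert this.
patch = {
    "spec": {"template": {"spec": {"containers": [
        {"name": "app", "resources": {"requests": {"cpu": "500m", "memory": "512Mi"}}}
    ]}}}
}
apps.patch_namespaced_deployment(name="my-service", namespace="production", body=patch)
```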
Sounds like we want autoscaling :) ... or a dedicated "scale this" action, otherwise it would still redeploy if the env changed or some other config ...
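As a sketch of the autoscaling option (target names and thresholds are made up), an HPA would take replica management out of the deploy path entirely:

```python
from kubernetes import client, config

config.load_kube_config()
autoscaling = client.AutoscalingV1Api()

# Let Kubernetes manage replicas, so scaling never needs a deploy at all.
hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="my-service"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="my-service"
        ),
        min_replicas=3,
        max_replicas=20,
        target_cpu_utilization_percentage=70,
    ),
)
autoscaling.create_namespaced_horizontal_pod_autoscaler("production", hpa)
```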
This is not just about scaling; it also comes up when we realize we need to change the resource limits.
I think changing resources always requires a re-deploy, whereas updating replicas could be done without. Having a lightweight UI that changes scale sounds fine to me, but one that re-deploys everything might be a bit dangerous.
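A minimal sketch of the replicas-only path, assuming the Python client and placeholder names: the scale subresource never touches the pod template, so nothing restarts:

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Touches only spec.replicas via the scale subresource; the pod template (and
# its pod-template-hash) is unchanged, so no pods are restarted.
apps.patch_namespaced_deployment_scale(
    name="my-service",
    namespace="production",
    body={"spec": {"replicas": 10}},
)
```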
The problem is that we typically modify the resources for just a single Deployment, and sometimes only in a single Deploy Group. This doesn't actually change any other Deployments, yet all of them are restarted.
Restarting, scaling down, and lowering resources might be kinda dangerous though, so we'd need that to be documented somewhere and buddy-checked. So it needs to be a deploy ... either make a stage that only goes to a single deploy group, or add an "only scale" deploy flag that then executes the scale action and does no real updates (if old resources == new resources), or updates/restarts only the changed Deployments. That sounds like a fair amount of work though :D
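A very rough sketch of that "no real updates if nothing changed" check (a hypothetical helper, not Samson code; a real diff would also have to cope with fields the API server defaults in):

```python
from kubernetes import client

def apply_deployment(apps: client.AppsV1Api, desired: dict, namespace: str) -> None:
    """Hypothetical helper: only touch a Deployment whose pod template changed."""
    name = desired["metadata"]["name"]
    live = apps.read_namespaced_deployment(name, namespace)

    # Naive template comparison; a real implementation would need to ignore
    # server-defaulted fields.
    live_template = apps.api_client.sanitize_for_serialization(live.spec.template)
    template_changed = desired["spec"]["template"] != live_template

    desired_replicas = desired["spec"].get("replicas", live.spec.replicas)

    if template_changed:
        # Real update: pods roll, as they should.
        apps.replace_namespaced_deployment(name, namespace, desired)
    elif desired_replicas != live.spec.replicas:
        # Scale-only update: pod template untouched, nothing restarts.
        apps.patch_namespaced_deployment_scale(
            name, namespace, body={"spec": {"replicas": desired_replicas}}
        )
    # else: nothing changed, leave the Deployment and its pods alone
```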
Hmm...
In our setup, we have a single stage updating an increasing number of Deployments. In cases where we need to modify the resource requests for a single role, we would like to avoid disrupting unrelated Deployments.
I believe the default behavior of Deployments is to no-op if there are no changes to the pod template. However, since Samson injects extra metadata into pods on each release (the release id, for instance), every Deployment gets updated.
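For example (the actual label key Samson injects may differ; `release_id` here is purely illustrative), any per-release label added to the pod template changes the template hash and forces a rollout:

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Adding a per-release label to the *pod template* changes the template hash,
# so every Deployment rolls even when nothing else changed.
patch = {"spec": {"template": {"metadata": {"labels": {"release_id": "1234"}}}}}
apps.patch_namespaced_deployment("my-service", "production", body=patch)
```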
I don't think this is necessary if the purpose is to track the individual pods in a Deployment. For a given Deployment, you can get the current ReplicaSet, and for that ReplicaSet, you can get the `pod-template-hash` that each pod gets labelled with, and this allows you to query the pods.
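A sketch of that lookup with the Python client, assuming placeholder names for the namespace and Deployment:

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()
core = client.CoreV1Api()

namespace, name = "production", "my-service"  # placeholders

# 1. The Deployment's revision annotation identifies its current ReplicaSet.
dep = apps.read_namespaced_deployment(name, namespace)
revision = dep.metadata.annotations["deployment.kubernetes.io/revision"]
selector = ",".join(f"{k}={v}" for k, v in dep.spec.selector.match_labels.items())

rs_list = apps.list_namespaced_replica_set(namespace, label_selector=selector)
current_rs = next(
    rs for rs in rs_list.items
    if any(ref.uid == dep.metadata.uid for ref in (rs.metadata.owner_references or []))
    and rs.metadata.annotations.get("deployment.kubernetes.io/revision") == revision
)

# 2. The ReplicaSet's pod-template-hash label selects exactly its pods.
pod_hash = current_rs.metadata.labels["pod-template-hash"]
pods = core.list_namespaced_pod(
    namespace, label_selector=f"{selector},pod-template-hash={pod_hash}"
)
for pod in pods.items:
    print(pod.metadata.name, pod.status.phase)
```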