update deployment via template hash change

Currently, when a continuous job needs to be updated, e.g. because its configuration changed, we delete and then re-create the Deployment object.
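For reference, the current flow is roughly equivalent to the following (a minimal sketch using the Python kubernetes client; the function name is illustrative, not the actual jobs-framework code):

from kubernetes import client


def replace_deployment(name: str, namespace: str, new_deployment: dict) -> None:
    # Current behaviour: drop the whole Deployment (and, with it, its Pods),
    # then create a fresh Deployment from the new configuration.
    apps = client.AppsV1Api()
    apps.delete_namespaced_deployment(name=name, namespace=namespace)
    apps.create_namespaced_deployment(namespace=namespace, body=new_deployment)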

While this works to transition the Deployment and Pod objects to the new configuration, it has some nasty side effects, namely:

There is no 'rolling restart', even though the Deployment supports it:

tools.cluebotng-review@tools-bastion-15:~$ kubectl get deployment cluebotng-reviewer -o json | jq .spec.strategy
{
  "rollingUpdate": {
    "maxSurge": "25%",
    "maxUnavailable": "25%"
  },
  "type": "RollingUpdate"
}

Once the Deployment object is deleted, the Pod objects are also deleted, resulting in a service outage until the new Deployment object brings the new Pod objects to Ready.

This can take anywhere from a few seconds to many minutes, especially when the cluster is resource constrained and the Scheduler delays the new Pod objects from starting their containers.

Allowing the runtime (Kubernetes) to handle the configuration change causes the new Pod objects to be created before the old ones are terminated, and the rollout executes in a rolling manner, which preserves availability and capacity in line with the Deployment's strategy when replicas > 1.
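A minimal sketch of the approach in the title, assuming a Python implementation (the annotation key and helper names are illustrative): hash the rendered pod template, store the hash as a pod template annotation, and patch the existing Deployment instead of deleting it. When the configuration changes, the hash changes, so Kubernetes sees a pod template update and performs the RollingUpdate shown above.

import hashlib
import json

from kubernetes import client
from kubernetes.client.rest import ApiException


def template_hash(pod_template: dict) -> str:
    # Deterministic hash of the rendered pod template; any configuration
    # change results in a different value.
    return hashlib.sha256(
        json.dumps(pod_template, sort_keys=True).encode()
    ).hexdigest()[:16]


def apply_deployment(name: str, namespace: str, deployment: dict) -> None:
    # The annotation key is illustrative, not necessarily what the jobs
    # framework ends up using.
    template = deployment["spec"]["template"]
    annotations = template["metadata"].setdefault("annotations", {})
    annotations["toolforge.org/template-hash"] = template_hash(template)

    apps = client.AppsV1Api()
    try:
        # Patch in place: Kubernetes keeps the old Pods running until the
        # new ones are Ready, per the RollingUpdate strategy.
        apps.patch_namespaced_deployment(name=name, namespace=namespace, body=deployment)
    except ApiException as e:
        if e.status == 404:
            # First run for this job: nothing to patch yet.
            apps.create_namespaced_deployment(namespace=namespace, body=deployment)
        else:
            raise

The exact hash value does not matter; it only needs to be stable for identical configurations and to change whenever the configuration does.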

Depends-On: !218 (merged)
Bug: T403321

