
restart deployment via template hash change

Bug/Feature: T403321

Currently the restart logic deletes all Pod objects, which the Deployment object then re-creates.

While this works to 'restart' the 'service' (continuous job), it has some nasty side effects, namely:

There is no 'rolling restart', even though the Deployment supports it:

tools.cluebotng-review@tools-bastion-15:~$ kubectl get deployment cluebotng-reviewer -o json | jq .spec.strategy
{
  "rollingUpdate": {
    "maxSurge": "25%",
    "maxUnavailable": "25%"
  },
  "type": "RollingUpdate"
}

The pods are Terminated, then the Deployment controller notices new pods are required; these go through ContainerCreating and eventually become Ready.

From launch until they are Ready, the load balancer will not send traffic to the new pods, causing a service outage. When the cluster is resource constrained, this outage can last several minutes.

Allowing the runtime (Kubernetes) to handle the generation change causes the new Pod objects to be created before the old ones are Terminated. The change also executes in a rolling manner, which preserves availability and capacity in line with replicas being > 1.
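As a minimal sketch of the approach (the deployment name is taken from the output above; the annotation key is the one `kubectl rollout restart` itself uses): bumping an annotation on the pod template changes the pod-template hash, so the Deployment controller performs a RollingUpdate rather than a delete-and-recreate.

```shell
# Build a merge patch that bumps a pod-template annotation. Changing the
# template changes its hash, so the Deployment controller rolls out new
# pods (respecting maxSurge/maxUnavailable) before terminating old ones.
PATCH=$(printf '{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt":"%s"}}}}}' \
  "$(date -u +%Y-%m-%dT%H:%M:%SZ)")
echo "$PATCH"

# Apply against the Deployment (requires cluster access, so shown commented):
# kubectl patch deployment cluebotng-reviewer --type merge -p "$PATCH"
```

This is the same mechanism `kubectl rollout restart deployment/cluebotng-reviewer` uses under the hood.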

Edited by DamianZaremba
