This is a lightweight design document for simple rolling update in `kubectl`.

The complete execution flow is described under Execution Details below. See the example of rolling update for more information.
## Lightweight rollout

Assume that we have a current replication controller named `foo` and it is running image `image:v1`.

```shell
kubectl rolling-update foo [foo-v2] --image=myimage:v2
```
If the user doesn't specify a name for the 'next' replication controller, then the 'next' replication controller is renamed to the name of the original replication controller.

Obviously there is a race here: if you kill the client between deleting `foo` and creating the new version of `foo`, you might be surprised about what is there, but I think that's OK. See Recovery below.
If the user does specify a name for the 'next' replication controller, then the 'next' replication controller is retained with its existing name, and the old `foo` replication controller is deleted. For the purposes of the rollout, we add a unique-ifying label `kubernetes.io/deployment` to both the `foo` and `foo-next` replication controllers. The value of that label is the hash of the complete JSON representation of the `foo-next` or `foo` replication controller. The name of this label can be overridden by the user with the `--deployment-label-key` flag.
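
As a rough sketch of how such a label value could be computed, the snippet below hashes a JSON-serialized controller. The `minimalRC` type and the choice of FNV-1a are assumptions for illustration only; the design above only says "the hash of the complete JSON representation".

```go
package main

import (
	"encoding/json"
	"fmt"
	"hash/fnv"
)

// minimalRC is a hypothetical stand-in for a replication controller spec,
// used only for this sketch.
type minimalRC struct {
	Name     string `json:"name"`
	Image    string `json:"image"`
	Replicas int    `json:"replicas"`
}

// deploymentLabelValue hashes the complete JSON representation of the
// controller, as described above. FNV-1a is an illustrative choice, not
// necessarily what kubectl uses.
func deploymentLabelValue(rc minimalRC) string {
	data, _ := json.Marshal(rc)
	h := fnv.New32a()
	h.Write(data)
	return fmt.Sprintf("%d", h.Sum32())
}

func main() {
	next := minimalRC{Name: "foo-next", Image: "myimage:v2", Replicas: 3}
	// The label key defaults to kubernetes.io/deployment and can be
	// overridden with --deployment-label-key.
	fmt.Printf("kubernetes.io/deployment=%s\n", deploymentLabelValue(next))
}
```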
## Recovery

If a rollout fails or is terminated in the middle, it is important that the user be able to resume the roll out. To facilitate recovery in the case of a crash of the updating process itself, we add the following annotations to each replication controller in the `kubernetes.io/` annotation namespace:

* `desired-replicas` - the desired number of replicas for this replication controller (either N or zero).
* `update-partner` - a pointer to the replication controller resource that is the other half of this update (syntax `<name>`; the namespace is assumed to be identical to the namespace of this replication controller).

Recovery is achieved by issuing the same command again:
```shell
kubectl rolling-update foo [foo-v2] --image=myimage:v2
```
Whenever the rolling update command executes, the kubectl client looks for replication controllers called `foo` and `foo-next`. If they both exist, an attempt is made to roll `foo` to `foo-next`. If `foo-next` does not exist, then it is created, and the rollout is a new rollout. If `foo` doesn't exist, then it is assumed that the rollout is nearly completed, and `foo-next` is renamed to `foo`. Details of the execution flow are given below.
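
A minimal sketch of that decision logic, assuming a hypothetical in-memory lookup in place of real API calls (none of the names below are kubectl's actual internals):

```go
package main

import "fmt"

// rc is a hypothetical, minimal replication controller record for this sketch.
type rc struct {
	Name     string
	Replicas int
}

// store stands in for the API server; a missing key means "not found".
var store = map[string]*rc{}

// resume mirrors the recovery rules above: roll, create, or rename
// depending on which controllers already exist.
func resume(fooName, nextName string) {
	foo, next := store[fooName], store[nextName]
	switch {
	case foo != nil && next != nil:
		fmt.Println("both exist: resume rolling", fooName, "to", nextName)
	case foo != nil:
		fmt.Println(nextName, "does not exist: create it and start a new rollout")
	case next != nil:
		fmt.Println(fooName, "does not exist: rollout nearly done, rename", nextName, "to", fooName)
	default:
		fmt.Println("neither controller exists: nothing to update")
	}
}

func main() {
	store["foo"] = &rc{Name: "foo", Replicas: 3}
	resume("foo", "foo-next") // prints the "create it and start a new rollout" branch
}
```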
## Aborting a rollout

Abort is assumed to want to reverse a rollout in progress.

```shell
kubectl rolling-update foo [foo-v2] --rollback
```

This is really just semantic sugar for:

```shell
kubectl rolling-update foo-v2 foo
```

With the added detail that it moves the `desired-replicas` annotation from `foo-v2` to `foo`.
## Execution Details

For the purposes of this example, assume that we are rolling from `foo` to `foo-next` where the only change is an image update from `v1` to `v2`.

If the user doesn't specify a `foo-next` name, then it is discovered from the `update-partner` annotation on `foo`. If that annotation doesn't exist, then `foo-next` is synthesized using the pattern `<controller-name>-<hash-of-next-controller-JSON>`.
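
A sketch of that discovery-or-synthesis step, again using FNV-1a purely as a placeholder for "hash of the next controller JSON":

```go
package main

import (
	"encoding/json"
	"fmt"
	"hash/fnv"
)

// nextControllerName returns the rollout partner recorded in the
// update-partner annotation if present, otherwise synthesizes
// <controller-name>-<hash-of-next-controller-JSON>. The hash function is an
// assumption for this sketch.
func nextControllerName(name string, annotations map[string]string, nextSpec interface{}) string {
	if partner, ok := annotations["kubernetes.io/update-partner"]; ok {
		return partner
	}
	data, _ := json.Marshal(nextSpec)
	h := fnv.New32a()
	h.Write(data)
	return fmt.Sprintf("%s-%d", name, h.Sum32())
}

func main() {
	next := map[string]string{"image": "myimage:v2"}
	fmt.Println(nextControllerName("foo", map[string]string{}, next)) // e.g. foo-1234567890
}
```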
### Initialization

* If `foo` and `foo-next` do not exist:
  * Exit, and indicate an error to the user that the specified controller doesn't exist.
* If `foo` exists, but `foo-next` does not:
  * Create `foo-next`, populate it with the `v2` image, and set the `desired-replicas` annotation to `foo.Spec.Replicas` (see the sketch after this list).
  * Goto Rollout.
* If `foo-next` exists, but `foo` does not:
  * Assume that we are in the rename phase.
  * Goto Rename.
* If both `foo` and `foo-next` exist:
  * Assume that we are in a partial rollout.
  * If `foo-next` is missing the `desired-replicas` annotation:
    * Set the `desired-replicas` annotation on `foo-next` using the current size of `foo`.
  * Goto Rollout.
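
The step that creates `foo-next` might look roughly like this; `rc` and `initNext` are hypothetical names, and only the new image and the seeding of the `desired-replicas` annotation from `foo`'s current replica count are taken from the steps above:

```go
package main

import (
	"fmt"
	"strconv"
)

// rc is a hypothetical minimal replication controller for this sketch.
type rc struct {
	Name        string
	Image       string
	Replicas    int
	Annotations map[string]string
}

const desiredReplicasKey = "kubernetes.io/desired-replicas"

// initNext builds foo-next with the new image and records foo's current
// replica count in the desired-replicas annotation, per Initialization above.
func initNext(foo *rc, nextName, newImage string) *rc {
	return &rc{
		Name:     nextName,
		Image:    newImage,
		Replicas: 0, // grown incrementally during Rollout
		Annotations: map[string]string{
			desiredReplicasKey: strconv.Itoa(foo.Replicas),
		},
	}
}

func main() {
	foo := &rc{Name: "foo", Image: "image:v1", Replicas: 3}
	next := initNext(foo, "foo-next", "myimage:v2")
	fmt.Println(next.Name, "desired-replicas =", next.Annotations[desiredReplicasKey])
}
```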
### Rollout

* While the size of `foo-next` < the `desired-replicas` annotation on `foo-next` (sketched below):
  * Increase the size of `foo-next`.
  * If the size of `foo` > 0, decrease the size of `foo`.
* Goto Rename.
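
In plain terms, the loop grows `foo-next` one replica at a time and shrinks `foo` as it goes. A self-contained sketch with plain integers standing in for live controller sizes:

```go
package main

import "fmt"

// rollout follows the loop above: grow foo-next toward desiredReplicas,
// shrinking foo whenever it still has replicas left.
func rollout(fooSize, nextSize, desiredReplicas int) (int, int) {
	for nextSize < desiredReplicas {
		nextSize++ // increase size of foo-next
		if fooSize > 0 {
			fooSize-- // decrease size of foo
		}
	}
	return fooSize, nextSize
}

func main() {
	foo, next := rollout(3, 0, 3)
	fmt.Printf("foo=%d foo-next=%d\n", foo, next) // foo=0 foo-next=3
}
```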
### Rename

* Delete `foo`.
* Create a `foo` that is identical to `foo-next`.
* Delete `foo-next`.
### Abort

* If `foo-next` doesn't exist:
  * Exit, and indicate to the user that they may want to simply do a new rollout with the old version.
* If `foo` doesn't exist:
  * Exit, and indicate not found to the user.
* Otherwise, `foo-next` and `foo` both exist:
  * Set the `desired-replicas` annotation on `foo` to match the annotation on `foo-next` (sketched below).
  * Goto Rollout, with `foo` and `foo-next` trading places.
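
A sketch of the annotation hand-off that makes the reversal work; the types and function names are hypothetical, but the annotation copy and the role swap follow the steps above:

```go
package main

import "fmt"

// rc is a hypothetical minimal controller record for this sketch.
type rc struct {
	Name        string
	Annotations map[string]string
}

const desiredReplicasKey = "kubernetes.io/desired-replicas"

// abortRollout copies the desired-replicas annotation from foo-next back
// onto foo and returns the pair in swapped order, so the caller can re-enter
// Rollout with foo and foo-next trading places.
func abortRollout(foo, next *rc) (rollFrom, rollTo *rc) {
	foo.Annotations[desiredReplicasKey] = next.Annotations[desiredReplicasKey]
	return next, foo
}

func main() {
	foo := &rc{Name: "foo", Annotations: map[string]string{}}
	next := &rc{Name: "foo-next", Annotations: map[string]string{desiredReplicasKey: "3"}}
	from, to := abortRollout(foo, next)
	fmt.Println("rolling", from.Name, "back to", to.Name) // rolling foo-next back to foo
}
```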