1. Overview

Kubernetes is a powerful container orchestration tool that can be used to manage microservice architectures. However, one common issue when deploying such architectures is that certain services must be started in a specific order due to their interdependencies.

The best practice when designing our applications is to ensure that containers are stateless. Stateless containers can be restarted, replaced, or scaled at any time without affecting the system’s overall health. However, in real-world scenarios, there can be cases where some level of sequencing is necessary.

In this tutorial, we’ll explore different ways of initializing Services, Pods, and Deployments in a specific order.

2. Init Containers

Let’s start with a case where we need a specific initialization order within a single Pod. To accomplish that, we can use the init containers feature. Init containers are a Kubernetes mechanism for carrying out specialized tasks before the application containers start. Unlike regular containers, they run to completion, and their typical job is to set up the proper environment for the main application.

Let’s review two key aspects that differentiate init containers:

  • Init containers execute to completion before the application containers boot up.
  • The sequential execution of init containers ensures each completes successfully before the next begins.

Because they must run to completion, init containers do not support the lifecycle field or the livenessProbe, readinessProbe, and startupProbe fields that regular containers use. When an init container fails, Kubernetes restarts it until it completes successfully. However, if the Pod’s restartPolicy is Never, a failing init container causes the whole Pod to be marked as failed.

Let’s explore an example of using init containers:

apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: main-app
    image: main-app-image
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  initContainers:
  - name: init-myservice
    image: alpine
    command: ['sh', '-c', 'until nslookup myservice.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for myservice; sleep 2; done;']
  - name: init-mydb
    image: alpine
    command: ['sh', '-c', 'until nslookup mydb.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for mydb; sleep 2; done;']

In this example, two init containers, init-myservice and init-mydb, are defined to wait for the availability of myservice and mydb services, respectively.
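For the init containers to complete, the myservice and mydb Services must eventually exist so that their DNS names resolve. A minimal sketch of one such Service follows; the selector label and ports are illustrative assumptions, not part of the original example:

apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  selector:
    app: myservice   # illustrative label; must match the backing Pods
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080

Once this Service is created, its DNS record becomes resolvable, the corresponding init container exits successfully, and the Pod proceeds to the next init container or to the main application.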

3. Pods Startup Order

Next, we’ll explore the case when we need to wait for a Pod to get into the Ready state. This technique is particularly useful when we need to ensure that a Pod is fully operational before moving on to the next step in our workflow or pipeline.

In Kubernetes, we achieve this by combining the kubectl run and kubectl wait commands. Let’s see an example using the Apache HTTP Server image:

$ kubectl run myhttpd -n default --image=httpd:latest --restart=Never
$ kubectl wait pods -n default -l run=myhttpd --for condition=Ready --timeout=90s

Of course, we can wait for any other condition. To inspect a Pod’s conditions, we can run:

$ kubectl get pod -n default -l run=myhttpd -o jsonpath="{.items[*].status.conditions}" | jq
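The command prints the Pod’s conditions array as JSON. For a healthy Pod, the output resembles the following illustrative sketch (fields such as timestamps are elided):

[
  { "type": "Initialized",     "status": "True" },
  { "type": "Ready",           "status": "True" },
  { "type": "ContainersReady", "status": "True" },
  { "type": "PodScheduled",    "status": "True" }
]

Any of these condition types can be passed to kubectl wait via the --for condition flag.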

In this case, we start by deploying a new Pod named myhttpd in the default namespace using the httpd image. The --restart=Never option ensures the container is not restarted automatically if it exits.

Next, we leverage the kubectl wait command to block until the Pod’s Ready condition becomes true. This condition signifies that the Pod is fully operational and can handle requests. The --timeout=90s parameter limits how long we’re willing to wait for the Pod to reach this state. If the Pod does not become Ready within 90 seconds, the command exits with a non-zero status.

In summary, this approach provides a reliable method to deploy a Pod and confirm its readiness before proceeding.
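It’s worth noting that the Ready condition is driven by the container’s readiness probe, if one is defined. A declarative equivalent of the kubectl run command above, extended with an illustrative readinessProbe (the probe values are assumptions, not part of the original command), could look like:

apiVersion: v1
kind: Pod
metadata:
  name: myhttpd
  labels:
    run: myhttpd
spec:
  restartPolicy: Never
  containers:
  - name: myhttpd
    image: httpd:latest
    readinessProbe:          # the Pod turns Ready only after this check passes
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 2
      periodSeconds: 5

Without a readiness probe, Kubernetes considers the container ready as soon as it’s running, so kubectl wait may return before the application can actually serve traffic.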

4. Deployments Startup Order

Finally, let’s find out how to wait for a Deployment to reach its desired state. A Deployment is a higher-level concept that manages ReplicaSets, which, in turn, control Pod lifecycles. Deployments are especially valuable for updating applications, scaling the number of Pod instances, and rolling back to earlier application versions.

Deploying an application and monitoring its rollout status in a Deployment follows a similar pattern to waiting for Pod readiness. However, Deployments add an extra layer of control, making monitoring their rollout crucial in maintaining application health.

Let’s illustrate this using an NGINX Deployment:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/application/deployment.yaml

This command creates a Deployment that manages ReplicaSets and, subsequently, Pods, to ensure our NGINX application maintains the desired state.

To ensure our Deployment effectively reflects recent updates, we trigger a rollout and monitor its status:

$ kubectl rollout restart deployment nginx-deployment -n default
$ kubectl rollout status deployment nginx-deployment -n default --timeout=90s

With kubectl rollout restart, we initiate a new rollout of the Deployment, causing it to recreate its Pods (and, with an imagePullPolicy of Always, pick up a newly pushed image for the same tag). We then use kubectl rollout status to monitor the rollout, blocking until it completes or until the specified 90-second timeout is reached.
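Since kubectl rollout status only reports success once the updated Pods become Ready, the reliability of this wait hinges on the Pod template’s readiness probes. A hedged sketch of adding one to the NGINX Deployment’s Pod template (the probe values are illustrative assumptions):

spec:
  template:
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
            path: /
            port: 80
          periodSeconds: 5

With such a probe in place, the rollout is only considered complete once the new Pods are actually serving traffic, which makes the wait a meaningful sequencing point in a pipeline.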

5. Conclusion

Throughout this article, we’ve specifically focused on strategies for maintaining a specific order of operations in our clusters. We’ve discovered that despite Kubernetes’s preference for stateless, independent containers, there are still ways to control the sequence of deployments when required.

By leveraging the power of init containers, we’ve seen how we can establish necessary prerequisites and settings before our primary application containers are launched.

When working with individual Pods, we’ve utilized kubectl wait to control the workflow execution by waiting for a Pod to reach the Ready state before progressing. This tool allows us to create a sequence of operations where one task is dependent on the successful completion of another, thus achieving an orderly execution process.

Lastly, we’ve found out how to control the order of Deployments. By triggering a rollout and actively monitoring its status, we ensured that our Deployments transitioned smoothly through updates, maintaining the sequence of operations and enhancing application stability.