1. Overview
In this tutorial, we’ll learn about the sidecar pattern in general. Then, we’ll see how to implement the sidecar pattern in Kubernetes by running multiple containers in the same pod.
2. The Sidecar Pattern
The sidecar pattern is a design pattern in which supporting services run in isolation from the main application. This isolation is best achieved by running the supporting service in a separate container alongside the main application. These supporting services usually handle cross-cutting concerns, such as log shipping and network-level fault tolerance mechanisms.
2.1. The Problem
To appreciate the sidecar pattern, let’s think about the requirements of building a service. Besides its main business logic, the service also has to deal with auxiliary concerns. For example, it could be shipping logs to a remote target and implementing a suite of fault tolerance mechanisms at the network layer.
If we implement the supporting service logic inside the business code, we’ll repeat that code across different services. Besides that, the supporting services will be restricted to the programming language we use for the main business logic. This incurs a high maintenance cost for the development team.
By adopting the sidecar pattern, the supporting service logic can be encapsulated in a separate container. Then, we construct the system by composing the main application container with the supporting service container.
2.2. Benefits of Sidecar
The isolation that the sidecar container provides enables a clean separation of concerns for the main application. For example, the main application only has to write its logs to a shared volume, without worrying about the shipping mechanism.
Additionally, network-layer fault tolerance handling can be delegated to a sidecar container. For example, the Istio service mesh is one project that epitomizes the idea of the sidecar. It works by injecting a sidecar proxy that intercepts all incoming and outgoing traffic. By acting as the gateway for the network traffic, it can perform network-layer manipulation externally.
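For instance, assuming Istio is already installed on the cluster, automatic sidecar injection is typically enabled by labeling a namespace. Istio’s admission webhook then injects the proxy container into every pod created in that namespace:
$ kubectl label namespace default istio-injection=enabled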
Furthermore, we can compose a system using supporting services from different languages using the sidecar pattern. On the flip side, without the sidecar, we’d be coupling the supporting services code into the main business logic, meaning we could only use the libraries available in the language in which the main service is written.
2.3. The Lifecycle of a Sidecar
One important requirement of a sidecar is that its lifecycle has to be tightly coupled with the main application’s lifecycle. In other words, the sidecar container should start up, shut down, and scale with the main container.
This is an important characteristic that differentiates a sidecar process from a standalone service. The sidecar will only serve the main application, and its lifecycle starts and ends with the main application’s lifecycle. On the other hand, a standalone service might serve more than one consumer and, therefore, have its own independent lifecycle.
3. Sidecar Container in Kubernetes
In Kubernetes, the pod is the smallest deployable unit. Within a pod, we can run more than one container. This is achievable by defining more containers under the containers field of the PodSpec. Furthermore, these containers share the same network namespace and can share storage volumes. Therefore, implementing a sidecar pattern in a Kubernetes pod is done simply by running the sidecar container in the same pod as the main application container.
3.1. Communication Between Containers
The containers within a single pod share the network namespace. Therefore, the sidecar container can communicate with the main application container through localhost. One constraint is that the sidecar container and the main application container must not listen on the same port, because only a single process can listen on a given port within the same network namespace.
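As a minimal sketch of this idea (the images and commands here are purely illustrative), the following pod runs an nginx container listening on port 80, while a sidecar container polls it over localhost:
apiVersion: v1
kind: Pod
metadata:
  name: localhost-demo
spec:
  containers:
    - name: myapp
      image: nginx:latest
      # nginx listens on port 80 inside the shared network namespace
    - name: sidecar
      image: alpine:latest
      # the sidecar reaches the main container through localhost
      command: ['sh', '-c', 'while true; do wget -qO- http://localhost:80 > /dev/null && echo "myapp reachable"; sleep 5; done']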
Another way for the sidecar container to interact with the main application container is by writing to the same volume. One way to share a volume between containers in the same pod is to use the emptyDir volume. The emptyDir volume is created when the pod is created, and all the containers within the pod can read from and write to it. However, the content of an emptyDir volume is ephemeral by nature and is erased when the pod is deleted.
For example, we can run a log shipper container alongside our main application container. The main application container writes the logs to the emptyDir volume, and the log shipper container tails the logs and ships them to a remote target.
3.2. Running a Sidecar Container in a Pod
To run a sidecar in a pod, we add the sidecar container definition to the containers field of the PodSpec. For example, we can add a hypothetical log shipper container, logshipper, alongside our myapp container in a Kubernetes Deployment object:
$ cat log-shipper-sidecar-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: alpine:latest
          command: ['sh', '-c', 'while true; do echo "logging" >> /opt/logs.txt; sleep 1; done']
          volumeMounts:
            - name: data
              mountPath: /opt
        - name: logshipper
          image: alpine:latest
          command: ['sh', '-c', 'tail -F /opt/logs.txt']
          volumeMounts:
            - name: data
              mountPath: /opt
      volumes:
        - name: data
          emptyDir: {}
The manifest above creates a Deployment object, which in turn spawns one replica of the myapp pod. Within the myapp pod, two containers are running: myapp and logshipper. The idea is that the myapp container acts as the main application, writing log lines to /opt/logs.txt, while the logshipper sidecar container tails the log file at /opt/logs.txt. To make both containers see the same file, we mount the emptyDir volume into both of them.
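Assuming a cluster is reachable via kubectl, we can apply the manifest and read the sidecar’s output. The repeated "logging" lines confirm that the sidecar sees the file written by the main container:
$ kubectl apply -f log-shipper-sidecar-deployment.yaml
$ kubectl get pods -l app=myapp
$ kubectl logs deployment/myapp -c logshipper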
4. Lifecycle Issues of a Sidecar Container
There are some caveats we have to take note of when implementing a sidecar container in Kubernetes. Specifically, the issues stem from the fact that Kubernetes doesn’t differentiate between the containers in a pod. In other words, there’s no concept of primary or secondary containers from Kubernetes’ perspective.
4.1. Non-Sequential Starting Order
When a pod starts, the kubelet starts all the containers without waiting for one to become ready before starting the next, and we cannot control the order in which they become available. For cases that require the sidecar container to be ready before the main application container, this can be problematic. One workaround is to add a custom delay to the main application container’s startup. However, the best solution is to design the containers to be independent of the starting sequence.
If we require some initialization work before the main application starts, we should use initContainers. These differ from normal containers in that they always run to completion, one after another, before Kubernetes starts the regular containers.
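To make this concrete, here’s a minimal sketch (the image, commands, and file names are only placeholders) of a pod in which an init container prepares a shared volume before the main container starts:
apiVersion: v1
kind: Pod
metadata:
  name: myapp-with-init
spec:
  initContainers:
    - name: init-config
      image: alpine:latest
      # runs to completion before the main container starts
      command: ['sh', '-c', 'echo "prepared" > /opt/ready.txt']
      volumeMounts:
        - name: data
          mountPath: /opt
  containers:
    - name: myapp
      image: alpine:latest
      # can safely assume the file already exists
      command: ['sh', '-c', 'cat /opt/ready.txt && sleep 3600']
      volumeMounts:
        - name: data
          mountPath: /opt
  volumes:
    - name: data
      emptyDir: {}
Kubernetes only starts the myapp container after init-config has exited successfully.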
4.2. Preventing a Job From Completing
The Job object in Kubernetes is a specialized workload that is similar to a Deployment. However, one crucial difference is that a Job object is expected to run to completion.
If we add a long-running sidecar container, the Job object will never reach the completion state. This is because Kubernetes will only consider a Job as complete when all of its containers exit with a zero exit code.
Besides that, a Job that configures a deadline through the activeDeadlineSeconds property will exceed that deadline and be marked as failed. This can be problematic if we have a process that depends on the completion state of the Job object.
One solution is to extend the main application container in the Job so that it signals the sidecar containers to shut down before it exits, for example, by sending a SIGTERM through a shared process namespace or by writing a marker file to a shared volume that the sidecar watches. This ensures that the sidecar container shuts down when the main application exits, allowing the Job object to complete.
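As a minimal sketch of the shared-volume variant (the container names, commands, and marker file are illustrative), the main container below creates a /opt/done file when it finishes, and the sidecar exits as soon as it sees that file, letting the Job complete:
apiVersion: batch/v1
kind: Job
metadata:
  name: myjob
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: main
          image: alpine:latest
          # does its work, then signals the sidecar by creating a marker file
          command: ['sh', '-c', 'echo "doing work" >> /opt/logs.txt; sleep 5; touch /opt/done']
          volumeMounts:
            - name: data
              mountPath: /opt
        - name: logshipper
          image: alpine:latest
          # keeps running until the marker file appears, then exits with code 0
          command: ['sh', '-c', 'while [ ! -f /opt/done ]; do sleep 1; done']
          volumeMounts:
            - name: data
              mountPath: /opt
      volumes:
        - name: data
          emptyDir: {}
Since both containers eventually exit with a zero exit code, the pod succeeds and the Job is marked as complete.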
5. Conclusion
In this tutorial, we’ve learned about the sidecar design pattern, which runs supporting services as separate processes. Then, we looked at how to apply this pattern in Kubernetes by adding a sidecar container to the containers field. Finally, we learned that Kubernetes treats all the containers in a pod equally, which can cause lifecycle issues with objects like the Job.