1. Overview

Containers are popular because they provide many advantages, such as isolation, security, scalability, and portability.

Although it’s possible to run multiple services or applications within a container, this isn’t a recommended practice.

In this tutorial, we’ll discuss why it’s recommended to run only one service or application in a container. First, we’ll discuss what it means for containers to have a single concern. Then, we’ll go over the advantages of the single-concern principle.

2. Containers With a Single Concern

Each container should have a single concern, and the container should manage it well. This is sometimes called the single-concern principle. It’s the counterpart of the single-responsibility principle in the SOLID principles of object-oriented design. According to the single-responsibility principle, a class should have one and only one reason to change. This is valid for containers as well.

Let’s consider the ELK stack as an example. Elasticsearch is a powerful search and analytics engine, whereas Logstash is a data collection engine. The other component of the ELK stack, Kibana, is a data visualization dashboard. It’s possible to run these services in the same container, but this wouldn’t be a good design as each service has its own specific and independent concern.
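A better design runs each component in its own container. As a sketch, assuming Docker Compose and the official Elastic images (the version tag here is an assumption for illustration), the stack could be declared as:

```yaml
# docker-compose.yml: one concern per container
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.13.0
    environment:
      - discovery.type=single-node
  logstash:
    image: docker.elastic.co/logstash/logstash:8.13.0
    depends_on:
      - elasticsearch
  kibana:
    image: docker.elastic.co/kibana/kibana:8.13.0
    depends_on:
      - elasticsearch
```

Each service can now be built, scaled, upgraded, and monitored independently of the others.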

Isolation of services and applications has been one of the main concerns of containerization from the beginning. It's still possible to have more than one service running in a container. For example, we can execute a shell script as the entry point of a Docker container, and the script may spawn several processes. However, we should avoid running multiple processes in a container unless the applications within it are very tightly coupled.
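As an illustration, such an entry-point script might look like the following sketch, where the service commands are placeholders, not real binaries:

```shell
#!/bin/sh
# entrypoint.sh: starts two services in one container -- the pattern to avoid
first-service &       # placeholder: a service pushed into the background
exec second-service   # placeholder: the foreground process that receives signals
```

A script like this leaves the background process unsupervised: if it dies, the container keeps running as if nothing happened, which is one of the practical reasons to prefer one process per container.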

We’ll discuss the advantages of running a single service in a container in the next section.

3. Advantages of Single Service per Container

There are several advantages of running a single service in a container.

3.1. Reusability

An obvious advantage of a container having a single service is the reusability of the container. We can use the container later in different environments, and maybe for different purposes, just by changing its configuration. Therefore, we may treat such containers as the building blocks of a deployment.

For example, suppose we have an ELK stack using Elasticsearch, Logstash, and Kibana containers. In that case, we can run several instances of the same Elasticsearch container to create an Elasticsearch cluster for high availability.

However, if we place the Elasticsearch and Logstash services within the same container, running multiple instances of that container would be overkill if we just need additional Elasticsearch instances. Logstash is a powerful but resource-intensive service, so a single instance of it might be sufficient within the ELK stack.
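For instance, a minimal sketch of reusing one Elasticsearch image for different purposes just by changing its configuration might look like this (the image tag and the node.roles setting passed as an environment variable are assumptions for illustration):

```shell
# reuse the same image in different roles by changing only the configuration
docker run -d --name es-master -e node.roles=master \
  docker.elastic.co/elasticsearch/elasticsearch:8.13.0
docker run -d --name es-data -e node.roles=data \
  docker.elastic.co/elasticsearch/elasticsearch:8.13.0
```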

3.2. Scalability

Another advantage is the ease of horizontal scaling. It's easier to scale a deployment out by just running more instances of a container if it has a single service.

For example, if we have an Elasticsearch cluster, we might need to increase the performance of the cluster to have more shards and replicas as the workload increases over time. It’s sufficient to run more instances of the Elasticsearch container in this case. Similarly, we can scale down the Elasticsearch cluster just by shutting down some of the Elasticsearch containers in the cluster.
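Assuming the cluster is defined in a Compose file with an elasticsearch service, scaling out and back in can be as simple as:

```shell
# run three instances of the elasticsearch service
docker compose up -d --scale elasticsearch=3

# scale the cluster back down to a single instance
docker compose up -d --scale elasticsearch=1
```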

3.3. Isolation

One of the aims of using containers is the isolation of applications. We don’t want an application that has an intensive usage of resources such as CPU (Central Processing Unit) or memory to affect other applications adversely. Running one service per container achieves this aim.

For example, we can use the docker run command's --memory option to set a hard limit on the maximum physical memory a container can utilize, or its --cpus option to limit the number of CPUs a container can use.
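For instance, the following sketch caps a container at 512 MB of memory and two CPUs (the nginx image is just an example workload):

```shell
# limit the container to 512 MB of RAM and at most two CPUs
docker run -d --memory 512m --cpus 2 nginx
```

Even if the service inside misbehaves, it can't starve the other containers on the host beyond these limits.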

3.4. Small Image Sizes

If we use a separate container for each service or application, the container image is smaller than an image bundling multiple services. Therefore, we can build the image in a shorter time. Additionally, deploying containers with smaller images is faster.

3.5. Testing and Debugging

If we run applications developed in-house in separate containers, building container images is easier for developers since there's no dependency on applications developed by other teams. Additionally, it might be easier to debug and troubleshoot an application in an isolated environment than in an environment running multiple applications.

3.6. Upgrades

Having multiple services in a single container might cause dependency problems when we want to update one of the services to a newer version. The services might be using a common package already installed in the container. However, the new version of the service we want to upgrade may depend on a newer version of this package, while the other services might not be compatible with the newer version of the package. This problem of conflicting dependencies is known as dependency hell.

We don’t experience dependency problems if we use a separate container for each service. We can update the dependencies of the new version of the service without affecting other services. It’s even possible to update the operating system within the container image if necessary.
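For example, assuming the stack is managed with Docker Compose, we can upgrade a single service without touching the others:

```shell
# pull the new image for the logstash service only
docker compose pull logstash

# recreate just that container; the other services keep running
docker compose up -d logstash
```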

3.7. Collection of Logs

Services running in a single container might write their normal output and error messages to the standard output and standard error, respectively. Even if they write their logs to custom log files or Syslog, we may forward those logs to the standard output and standard error so that we can use the container's logging facilities. In either case, if we run multiple services in a single container, we must work out which service produced each log line.

Running each service in a separate container is helpful when analyzing the logs of services. The separation of containers separates their logs, so it becomes much easier to analyze these logs.
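With one service per container, each container keeps its own log stream, so we can inspect each service independently, for example:

```shell
# inspect the log stream of a single service
docker logs elasticsearch

# follow the 100 most recent entries of another service's log
docker logs --follow --tail 100 logstash
```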

4. Conclusion

In this article, we discussed why running only one service or application in a container is considered a good practice. First, we learned about the single-concern principle of containers, which is the counterpart of the single-responsibility principle for classes. Indeed, the isolation of processes has been one of the most important concerns of containerization.

Then, we discussed the advantages of running a single service in a container. In particular, the advantages include reusability, scalability, isolation, smaller image sizes, easier testing and debugging, upgrading of packages in the system, and collection of logs.