1. Introduction

Docker containers are lightweight, self-contained units that package application code together with all of its dependencies. These isolated environments separate an application process from the underlying system and from other containers.

Building on this, Docker Compose and Docker Swarm are tools for managing multi-container applications. Docker Compose enables us to deploy multi-container applications with a single configuration file, while Docker Swarm orchestrates them across multiple machines for scaling.

In this tutorial, we’ll learn how to choose between Docker Compose and Docker Swarm, as each can seem like the right tool in certain situations, depending on the use case.

2. Docker Compose

Docker Compose is a tool for defining and running multi-container applications on a single host. It enables us to describe the application services in a YAML file and then start, stop, rebuild, or scale them with a single command. Additionally, Docker Compose eliminates the need to run the docker run command multiple times to spin up multiple containers.

Although Docker Compose isn’t bundled with every Docker installation, we can install it separately using several methods, such as a distribution package or a standalone binary.
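
For instance, on a Debian-based system we might install the Compose v2 CLI plugin from a package repository, or download the standalone binary from the project’s GitHub releases. This is a sketch, not an exhaustive list; the version number below is illustrative, and the plugin package assumes Docker’s apt repository is configured:

```shell
# Option 1: install the Compose CLI plugin (assumes Docker's apt repository)
sudo apt-get update
sudo apt-get install -y docker-compose-plugin

# Option 2: download the standalone docker-compose binary
# (v2.24.0 is only an example; check the project's releases page)
sudo curl -L "https://github.com/docker/compose/releases/download/v2.24.0/docker-compose-$(uname -s)-$(uname -m)" \
  -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

# Verify the installation (plugin form or standalone form)
docker compose version || docker-compose --version
```

With the plugin, Compose is invoked as docker compose; the standalone binary keeps the classic docker-compose command used throughout this tutorial.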

By offering a single configuration file and a unified set of commands, Docker Compose simplifies managing multi-container applications on a single host.

2.1. Running Multi-Container Applications on a Single Host

To demonstrate, let’s deploy a three-tier application consisting of a database, application server, and web server on a single virtual machine. We can define these services within a single YAML file, named docker-compose.yml by convention:

version: '3'
services:
  web:
    image: nginx
    ports:
      - "80"
    volumes:
      - /var/www/html
  app:
    image: tomcat
    ports:
      - "8080"
  db:
    image: redis
    ports:
      - "9080"

The docker-compose.yml file defines the configuration for a multi-container application. It declares version 3 of the Compose file format, although the version key is considered obsolete by Compose v2.

The core of the file is the services section. This section breaks down the application into distinct services, each isolated within its container.

The first service, web, runs a web server using a pre-built Docker image called nginx. The image keyword specifies this image. The ports section publishes container port 80; since no host port is given, Docker maps it to a random ephemeral port on the host. This still enables us to access the web server from our machine’s browser via the assigned port.

Another service, called app, runs an application server using the tomcat Docker image. Like the web service, it publishes a container port, this time port 8080, again mapped to an ephemeral host port.

The last service, db, is a database container based on the redis image. It likewise publishes container port 9080 for external connections, although Redis itself listens on its default port 6379, as the container listing below shows.

Overall, this YAML file creates a setup with three interconnected services running in separate Docker containers.

Let’s create and start the services via docker-compose:

$ docker-compose -f docker-compose.yml up -d

The above command instructs Docker Compose to bring up, in detached mode (-d), the multi-container application defined in the docker-compose.yml file.

We can confirm the services have been created by executing docker ps:

$ docker ps
CONTAINER ID   IMAGE     COMMAND                  CREATED        STATUS         PORTS                                                   NAMES
d232f55ebe4b   nginx     "/docker-entrypoint.…"   29 hours ago   Up 5 seconds   0.0.0.0:32768->80/tcp, :::32768->80/tcp                 root-web-1
2578203054e8   tomcat    "catalina.sh run"        29 hours ago   Up 5 seconds   0.0.0.0:32769->8080/tcp, :::32769->8080/tcp             root-app-1
0425c5dd931c   redis     "docker-entrypoint.s…"   29 hours ago   Up 5 seconds   6379/tcp, 0.0.0.0:32770->9080/tcp, :::32770->9080/tcp   root-db-1

The output confirms we’ve started the different services in separate Docker containers.
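
Compose also attaches all the services to a default network, where each container can reach the others by service name through Docker’s built-in DNS. As a quick sketch, assuming the containers above are still running and that the Debian-based nginx image provides getent, we can resolve the app service from inside the web container:

```shell
# Resolve the "app" service name from inside the "web" container;
# getent is part of glibc and is present in the Debian-based nginx image
docker-compose exec web getent hosts app
```

The command prints the internal IP address assigned to the app container, showing that services can address each other by name without any published ports.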

2.2. Advantages of Docker Compose

Docker Compose has a few advantages:

  • With Docker Compose, we can define and manage application services (containers) in a single YAML file, which eliminates the need to start each container individually.
  • Docker Compose speeds up development workflows by spinning up an entire environment with a single command; it also simplifies iteration, since on restart it only recreates containers whose configuration has changed.
  • Docker Compose files use variables, enabling application environment customization for different scenarios (development, testing, production) or users, making porting easier.
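
As a minimal sketch of the variables feature, a Compose file can reference environment variables (or values from a .env file) with ${VAR} syntax; the NGINX_PORT name below is a hypothetical example:

```yaml
# docker-compose.yml fragment using variable substitution;
# NGINX_PORT can come from the shell environment or a .env file
services:
  web:
    image: nginx
    ports:
      - "${NGINX_PORT:-8080}:80"   # defaults to 8080 if NGINX_PORT is unset
```

Running NGINX_PORT=9090 docker-compose up then publishes the web server on host port 9090 without editing the file, which is how the same Compose file can serve development, testing, and production.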

In essence, Docker Compose enables us to define and manage multi-container applications on a single host.

2.3. Limitations of Docker Compose

While Docker Compose is a powerful tool for defining and managing multi-container applications, it isn’t designed to scale applications across hosts or to handle high-traffic production workloads.

Docker Compose previously offered a dedicated scale subcommand, but that functionality is now deprecated. Instead, we scale services with the --scale option of the up command:

$ docker-compose up --scale <service>=<number_of_containers>

For example, let’s increase the number of containers for the web, app, and db services in docker-compose.yml:

$ docker-compose up --scale web=2 --scale app=2 --scale db=3
WARN[0000] /root/compose.yml: `version` is obsolete     
[+] Running 7/7
 ✔ Container root-app-1  Running                                                                                                 0.0s 
 ✔ Container root-app-2  Created                                                                                                 0.2s 
 ✔ Container root-db-1   Running                                                                                                 0.0s 
 ✔ Container root-db-3   Created                                                                                                 0.2s 
 ✔ Container root-web-1  Running                                                                                                 0.0s 
 ✔ Container root-web-2  Created                                                                                                 0.2s 
 ✔ Container root-db-2   Created    

Then, we confirm that the number of containers for each service has increased:

$ docker ps
CONTAINER ID   IMAGE     COMMAND                  CREATED              STATUS         PORTS                                                   NAMES
db815ce62839   redis     "docker-entrypoint.s…"   About a minute ago   Up 7 seconds   6379/tcp, 0.0.0.0:32781->9080/tcp, :::32781->9080/tcp   root-db-3
311836c1dc60   tomcat    "catalina.sh run"        About a minute ago   Up 8 seconds   0.0.0.0:32779->8080/tcp, :::32779->8080/tcp             root-app-2
ed3311900d92   nginx     "/docker-entrypoint.…"   About a minute ago   Up 7 seconds   0.0.0.0:32777->80/tcp, :::32777->80/tcp                 root-web-2
1587b10d532c   redis     "docker-entrypoint.s…"   About a minute ago   Up 8 seconds   6379/tcp, 0.0.0.0:32775->9080/tcp, :::32775->9080/tcp   root-db-2
d232f55ebe4b   nginx     "/docker-entrypoint.…"   30 hours ago         Up 8 seconds   0.0.0.0:32776->80/tcp, :::32776->80/tcp                 root-web-1
2578203054e8   tomcat    "catalina.sh run"        30 hours ago         Up 8 seconds   0.0.0.0:32778->8080/tcp, :::32778->8080/tcp             root-app-1
0425c5dd931c   redis     "docker-entrypoint.s…"   30 hours ago         Up 7 seconds   6379/tcp, 0.0.0.0:32780->9080/tcp, :::32780->9080/tcp   root-db-1

While this might look like a working solution, the --scale flags apply only to the services named in that invocation, and the settings aren’t persisted. Therefore, scaling with Docker Compose requires specifying every service each time: scaling a single service up and then rerunning up --scale for another service would scale the first back down to a single container.

Additionally, scaling stateful services such as databases with Docker Compose is problematic, as each replica keeps its own independent state, which leads to consistency issues.

Another major issue when scaling with Docker Compose is that replicas cannot share a fixed host port, so the new containers are assigned random ephemeral ports, making the service endpoints unpredictable. Additional YAML configuration or a reverse proxy can address this, but the setup becomes complex.
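
To illustrate the port trade-off with a hypothetical mapping, pinning a fixed host port prevents scaling, while the short syntax used earlier lets replicas coexist at the cost of unpredictable ports:

```yaml
services:
  web:
    image: nginx
    ports:
      - "8080:80"   # fixed host port: a second replica fails, since the port is already allocated
      # - "80"      # ephemeral host port: replicas coexist, but each gets a random port
```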

3. Docker Swarm

Docker Swarm is a container orchestration tool that enables us to manage and scale containerized applications. It creates a cluster of Docker Engines called a swarm, which comprises individual nodes: physical or virtual machines running Docker.

There are two main types of nodes in a Docker Swarm:

  • Manager Nodes: one or more (for redundancy) nodes that schedule tasks (Docker containers and the commands to run inside them) for worker nodes and ensure everything is running
  • Worker Nodes: run manager-assigned tasks in containers, distributing the load

Therefore, Docker Swarm efficiently distributes tasks across the cluster with this hierarchical structure.

3.1. Scaling Services Across Multiple Hosts

Docker Swarm orchestrates containers across multiple hosts, so for this example, we need two servers, one as the manager node and the other as the worker node.

To begin, let’s initialize Docker Swarm on the manager node server:

$ docker swarm init --advertise-addr=eth1
Swarm initialized: current node (x2kn79feoq1qgoqzf2b5popim) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-6ci5msq8d57agjee0xo942g7u9pd59yxa4jzvypz0svrnvyv3p-4og3g3bhn55kmzpxvw9i544fw 192.168.56.12:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

This init command initializes a new Docker Swarm cluster, setting this server as the leader manager node for a new Swarm.

As seen in the command output, a worker node join token is generated. The worker node server executes the join command using the generated token:

$ docker swarm join --token SWMTKN-1-6ci5msq8d57agjee0xo942g7u9pd59yxa4jzvypz0svrnvyv3p-4og3g3bhn55kmzpxvw9i544fw 192.168.56.12:2377
This node joined a swarm as a worker.

To verify that the worker has joined the swarm cluster, list the nodes on the manager server:

$ docker node ls
ID                            HOSTNAME       STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
szuxj4p9t3qff8cde7aq8o8x5     worker          Ready     Active                          26.1.4
x2kn79feoq1qgoqzf2b5popim *   manager         Ready     Active         Leader           26.1.4

Now that the Swarm cluster is complete, let’s create a service named svc1 that runs six replicas of the nginx image:

$ docker service create --name svc1 --replicas 6 -p 1234:80 nginx
lkxzy5607gxoe3db036vk8x82

Then confirm it has been created:

$ docker service ls
ID             NAME      MODE         REPLICAS   IMAGE          PORTS
lkxzy5607gxo   svc1      replicated   6/6        nginx:latest   *:1234->80/tcp

With Docker Swarm, the replicas are distributed on the manager and worker nodes. We can confirm by checking the running or stopped containers in the svc1 service:

$ docker service ps svc1
ID             NAME         IMAGE          NODE          DESIRED STATE   CURRENT STATE            ERROR        PORTS
x8zjcj6ap6lq   svc1.1       nginx:latest   manager       Running         Running 3 minutes ago
5akgfkqfn6dc   svc1.2       nginx:latest   worker        Running         Running 4 minutes ago
otgcqnyr6xtm   svc1.3       nginx:latest   worker        Running         Running 4 minutes ago
yoube1prqrqf   svc1.4       nginx:latest   manager       Running         Running 3 minutes ago
1j7th6s4pbur   svc1.5       nginx:latest   manager       Running         Running 3 minutes ago
3mqps5vqf9ki   svc1.6       nginx:latest   worker        Running         Running 4 minutes ago

The output above shows containers running on the manager and worker nodes.

3.2. Advantages of Docker Swarm

One of the key advantages of Docker Swarm is its self-healing capability. This means a Swarm can handle and recover from failures within containerized applications.

It uses built-in health checks to monitor the health of the containers constantly. When a health check detects a failing container, Swarm automatically restarts the container on the same node or reschedules it to a healthy node in the Swarm.
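
Images can ship their own HEALTHCHECK, but we can also define one when creating a service. Here is a sketch using the service-level health flags, assuming curl is available inside the image (it isn’t in the stock nginx image, so a real setup might bake it in or use another probe):

```shell
# Create a service with an explicit health check (svc-hc is a hypothetical name)
docker service create --name svc-hc --replicas 2 \
  --health-cmd "curl -f http://localhost/ || exit 1" \
  --health-interval 10s \
  --health-timeout 2s \
  --health-retries 3 \
  nginx
```

With this configuration, a container that fails three consecutive checks is marked unhealthy, and Swarm replaces it with a new task.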

To see this in action, let’s delete all the containers on the manager node:

$ docker rm -f $(docker ps -qa)

Upon execution, Docker Swarm immediately detects the cluster is missing some containers and automatically creates new ones. Let’s verify by listing the containers in svc1:

$ docker service ps svc1
ID             NAME         IMAGE          NODE                    DESIRED STATE   CURRENT STATE             ERROR                           PORTS
o3s3vk2q3u4d   svc1.1       nginx:latest   localhost.localdomain   Running         Running 26 seconds ago                                    
x8zjcj6ap6lq    \_ svc1.1   nginx:latest   localhost.localdomain   Shutdown        Failed 34 seconds ago     "task: non-zero exit (137)"       
5akgfkqfn6dc   svc1.2       nginx:latest   localhost.localdomain   Running         Running 38 minutes ago                                      
otgcqnyr6xtm   svc1.3       nginx:latest   localhost.localdomain   Running         Running 38 minutes ago                                       
xju28oe60jlf   svc1.4       nginx:latest   localhost.localdomain   Running         Running 26 seconds ago                                    
yoube1prqrqf    \_ svc1.4   nginx:latest   localhost.localdomain   Shutdown        Failed 33 seconds ago     "task: non-zero exit (137)"        
hsph97tdm7dg   svc1.5       nginx:latest   localhost.localdomain   Running         Running 26 seconds ago                                    
1j7th6s4pbur    \_ svc1.5   nginx:latest   localhost.localdomain   Shutdown        Failed 33 seconds ago     "task: non-zero exit (137)"        
3mqps5vqf9ki   svc1.6       nginx:latest   localhost.localdomain   Running         Running 38 minutes ago                                    

From the output, we can deduce that the deleted containers are in the Shutdown state with an error task: non-zero exit (137). Immediately after, Docker Swarm orchestrates the creation of new containers to replace the deleted ones.

Additionally, Docker Swarm can scale the number of replicas in a service up or down. For instance, let’s scale the number of replicas in svc1 from six to eight on the manager node:

$ docker service scale svc1=8
svc1 scaled to 8
overall progress: 8 out of 8 tasks 
1/8: running   [==================================================>] 
2/8: running   [==================================================>] 
3/8: running   [==================================================>] 
4/8: running   [==================================================>] 
5/8: running   [==================================================>] 
6/8: running   [==================================================>] 
7/8: running   [==================================================>] 
8/8: running   [==================================================>] 
verify: Service svc1 converged 

Thus, we benefit from true dynamic scaling at the service level, rather than the static, per-invocation scaling we saw with Docker Compose.

Finally, Docker Swarm offers two key networking advantages: ingress overlay networks and user-defined overlay networks. Upon cluster initialization, Docker Swarm automatically creates an overlay network called the ingress network. Users can additionally create custom networks alongside this default. The ingress network handles internal service discovery and load balancing within the swarm.

While creating the service, we mapped container port 80 to host port 1234; hence, the ingress network accepts traffic arriving on port 1234 of any node in the swarm and load-balances it across all the containers listening on port 80.
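
Beyond the default ingress network, we can create a user-defined overlay network for private service-to-service traffic; my-overlay and svc2 below are hypothetical names:

```shell
# Create a user-defined overlay network (run on a manager node)
docker network create --driver overlay my-overlay

# Attach a new service to it; containers on my-overlay can reach
# each other by service name across all nodes in the swarm
docker service create --name svc2 --network my-overlay --replicas 3 nginx
```

Services on the same user-defined overlay communicate directly, without publishing ports through the ingress network.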

3.3. Limitations of Docker Swarm

Docker Swarm is a good tool for orchestrating containerized applications, but it has some limitations to consider:

  • Swarm is easy to use, but this comes at the cost of more limited features compared to a platform like Kubernetes, so it might not be suitable for complex deployments.
  • Swarm relies on the Docker API, which limits customization to what that API exposes.
  • While Swarm can scale applications across a cluster, it might not be the best choice for massive deployments; Docker also recommends a maximum of seven manager nodes, which can become a constraint in very large environments.
  • The built-in monitoring tools of Swarm are basic, relying on the Docker event and server log functionalities, although third-party monitoring tools exist.

While Docker Swarm is a good option for creating scalable environments, the above limitations might be problematic in some instances.

4. Choosing the Right Tool

Now that we’ve learned more about Docker Compose and Docker Swarm, let’s summarize the key differences between them in a table to help us make an informed decision based on the project requirements:

| Docker Compose | Docker Swarm |
|----------------|--------------|
| A tool for defining and running multi-container Docker applications | Orchestrates a cluster of Docker Engines, treating them as a single virtual system for managing and deploying containerized applications |
| Deploys containers on a single host | Orchestrates containers across multiple nodes, scaling beyond single-host deployments |
| Mainly used for testing and development | Used for high availability and fault tolerance |
| The Compose utility ships as a standalone binary or CLI plugin | Swarm and its subcommands are built into the Docker command-line interface (CLI) |
| Useful for automating the setup and maintenance of applications, such as self-hosting a WordPress blog on a single server | Suitable for applications with large user bases, workloads requiring parallel scaling, and those bound by service-level agreements |

Based on the table above, we can more quickly identify the most suitable tool for our container orchestration needs.

5. Conclusion

In this article, we’ve learned about choosing the right tool between Docker Compose and Docker Swarm based on their advantages, areas of use, and limitations.

In conclusion, while there isn’t a superior tool, each one can find its place in specific situations and environments.