1. Introduction
Docker is a tool that’s widely used for deploying and running applications. Its popularity stems from its ability to package apps so that they run consistently across various host environments. It achieves this by bundling an app and its dependencies into self-contained, executable packages called Docker images. We refer to running instances of images as containers.
Furthermore, containers provide a level of isolation, ensuring that the app runs consistently regardless of the underlying host environment. This consistency is helpful during the development, staging, and production phases. It allows for predictable behavior and eliminates potential issues caused by variations in the host environment.
In this tutorial, we’ll define a sample application and deploy it both on localhost and on a remote host using Docker Machine.
2. Application Sample in Compose
Docker Compose is a tool that enables us to define and manage multiple services in a single file, called a Compose file, which has a .yml or .yaml extension. With it, we can build apps consisting of many services in one go: all we need is a running Docker Engine and a docker-compose file describing the app’s services. A single docker-compose up command then builds and starts the containers.
Let’s define a compose file describing an app that has two services:
version: '3.1'

services:
  # Nginx service
  webserver:
    image: nginx:alpine
    container_name: webserver
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"

  # MySQL service
  db:
    image: mysql:5.7
    container_name: Mysqldb
    restart: unless-stopped
    volumes:
      - db_data:/var/lib/mysql
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: db_password_root
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: db_password

volumes:
  db_data:
This compose file defines two services: an NGINX web server and a MySQL database. We’ll use this example throughout the article.
3. Operating With Localhost
When building apps and software, it’s common practice to develop and test in our local environment first before pushing to any remote host. The local host is usually the machine we’re currently logged in to, typically our personal computer. Working locally lets us catch errors and fix issues early. To demonstrate, let’s run our sample application on our local host. We start by checking that our configuration is valid:
$ docker-compose config
services:
  db:
    container_name: Mysqldb
    environment:
      MYSQL_DATABASE: wordpress
      MYSQL_PASSWORD: db_password
      MYSQL_ROOT_PASSWORD: db_password_root
      MYSQL_USER: wordpress
    image: mysql:5.7
    ports:
    - 3306:3306/tcp
    restart: unless-stopped
    volumes:
    - db_data:/var/lib/mysql:rw
  webserver:
    container_name: webserver
    image: nginx:alpine
    ports:
    - 80:80/tcp
    - 443:443/tcp
    restart: unless-stopped
version: '3.1'
volumes:
  db_data: {}
This confirms that the configuration is correct. Next, let’s start our services:
$ docker-compose up -d
Creating network "compose-example_default" with the default driver
Creating volume "compose-example_db_data" with default driver
Pulling webserver (nginx:alpine)...
alpine: Pulling from library/nginx
Digest: sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6
Status: Downloaded newer image for nginx:alpine
Creating webserver ... done
Creating Mysqldb ... done
The services are created successfully. Lastly, let’s confirm the containers are up and running as expected:
$ docker-compose ps
Name        Command                          State   Ports
---------------------------------------------------------------------------------------------
Mysqldb     docker-entrypoint.sh mysqld      Up      0.0.0.0:3306->3306/tcp, 33060/tcp
webserver   /docker-entrypoint.sh ngin ...   Up      0.0.0.0:443->443/tcp, 0.0.0.0:80->80/tcp
This shows our containers are up and their ports are exposed. We can further confirm this using our browser by visiting localhost:80.
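Before opening the browser, we can also check reachability from the terminal. Below is a small, illustrative helper (not part of the compose setup) that probes a TCP port using bash’s built-in /dev/tcp pseudo-device:

```shell
#!/usr/bin/env bash
# port_open HOST PORT: returns 0 if HOST:PORT accepts a TCP connection.
# Relies on bash's /dev/tcp pseudo-device, so it needs bash, not plain sh.
port_open() {
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

if port_open localhost 80; then
  echo "webserver is reachable on port 80"
else
  echo "port 80 is not reachable; is the stack up?"
fi
```

The same check works for the MySQL service by probing port 3306 instead.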
4. Operating on a Remote Host
A remote host is a computer we reach over a network connection rather than the machine we’re working on directly. An excellent example is a server in the cloud that we access from our personal computer. We’ll look at using Docker Machine to deploy and operate containers on a remote host.
4.1. Installing Docker Machine
Docker Machine is a tool that allows us to provision and manage multiple Docker hosts from our personal computer. Moreover, it works with local development environments, virtual hosts, remote servers, and cloud providers. For a better understanding, we’re going to deploy our sample application on a DigitalOcean droplet from our local computer.
Let’s start off by installing Docker Machine in our machine:
$ base=https://github.com/docker/machine/releases/download/v0.16.0 \
&& curl -L $base/docker-machine-$(uname -s)-$(uname -m) >/tmp/docker-machine \
&& sudo mv /tmp/docker-machine /usr/local/bin/docker-machine \
&& chmod +x /usr/local/bin/docker-machine
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 26.8M 100 26.8M 0 0 27.3M 0 --:--:-- --:--:-- --:--:-- 49.0M
This downloads the Docker Machine binary for our platform from the official GitHub repository, moves it into a directory on the system PATH, and marks it as executable.
Let’s test to see if it’s properly installed:
$ docker-machine version
docker-machine version 0.16.0, build 702c267f
Now, we have Docker Machine installed.
4.2. Using Docker Machine
To provision a new Docker host, we use the docker-machine create command. This creates a virtualized Docker environment on a remote host. Each infrastructure provider, for example AWS, Azure, or DigitalOcean, has its own driver.
Let’s create a remote instance on the DigitalOcean cloud:
$ docker-machine create --driver digitalocean --digitalocean-size s-1vcpu-1gb --digitalocean-access-token $DOTOKEN --digitalocean-image ubuntu-20-04-x64 remote-host
Running pre-create checks...
Creating machine...
(remote-host) Creating SSH key...
(remote-host) Creating Digital Ocean droplet...
(remote-host) Waiting for IP address to be assigned to the Droplet...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with ubuntu(systemd)...
Installing Docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
We used the digitalocean driver to create the remote instance. We also specified the droplet size (s-1vcpu-1gb, i.e., one vCPU and 1 GB of RAM) and the OS image to use. Lastly, we named the instance remote-host.
Let’s verify that it was correctly created:
$ docker-machine ls
NAME          ACTIVE   DRIVER         STATE     URL                        SWARM   DOCKER    ERRORS
remote-host   -        digitalocean   Running   tcp://138.197.36.13:2376           v24.0.2
We see that our remote host is ready, so let’s connect to it from our local system:
$ eval $(docker-machine env remote-host)
This sets our working environment to the remote host. Let’s check:
$ docker-machine ls
NAME          ACTIVE   DRIVER         STATE     URL                        SWARM   DOCKER    ERRORS
remote-host   *        digitalocean   Running   tcp://138.197.36.13:2376           v24.0.2
The asterisk * shows the active host. We see our Docker Machine points to the remote host as our working environment. Subsequently, we can run any Docker commands on it.
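Under the hood, docker-machine env prints a series of export statements, which the eval applies to the current shell. Here’s a sketch of typical output (the certificate path is illustrative; the IP matches our remote-host):

```shell
# Typical variables exported by `docker-machine env remote-host`
# (the values below are a sketch; the cert path varies per machine):
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://138.197.36.13:2376"
export DOCKER_CERT_PATH="$HOME/.docker/machine/machines/remote-host"
export DOCKER_MACHINE_NAME="remote-host"
# Run this command to configure your shell:
# eval $(docker-machine env remote-host)
```

With DOCKER_HOST set, the docker and docker-compose clients talk to the remote daemon over TLS instead of the local socket.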
First, let’s run our sample application on the remote host, as before, with docker-compose up -d. Then, let’s check whether the containers are running:
$ docker-compose ps
Name        Command                          State   Ports
---------------------------------------------------------------------------------------------------------------------------
Mysqldb     docker-entrypoint.sh mysqld      Up      0.0.0.0:3306->3306/tcp, :::3306->3306/tcp, 33060/tcp
webserver   /docker-entrypoint.sh ngin ...   Up      0.0.0.0:443->443/tcp, :::443->443/tcp, 0.0.0.0:80->80/tcp, :::80->80/tcp
This shows they’re up, with their respective ports exposed on the host. Let’s use the curl command to verify:
$ curl $(docker-machine ip remote-host):80
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 615 100 615 0 0 1471 0 --:--:-- --:--:-- --:--:-- 1474<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
...
</style>
</head>
<body>
...
</body>
</html>
We can see that the nginx container is up and running. We can further stop, start, rebuild, and remove containers on the remote host, just as we would locally.
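For reference, the common lifecycle commands look like this (a sketch: they require a Docker engine and must run from the directory containing the compose file, so we guard for environments without one):

```shell
# Common lifecycle operations for the stack; guarded so the script is safe
# to run on a machine without docker-compose or the compose file present.
if command -v docker-compose >/dev/null 2>&1 && [ -f docker-compose.yml ]; then
  docker-compose stop             # stop containers without removing them
  docker-compose start            # start them again
  docker-compose up -d --build    # rebuild images and recreate containers
  docker-compose down             # remove containers and the default network
else
  echo "docker-compose stack not available here; commands shown for reference"
fi
```

Note that docker-compose down --volumes additionally removes named volumes such as db_data, so it should be used with care.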
4.3. Hosted Containers
On the other hand, if we’re not building images on our local system, we can rely on hosted images. This means we can run images directly from a hosted registry, such as Docker Hub or the GitHub Container Registry.
For example, let’s pull an adminer container from Docker Hub and run it on the remote host:
$ docker run -d -p 8080:8080 --name databasetool adminer
Unable to find image 'adminer:latest' locally
latest: Pulling from library/adminer
...
ed7a0cc37cf2: Pull complete
Digest: sha256:3751b49306d8fd25567ed2222d4f1fc55415badbb827956d38e8ca3e7167a5d8
Status: Downloaded newer image for adminer:latest
705d48990269eee67fc4b90f4ecb4447efd4750a8c6dc4597e61e60779fc9239
The above pulls the latest adminer image, starts a container from it, and maps container port 8080 to port 8080 on the host. We can then access Adminer in the browser at http://<remote-host-ip>:8080.
Finally, let’s check the running containers on the remote host:
$ docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS          PORTS                                                                      NAMES
0788b6893eba   adminer        "entrypoint.sh php -…"   42 minutes ago   Up 42 minutes   0.0.0.0:8080->8080/tcp, :::8080->8080/tcp                                  databasetool
f6f21c700cc1   nginx:alpine   "/docker-entrypoint.…"   44 minutes ago   Up 44 minutes   0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp   webserver
f202e57a0cb2   mysql:5.7      "docker-entrypoint.s…"   44 minutes ago   Up 44 minutes   0.0.0.0:3306->3306/tcp, :::3306->3306/tcp, 33060/tcp                       Mysqldb
We see all the containers running on the host: adminer, nginx, and mysql. When working with many hosts, we can switch between them by re-running eval $(docker-machine env <machine-name>) for the target machine.
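For example, switching the active host is just a matter of re-pointing the environment (a sketch; other-host is a hypothetical second machine created the same way as remote-host):

```shell
# Point the docker CLI at remote-host; subsequent commands run there.
eval $(docker-machine env remote-host)
docker ps

# Point it at another provisioned machine instead.
eval $(docker-machine env other-host)

# Unset the variables to target the local Docker engine again.
eval $(docker-machine env -u)
```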
5. Conclusion
In this article, we explored using Docker Machine for remote deployments, highlighting its appeal as a straightforward way to run containers on remote servers. Because the same compose file runs unchanged locally and remotely, Docker Machine also reduces the risk of surprises when transitioning between environments.
However, Docker Machine has since been deprecated in favor of newer tools for container management, so we should keep an eye on alternatives such as Docker contexts.