1. Overview

In this tutorial, we’ll learn how to set up our Docker container to serve HTTPS traffic.

2. Deployment Architecture

The main idea of serving HTTPS for a service running in a Docker container is that the backend service lives in a Docker network without exposing any ports. Then, we run a reverse proxy within the same Docker network that performs SSL termination and forwards requests to the backend service. Importantly, we configure the reverse proxy so that it only accepts HTTPS requests.

The advantage of this arrangement is that it relieves the backend service from handling SSL termination itself. This is especially beneficial when multiple services need to accept HTTPS requests, since a single dedicated reverse proxy takes the SSL processing workload off all of those backend services.

For our demonstration, we’ll use the crccheck/hello-world Docker image. That image starts a simple web server that responds to HTTP requests with a small HTML payload. Additionally, we’ll use the open-source Nginx web server as our reverse proxy.
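
If we want a quick look at what the image serves before we wire up the reverse proxy, we can optionally run it on its own with a published port and remove the throwaway container afterwards (the container name and the published host port below are arbitrary choices for this quick check):

$ docker run -d --rm --name hello-world-preview -p 8000:8000 crccheck/hello-world
$ curl http://localhost:8000
$ docker stop hello-world-preview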

3. Creating the Deployment

At a high level, we’ll first start our backend web service as a Docker container. Then, we’ll generate a self-signed certificate that we’ll use to enable SSL. With the SSL certificate, we’ll start the Nginx Docker container after configuring it to terminate SSL and forward requests to the backend service.

3.1. Creating a Docker Network

Firstly, we’ll need to create a Docker network named server-reverse-proxy-link using the docker network create command:

$ docker network create server-reverse-proxy-link
c06abccb5e91855b3b752b7df1e05252d379c83054be984048fab802f6c754a7

This Docker network will later house the two containers to allow network communication between them.
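
Optionally, we can confirm that the network exists with the docker network ls command, and later use docker network inspect to see which containers are attached to it:

$ docker network ls --filter name=server-reverse-proxy-link
$ docker network inspect server-reverse-proxy-link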

3.2. Starting the Backend Service

We then start the backend service container using the docker run command and pass the image name as the argument:

$ docker run -d --network server-reverse-proxy-link --name backend-service crccheck/hello-world

Importantly, we do not expose its port to the outside world. This is critical in ensuring that the service only accepts requests from the reverse proxy web server, which only accepts HTTPS connections.

Additionally, we’ve attached the container to the server-reverse-proxy-link Docker network using the --network option when starting the container.
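
To sanity-check this, we can verify that the container publishes no ports and that it’s still reachable from inside the Docker network, for example with a throwaway container based on the curlimages/curl image (any image that ships curl would work equally well):

$ docker ps --filter name=backend-service --format '{{.Names}}: {{.Ports}}'
$ docker run --rm --network server-reverse-proxy-link curlimages/curl -s http://backend-service:8000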

3.3. Obtaining a TLS Certificate

For demonstration purposes, we’ll generate our own self-signed certificate. The caveat with this approach is that the client will complain about the certificate being untrusted, as it isn’t signed by a trusted certificate authority.

To generate a self-signed certificate, we’ll first need to generate a private key using the openssl genrsa command:

$ openssl genrsa -out server.key 2048

Then, we create a certificate signing request using the openssl req command:

$ openssl req -key server.key -new -out server.csr

When we generate the certificate signing request, the command prompts us for the information to embed in the certificate. We can fill in each field and press Enter until the process completes. At the end of the procedure, we’ll get a server.csr file.
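
Alternatively, if we’d rather avoid the interactive prompts, we can pass the subject fields directly on the command line through the -subj option; the values below simply mirror the subject shown in the output of the next step:

$ openssl req -key server.key -new -subj "/C=MY/ST=KL/O=Example Ltd/CN=backend-service" -out server.csr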

Finally, we’ll use the openssl x509 command to sign the certificate signing request we’ve created previously:

$ openssl x509 -signkey server.key -in server.csr -req -days 365 -out server.crt
Certificate request self-signature ok
subject=C = MY, ST = KL, O = Example Ltd, CN = backend-service
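
Optionally, we can double-check the subject and validity period of the resulting certificate:

$ openssl x509 -in server.crt -noout -subject -dates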

With the certificate, we can now set up our reverse proxy server that handles SSL termination and request forwarding.

3.4. Nginx Configuration File for SSL and Routing

Before we start the Nginx reverse proxy container, we’ll need to first write the configuration file for the Nginx web server. In the configuration file, we’ll configure a server that listens on the HTTPS port and forwards the requests to our backend service.

Here’s what the configuration file looks like:

$ cat nginx.conf
events { }
http {
    server {
        listen 443 ssl;

        ssl_certificate /opt/certificates/server.crt;
        ssl_certificate_key /opt/certificates/server.key;

        location / {
            proxy_pass http://backend-service:8000;
        }
    }
}

Let’s break down the configuration file.

Within the http block, we define a server block that configures the server handling incoming requests. Through the listen directive, we set the server to listen for requests on port 443. Additionally, we add the ssl parameter to ensure that incoming connections work in SSL mode.

Subsequently, we specify the paths to the server certificate and the private key using the ssl_certificate and ssl_certificate_key directives, respectively. In our example, these point to the server certificate and key we’ve generated in the previous section.

Finally, we specify a location block with a forward slash, which matches all incoming requests since every request URI begins with the root path. When the server gets a matching request, it forwards it to the address specified by the proxy_pass directive.

In short, the configuration sets up a web server that listens for HTTPS traffic on port 443 and forwards the requests to the service at http://backend-service:8000.
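
Before starting the reverse proxy for real, we can optionally validate the configuration with nginx -t in a throwaway container. We attach it to the same Docker network so that the backend-service hostname in proxy_pass resolves, and we assume the certificate files and nginx.conf sit under the same /opt host paths that we’ll mount in the next step:

$ docker run --rm \
  --network server-reverse-proxy-link \
  -v /opt/certificates:/opt/certificates:ro \
  -v /opt/nginx.conf:/etc/nginx/nginx.conf:ro \
  nginx:latest nginx -t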

3.5. Starting the Nginx Docker Container

With the configuration file, we can start the Nginx Docker container using the docker run command:

$ docker run -d \
  --name nginx-container \
  --network server-reverse-proxy-link \
  -p 443:443 \
  -v /opt/certificates:/opt/certificates \
  -v /opt/nginx.conf:/etc/nginx/nginx.conf \
  nginx:latest

Importantly, we use the -v option to mount the certificate, private key, and configuration file into the container. This is necessary to ensure that the Nginx process within the container can read those files.

Besides that, we specify the --network option to attach the container to the server-reverse-proxy-link network.
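
At this point, we can quickly confirm that the container is up and that port 443 is published, and peek at its logs if anything looks off:

$ docker ps --filter name=nginx-container --format '{{.Names}}: {{.Status}} {{.Ports}}'
$ docker logs nginx-container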

3.6. Testing the Service

We can test out the deployment by running the curl command against localhost using the HTTPS protocol:

$ curl https://localhost
curl: (60) SSL certificate problem: self signed certificate
More details here: https://curl.se/docs/sslcerts.html

curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it...

Notably, the command fails because the certificate it gets from the server is self-signed. To bypass the verification, we can pass the --insecure option:

$ curl --insecure https://localhost
<pre>
Hello World


                                       ##         .
                                 ## ## ##        ==
                              ## ## ## ## ##    ===
                           /""""""""""""""""\___/ ===
                      ~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ /  ===- ~~~
                           \______ o          _,/
                            \      \       _,'
                             `'--.._\..--''
</pre>

From the output, we can see that we’ve successfully received a response from the backend service over HTTPS.
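
As a further check, we can inspect the certificate that the proxy actually serves with openssl s_client and confirm that it’s the self-signed one we generated earlier:

$ openssl s_client -connect localhost:443 </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer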

4. Conclusion

In this tutorial, we’ve learned about serving HTTPS for a web service running as a Docker container. Specifically, we’ve set up a simple web service as a Docker container, with an Nginx web server running alongside it. Then, we’ve demonstrated how to configure the Nginx web server to terminate SSL and forward requests to the web service.