1. Introduction

In this tutorial, we’ll explore strategies to automate the update process of Docker containers when their base images are updated. We’ll look into custom Bash scripts and third-party tools like Watchtower that provide a range of solutions.

2. Understanding Docker Image Updates

Before diving into the solutions, let’s understand why automating Docker container updates is crucial.

Docker containers are built from base images, which are essentially snapshots of a filesystem that include the operating system and additional installed packages. These base images are updated regularly to include security patches, bug fixes, and feature enhancements.

However, once a Docker container is built from a base image, it doesn’t automatically receive updates from its base image. This means that over time, containers can become outdated and vulnerable to security threats if not manually updated.
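To see this drift for ourselves, we can compare the image ID a container is running against the ID of the latest locally pulled image. As a minimal sketch, assuming a container named my-nginx that was started from nginx:latest:

$ docker pull nginx:latest
$ docker inspect --format '{{.Image}}' my-nginx
$ docker inspect --format '{{.Id}}' nginx:latest

If the two IDs differ, the container is still running an older version of the base image and has to be recreated to pick up the update.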

Therefore, automating the update process ensures that containers remain secure and efficient without requiring constant manual oversight. This automation can significantly reduce the risk of security vulnerabilities in containerized applications, which is especially important for those running critical services in production environments.

3. Using Watchtower for Automatic Container Updates

Watchtower is an open-source tool that automates the process of updating Docker containers. It operates by polling the Docker registry to check for updates to the images from which containers were initially instantiated.

If Watchtower detects that an image has been updated, it gracefully shuts down the container running the outdated image, pulls the new image from the Docker registry, and starts a new container with the same configuration as the previous one. This ensures that our containers always run the latest version of the base image without manual intervention.

Furthermore, Watchtower can monitor all containers on a host or only those explicitly specified, providing flexibility in how we manage updates across different environments. It also supports notifications, which can alert us to the status of container updates through various channels such as email, Slack, or HTTP endpoints.

Let’s see the steps to set up Watchtower in our Docker environment.

3.1. Running Watchtower Container

To start using Watchtower, we first need to run the Watchtower container itself.

We can do this by executing a Docker command:

$ docker run -d \
  --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower

e9d83c116cde23a5983d0e34f0dec7b658f000123f8a2edef1b2c2b3a9256c6e

With this, we run Watchtower in detached mode and give it access to the Docker socket, which it needs in order to monitor and update containers. The output shows the unique identifier of the Watchtower container now running in the background.
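Optionally, to confirm that Watchtower is running and watch its polling activity, we can check the container's logs:

$ docker logs watchtower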

3.2. Configuring Containers to Monitor

By default, Watchtower will monitor and update all containers.

However, if we wish to limit Watchtower to specific containers, we can do so by using the --label-enable flag when running Watchtower and adding the com.centurylinklabs.watchtower.enable=true label to the containers we want Watchtower to manage.

Let’s see an example.

First, we run Watchtower with the flag:

$ docker run -d \
  --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --label-enable \
  containrrr/watchtower

Afterward, we can run our container with the label for monitoring:

$ docker run -d \
  --name my-nginx \
  --label com.centurylinklabs.watchtower.enable=true \
  nginx

0f415dcfaf3b25a1a7ec2d3d3125c0f0d3e9730c4e9a5f0b7555b6e7e1d8a5f7

Here, the output shows the ID of the my-nginx container, which is now running and labeled for Watchtower monitoring.
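If we want to double-check that the label was applied, docker inspect can read it back; this is just a quick verification step using the container name from above:

$ docker inspect --format '{{index .Config.Labels "com.centurylinklabs.watchtower.enable"}}' my-nginx

true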

3.3. Customizing Update Polling Interval

We can customize how frequently Watchtower checks for image updates by setting the --interval flag followed by the number of seconds between checks.

For example, we can set the interval to 86400 seconds, thus instructing Watchtower to check for updates once a day:

--interval 86400

This can help balance between immediacy and system resource consumption.
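For clarity, here's how the flag fits into the full command; since --interval is an option of Watchtower itself, it goes after the image name:

$ docker run -d \
  --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower \
  --interval 86400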

3.4. Enabling Notifications

If we want to receive notifications about container updates, we can also configure Watchtower with notification settings using environment variables. Watchtower supports various notification methods, including email, Slack, and custom HTTP hooks.

Let’s see some helpful scenarios.

To configure email notifications, we can set up the environment variables and run Watchtower:

$ docker run -d \
  --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e WATCHTOWER_NOTIFICATIONS=email \
  -e WATCHTOWER_NOTIFICATION_EMAIL_FROM=from@example.com \
  -e WATCHTOWER_NOTIFICATION_EMAIL_TO=to@example.com \
  -e WATCHTOWER_NOTIFICATION_EMAIL_SERVER=smtp.example.com \
  -e WATCHTOWER_NOTIFICATION_EMAIL_SERVER_PORT=587 \
  -e WATCHTOWER_NOTIFICATION_EMAIL_SERVER_USER=user@example.com \
  -e WATCHTOWER_NOTIFICATION_EMAIL_SERVER_PASSWORD=your-email-password \
  containrrr/watchtower

Here, we should replace the placeholder addresses, SMTP server, and credentials with our own values.

Each notification email provides a summary of the containers that were updated, including their names and the image versions before and after the update, e.g.:

Subject: Watchtower Updates

Watchtower has successfully updated the following containers:
- Updated 'nginx' from 'nginx:1.17' to 'nginx:1.18'

As we can see, the message reports a successful nginx update.

Furthermore, for Slack notifications, we’ll need to create a Slack app and obtain a webhook URL.

Then, we can replace the <SLACK_WEBHOOK_URL> with our webhook URL:

$ docker run -d \
  --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e WATCHTOWER_NOTIFICATIONS=slack \
  -e WATCHTOWER_NOTIFICATION_SLACK_HOOK_URL=<SLACK_WEBHOOK_URL> \
  containrrr/watchtower

In this case, the Slack message typically tells us which containers were updated, similar to the email notification but delivered to our Slack workspace.

Lastly, we can also send HTTP notifications to a custom endpoint. This is useful for integrating with systems not directly supported by Watchtower or for custom logging solutions:

$ docker run -d \
  --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e WATCHTOWER_NOTIFICATIONS=shoutrrr \
  -e WATCHTOWER_NOTIFICATION_URL=generic://your-webhook-endpoint \
  containrrr/watchtower

A sample notification will look like:

{
  "text": "Watchtower has successfully updated the following containers: nginx from nginx:1.17 to nginx:1.18"
}

This JSON payload could be sent to our specified HTTP endpoint to provide a concise update message.

4. Custom Scripts for Advanced Scenarios

While a tool like Watchtower provides straightforward solutions for keeping Docker containers updated, there are scenarios where this tool might not offer the level of control or granularity needed. This is where custom scripting comes into play.

With scripts, we can tailor the update process to our specific requirements, such as handling complex dependencies between images, integrating with internal tools, or applying custom logic before deploying updates.

Let’s explore a couple of examples where we use Bash scripts to manage Docker image updates more granularly.

4.1. Automating Docker Container Updates With a Bash Script

Let’s see a Bash script that automates Docker container updates based on their base images.

With the script, we iterate over all running Docker containers, check each container's base image for updates, and, if an update is available, replace the container with a new one running the updated image:

#!/usr/bin/env bash
set -e

# Our function to update containers based on their base image
update_container() {
    local image=$1
    docker pull $image
    local updated_containers=0

    # Loop through all running containers
    for container in $(docker ps --format "{{.Names}}"); do
        local container_image=$(docker inspect --format '{{.Config.Image}}' "$container")
        
        # We check if the current container's image matches the updated image
        if [[ "$container_image" == "$image" ]]; then
            local latest=$(docker inspect --format "{{.Id}}" "$image")
            local running=$(docker inspect --format "{{.Image}}" "$container")

            if [[ "$running" != "$latest" ]]; then
                echo "Upgrading $container"
                docker rm -f "$container"
                # Run detached; note this recreates the container with default settings only
                docker run -d --name "$container" "$image"
                # Avoid ((var++)) here: it returns a non-zero status when var is 0, which would trip set -e
                updated_containers=$((updated_containers + 1))
            fi
        fi
    done

    if [[ $updated_containers -eq 0 ]]; then
        echo "No containers updated for $image"
    else
        echo "$updated_containers container(s) updated for $image"
    fi
}

# Our main script starts here
# Check for updates to all images used by running containers
for image in $(docker ps --format '{{.Image}}' | sort | uniq); do
    echo "Checking updates for $image"
    update_container $image
done

echo "Container update check complete."

Let’s better understand the script’s crucial aspects:

  • set -e – ensures the script exits if any command returns a non-zero status, which is crucial for catching errors early in automation scripts
  • update_container() – takes an image name as an argument, pulls the latest version of the image, and checks if any running containers are using an outdated version of this image
  • docker ps --format "{{.Names}}" – gets a list of all running containers and inspects each container to determine its base image
  • docker rm -f $container – removes an outdated container when found and then starts a new container using the same name and the updated image with docker run -d --name $container $image
  • docker ps --format '{{.Image}}' | sort | uniq – compiles a list of unique images used by currently running containers and, for each unique image, calls update_container() to check for and apply updates

Afterward, we save the script, e.g., update_all_docker_containers.sh, and then make it executable with chmod:

$ chmod +x update_all_docker_containers.sh

Finally, we can now run the script:

$ ./update_all_docker_containers.sh

Checking updates for nginx:latest
No containers updated for nginx:latest
Checking updates for ubuntu:latest
Upgrading ubuntu_container_1
1 container(s) updated for ubuntu:latest
Checking updates for redis:alpine
No containers updated for redis:alpine
Container update check complete.

In this example, the script finds that the nginx:latest and redis:alpine images are already up to date, so no further action is needed for containers using those images.

It also determines that the container ubuntu_container_1 is based on an outdated ubuntu:latest image. It therefore upgrades ubuntu_container_1 by pulling the latest ubuntu:latest image, removing the old container, and running a new one from the updated image.

4.2. Preserving Configurations During Automatic Updates

When updating Docker containers with new base images, we can also decide to maintain the existing containers’ configurations to ensure seamless and consistent operation, especially if we’re in a production environment.

Notably, our earlier script is adequate if we don't need the new containers to retain the existing configurations, since it recreates them with default settings.

Now, let’s see another Bash script that preserves environment variables, volumes, and network settings of a Docker container during the update process:

#!/usr/bin/env bash
set -e

# Function to preserve and update container
preserve_and_update_container() {
    local container=$1
    local image=$(docker inspect --format '{{.Config.Image}}' "$container")
    
    # Pull the latest image version
    docker pull $image

    # Compare image IDs to determine if an update is needed
    local latest_image_id=$(docker inspect --format '{{.Id}}' $image)
    local container_image_id=$(docker inspect --format '{{.Image}}' "$container")

    if [[ "$latest_image_id" != "$container_image_id" ]]; then
        echo "Updating $container..."
        
        # Capture current configurations
        local env_vars=$(docker inspect $container --format '{{range .Config.Env}}{{println .}}{{end}}')
        local volumes=$(docker inspect $container --format '{{range .Mounts}}{{printf "%s:%s\n" .Source .Destination}}{{end}}')
        local network=$(docker inspect $container --format '{{.HostConfig.NetworkMode}}')
        
        # Remove the outdated container
        docker rm -f $container
        
        # Recreate the container with the same configurations
        # (this simple word-splitting approach assumes env values and paths without spaces)
        docker run -d --name $container \
            $(echo "$env_vars" | xargs -I {} echo --env '{}') \
            $(echo "$volumes" | xargs -I {} echo -v '{}') \
            --network="$network" $image
        echo "$container updated successfully."
    else
        echo "$container is already up to date."
    fi
}

# Iterate over all running containers
for container in $(docker ps --format "{{.Names}}"); do
    preserve_and_update_container $container
done

echo "Container update check complete while preserving existing container configurations."

In this script, we first capture the current container’s configurations using the docker inspect command and store them. The configurations of interest include environment variables (Env), mounted volumes (Mounts), and the network mode (HostConfig.NetworkMode).

We should note that this script handles networking simplistically by reusing the container’s network mode as the network name and assumes simple bind-mounted volumes.

However, for more elaborate networking setups, named volumes, or other mount configurations, we may need to modify the script further. We can always review and adapt it to fit the specific needs and configuration of our Docker environment, no matter how complex.
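As one concrete example, a hypothetical tweak to preserve_and_update_container() could also capture and reuse the container’s restart policy; this is only a sketch and assumes the rest of the function stays as shown above:

# Hypothetical addition inside preserve_and_update_container(), before removing the container:
# capture the restart policy (falling back to "no" if unset)
local restart_policy=$(docker inspect "$container" --format '{{.HostConfig.RestartPolicy.Name}}')
restart_policy=${restart_policy:-no}

# Then pass it along when recreating the container
docker run -d --name $container --restart "$restart_policy" \
    $(echo "$env_vars" | xargs -I {} echo --env '{}') \
    $(echo "$volumes" | xargs -I {} echo -v '{}') \
    --network="$network" $image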

4.3. Automating the Bash Script With Cron

Since the theme of this tutorial is automation, we can go one step further and schedule the script as a cron job.

Let’s see an example of running this script daily.

First, we open our crontab:

$ crontab -e

Then, we add a line for the script to run at a desired time, e.g., for every day at 3 AM:

0 3 * * * /path/to/ourscript/update_all_docker_containers.sh

Finally, we should save and close the editor.

With this, cron will now automatically run this script at the scheduled time.
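Additionally, assuming we want a record of each run, we can redirect the script’s output to a log file of our choice directly in the crontab entry:

0 3 * * * /path/to/ourscript/update_all_docker_containers.sh >> /var/log/docker-updates.log 2>&1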

4.4. Integrating Image Updates With CI/CD Pipelines

Integrating custom scripts into CI/CD pipelines will allow for automated rebuilding and deployment of Docker images based on specific triggers, such as updates to a base image. This integration enhances operational efficiency and ensures that applications are always running on the latest, most secure base images.

Let’s assume we have a configuration file project_images.conf with the following content:

# File project_images.conf

my-node-app=node:14
my-python-app=python:3.9

Each line in this file represents a project and its associated base image, separated by an equals sign.

Let’s now see a Bash script that triggers an image rebuild and deployment process if the base image from our project file is updated:

#!/bin/bash

# Initialize an associative array
declare -A projects

# Load project and base image associations from our configuration file
while IFS='=' read -r key value; do
  # Skip blank lines and comment lines in the configuration file
  [[ -z "$key" || "$key" == \#* ]] && continue
  projects["$key"]="$value"
done < "project_images.conf"

CI_TRIGGER_URL="https://ci.example.com/build"

for project in "${!projects[@]}"; do
  BASE_IMAGE="${projects[$project]}"

  # Record the image ID currently present locally (if any)
  OLD_IMAGE_ID=$(docker inspect --format='{{.Id}}' "$BASE_IMAGE" 2>/dev/null || echo "none")

  # Pull the latest base image
  docker pull "$BASE_IMAGE"

  # Check whether the pull brought in a newer image
  NEW_IMAGE_ID=$(docker inspect --format='{{.Id}}' "$BASE_IMAGE")

  if [ "$OLD_IMAGE_ID" != "$NEW_IMAGE_ID" ]; then
    echo "Base image $BASE_IMAGE has been updated. Triggering rebuild for $project."

    # Trigger our CI/CD pipeline (example using curl)
    curl -X POST "$CI_TRIGGER_URL" -d "project=$project&trigger=rebuild"
  else
    echo "$project is up to date."
  fi
done

Our Bash script here uses a while loop to read each line of the project_images.conf file, skipping comments and blank lines, splitting each remaining line into a key and a value at the equals sign, and populating the projects associative array accordingly. For each project, it then records the local image ID, pulls the base image, and compares the two IDs to decide whether to trigger a rebuild. This way, our CI/CD pipeline can automatically rebuild any project whose base image has been updated, without us having to check each project or base image manually.

Notably, to integrate a script like this, we can include it as a step in our pipeline configuration using tools like:

  • Jenkins – An open-source automation server that automates all sorts of tasks, including building, testing, and deploying Docker images
  • GitLab CI/CD – A powerful, integrated CI/CD service that can handle Docker image builds and deployments directly from our GitLab repositories
  • GitHub Actions – Automate our workflow directly from our GitHub repository, including Docker image builds and pushes to Docker registries
  • CircleCI – Offers robust CI/CD features with first-class Docker support, allowing for easy image building, testing, and deployment

In short, integrating Docker image updates into CI/CD pipelines not only streamlines the update process but also ensures that applications benefit from the latest security and performance improvements offered by updated base images.

5. Conclusion

In this article, we explored the importance of keeping Docker containers updated, especially in response to updates in their base images. We delved into various strategies and tools that help automate this process, from leveraging third-party solutions like Watchtower to employing custom scripts and integrating with CI/CD pipelines for automated rebuilding and deployment.