1. Introduction

As DevOps enthusiasts, we often need to pass AWS credentials to Docker containers securely. Whether we’re running containers on EC2 instances or local machines, properly managing these credentials enhances both security and efficiency.

In this tutorial, we’ll explore several methods for securely passing AWS credentials to Docker containers. First, we’ll discuss using identity and access management (IAM) roles for Amazon EC2 instances, which is often the preferred approach for managing AWS credentials on EC2. Then, we’ll cover methods that suit other use cases and environments. Let’s get started!

2. Using IAM Roles for Amazon EC2

One of the most secure and convenient ways to provide AWS credentials to Docker containers running on EC2 instances is to use IAM roles. With this method, we avoid hardcoding credentials in our applications or Docker images altogether.

By using IAM roles, we can assign permissions to our EC2 instances, allowing containers running on these instances to access AWS resources securely. Let’s see how to set this up in more detail.

2.1. Creating a Trust Policy File

First, we need to create the trust-policy.json file. The file should contain the trust relationship policy that allows the EC2 instance to assume the role.

Here is an example content for the trust-policy.json file:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

This file defines the trust relationship that allows EC2 instances to assume the role.

2.2. Creating an IAM Role

We can now use the aws iam create-role command to create an IAM role with the trust policy:

$ aws iam create-role --role-name MyDockerRole \
  --assume-role-policy-document file://trust-policy.json
{
  "Role": {
    "Path": "/",
    "RoleName": "MyDockerRole",
...
}

Here, we create a new IAM role, MyDockerRole, with the trust policy we defined in the trust-policy.json file.

2.3. Attaching a Policy to the Role

After creating the role, we attach a policy to the role to grant the necessary permissions.

For this example, let’s attach the AmazonS3ReadOnlyAccess policy:

$ aws iam attach-role-policy --role-name MyDockerRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess

If successful, this command typically produces no output.

To verify that the policy is attached, we can list the role’s attached policies:

$ aws iam list-attached-role-policies --role-name MyDockerRole
{
  "AttachedPolicies": [
    {
      "PolicyName": "AmazonS3ReadOnlyAccess",
      "PolicyArn": "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"
    }
  ]
}

We can see the AmazonS3ReadOnlyAccess policy attached to MyDockerRole, granting read-only access to S3.
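
Additionally, the aws ec2 associate-iam-instance-profile command we use next expects an instance profile rather than a bare role. The AWS console creates one automatically when we create an EC2 role, but with the CLI we create it ourselves and add the role to it. Here’s a brief sketch using the MyDockerProfile name referenced in the next step:

$ aws iam create-instance-profile --instance-profile-name MyDockerProfile
$ aws iam add-role-to-instance-profile --instance-profile-name MyDockerProfile \
  --role-name MyDockerRole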

2.4. Attaching the Role to an EC2 Instance

We can now associate the IAM role with an EC2 instance using the aws ec2 associate-iam-instance-profile command:

$ aws ec2 associate-iam-instance-profile --instance-id i-1234567890abcdef0 \
  --iam-instance-profile Name=MyDockerProfile
{
  "IamInstanceProfileAssociation": {
    "AssociationId": "iip-assoc-0abc1234abcd56789",
    "InstanceId": "i-1234567890abcdef0",
    "IamInstanceProfile": {
      "Arn": "arn:aws:iam::123456789012:instance-profile/MyDockerProfile",
      "Id": "AIPAJ3EXAMPLE"
    },
    "State": "associating"
  }
}

Here, we associate the IAM role MyDockerRole with the EC2 instance i-1234567890abcdef0. The role is specified through an instance profile, MyDockerProfile.

With these steps, Docker containers running on the EC2 instance will now inherit the permissions associated with the IAM role, allowing secure and temporary access to AWS resources. The AWS SDK will handle credential retrieval and rotation transparently. This method eliminates the need for hardcoding credentials and manual management, enhancing security and efficiency.

Let’s see an example of a Python application using boto3:

import boto3

s3 = boto3.client('s3')
response = s3.list_buckets()
print(response['Buckets'])

Here, boto3 will automatically fetch the temporary credentials from the instance metadata service, leveraging the IAM role attached to the EC2 instance.
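
To quickly confirm that containers on the instance can pick up the role’s credentials, we can run a one-off container. This sketch assumes the official amazon/aws-cli image and that the container can reach the instance metadata service:

$ docker run --rm amazon/aws-cli sts get-caller-identity

The output should show the assumed-role ARN for MyDockerRole rather than an error about missing credentials.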

Notably, this IAM role method is specific to EC2 instances. If we’re running our containers in a different environment, we’ll need to use one of the other methods discussed next.

3. Injecting Secrets During Docker Image Build

In some scenarios, we may need to access AWS credentials during the Docker image build process. We can accomplish this using multi-stage builds or Docker BuildKit secrets, both of which help keep the credentials out of the final image. Let’s examine them in more detail.

3.1. Using Multi-Stage Builds

Multi-stage builds allow us to create intermediate images that use secrets, which are then excluded from the final image. This method ensures that sensitive data does not end up in the final image layers.

Let’s see an example of a multi-stage Dockerfile:

# First stage: Build stage
FROM node:14 AS builder
COPY . /app
WORKDIR /app

# Inject AWS credentials as environment variables
ARG AWS_ACCESS_KEY_ID
ARG AWS_SECRET_ACCESS_KEY
ENV AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID
ENV AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY

RUN npm install
RUN npm run build

# Second stage: Production stage
FROM node:14
COPY --from=builder /app/dist /app/dist
CMD ["node", "app/dist/index.js"]

In this sample Dockerfile, the build stage uses the AWS credentials to install dependencies and build the application, but the credentials are not present in the final image.

Notably, ARG AWS_ACCESS_KEY_ID and ARG AWS_SECRET_ACCESS_KEY define build arguments for the AWS credentials. Then, ENV AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID and ENV AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY expose them as environment variables during the build stage.

We can now build the Docker image with the docker build command:

$ docker build --build-arg AWS_ACCESS_KEY_ID=your_access_key_id \
  --build-arg AWS_SECRET_ACCESS_KEY=your_secret_access_key -t my_image .
Sending build context to Docker daemon  16.89MB
...
Step 4/11 : ARG AWS_ACCESS_KEY_ID
 ---> Running in 0b6f5fbe5ad6
Removing intermediate container 0b6f5fbe5ad6
 ---> d3f8b48f4b7a
Step 5/11 : ARG AWS_SECRET_ACCESS_KEY
...
Successfully tagged my_image:latest

As we can see, our output shows the various steps in the multi-stage build process, including setting environment variables and building the application.
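
To double-check that the credentials didn’t leak into the final image, we can inspect its layer history; as a rough sanity check, filtering for AWS-related entries should return nothing:

$ docker history --no-trunc my_image | grep -i aws_

Keep in mind that the build argument values still appear in our shell history and in the metadata of the intermediate builder images on the build host, so BuildKit secrets, covered next, are generally the safer option.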

3.2. Using BuildKit for Secret Management

Docker BuildKit provides an advanced way to manage secrets during the build process. BuildKit allows secrets to be mounted as files, which are then removed after the build step completes.

To use BuildKit, first, we need to enable it:

$ export DOCKER_BUILDKIT=1

Now, we can make use of BuildKit in our build process.

Let’s see an example Dockerfile using BuildKit:

# syntax=docker/dockerfile:1.3
FROM python:3.9

# Install AWS CLI
RUN pip install awscli

# Use BuildKit to mount the AWS credentials file and copy a file from an S3 bucket
RUN --mount=type=secret,id=aws,target=/root/.aws/credentials aws s3 cp s3://my-bucket/my-file /my-file

Now, we build the Docker image with the docker build command using BuildKit:

$ DOCKER_BUILDKIT=1 docker build --secret id=aws,src=$HOME/.aws/credentials -t my_image .
[+] Building 5.0s (8/8) FINISHED
...
 => [3/3] RUN --mount=type=secret,id=aws,target=/root/.aws/credentials aws s3 cp s3://my-bucket/my-file /my-file                     3.6s
 => exporting to image                                                                                                               ...
 => => naming to docker.io/library/my_image                                                                                          0.0s

In this example, the build process mounts the AWS credentials as a temporary file, ensuring they are not included in the final image.
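
If we build with Docker Compose instead of calling docker build directly, recent Compose versions that follow the Compose Specification can pass the same secret to the build. A minimal sketch, assuming the secret id aws matches the Dockerfile above:

services:
  app:
    build:
      context: .
      secrets:
        - aws

secrets:
  aws:
    file: $HOME/.aws/credentials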

4. Passing Credentials as Environment Variables

Passing AWS credentials as environment variables is straightforward but carries significant security risks, since the credentials can be exposed through logs, environment dumps, and other leaks. However, it’s a viable method for local development or other controlled environments.

We can pass environment variables to a Docker container at runtime using the -e flag with docker run, or by defining them in our docker-compose.yml file.

4.1. Using docker run

Let’s see an example using docker run:

$ docker run -e AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE \
  -e AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY \
  -e AWS_DEFAULT_REGION=us-west-2 \
  my-docker-image

We should be sure to replace the example values with our own credentials and region.

4.2. Using docker-compose.yml File

And here’s how we might define these in a docker-compose.yml file:

version: '3'

services:
  myapp:
    image: my-docker-image
    environment:
      - AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
      - AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
      - AWS_DEFAULT_REGION=us-west-2

This method is straightforward and works well for development environments.

However, credentials might be visible in shell history or log files. Further, managing different sets of credentials for different environments can be cumbersome.

To mitigate some of these risks, we can use environment variable files (.env files) and ensure they’re properly excluded from version control.
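
For example, we can keep the values in a file that stays out of version control (a hypothetical aws.env here) and pass it to docker run with the --env-file flag:

# aws.env -- add to .gitignore
AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
AWS_DEFAULT_REGION=us-west-2

We then reference the file at runtime:

$ docker run --env-file aws.env my-docker-image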

4.3. Managing Multiple Profiles and Multi-Factor Authentication

If we use multiple AWS profiles or multi-factor authentication, we can specify the profile using the AWS_PROFILE environment variable:

version: '3'

services:
  app:
    image: my_image
    environment:
      - AWS_PROFILE=${AWS_PROFILE}

Docker Compose substitutes the value from the host environment, so before running the container, we set AWS_PROFILE on the host:

$ export AWS_PROFILE=your_profile
$ docker-compose up
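
Note that the profile can only be resolved if the matching configuration and credentials files are also visible inside the container, for instance via a read-only volume mount as covered in the next section. A minimal sketch combining both:

version: '3'

services:
  app:
    image: my_image
    environment:
      - AWS_PROFILE=${AWS_PROFILE}
    volumes:
      - $HOME/.aws:/root/.aws:ro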

While this method is convenient, it’s important to handle the environment variables securely and avoid exposing sensitive information in logs or configuration files.

5. Mounting Credentials as Volumes

Another approach is to mount the AWS credentials file as a volume directly into the Docker container. This way, the credentials stay outside the image and are only made available to the container at runtime, which reduces the risk of exposure. The AWS credentials file is typically located at ~/.aws/credentials on Linux and macOS, or %UserProfile%\.aws\credentials on Windows, and it contains profiles with access keys.

Specifically, volume mounting involves attaching a file or directory from the host machine to a specific location within the Docker container. This method ensures that the credentials are not part of the image layers and remain on the host machine.

We use the -v flag with docker run to mount the credentials file when running the container:

$ docker run -v $HOME/.aws/credentials:/root/.aws/credentials:ro my_image

For Docker Compose, we define the volume mount in the docker-compose.yml file:

version: '3'

services:
  app:
    image: my_image
    volumes:
      - $HOME/.aws/credentials:/root/.aws/credentials:ro

This configuration ensures that the credentials file is available inside the container at runtime and excludes it from the image itself.
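
One caveat: the paths above assume the process in the container runs as root. If the image uses a non-root user, we can either mount into that user’s home directory or point the AWS CLI and SDKs at the mounted file explicitly via the AWS_SHARED_CREDENTIALS_FILE variable. Here’s a sketch with a hypothetical /aws/credentials target:

$ docker run \
  -v $HOME/.aws/credentials:/aws/credentials:ro \
  -e AWS_SHARED_CREDENTIALS_FILE=/aws/credentials \
  my_image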

When using this method, there are some security considerations and best practices to keep in mind.

First, we should always mount the credentials file as read-only (:ro) to prevent accidental modification. We should also ensure that only authorized users can access the credentials file on the host machine. Lastly, we should avoid reusing the same credentials across development and production environments by maintaining separate credentials and profiles.

6. Using Docker Swarm Secrets

If we’re using Docker Swarm, Docker secrets provide a secure way to manage sensitive information, including AWS credentials. Secrets are encrypted and only accessible to the services that need them, so we can store our credentials securely and inject them into containers at runtime.

To use this approach, first, let’s create a secret in Docker Swarm:

$ docker secret create aws_creds $HOME/.aws/credentials
j2v4gpz7y3nlfklj7wmlt5gn7

This command creates a Docker secret aws_creds from the $HOME/.aws/credentials file. The output is the ID of the created secret.

Afterward, we define the secrets and services in our docker-compose.yml file:

version: '3.7'

secrets:
  aws_creds:
    external: true

services:
  app:
    image: my_image
    secrets:
      - source: aws_creds
        target: /root/.aws/credentials
        uid: '1000'
        gid: '1000'
        mode: 0400

Let’s better understand the definition of our service:

  • external: true – specifies that the secret aws_creds is provided externally (i.e., it’s already created and managed outside the compose file)
  • target: /root/.aws/credentials – sets the mount location of the secret inside the container
  • uid: '1000', gid: '1000' – set the user and group ID that own the secret file
  • mode: 0400 – sets the file permissions to read-only for the owner

Lastly, we can now deploy the stack with the secrets:

$ docker stack deploy -c docker-compose.yml my_stack
Creating network my_stack_default
Creating service my_stack_app

Our output shows the creation of the network and the service defined in the compose file.

Notably, secrets are encrypted at rest and in transit within the Swarm, and they are only accessible to services that explicitly request them and are granted access. We can also rotate secrets without changing application code. Further, on worker nodes, secrets are held in memory and are not written to disk.
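
Since Swarm secrets are immutable, rotating one typically means creating a new secret and updating the service to reference it. Here’s a brief sketch with a hypothetical aws_creds_v2 name:

$ docker secret create aws_creds_v2 $HOME/.aws/credentials
$ docker service update \
    --secret-rm aws_creds \
    --secret-add source=aws_creds_v2,target=/root/.aws/credentials \
    my_stack_app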

However, Docker secrets are only available in Swarm mode, which may not be suitable for all our deployment scenarios.

7. Conclusion

In this article, we’ve explored various methods for passing AWS credentials to Docker containers, from basic approaches like environment variables and mounted credential files to more advanced techniques like Docker Secrets and IAM roles for EC2 instances.

We’ve seen that the best method depends on our specific use case and deployment environment.