1. Overview
A Docker container is a portable, lightweight, isolated environment that runs applications and their dependencies. Mounting a filesystem within a Docker container enables access to files or directories on the host system from the container. Doing so can be useful for sharing data or configurations between the container and the host.
In this tutorial, we’ll learn how to mount filesystems within a Docker container.
2. Host Source Directory
First, to mount filesystems within a Docker container, Docker must be installed on the system. Further, let’s identify a source directory on the host system, i.e., the path on the system we wish to use during the mounting process.
To illustrate, let’s create a new directory on our host machine with mkdir:
$ mkdir /home/user1/hostpath
Next, let’s check to see that the new directory is present and verify its attributes with the ls command:
$ ls -ld /home/user1/hostpath/
drwxrwxr-x 2 user1 user1 4096 Jul 17 04:43 /home/user1/hostpath/
Here, the -l option enables a long listing of the directory, while the -d flag ensures that we display information about the directory filesystem object and not its content.
With the source directory ready, let’s now run the container.
3. Using Volumes
To mount a filesystem within a Docker container, we use the -v or --volume flag when running the container. Its argument consists of two fields separated by a colon (:):
- host source directory path
- container target directory path
In short, we link a directory on the host machine to a directory within the container when running it with docker run:
$ docker run -it --rm -v /path/on/host:/path/in/container image_name /bin/bash
Let’s break down this command:
- -i -t (or -it) keeps STDIN open and allocates a pseudo-terminal, giving us interactive access to the container
- --rm removes the container automatically upon exit, so it doesn't linger as a stopped container
- /path/on/host is the path of the directory on the host machine that we want to mount
- /path/in/container is the desired path within the container where the directory will be accessible
- image_name is the name or ID of the Docker image we want to run
- /bin/bash provides a Bash shell within the container
Importantly, if we don't employ the --rm option, we might need to remove the container manually after exit to free storage space.
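For instance, if we omit --rm, we can list stopped containers and remove a specific one manually (the container ID here is a placeholder):
$ docker ps -a
$ docker rm <container_id>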
4. Running the Docker Container
To illustrate, let’s now replace /path/on/host and /path/in/container from the general syntax we saw above with the respective source and target paths for our scenario:
$ docker run -it --rm -v /home/user1/hostpath:/home/cont_path ubuntu /bin/bash
root@e8439d98c634:/#
This command performs several operations:
- starts a Docker container from the ubuntu image
- creates the /home/cont_path directory within the container
- maps the /home/cont_path target directory to the /home/user1/hostpath source directory we created earlier
Once the container is up and running, we can check that the filesystem is properly mounted by navigating to the target directory within the container:
root@baeldung:/# cd /home/cont_path
root@baeldung:/home/cont_path#
Notably, both paths now point to the same data: anything we create or modify under /home/cont_path in the container is immediately visible under /home/user1/hostpath on the host, and vice versa. To verify this, let's create the subdirectory structure /home/cont_path/dir1/dir2 from inside the container:
# mkdir -p dir1/dir2
After exiting the container, let’s check the directory on our host machine to see if our additions are present:
$ ls /home/user1/hostpath/
dir1
$ ls ~/hostpath/dir1
dir2
Significantly, the matching content of the host and container directories confirms that the mount operation was successful.
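As an additional check in the other direction, we can create a file on the host and confirm that it shows up inside a container that mounts the same directory. Here, the file name from_host.txt is just an illustration, and we assume a container is running with the same -v mapping as before:
$ touch /home/user1/hostpath/from_host.txt
Listing the target directory inside the container should then show from_host.txt next to dir1:
root@baeldung:/# ls /home/cont_path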
5. Using Elevated Privileges
In Docker, we can use the --privileged flag, as well as the --cap-add and --cap-drop flags, to mount filesystems within a container.
Let's suppose we want to set up an environment that depends on mounting a CIFS share from the host. If we try to do so directly, the container might run into problems because the default permissions don't allow the mount. However, --privileged and the capability flags can grant elevated privileges to the container, so it can access host-level resources, including mounting filesystems.
To demonstrate, let's explore how to mount filesystems within a Docker container using the --privileged, --cap-add, and --cap-drop options.
5.1. Running the Container With --privileged
Again, to grant a container elevated privileges, we'll run it with the --privileged switch:
$ docker run --privileged -it -v /home/user1/hostpath:/home/cont_path ubuntu /bin/bash
root@53d79f0bb259:/#
To verify, we can use the docker inspect command with the --format option. This docker command provides additional data about running containers. Importantly, it can show whether a container runs in privileged mode.
So, let's check if our container is running with elevated rights using the --format flag and the container ID as an argument:
$ docker inspect --format='{{.HostConfig.Privileged}}' 53d79f0bb259
true
Indeed, true means we are in privileged mode. However, for containers not using that mode, the command would print false.
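If we don't have the container ID at hand, we can look it up from another terminal with docker ps, which lists the running containers together with their IDs:
$ docker ps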
5.2. Granting Specific Permissions
To fine-tune the capabilities of a container, we can use the --cap-add and --cap-drop flags. These options add or remove specific Linux capabilities, altering the container's default capability set.
Notably, containers running with elevated privileges have some default capabilities on filesystems mounted within the container:
- READ
- WRITE
- MKNOD
To illustrate how to use both flags, let’s set up our container to use all capabilities except for MKNOD:
$ docker run --cap-add=ALL --cap-drop=MKNOD -it -v /home/user1/hostpath:/home/cont_path ubuntu /bin/bash
Let’s break down the new flags in the command:
- --cap-add=ALL adds all capabilities to the container
- --cap-drop=MKNOD removes the MKNOD capability, thus blocking the container from creating special files using mknod
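As a quick sanity check (the device node path below is just an illustration), attempting to create a special file inside this container should now be denied with a permission error, since the MKNOD capability was dropped:
# mknod /tmp/null_dev c 1 3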
In addition, to allow the container to carry out some system administration tasks, we can give it the SYS_ADMIN capability. To do this, we can use --cap-add=SYS_ADMIN:
$ docker run --cap-add=SYS_ADMIN -it -v /home/user1/hostpath:/home/cont_path ubuntu /bin/bash
Notably, the SYS_ADMIN argument provides administrative rights such as mount privileges to the container.
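As an illustration of what this enables, a mount operation that is normally blocked inside the container, such as mounting a small tmpfs, should now be possible. This is only a sketch: the mount point name is arbitrary, and depending on the host's seccomp or AppArmor profiles, additional --security-opt settings may still be required:
# mkdir -p /mnt/scratch
# mount -t tmpfs tmpfs /mnt/scratch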
6. Mounting Multiple Directories
Of course, we can mount multiple directories by specifying multiple -v flags in the docker run command. Each -v switch represents a separate mount:
$ docker run -v ~/hostpath/path_A:/home/path_1 -v ~/hostpath/path_B:/home/path_2 image_name
This command mounts two directories:
- ~/hostpath/path_A on the host machine to /home/path_1 in the container
- ~/hostpath/path_B on the host machine to /home/path_2 in the container
Notably, we can mount as many directories as we need by adding more -v options.
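For instance, let's create the two host directories and run the ubuntu image with both mounts, listing the mount points in a single command; both path_1 and path_2 should appear in the output:
$ mkdir -p ~/hostpath/path_A ~/hostpath/path_B
$ docker run --rm -v ~/hostpath/path_A:/home/path_1 -v ~/hostpath/path_B:/home/path_2 ubuntu ls /home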
For a mount to be usable, the process inside the container must be able to access the mounted path. Therefore, to ensure proper read and write access, we should evaluate the permissions and ownership of the mounted directories. When we run into permission problems, we may need to change the permissions or ownership, or use additional setup options.
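For example, if the process inside the container runs as UID 1000 (an assumption for illustration, as the actual UID depends on the image), we can hand ownership of the host directory to that UID:
$ sudo chown -R 1000:1000 /home/user1/hostpath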
In general, mounting filesystems within Docker containers lets us exchange data and resources between the host machine and the container with little effort. In addition, it increases flexibility and improves the overall functioning of containerized applications.
7. Conclusion
In this article, we saw how to mount a host filesystem on a Docker container. First, we learned how to set up the host machine. Then we used the -v option to mount the filesystem on the Docker container. Further, we mounted a host filesystem on a Docker container with elevated privileges, allowing it to access host-level resources. Finally, we tested the filesystem to confirm that it works properly.