1. Introduction

Kubernetes organizes containers into units called pods. A pod is responsible for the storage allocation, resources, and network settings of each container within it. Thus, to find information like the IP address of a container inside a pod, we usually turn to the definition of the pod that contains it. However, there are times when we might want to acquire this configuration from within a pod container.

In this tutorial, we explore ways to get network settings for a particular container as part of a Kubernetes pod. First, we perform pod creation. After that, we turn to a description of the basic pod networking concepts. Next, we go through the way a pod and container acquire an IP address. Finally, we show ways to get the IP address of a pod or container both from within the entities and by using Kubernetes tools.

Notably, for Kubernetes deployments that run inside a virtual machine (VM) or container, such as Minikube, we should investigate the minikube entity itself and not its host. To do so, we can use exec and a shell for containers or just SSH for a VM. In addition, we assume Docker is the container runtime of choice for the Kubernetes deployment.
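
For instance, with the Docker driver, we can open a shell in the minikube node container, while VM-based drivers offer SSH access (the node name minikube below assumes a default single-node setup):

$ docker exec --interactive --tty minikube bash
$ minikube ssh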

We tested the code in this tutorial on Debian 12 (Bookworm) with GNU Bash 5.2.15. Unless otherwise specified, it should work in most POSIX-compliant environments.

2. Pod Creation

Kubernetes pods are usually set up via definitions in configuration files.

Let’s see an example:

$ cat compod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: compod
spec:
  containers:
  - name: deb1
    image: debian:latest
    command: ["sh"]
    tty: true
  - name: deb2
    image: debian:latest
    command: ["sh"]
    tty: true

In this case, we define a Pod that we name compod. It [spec]ifies two containers with the same definition but different names (deb1 and deb2).

To create a pod based on this file, we use the apply subcommand of kubectl:

$ kubectl apply --filename=compod.yaml

For simplicity, we can avoid creating files and use a single command:

$ kubectl apply --filename=- <<< '
apiVersion: v1
kind: Pod
metadata:
  name: compod
spec:
  containers:
  - name: deb1
    image: debian:latest
    command: ["sh"]
    tty: true
  - name: deb2
    image: debian:latest
    command: ["sh"]
    tty: true
'
pod/compod created
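
Before moving on, we can optionally wait until both containers are up, for example with the wait subcommand:

$ kubectl wait --for=condition=Ready pod/compod
pod/compod condition met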

How does Kubernetes decide what network settings to assign to a pod that doesn’t specify any network-related information in its definition?

3. Pod Networking

Generally, Kubernetes handles networking based on its underlying container runtime such as Docker. Still, it can build on top of that via addons like Flannel or Calico.

In general, several control plane components provide addresses that can be assigned to different entities, as the commands after this list demonstrate:

  • network plugin: assigns IP addresses to pods
  • kube-apiserver: assigns IP addresses to services
  • kubelet (or cloud-controller-manager): assigns IP addresses to nodes
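
To see each category in practice, we can list pod, service, and node addresses with kubectl (the values naturally depend on the cluster):

$ kubectl get pods --all-namespaces --output=wide
$ kubectl get services --all-namespaces
$ kubectl get nodes --output=wide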

However, the network model itself is implemented by the container runtime through the Container Network Interface (CNI). Usually, this structure comprises a bridge to the local network and a veth* interface that connects each pod container to that bridge. With addons, this communication can be extended to other nodes through encapsulation.

Notably, Kubernetes also includes an internal DNS server that handles name-to-address mappings within the cluster.
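
For example, we can resolve the built-in kubernetes service from one of our containers via getent, which the debian image provides as part of glibc (the resulting ClusterIP depends on the cluster):

$ kubectl exec pod/compod --container deb1 -- getent hosts kubernetes.default.svc.cluster.local
10.96.0.1       kubernetes.default.svc.cluster.local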

4. Pod IP Address Assignment

Before we understand how to acquire the current pod or container IP address, let’s get to know what’s involved in its assignment. To do that, we ignore any Kubernetes network addons and only look at the default Kubernetes installation.

4.1. Node Classless Inter-Domain Routing (CIDR) Address

To begin with, we check the Classless Inter-Domain Routing (CIDR) address of the current node:

$ kubectl get node <NODE_NAME> --output=jsonpath={..podCIDR}

For instance, if we use Minikube, the NODE_NAME is usually minikube:

$ kubectl get node minikube --output=jsonpath={..podCIDR}
10.244.0.0/24

Thus, we get the fairly common 10.244.0.0/24 subnet. Other nodes within the cluster get different subnet assignments. For example, they can be 10.244.1.0/24 for the next node, 10.244.2.0/24 for the third, and so on.
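
In fact, we can list the subnet of every node with a single jsonpath loop (the output below assumes a single-node Minikube cluster):

$ kubectl get nodes --output=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
minikube    10.244.0.0/24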

Notably, this is different from the service ClusterIP subnet, which is common for the whole cluster.
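
On clusters where the control plane configuration is accessible, one way to see that service range is to search the kube-apiserver arguments, for instance via cluster-info dump (the value below is only an example):

$ kubectl cluster-info dump | grep --max-count=1 service-cluster-ip-range
                            "--service-cluster-ip-range=10.96.0.0/12",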

4.2. Kubernetes Bridge

After acquiring the Kubernetes subnet assigned to the current node, we can check the network interfaces and see which one uses that address:

$ ip address show
[...]
3: bridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ce:e9:1c:47:3f:16 brd ff:ff:ff:ff:ff:ff
    inet 10.244.0.1/16 brd 10.244.255.255 scope global bridge
       valid_lft forever preferred_lft forever
    inet6 fe80::cce9:1cff:fe47:3f16/64 scope link
       valid_lft forever preferred_lft forever
[...]

As expected, we see a bridge, which sits at the base of all container network interfaces:

+------------+ +------------+
| container1 | | container2 |
|(10.244.0.#)| |(10.244.0.#)|
|   veth0    | |    veth1   |
+------------+ +------------+
             ^ ^
             v v            
+-----------------------------+
|            host             |
|        (10.244.0.1)         |
|           bridge            |
+-----------------------------+

This way, upon creation, a Kubernetes container gets an IP address in the correct range and can communicate with the outside via the bridge. Critically, the default docker0 bridge and IP address range isn’t involved in this process.
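
We can verify this by checking docker0 itself, which typically sits on a separate 172.17.0.0/16 range:

$ ip address show docker0
[...]
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
[...]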

4.3. Container Virtual Ethernet

As we already saw, each container has a veth* interface that connects to the main Kubernetes bridge.

To understand how they link to the deployment in practice, we can again list the interfaces on the Kubernetes host:

$ ip address show
[...]
4: veth24d8dc94@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master bridge state UP group default
    link/ether b2:e6:d0:54:6a:74 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::b0e6:d0ff:fe54:6a74/64 scope link
       valid_lft forever preferred_lft forever
35: veth28f88740@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master bridge state UP group default
    link/ether 36:af:2a:fc:34:56 brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::34af:2aff:fefc:3456/64 scope link
       valid_lft forever preferred_lft forever
[...]

Here, we see two veth* interfaces on the host with a unique identifier and a suffix that denotes their relation to a specific container interface. In this case, if2 or [i]nter[f]ace 2 is the interface number in each container that maps to the respective host veth*.
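
If we need to resolve such a mapping programmatically, the iflink pseudo-file of a host veth* interface holds the interface index of its peer inside the container (here, the first interface from the listing above):

$ cat /sys/class/net/veth24d8dc94/iflink
2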

After understanding the basic way containers get IP addresses, let’s investigate how we can acquire the current assignment for a given container.

5. Get IP Address Within Container

As we already saw, network interfaces on the Kubernetes host map to interfaces within different containers.

5.1. List Containers

To begin with, we list the containers on the Kubernetes host:

$ docker ps
CONTAINER ID   IMAGE                       [...]   NAMES
ff9f22b59b1d   debian                      [...]   k8s_deb2_compod_default_855cfd1d-5aec-4240-b750-1283af2c1a26_0
6cc96be009e3   debian                      [...]   k8s_deb1_compod_default_855cfd1d-5aec-4240-b750-1283af2c1a26_0
a0dbd3329fd6   registry.k8s.io/pause:3.9   [...]   k8s_POD_compod_default_855cfd1d-5aec-4240-b750-1283af2c1a26_0
9b2e662ebd08   6e38f40d628d                [...]   k8s_storage-provisioner_storage-provisioner_kube-system_60f33c86-5096-4727-b09a-d2747d025b0e_2
177c90140963   ead0a4a53df8                [...]   k8s_coredns_coredns-5dd5756b68-cbtcp_kube-system_428e26bd-b5e3-40ca-a2e8-9467ef6b6a6d_0
946e3d845de4   registry.k8s.io/pause:3.9   [...]   k8s_POD_coredns-5dd5756b68-cbtcp_kube-system_428e26bd-b5e3-40ca-a2e8-9467ef6b6a6d_0
[...]

Visually, we can already identify the two containers we created with the compod pod:

  • deb1: k8s_deb1_compod_default_855cfd1d-5aec-4240-b750-1283af2c1a26_0
  • deb2: k8s_deb2_compod_default_855cfd1d-5aec-4240-b750-1283af2c1a26_0

In general, the container name includes k8s, the name that we assigned, the pod name, and a unique identifier.
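
Alternatively, containers started by the Docker runtime for Kubernetes usually carry labels such as io.kubernetes.pod.name, so we can filter by pod instead of parsing names (a sketch, assuming the deployment sets these labels):

$ docker ps --filter label=io.kubernetes.pod.name=compod --format '{{.Names}}'
k8s_deb2_compod_default_855cfd1d-5aec-4240-b750-1283af2c1a26_0
k8s_deb1_compod_default_855cfd1d-5aec-4240-b750-1283af2c1a26_0
k8s_POD_compod_default_855cfd1d-5aec-4240-b750-1283af2c1a26_0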

With this mapping, we can continue working with a particular container of interest.

5.2. Enter Container

Let’s leverage exec to get an --interactive shell on a --tty within the container:

$ docker exec --interactive --tty k8s_deb1_compod_default_855cfd1d-5aec-4240-b750-1283af2c1a26_0 bash
root@compod:/#

Another way to enter the container is the exec equivalent of kubectl:

$ kubectl exec --stdin --tty pod/compod --container deb1 -- bash
root@compod:/#

Here, --stdin replaces --interactive, while the container is specified through the pod name and a dedicated --container option. Finally, the -- separator isn’t yet mandatory, but the syntax without it is deprecated.

Either way, we can now run commands within the container.

5.3. Get IP Address With ip or /proc

At this point, there are different means to acquire the IP address.

As usual, ip is the standard utility:

root@compod:/# ip address show
[...]
2: eth0@if35: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether c2:98:17:b5:04:e2 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.244.0.33/16 brd 10.244.255.255 scope global eth0
       valid_lft forever preferred_lft forever

Thus, we see that deb1 has an IP address of 10.244.0.33. Notably, we concentrate on interface 2 due to the veth* mapping we saw earlier on the Kubernetes host. As expected, eth0 in the container points back to if35 or [i]nter[f]ace 35 on the host. This way, we know which host veth* interface corresponds to which container interface.

Of course, containers usually aim to be as minimalistic as possible, so the iproute2 package with the ip command might not be available. In such cases, we can install it with the respective package manager:

$ apt update && apt install iproute2

Still, a container may not even have a package manager.

In such cases, the /proc pseudo-filesystem might also be an option to get the IP address:

$ cat /proc/net/fib_trie
Main:
  +-- 0.0.0.0/1 2 0 2
     +-- 0.0.0.0/4 2 0 2
        |-- 0.0.0.0
           /0 universe UNICAST
        +-- 10.244.0.0/16 2 0 2
           +-- 10.244.0.0/26 2 0 2
              |-- 10.244.0.0
                 /16 link UNICAST
              |-- 10.244.0.33
                 /32 host LOCAL
           |-- 10.244.255.255
              /32 link BROADCAST
[...]

In particular, the /proc/net/fib_trie file shows the network topology. To match these addresses to interfaces, we can also cross-reference /proc/net/route.
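
For convenience, a small awk pipeline can extract just the locally assigned addresses from this file (a sketch that simply filters out the loopback range):

$ awk '/32 host/ { print addr } { addr=$2 }' /proc/net/fib_trie | grep -v '^127\.' | sort -u
10.244.0.33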

Yet, containers sometimes don’t even have a shell.

5.4. Get IP Address Without a Shell

When a container is minimalistic and doesn’t provide a shell, we might still be able to use the exec subcommand of kubectl or docker:

$ docker exec --interactive --tty k8s_deb1_compod_default_855cfd1d-5aec-4240-b750-1283af2c1a26_0 cat /proc/net/fib_trie
[...]
$ kubectl exec --stdin --tty pod/compod --container deb1 -- cat /proc/net/fib_trie
[...]

In this example, we read the /proc/net/fib_trie file, but ip can be used the same way. This method is especially beneficial for automation.
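
Since no interactive session is needed, this approach also lends itself to scripting, for example to capture the address in a variable (a sketch, assuming ip is available in the container):

$ POD_IP=$(kubectl exec pod/compod --container deb1 -- ip -4 -o address show eth0 | awk '{ print $4 }' | cut -d/ -f1)
$ echo $POD_IP
10.244.0.33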

5.5. Get IP Address in Minimal Containers

For containers without any tooling at all, we can turn to the debug subcommand of kubectl:

$ kubectl debug --tty --stdin pod/compod --image=debian:latest --target=deb1 -- bash
Targeting container "deb1". If you don't see processes from this container it may be because the container runtime doesn't support this feature.
Defaulting debug container name to debugger-h8kwv.
If you don't see a command prompt, try pressing enter.
root@compod:/#

In this case, we --target container deb1 from the compod pod. Specifically, this command creates a special ephemeral container in the same pod, which attaches itself to the same resources as the --target, mainly via the Linux namespace features.

Now, we can employ the same methods we saw earlier, since the --image we supply can contain any toolset.
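
For instance, an image like busybox bundles a minimal ip applet, so we can even skip the interactive shell (a sketch; the image choice and exact output depend on the setup):

$ kubectl debug --stdin --tty pod/compod --image=busybox --target=deb1 -- ip address show eth0
[...]
    inet 10.244.0.33/16 brd 10.244.255.255 scope global eth0
[...]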

Critically, containers within the same pod have the same IP address assignment. Thus, we can also find out the container address by checking the one for the pod.

6. Get IP Address From Pod Definition

To get basic information about a pod, we can just --output data about it in the wide format:

$ kubectl get pod/compod --output wide
NAME     READY   STATUS    RESTARTS   AGE   IP            NODE       NOMINATED NODE   READINESS GATES
compod   2/2     Running   0          25h   10.244.0.33   minikube   <none>           <none>

As expected, the pod IP address is 10.244.0.33.

Of course, describe can also show the same information:

$ kubectl describe pod/compod
Name:             compod
Namespace:        default
Priority:         0
Service Account:  default
Node:             minikube/192.168.49.2
Start Time:       Tue, 02 Apr 2024 02:04:00 -0400
Labels:           <none>
Annotations:      <none>
Status:           Running
IP:               10.244.0.33
IPs:
  IP:  10.244.0.33
Containers:
  deb1:
[...]
  deb2:
[...]

Thus, we again see the address as 10.244.0.33. Consequently, each of the containers has this address assigned internally to its respective interface.

To only extract the address, we can leverage a jsonpath:

$ kubectl get pod/compod --output=jsonpath='{..podIP}'
10.244.0.33

In case of multiple assignments, we can turn to podIPs instead:

$ kubectl get pod/compod --output=jsonpath='{..podIPs}'
[{"ip":"10.244.0.33"}]

Notably, when we try to get the same IP address via the Docker container runtime, we won’t see any output:

$ docker inspect --format '{{ .NetworkSettings.IPAddress }}' k8s_deb1_compod_default_855cfd1d-5aec-4240-b750-1283af2c1a26_0
$

This is because the Docker network interface doesn’t play a role in the Kubernetes container IP address assignment.
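
Instead, the application containers join the network namespace of the pod’s pause (sandbox) container, which we can confirm via their network mode (a sketch; the exact sandbox identifier depends on the deployment):

$ docker inspect --format '{{ .HostConfig.NetworkMode }}' k8s_deb1_compod_default_855cfd1d-5aec-4240-b750-1283af2c1a26_0
container:a0dbd3329fd6[...]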

7. Summary

In this article, we talked about checking the pod and container IP addresses in a Kubernetes environment.

In conclusion, there are many ways to acquire the pod or container IP address both within the container and from the outside.