1. Overview

Kubernetes has become the de facto standard for orchestrating complex, scalable applications. As these applications grow, they often involve multiple interconnected components.

Multi-pod deployments in Kubernetes help us manage these components efficiently by enabling each part to run in its own pod. This provides greater control and flexibility. Moreover, it makes scaling, troubleshooting, and maintaining our applications easier.

In this tutorial, we’ll set up a multi-pod deployment in Kubernetes.

2. Understanding Multi-Pod Deployments

Before we go into the configuration specifics, let’s take a step back and understand why multi-pod deployments are crucial in Kubernetes.

2.1. Advantages of Multi-Pod Deployments

Multi-pod deployments in Kubernetes involve running different parts of an application in separate pods within a cluster. This approach offers several advantages:

  • Scalability: we can easily increase or decrease the number of pods for each component based on demand, as shown below. This ensures our application can handle varying levels of traffic
  • Fault tolerance: if one pod fails, the others continue running. This prevents downtime and keeps our application available
  • Maintainability: we can update and change individual components without affecting the entire system. This simplifies both development and maintenance

In essence, multi-pod deployments empower us to build applications that are more scalable, resilient, and easier to manage.
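
For instance, once a Deployment exists, scaling a single component is a one-line operation. Here, frontend-deployment is the name of a Deployment we'll actually create later in this tutorial:

$ kubectl scale deployment frontend-deployment --replicas=5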

2.2. Important Kubernetes Concepts

Now that we understand the advantages of multi-pod deployments, let’s briefly introduce two essential Kubernetes concepts: Deployments and Services.

In Kubernetes, Deployments are an essential tool for managing stateless applications. They act as blueprints that specify the desired state of our pods, and Kubernetes continuously works to maintain that state.

Furthermore, let’s say we have multiple pods working together. How do they communicate? That’s where Services come in.

A Service provides a stable network address and distributes incoming traffic across those pods. This abstraction makes our application more flexible and scalable.

Moreover, it enables us to add or remove pods without disrupting how users interact with our application.

3. Setting Up a Kubernetes Multi-Pod Deployment

To illustrate the process of configuring a multi-pod deployment, let’s consider a realistic application. In this scenario, we have a web application composed of three distinct components—a frontend, a backend, and a database.

Each component runs in its own pod, so it can scale independently based on demand.

3.1. Sample Application Scenario

Let’s imagine we have a web application with a user interface (UI) served by a frontend service, business logic handled by a backend service, and data stored in a database.

Each component must run in a separate pod to allow independent scaling and management. For instance, the frontend might require more replicas during peak usage, while the backend and database might have different resource requirements.

Let’s create separate Kubernetes Deployment configurations to deploy each component. These configurations define how many replicas of each pod should run, what container images to use, and how resources should be allocated.

3.2. Frontend Deployment

Now, let’s create a frontend Deployment to run an Nginx container serving the UI. Here’s a YAML configuration for the frontend:

$ cat frontend-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: nginx-container
        image: nginx:latest
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"

The above configuration file creates three replicas of the frontend pod. Moreover, each of these pods runs an Nginx container.

Additionally, the resources section ensures that each pod requests a minimum of 64Mi of memory and 250m of CPU. It also sets upper limits of 128Mi of memory and 500m of CPU.
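
Assuming we have a running cluster and kubectl pointed at it, we can apply the manifest and verify that all three replicas come up:

$ kubectl apply -f frontend-deployment.yaml
$ kubectl get pods -l app=frontend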

3.3. Backend Deployment

Next, let’s configure the backend, which handles the application’s logic. Below is our YAML configuration:

$ cat backend-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: backend-container
        image: my-backend-app:latest
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "128Mi"
            cpu: "500m"
          limits:
            memory: "256Mi"
            cpu: "1"
        env:
        - name: DATABASE_URL
          value: "mongodb://db-service:27017"

This Deployment creates two replicas of the backend pod, where my-backend-app:latest is a placeholder for our actual application image. Moreover, it includes an environment variable, DATABASE_URL, which the backend application uses to connect to the database.
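
Once applied, we can sanity-check the variable from one of the running pods. This assumes the container image ships a printenv binary, as most Linux-based images do:

$ kubectl apply -f backend-deployment.yaml
$ kubectl exec deploy/backend-deployment -- printenv DATABASE_URL

If everything is wired correctly, this prints mongodb://db-service:27017.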

3.4. Database Deployment

Finally, let’s configure the database component. Here, we use MongoDB, a popular NoSQL database. Let’s have a look at the YAML configuration:

$ cat database-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: db-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: database
  template:
    metadata:
      labels:
        app: database
    spec:
      containers:
      - name: mongo-container
        image: mongo:4.2
        ports:
        - containerPort: 27017
        resources:
          requests:
            memory: "256Mi"
            cpu: "500m"
          limits:
            memory: "512Mi"
            cpu: "1"
        volumeMounts:
        - name: mongo-storage
          mountPath: /data/db
      volumes:
      - name: mongo-storage
        persistentVolumeClaim:
          claimName: mongo-pvc

The configuration file sets up a MongoDB pod with resource requests and limits similar to the backend. In addition, the volumeMounts and volumes sections ensure that the database data is persisted even if the pod is restarted or rescheduled.
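
Note that the claimName references a PersistentVolumeClaim called mongo-pvc, which must exist before the pod can be scheduled. Here’s a minimal sketch of such a claim; the access mode and 1Gi size are assumptions we’d adjust for a real workload:

$ cat mongo-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi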

Therefore, we’ve established the foundation for our multi-pod application by creating these three separate Deployment configurations.
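
With the claim from the sketch above in place, we can apply all the manifests and wait for each rollout to finish:

$ kubectl apply -f mongo-pvc.yaml -f frontend-deployment.yaml \
  -f backend-deployment.yaml -f database-deployment.yaml
$ kubectl rollout status deployment/frontend-deployment
$ kubectl rollout status deployment/backend-deployment
$ kubectl rollout status deployment/db-deployment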

4. Configuring Services for Multi-Pod Deployments

Now that we have individual components deployed as pods, we need a way to connect and make them accessible. This is where Kubernetes Services step in.

Let’s find out how Services act as the glue that binds our multi-pod application together.

4.1. The Role of Services

In Kubernetes, pods are ephemeral by nature—they can be created, destroyed, or even moved around the cluster at any time. Consequently, it’s impractical to directly address individual pods when we need them to talk to each other or be accessible from outside the cluster.

A Service acts as a stable network endpoint. It provides a consistent address for a group of pods. This abstraction layer allows us to scale our application up or down without worrying about clients needing to keep track of constantly changing pod IP addresses.

4.2. Types of Kubernetes Services

Kubernetes offers a few different types of Services, each with its own strengths.

First, ClusterIP is the default type. It gives the Service a virtual IP address accessible only from within the cluster. Therefore, it’s great when different parts of our application need to talk to each other but don’t need exposure to the outside world.

Next, a NodePort Service makes our application accessible from outside the cluster. It does this by opening the same port on every node in the cluster. As a result, we can reach the service using any node’s IP address and that port number.

In addition, a LoadBalancer Service is the standard way to expose our application to the internet on cloud platforms. It provisions an external load balancer that distributes traffic to our service.

Ultimately, the right Service type depends on how we want the application to be accessed.

4.3. Creating Services

Let’s see how to create Services for our frontend, backend, and database components. First, let’s configure a NodePort service for our frontend:

$ cat frontend-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: frontend-service
spec:
  type: NodePort
  selector:
    app: frontend
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30000

This Service routes traffic on port 80 to pods labeled with app: frontend, targeting port 80 on those pods, where the Nginx container listens. Additionally, it exposes the service on port 30000 on each node, allowing external access.
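
After applying the manifest, we can confirm the port mapping Kubernetes assigned:

$ kubectl apply -f frontend-service.yaml
$ kubectl get service frontend-service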

Next, let’s set up a ClusterIP service for our backend:

$ cat backend-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  selector:
    app: backend 
  ports:
  - port: 8080
    targetPort: 8080

The above Service routes traffic on port 8080 to pods labeled with app: backend, targeting the same port (8080) on the containers. Since we didn’t specify a type, it defaults to ClusterIP, making the backend reachable only from within the cluster.
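
To verify in-cluster connectivity, we can spin up a temporary pod and call the Service by its DNS name. This sketch assumes the backend answers plain HTTP on port 8080:

$ kubectl run curl-test --rm -it --image=curlimages/curl --restart=Never \
  -- curl http://backend-service:8080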

In addition, let’s configure a headless ClusterIP service for our database:

$ cat db-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: db-service
spec:
  clusterIP: None
  selector:
    app: database
  ports:
  - port: 27017
    targetPort: 27017

Here, the Service doesn’t have a cluster IP. Instead, it provides stable DNS entries for each pod labeled with app: database, which is useful for stateful applications. Notably, the name db-service and port 27017 match the DATABASE_URL (mongodb://db-service:27017) we configured for the backend.
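
We can observe this behavior by resolving the Service name from inside the cluster. Instead of a single virtual IP, the lookup returns the pod IPs directly; here’s one way to check, assuming the cluster’s DNS add-on is running:

$ kubectl run dns-test --rm -it --image=busybox --restart=Never \
  -- nslookup db-service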

Let’s take note of the selector field in each Service configuration file above. This is how Kubernetes knows which pods to send traffic to, and it must match the labels we defined in our Deployment configurations.

Furthermore, with these Services in place, our application components can now talk to each other. We can also access the frontend from outside the cluster using the NodePort we specified.

For example, if our node’s IP address is 192.168.99.100, we can access the frontend at http://192.168.99.100:30000.
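
We can find a node’s IP with kubectl and then test the endpoint, substituting our own node’s address:

$ kubectl get nodes -o wide
$ curl http://192.168.99.100:30000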

5. Conclusion

In this article, we’ve explored how to set up multi-pod deployments in Kubernetes. We discussed the core concepts, walked through practical Deployment and Service configurations, and emphasized the benefits of managing different components independently.

Hence, we’re well-equipped to build scalable and resilient Kubernetes applications using these techniques.