1. Introduction

For many software applications, communicating with external services is necessary to complete their tasks. Whether sending messages or consuming APIs, most applications rely on other systems to function properly.

However, as more and more users move their applications into Kubernetes, providing secure and reliable access to them becomes more challenging. With traffic having to pass through various deployments and services, it can be difficult to ensure requests reach the right place.

Luckily, there are a number of different mechanisms to help manage network traffic and ensure requests get to their desired destination inside a cluster. In this tutorial, we will look at two of these mechanisms: ingresses and load balancers.

2. Workloads and Services

Before we discuss either of these, we must first take a step back and look at how applications are deployed and managed in Kubernetes.

2.1. Workloads and Pods

We start by packaging our applications into Docker images. Those images are then used to create one of the predefined workload types, for example:

  • ReplicaSet: Ensures a specified number of identical pods are running at all times
  • StatefulSet: Gives each pod a stable, unique identity and scales pods up or down in a predictable order
  • DaemonSet: Ensures a copy of a pod is running on some or all nodes at all times

All of these workloads result in one or more pods being deployed in the cluster. A pod is the smallest deployable unit that we can use in Kubernetes. It essentially represents an application running somewhere in the cluster.
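
For instance, a minimal ReplicaSet manifest might look something like this (the name, image, and labels below are placeholders for illustration):

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: api-replicaset
spec:
  # Keep three identical pods running at all times
  replicas: 3
  selector:
    matchLabels:
      app: api-app
  template:
    metadata:
      labels:
        app: api-app
    spec:
      containers:
        - name: api-app
          # Placeholder image name
          image: registry.example.com/api-app:1.0
          ports:
            - containerPort: 8080

Each of the resulting pods carries the app: api-app label, which is what a service will later use to find them.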

By default, every pod has a unique IP that is accessible by other members of the cluster. However, accessing a pod using its IP address is not a good practice for multiple reasons.

First of all, it isn’t easy to know what IP will be assigned to a pod ahead of time. This makes it nearly impossible to store IP information in a configuration where other applications can access it. Second, many workloads create multiple pods — in some cases, dynamically. This means, at any point in time, we may not know how many pods are running for a given application.

And finally, pods are nonpermanent resources. They are meant to start and stop over time, and each time this happens, it’s more than likely they will get a new IP address.

For all of these reasons, communicating with pods using their IP address is a bad idea. Instead, we should use services.

2.2. Services

A Kubernetes service is an abstraction that exposes a group of pods as a network service. The service handles all the complexity of identifying running pods and their IP addresses. Every service gets a stable DNS name that is accessible across the cluster. So instead of using pod IPs to communicate, applications simply use the provided service name.

Let’s look at a sample service definition:

apiVersion: v1
kind: Service
metadata:
  name: api-service
spec:
  selector:
    app: api-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080

This will create a service named api-service in the cluster. This service is bound to any pod carrying the label app: api-app, regardless of how many of those pods exist or where they are running. And, as new pods with this label start up, the service will automatically discover them.
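
For example, another pod in the cluster can reach these pods through the service's DNS name rather than any pod IP. Here's a hypothetical client pod that receives the service address through an environment variable (the image and variable names are only for illustration):

apiVersion: v1
kind: Pod
metadata:
  name: client-app
spec:
  containers:
    - name: client-app
      # Placeholder image name
      image: registry.example.com/client-app:1.0
      env:
        # The service name resolves through cluster DNS; within the same
        # namespace, the short name api-service is enough
        - name: API_URL
          value: "http://api-service:80"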

Using a service is a great start to decoupling pods from the applications that use them. But, on their own, they don’t always achieve our desired goal. That’s where ingresses and load balancers come in.

3. Ingress

By default, a Kubernetes service is private to the cluster. This means only applications inside the cluster can access it. There are a number of ways around this, and one of the best is an ingress.

In Kubernetes, an ingress lets us route traffic from outside the cluster to one or more services inside the cluster. Typically, the ingress works as a single point of entry for all incoming traffic.

An ingress receives a public IP, meaning it is accessible outside the cluster. Then, using a set of rules, it forwards all of its traffic to an appropriate service. In turn, that service sends the request to a pod that can actually handle it.

There are a few things to keep in mind when creating an ingress. First, they are designed to handle web traffic (HTTP or HTTPS). While it is possible to use an ingress with other types of protocols, it typically requires extra configuration.

Second, an ingress can do more than just routing. Some other use cases include load balancing and SSL termination.

Most importantly, the ingress object by itself does not actually do anything. For an ingress to function, we need an ingress controller available in the cluster.

3.1. Ingress Controllers

As with most Kubernetes objects, an ingress requires an associated controller to manage it. However, while Kubernetes provides controllers for most objects like deployments and services, it does not include an ingress controller by default. Therefore, it is up to the cluster administrator to ensure an appropriate controller is available.

Most cloud platforms provide their own ingress controllers, but there are also plenty of open-source options to choose from. Perhaps the most popular is the nginx ingress controller, which is built on top of the popular web server of the same name.

Let’s check out a sample configuration using the nginx ingress controller:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-example
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx-example
  rules:
  - http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80

In this example, we create an ingress that routes any request that starts with /api to a Kubernetes service named api-service.

Notice the annotations field contains values that are specific to nginx. Because all ingress controllers use the same API object, we typically use the annotations field to pass specific configurations into the ingress controller.

There are dozens of available ingress controllers in the Kubernetes ecosystem, and covering them all is well beyond the scope of this article. However, because they use the same API object, they share some common features:

  • Ingress Rules: the set of rules that define how to route traffic to a specific service (typically based on URL or hostname)
  • Default Backend: a default resource that handles traffic that does not match any rule
  • TLS: a secret that defines a private key and certificate to allow TLS termination

Different ingress controllers build on these concepts and add their own functionality and features.
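
To illustrate, here's a sketch of an ingress that combines a rule, a default backend, and TLS termination; the host name, TLS secret, and fallback-service are hypothetical and would need to exist in the cluster:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tls-ingress-example
spec:
  ingressClassName: nginx-example
  tls:
    # References a secret containing the private key and certificate
    - hosts:
        - api.example.com
      secretName: api-tls-secret
  # Handles any traffic that does not match a rule below
  defaultBackend:
    service:
      name: fallback-service
      port:
        number: 80
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 80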

4. Load Balancers

Load balancers in Kubernetes have quite a bit of overlap with ingresses. This is because they are primarily used to expose services to the internet, which, as we saw above, is also a feature of ingresses.

However, load balancers have different characteristics from ingresses. Rather than being a standalone object like an ingress, a load balancer is an extension of a service.

Let’s see a simple example of a service with a load balancer:

apiVersion: v1
kind: Service
metadata:
  name: api-service
spec:
  selector:
    app: api-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  type: LoadBalancer

Just like our earlier service example, we're creating a service that routes traffic to any pod carrying the label app: api-app. In this case, we've set the service type to LoadBalancer.

For this to work, the cluster must be running on a provider that supports external load balancers. All of the major cloud providers support external load balancers through their own resource types, such as AWS Elastic Load Balancing, Azure Load Balancer, and Google Cloud Load Balancing.

Just like we saw with different ingress controllers, different load balancer providers have their own settings. Typically, we manage these settings directly with either a CLI or tool specific to the infrastructure rather than with YAML. Different load balancer implementations will also provide additional features such as SSL termination.

Because load balancers are defined per service, they can only route to a single service. This is different from an ingress, which has the ability to route to multiple services inside the cluster.
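
As a sketch, a single ingress like the one below could route to two different services (web-service here is a hypothetical second service):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: multi-service-ingress
spec:
  ingressClassName: nginx-example
  rules:
    - http:
        paths:
          # API traffic goes to one service...
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 80
          # ...while web traffic goes to another
          - path: /web
            pathType: Prefix
            backend:
              service:
                name: web-service
                port:
                  number: 80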

Also, keep in mind that regardless of the provider, using an external load balancer will typically come with additional costs. This is because, unlike ingresses and their controllers, external load balancers exist outside of the Kubernetes cluster. Because of this, most cloud providers will charge for the additional resource usage beyond the cluster itself.

5. Conclusion

In this article, we've looked at some core Kubernetes concepts, including workloads, services, ingresses, and load balancers. Each of these objects plays a key role in getting network traffic to the right pods.

While ingresses and load balancers have a lot of overlap in functionality, they behave differently. The main difference is that ingresses are native objects inside the cluster that can route to multiple services, while load balancers are external to the cluster and route to only a single service.