1. Overview
In a previous article, we covered a theoretical introduction to Kubernetes.
In this tutorial, we’ll discuss how to deploy a Spring Boot application to a local Kubernetes environment, using Minikube.
As part of this article, we’ll:
- Install Minikube on our local machine
- Develop an example application consisting of two Spring Boot services
- Set up the application on a one-node cluster using Minikube
- Deploy the application using config files
2. Installing Minikube
The installation of Minikube consists of three steps: installing a hypervisor (like VirtualBox), the kubectl CLI, and Minikube itself.
The official documentation provides detailed instructions for each of the steps, and for all popular operating systems.
After completing the installation, we can start Minikube, set VirtualBox as the hypervisor, and configure kubectl to talk to the cluster called minikube:
$> minikube start
$> minikube config set vm-driver virtualbox
$> kubectl config use-context minikube
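If we want to double-check that the cluster itself came up correctly before going on, we can also ask Minikube directly (an optional sanity check):
$> minikube status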
After that, we can verify that kubectl communicates correctly with our cluster:
$> kubectl cluster-info
The output should look like this:
Kubernetes master is running at https://192.168.99.100:8443
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
At this stage, we’ll make a note of the IP in the response (192.168.99.100 in our case). We’ll later refer to it as NodeIP, which we need to call resources from outside of the cluster, e.g., from our browser.
Finally, we can inspect the state of our cluster:
$> minikube dashboard
This command opens a site in our default browser, which provides an extensive overview of the state of our cluster.
3. Demo Application
As our cluster is now running and ready for deployment, we need a demo application.
For this purpose, we’ll create a simple “Hello world” application, consisting of two Spring Boot services, which we’ll call frontend and backend.
The backend provides one REST endpoint on port 8080, returning a String containing its hostname. The frontend is available on port 8081; it simply calls the backend endpoint and returns its response.
After that, we have to build a Docker image for each app. All files necessary for that are also available on GitHub.
For detailed instructions on how to build Docker images, have a look at Dockerizing a Spring Boot Application.
We have to make sure that we trigger the build process on the Docker host of the Minikube cluster; otherwise, Minikube won’t find the images later during deployment. Furthermore, the workspace on our host must be mounted into the Minikube VM:
$> minikube ssh
$> cd /c/workspace/tutorials/spring-cloud/spring-cloud-kubernetes/demo-backend
$> docker build --file=Dockerfile \
--tag=demo-backend:latest --rm=true .
After that, we can log out from the Minikube VM; all further steps will be executed on our host using the kubectl and minikube command-line tools.
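As an alternative to working inside the VM, we can point our local Docker CLI at Minikube’s Docker daemon and run the builds from our host shell. This is just a sketch, assuming a Docker client is installed on the host and that we run it from the respective project directory (shown here for the frontend image):
$> eval $(minikube docker-env)
$> docker build --file=Dockerfile \
--tag=demo-frontend:latest --rm=true .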
4. Simple Deployment Using Imperative Commands
As a first step, we’ll create a Deployment for our demo-backend app, consisting of only one Pod. Based on that, we’ll discuss some commands to verify the Deployment, inspect logs, and clean it up at the end.
4.1. Creating the Deployment
We’ll use kubectl, passing all required commands as arguments:
$> kubectl run demo-backend --image=demo-backend:latest \
--port=8080 --image-pull-policy Never
As we can see, we create a Deployment called demo-backend, which is instantiated from an image also called demo-backend, with version latest.
With --port, we specify that the Deployment opens port 8080 for its Pods (as our demo-backend app listens on port 8080).
The flag --image-pull-policy Never ensures that Minikube doesn’t try to pull the image from a registry, but takes it from the local Docker host instead.
4.2. Verifying the Deployment
Now, we can check whether the deployment was successful:
$> kubectl get deployments
The output looks like this:
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
demo-backend 1 1 1 1 19s
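If we want to block until the rollout has actually completed, instead of polling the snapshot above, we can additionally run:
$> kubectl rollout status deployment/demo-backend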
If we want to have a look at the application logs, we need the Pod ID first:
$> kubectl get pods
$> kubectl logs <pod id>
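If we’d rather not look up the Pod ID manually, we can also address the Deployment directly, and kubectl will pick one of its Pods for us:
$> kubectl logs deployment/demo-backend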
4.3. Creating a Service for the Deployment
To make the REST endpoint of our backend app available, we need to create a Service:
$> kubectl expose deployment demo-backend --type=NodePort
--type=NodePort makes the Service available from outside of the cluster. It will be available at <NodeIP>:<NodePort>, i.e., the Service maps any request incoming at <NodePort> to port 8080 of its assigned Pods.
As we use the expose command, the NodePort will be set by the cluster automatically (this is a technical limitation); the default range is 30000-32767. To get a port of our choice, we can use a configuration file, as we’ll see in the next section.
We can verify that the service was created successfully:
$> kubectl get services
The output looks like this:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
demo-backend NodePort 10.106.11.133 <none> 8080:30117/TCP 11m
As we can see, we have one Service called demo-backend, of type NodePort, which is available at the cluster-internal IP 10.106.11.133.
We have to take a closer look at the PORT(S) column: as port 8080 was defined in the Deployment, the Service forwards traffic to this port. However, if we want to call the demo-backend from our browser, we have to use port 30117, which is reachable from outside of the cluster.
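To see this mapping in action, we can call the endpoint by hand, combining the NodeIP we noted earlier with the NodePort from the output above (both values are from our example and will differ on other machines):
$> curl http://192.168.99.100:30117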
4.4. Calling the Service
Now, we can call our backend service for the first time:
$> minikube service demo-backend
This command will start our default browser, opening <NodeIP>:<NodePort> for our Service.
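If we only need the URL, for example to use it in a script, minikube can print it instead of opening the browser:
$> minikube service demo-backend --url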
4.5. Cleaning up Service and Deployment
Afterward, we can remove Service and Deployment:
$> kubectl delete service demo-backend
$> kubectl delete deployment demo-backend
5. Complex Deployment Using Configuration Files
For more complex setups, configuration files are a better choice than passing all parameters via command-line arguments.
Configuration files are a great way of documenting our deployment, and they can be version controlled.
5.1. Service Definition for Our Backend App
Let’s redefine our service for the backend using a config file:
kind: Service
apiVersion: v1
metadata:
  name: demo-backend
spec:
  selector:
    app: demo-backend
  ports:
  - protocol: TCP
    port: 8080
  type: ClusterIP
We create a Service named demo-backend, indicated by the metadata: name field.
It targets TCP port 8080 on any Pod with the app=demo-backend label.
Finally, type: ClusterIP indicates that it is only available from inside of the cluster (as we want to call the endpoint from our demo-frontend app this time, but not directly from a browser anymore, as in the previous example).
5.2. Deployment Definition for Backend App
Next, we can define the actual Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-backend
spec:
  selector:
    matchLabels:
      app: demo-backend
  replicas: 3
  template:
    metadata:
      labels:
        app: demo-backend
    spec:
      containers:
      - name: demo-backend
        image: demo-backend:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
We create a Deployment named demo-backend, indicated by the metadata: name field.
The spec: selector field defines how the Deployment finds which Pods to manage. In this case, we merely select on one label defined in the Pod template (app: demo-backend).
We want to have three replicated Pods, which we indicate by the replicas field.
The template field defines the actual Pod:
- The Pods are labeled as app: demo-backend
- The template: spec field indicates that each Pod replication runs one container, demo-backend, with version latest
- The Pods open port 8080
5.3. Deployment of the Backend App
We can now trigger the deployment:
$> kubectl create -f backend-deployment.yaml
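Alternatively, we could use kubectl apply, which creates the resources on the first run and updates them in place whenever we change the config file later:
$> kubectl apply -f backend-deployment.yaml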
Let’s verify that the deployment was successful:
$> kubectl get deployments
The output looks like this:
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
demo-backend 3 3 3 3 25s
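We can also list the individual Pods behind the Deployment, using the label we defined in the Pod template; we should see three of them:
$> kubectl get pods -l app=demo-backend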
We can also check whether the Service is available:
$> kubectl get services
The output looks like this:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
demo-backend ClusterIP 10.102.17.114 <none> 8080/TCP 30s
As we can see, the Service is of type ClusterIP, and it doesn’t provide an external port in the range 30000-32767, unlike our previous example in Section 4.
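As a quick sanity check that the Service is indeed reachable from inside the cluster, we can start a throwaway Pod and call the backend by its Service name. The curlimages/curl image and the root path used here are merely illustrative assumptions, and pulling the image requires internet access from the cluster:
$> kubectl run curl-test --image=curlimages/curl -it --rm --restart=Never \
-- curl http://demo-backend:8080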
5.4. Deployment and Service Definition for Our Frontend App
After that, we can define Service and Deployment for the frontend:
kind: Service
apiVersion: v1
metadata:
  name: demo-frontend
spec:
  selector:
    app: demo-frontend
  ports:
  - protocol: TCP
    port: 8081
    nodePort: 30001
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-frontend
spec:
  selector:
    matchLabels:
      app: demo-frontend
  replicas: 3
  template:
    metadata:
      labels:
        app: demo-frontend
    spec:
      containers:
      - name: demo-frontend
        image: demo-frontend:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 8081
The frontend definitions are almost identical to the backend’s; the only difference is the spec of the Service:
For the frontend, we define the type as NodePort, as we want to make the frontend available outside of the cluster. The backend only has to be reachable from within the cluster; therefore, its type was ClusterIP.
As mentioned before, we also specify the NodePort manually, using the nodePort field.
5.5. Deployment of the Frontend App
We can now trigger this deployment the same way:
$> kubectl create -f frontend-deployment.yaml
Let’s quickly verify that the deployment was successful and the Service is available:
$> kubectl get deployments
$> kubectl get services
After that, we can finally call the REST endpoint of the frontend application:
$> minikube service demo-frontend
This command will again start our default browser, this time opening <NodeIP>:30001, the NodePort we specified for the frontend.
5.6. Cleaning up Services and Deployments
In the end, we can clean up by removing Services and Deployments:
$> kubectl delete service demo-frontend
$> kubectl delete deployment demo-frontend
$> kubectl delete service demo-backend
$> kubectl delete deployment demo-backend
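Since we created everything from config files in this section, we can alternatively delete the resources the same way, removing the Service and Deployment defined in each file in one go:
$> kubectl delete -f frontend-deployment.yaml -f backend-deployment.yaml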
6. Conclusion
In this article, we had a quick look at how to deploy a Spring Boot “Hello world” app on a local Kubernetes cluster using Minikube.
We discussed in detail how to:
- Install Minikube on our local machine
- Develop and build an example consisting of two Spring Boot apps
- Deploy the services on a one-node cluster, using imperative commands with kubectl as well as configuration files
As always, the full source code of the examples is available over on GitHub.