1. Overview
In this tutorial, we’ll see how to integrate Apache Camel and LangChain4j into a Spring Boot application to handle AI-driven conversations over WhatsApp, using a local installation of Ollama for AI processing. Apache Camel handles the routing and transformation of data between different systems, while LangChain4j provides the tools to interact with large language models and extract meaningful information.
We discussed Ollama’s key benefits, installation, and hardware requirements in our tutorial How to Install Ollama Generative AI on Linux. That said, Ollama is cross-platform and also available for Windows and macOS.
We’ll use Postman to test the Ollama API, the WhatsApp API, and our Spring Boot controllers.
2. Initial Setup of Spring Boot
First, let’s make sure that local port 8080 is unused, as we’ll need it for Spring Boot.
Since we’ll be using the @RequestParam annotation to bind request parameters to Spring Boot controllers, we need to add the -parameters compiler argument:
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <configuration>
        <source>17</source>
        <target>17</target>
        <compilerArgs>
            <arg>-parameters</arg>
        </compilerArgs>
    </configuration>
</plugin>
If we omit it, parameter names won’t be available via reflection, and our REST calls will fail with a java.lang.IllegalArgumentException.
In addition, DEBUG-level logging of incoming and outgoing messages can help us, so let’s enable it in application.properties:
# Logging configuration
logging.level.root=INFO
logging.level.com.baeldung.chatbot=DEBUG
In case of trouble, we can also analyze the local network traffic between Ollama and Spring Boot with tcpdump on Linux and macOS, or WinDump on Windows. On the other hand, sniffing the traffic between Spring Boot and WhatsApp Cloud is much more difficult because it travels over HTTPS.
3. LangChain4j for Ollama
A typical Ollama installation listens on port 11434. In this case, we’ll run it with the qwen2:1.5b model because it’s fast enough for chatting, but we’re free to choose any other model.
LangChain4j gives us several ChatLanguageModel.generate(..) methods that differ in their parameters. All these methods call Ollama’s REST API /api/chat, as we can verify by inspecting the network traffic. So let’s make sure it works properly, using one of the JSON examples in the Ollama documentation:
Our query got a valid JSON response, so we’re ready to move on to LangChain4j.
In case of trouble, let’s make sure to respect the case of the parameters. For example, "role": "user" produces a correct response, while "role": "USER" doesn’t.
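For reference, a minimal non-streaming request body for POST /api/chat looks like the following (the exact response fields may vary between Ollama versions):

```json
{
  "model": "qwen2:1.5b",
  "messages": [
    { "role": "user", "content": "Why is the sky blue?" }
  ],
  "stream": false
}
```

With "stream": false, Ollama returns a single JSON object whose message.content field holds the complete answer, which is easier to inspect in Postman than the default streamed chunks.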
3.1. Configuring LangChain4j
In the pom.xml, we need two dependencies for LangChain4j. We can check the latest version from the Maven repository:
<dependency>
    <groupId>dev.langchain4j</groupId>
    <artifactId>langchain4j-core</artifactId>
    <version>0.33.0</version>
</dependency>
<dependency>
    <groupId>dev.langchain4j</groupId>
    <artifactId>langchain4j-ollama</artifactId>
    <version>0.33.0</version>
</dependency>
Then let’s add these parameters to application.properties:
# Ollama API configuration
ollama.api_url=http://localhost:11434/
ollama.model=qwen2:1.5b
ollama.timeout=30
ollama.max_response_length=1000
The parameters ollama.timeout and ollama.max_response_length are optional. We included them as a safety measure, since some models are known to loop indefinitely while generating a response.
3.2. Implementing ChatbotService
Using the @Value annotation, let’s inject these values from application.properties at runtime, ensuring that the configuration is decoupled from the application logic:
@Value("${ollama.api_url}")
private String apiUrl;
@Value("${ollama.model}")
private String modelName;
@Value("${ollama.timeout}")
private int timeout;
@Value("${ollama.max_response_length}")
private int maxResponseLength;
Here is the initialization logic that needs to be run once the service bean is fully constructed. The OllamaChatModel object holds the configuration necessary to interact with the conversational AI model:
private OllamaChatModel ollamaChatModel;

@PostConstruct
public void init() {
    this.ollamaChatModel = OllamaChatModel.builder()
      .baseUrl(apiUrl)
      .modelName(modelName)
      .timeout(Duration.ofSeconds(timeout))
      .numPredict(maxResponseLength)
      .build();
}
This method gets a question, sends it to the chat model, receives the response, and handles any errors that may occur during the process:
public String getResponse(String question) {
    logger.debug("Sending to Ollama: {}", question);

    String answer = ollamaChatModel.generate(question);
    logger.debug("Receiving from Ollama: {}", answer);

    if (answer != null && !answer.isEmpty()) {
        return answer;
    } else {
        logger.error("Invalid Ollama response for: {}", question);
        throw new ResponseStatusException(
          HttpStatus.INTERNAL_SERVER_ERROR,
          "Ollama didn't generate a valid response");
    }
}
We’re ready for the controller.
3.3. Creating ChatbotController
This controller is helpful during development to test if ChatbotService works properly:
@Autowired
private ChatbotService chatbotService;

@GetMapping("/api/chatbot/send")
public String getChatbotResponse(@RequestParam String question) {
    return chatbotService.getResponse(question);
}
Let’s give it a try:
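For instance, we can call the endpoint like this (the question value is just an example; Postman URL-encodes the spaces in the query parameter for us):

```
GET http://localhost:8080/api/chatbot/send?question=Why is the sky blue?
```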
It works as expected.
4. Apache Camel for WhatsApp
Before we continue, let’s create an account on Meta for Developers. For our testing purposes, using the WhatsApp API is free.
4.1. ngrok Reverse Proxy
To integrate a local Spring Boot application with WhatsApp Business services, we need a cross-platform reverse proxy like ngrok connected to a free static domain. It creates a secure tunnel from a public URL with HTTPS protocol to our local server with HTTP protocol, allowing WhatsApp to communicate with our application. In this command, let’s replace xxx.ngrok-free.app with the static domain assigned to us by ngrok:
ngrok http --domain=xxx.ngrok-free.app 8080
This forwards https://xxx.ngrok-free.app to http://localhost:8080.
4.2. Setting up Apache Camel
The first dependency, camel-spring-boot-starter, integrates Apache Camel into a Spring Boot application and provides the necessary configurations for Camel routes. The second dependency, camel-http-starter, supports the creation of HTTP(S)-based routes, enabling the application to handle HTTP and HTTPS requests. The third dependency, camel-jackson, facilitates JSON processing with the Jackson library, allowing Camel routes to transform and marshal JSON data:
<dependency>
    <groupId>org.apache.camel.springboot</groupId>
    <artifactId>camel-spring-boot-starter</artifactId>
    <version>4.7.0</version>
</dependency>
<dependency>
    <groupId>org.apache.camel.springboot</groupId>
    <artifactId>camel-http-starter</artifactId>
    <version>4.7.0</version>
</dependency>
<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-jackson</artifactId>
    <version>4.7.0</version>
</dependency>
We can check the latest version of Apache Camel from the Maven repository.
Finally, let’s add this configuration to application.properties:
# WhatsApp API configuration
whatsapp.verify_token=BaeldungDemo-Verify-Token
whatsapp.api_url=https://graph.facebook.com/v20.0/PHONE_NUMBER_ID/messages
whatsapp.access_token=ACCESS_TOKEN
Getting the actual values of PHONE_NUMBER_ID and ACCESS_TOKEN to substitute into these properties isn’t trivial, so we’ll see how to do it in detail.
4.3. Controller to Verify Webhook Token
As a preliminary step, we also need a Spring Boot controller to validate the WhatsApp webhook token. The purpose is to verify our webhook endpoint before it starts receiving actual data from the WhatsApp service:
@Value("${whatsapp.verify_token}")
private String verifyToken;

@GetMapping("/webhook")
public String verifyWebhook(@RequestParam("hub.mode") String mode,
  @RequestParam("hub.verify_token") String token,
  @RequestParam("hub.challenge") String challenge) {
    if ("subscribe".equals(mode) && verifyToken.equals(token)) {
        return challenge;
    } else {
        return "Verification failed";
    }
}
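During verification, Meta sends a GET request like the following to our callback URL (the hub.challenge value here is illustrative), and the controller must echo the challenge back in the response body:

```
GET https://xxx.ngrok-free.app/webhook?hub.mode=subscribe&hub.verify_token=BaeldungDemo-Verify-Token&hub.challenge=1158201444
```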
So, let’s recap what we’ve done so far:
- ngrok exposes our local Spring Boot server on a public IP with HTTPS
- Apache Camel dependencies are added
- We have a controller to validate the WhatsApp webhook token
- However, we don’t have the actual values of PHONE_NUMBER_ID and ACCESS_TOKEN yet
It’s time to set up our WhatsApp Business account to get such values and subscribe to the webhook service.
4.4. WhatsApp Business Account
The official Get Started guide is quite difficult to follow and doesn’t fit our needs, so the following videos will be helpful to walk through the steps relevant to our Spring Boot application.
After creating a business portfolio named “Baeldung Chatbot”, let’s create our business app:
Then let’s get the ID of our WhatsApp business phone number, copy it into whatsapp.api_url in application.properties, and send a test message to our personal cell phone. Let’s bookmark this Quickstart API Setup page, as we may need it during development:
At this point, we should have received this message on our cell phone:
Now we need the whatsapp.access_token value for application.properties. Let’s go to System Users to generate a token with no expiration, using an account with administrator-level full access to our app:
We’re ready to configure our webhook endpoint, which we previously created with the @GetMapping("/webhook") controller. Let’s start our Spring Boot application before continuing.
As the webhook’s callback URL, we need to insert our ngrok static domain suffixed with /webhook, while our verification token is BaeldungDemo-Verify-Token:
It’s important to follow these steps in the order we’ve shown them to avoid errors.
4.5. Configuring WhatsAppService to Send Messages
As a reference, before we get into the init() and sendWhatsAppMessage(…) methods, let’s send a text message to our phone using Postman. This way, we can see the required JSON and headers and compare them to the code.
The Authorization header value is composed of Bearer followed by a space and our whatsapp.access_token, while the Content-Type header is handled automatically by Postman:
The JSON structure is quite simple. However, we have to be aware that an HTTP 200 response code doesn’t mean the message was actually delivered: the message reaches our phone only if we’ve first started a conversation by messaging our WhatsApp business number from it. In other words, the chatbot we create can never initiate a conversation; it can only answer users’ questions:
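Concretely, the request body matches the map we build in sendWhatsAppMessage(…) (the to number is a placeholder for our personal phone number in international format):

```json
{
  "messaging_product": "whatsapp",
  "to": "15551234567",
  "type": "text",
  "text": { "body": "Hello from Baeldung Chatbot!" }
}
```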
That said, let’s inject whatsapp.api_url and whatsapp.access_token:
@Value("${whatsapp.api_url}")
private String apiUrl;
@Value("${whatsapp.access_token}")
private String apiToken;
The init() method is responsible for setting up the necessary configurations for sending messages via the WhatsApp API. It defines and adds a new route to the CamelContext, which is responsible for handling the communication between our Spring Boot application and the WhatsApp service.
Within this route configuration, we specify the headers required for authentication and content type, replicating the headers used when we tested the API with Postman:
@Autowired
private CamelContext camelContext;

@PostConstruct
public void init() throws Exception {
    camelContext.addRoutes(new RouteBuilder() {
        @Override
        public void configure() {
            JacksonDataFormat jacksonDataFormat = new JacksonDataFormat();
            jacksonDataFormat.setPrettyPrint(true);

            from("direct:sendWhatsAppMessage")
              .setHeader("Authorization", constant("Bearer " + apiToken))
              .setHeader("Content-Type", constant("application/json"))
              .marshal(jacksonDataFormat)
              .process(exchange -> logger.debug("Sending JSON: {}",
                exchange.getIn().getBody(String.class)))
              .to(apiUrl)
              .process(exchange -> logger.debug("Response: {}",
                exchange.getIn().getBody(String.class)));
        }
    });
}
This way, the direct:sendWhatsAppMessage endpoint allows the route to be triggered programmatically within the application, ensuring that the message is properly marshaled by Jackson and sent with the necessary headers.
The sendWhatsAppMessage(…) method uses the Camel ProducerTemplate to send the JSON payload to the direct:sendWhatsAppMessage route. The structure of the HashMap mirrors the JSON we previously sent with Postman, giving us a structured way to send messages from the Spring Boot application:
@Autowired
private ProducerTemplate producerTemplate;

public void sendWhatsAppMessage(String toNumber, String message) {
    Map<String, Object> body = new HashMap<>();
    body.put("messaging_product", "whatsapp");
    body.put("to", toNumber);
    body.put("type", "text");

    Map<String, String> text = new HashMap<>();
    text.put("body", message);
    body.put("text", text);

    producerTemplate.sendBody("direct:sendWhatsAppMessage", body);
}
The code for sending messages is ready.
4.6. Configuring WhatsAppService to Receive Messages
To handle incoming messages from our WhatsApp users, the processIncomingMessage(…) method processes the payload received from our webhook endpoint, extracts relevant information such as the sender’s phone number and the message content, and then generates an appropriate response using our chatbot service. Finally, it uses the sendWhatsAppMessage(…) method to send Ollama’s response back to the user:
@Autowired
private ObjectMapper objectMapper;

@Autowired
private ChatbotService chatbotService;

public void processIncomingMessage(String payload) {
    try {
        JsonNode jsonNode = objectMapper.readTree(payload);
        JsonNode messages = jsonNode.at("/entry/0/changes/0/value/messages");
        if (messages.isArray() && messages.size() > 0) {
            String receivedText = messages.get(0).at("/text/body").asText();
            String fromNumber = messages.get(0).at("/from").asText();
            logger.debug("{} sent the message: {}", fromNumber, receivedText);
            this.sendWhatsAppMessage(fromNumber, chatbotService.getResponse(receivedText));
        }
    } catch (Exception e) {
        logger.error("Error processing incoming payload: {}", payload, e);
    }
}
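The JSON Pointer expressions /entry/0/changes/0/value/messages, /text/body, and /from assume an incoming payload shaped roughly like this abridged example (field values are placeholders; the real webhook payload contains additional metadata such as contacts and timestamps):

```json
{
  "object": "whatsapp_business_account",
  "entry": [{
    "changes": [{
      "value": {
        "messages": [{
          "from": "15551234567",
          "type": "text",
          "text": { "body": "What is the speed of light?" }
        }]
      }
    }]
  }]
}
```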
The next step is to write the controllers to test our WhatsAppService methods.
4.7. Creating the WhatsAppController
The sendWhatsAppMessage(…) controller will be useful during development to test the process of sending messages:
@Autowired
private WhatsAppService whatsAppService;

@PostMapping("/api/whatsapp/send")
public String sendWhatsAppMessage(@RequestParam String to, @RequestParam String message) {
    whatsAppService.sendWhatsAppMessage(to, message);
    return "Message sent";
}
Let’s give it a try:
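For example, we can call it like this (the to number is a placeholder for the personal phone number we messaged our business number from):

```
POST http://localhost:8080/api/whatsapp/send?to=15551234567&message=Hello
```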
It works as expected. Everything is ready for writing the receiveMessage(…) controller which will receive messages sent by users:
@PostMapping("/webhook")
public void receiveMessage(@RequestBody String payload) {
    whatsAppService.processIncomingMessage(payload);
}
This is the final test:
Ollama answered our math question using LaTeX syntax. The qwen2:1.5b LLM we’re using supports 29 languages, and here’s the full list.
5. Conclusion
In this article, we demonstrated how to integrate Apache Camel and LangChain4j into a Spring Boot application to manage AI-driven conversations over WhatsApp, using a local installation of Ollama for AI processing. We started by setting up Ollama and configuring our Spring Boot application to handle request parameters.
We then integrated LangChain4j to interact with an Ollama model, using ChatbotService to handle AI responses and ensure seamless communication.
For WhatsApp integration, we set up a WhatsApp Business account and used ngrok as a reverse proxy to facilitate communication between our local server and WhatsApp. We configured Apache Camel and created WhatsAppService to process incoming messages, generate responses using our ChatbotService, and respond appropriately.
We tested ChatbotService and WhatsAppService using dedicated controllers to ensure full functionality.
As always, the full source code is available over on GitHub.