1. Overview

In this article, we’ll be looking at the KafkaStreams library.

KafkaStreams is engineered by the creators of Apache Kafka. The primary goal of this piece of software is to allow programmers to create efficient, real-time, streaming applications that can work as microservices.

KafkaStreams enables us to consume from Kafka topics, analyze or transform data, and potentially send it to another Kafka topic.

To demonstrate KafkaStreams, we’ll create a simple application that reads sentences from a topic, counts occurrences of words and prints the count per word.

It's important to note that the KafkaStreams library isn't reactive and has no support for async operations or backpressure handling.

2. Maven Dependencies

To start writing stream processing logic with KafkaStreams, we need to add dependencies on kafka-streams and kafka-clients:

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-streams</artifactId>
    <version>3.4.0</version>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>3.4.0</version>
</dependency>

We also need to have Apache Kafka installed and started because we’ll be using a Kafka topic. This topic will be the data source for our streaming job.

We can download Kafka and other required dependencies from the official website.

3. Configuring KafkaStreams Input

The first thing we'll do is define the input Kafka topic.

We can use the Confluent tool that we downloaded – it contains a Kafka server. It also contains the kafka-console-producer that we can use to publish messages to Kafka.

To get started, let's run our Kafka cluster:

./confluent start
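
Depending on the broker configuration, topics may be created automatically on first use. If they aren't, we can create inputTopic explicitly before producing to it (a sketch, assuming the same Confluent CLI tools are on the path):

./kafka-topics --create --topic inputTopic \
  --bootstrap-server localhost:9092 \
  --partitions 1 --replication-factor 1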

Once Kafka starts, we can define our data source and the name of our application using APPLICATION_ID_CONFIG:

String inputTopic = "inputTopic";
Properties streamsConfiguration = new Properties();
streamsConfiguration.put(
  StreamsConfig.APPLICATION_ID_CONFIG, 
  "wordcount-live-test");

A crucial configuration parameter is BOOTSTRAP_SERVERS_CONFIG. This is the URL of the local Kafka instance that we just started:

private String bootstrapServers = "localhost:9092";
streamsConfiguration.put(
  StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, 
  bootstrapServers);

Next, we need to specify the Serde types for the keys and values of the messages that will be consumed from inputTopic:

streamsConfiguration.put(
  StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, 
  Serdes.String().getClass().getName());
streamsConfiguration.put(
  StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, 
  Serdes.String().getClass().getName());
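
By default, Kafka Streams caches KTable updates and only forwards them downstream periodically. Since we want to observe every single count update in this live test, we can shorten the commit interval and disable the record cache. This is an optional tweak; note that newer versions deprecate cache.max.bytes.buffering in favor of statestore.cache.max.bytes:

// flush aggregation results frequently so every update gets printed
streamsConfiguration.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 1000);
// 0 disables the record cache entirely
streamsConfiguration.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 0);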

Stream processing is often stateful. When we want to save intermediate results, we need to specify the STATE_DIR_CONFIG parameter.

In our test, we’re using a local file system:

this.stateDirectory = Files.createTempDirectory("kafka-streams");
streamsConfiguration.put(
  StreamsConfig.STATE_DIR_CONFIG, this.stateDirectory.toAbsolutePath().toString());

4. Building a Streaming Topology

Once we've defined our input topic, we can create a streaming topology – that is, a definition of how events should be handled and transformed.

In our example, we’d like to implement a word counter. For every sentence sent to inputTopic, we want to split it into words and calculate the occurrence of every word.

We can use an instance of the StreamsBuilder class to start constructing our topology:

StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> textLines = builder.stream(inputTopic);
Pattern pattern = Pattern.compile("\\W+", Pattern.UNICODE_CHARACTER_CLASS);

KTable<String, Long> wordCounts = textLines
  .flatMapValues(value -> Arrays.asList(pattern.split(value.toLowerCase())))
  .groupBy((key, word) -> word)
  .count();

To implement the word count, we first need to split the values using the regular expression.

The split() method returns an array, and we use flatMapValues() to flatten it. Otherwise, we'd end up with a list of arrays, and writing code with such a structure would be inconvenient.

Finally, we group the records by word and call count(), which calculates the occurrences of each word.
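
As a side note, if we later want to query the counts via interactive queries, we can pass explicit serdes to groupBy() and give the backing state store a name. This is just a sketch; the store name counts-store is our own choice:

KTable<String, Long> wordCounts = textLines
  .flatMapValues(value -> Arrays.asList(pattern.split(value.toLowerCase())))
  // explicit serdes for the repartition topic created by groupBy()
  .groupBy((key, word) -> word, Grouped.with(Serdes.String(), Serdes.String()))
  // name the state store so it can be queried while the job is running
  .count(Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("counts-store"));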

5. Handling Results

We've already calculated the word count of our input messages. Now let's print the results to the standard output using the foreach() method:

wordCounts.toStream()
  .foreach((word, count) -> System.out.println("word: " + word + " -> " + count));

In production, such a streaming job would often publish its output to another Kafka topic.

We could do this using the to() method:

String outputTopic = "outputTopic";
wordCounts.toStream()
  .to(outputTopic, Produced.with(Serdes.String(), Serdes.Long()));

The Serdes class gives us preconfigured serializers and deserializers for common Java types, which are used to serialize objects to an array of bytes. The array of bytes is then sent to the Kafka topic.

We're using String as the key of our topic and Long as the value for the actual count. The to() method will save the resulting data to outputTopic.
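
To verify what actually lands on outputTopic, we can read it back with the kafka-console-consumer, configuring it to deserialize the Long values (a sketch, again assuming the Confluent CLI tools):

./kafka-console-consumer --topic outputTopic --from-beginning \
  --bootstrap-server localhost:9092 \
  --property print.key=true \
  --value-deserializer org.apache.kafka.common.serialization.LongDeserializer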

6. Starting the KafkaStreams Job

Up to this point, we've built a topology that can be executed. However, the job hasn't started yet.

We need to start our job explicitly by calling the start() method on the KafkaStreams instance:

Topology topology = builder.build();
KafkaStreams streams = new KafkaStreams(topology, streamsConfiguration);
streams.start();

Thread.sleep(30000);
streams.close();

Note that we are waiting 30 seconds for the job to finish. In a real-world scenario, that job would be running all the time, processing events from Kafka as they arrive.
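
In such a long-running scenario, a common pattern is to replace the sleep with a latch and close the streams instance from a JVM shutdown hook – a minimal sketch:

KafkaStreams streams = new KafkaStreams(topology, streamsConfiguration);
CountDownLatch latch = new CountDownLatch(1);

// close the streams instance cleanly on Ctrl+C / SIGTERM
Runtime.getRuntime().addShutdownHook(new Thread(() -> {
    streams.close();
    latch.countDown();
}));

streams.start();
latch.await(); // block the main thread until the shutdown hook fires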

We can test our job by publishing some events to our Kafka topic.

Let’s start a kafka-console-producer and manually send some events to our inputTopic:

./kafka-console-producer --topic inputTopic --bootstrap-server localhost:9092
>"this is a pony"
>"this is a horse and pony"

This way, we published two events to Kafka. Our application will consume those events and will print the following output:

word:  -> 1
word: this -> 1
word: is -> 1
word: a -> 1
word: pony -> 1
word:  -> 2
word: this -> 2
word: is -> 2
word: a -> 2
word: horse -> 1
word: and -> 1
word: pony -> 2

We can see that when the first message arrived, the word pony occurred only once. But when we sent the second message, the word pony occurred for the second time, printing "word: pony -> 2". (The empty word at the top of each batch comes from the leading quotation mark, which our \W+ pattern treats as a delimiter, leaving an empty first token.)

7. Conclusion

In this article, we discussed how to create a basic stream processing application using Apache Kafka as a data source and KafkaStreams as the stream processing library.

All these examples and code snippets can be found in the GitHub project – this is a Maven project, so it should be easy to import and run as it is.