1. Overview
Data serialization is the process of converting data into a binary or text format so it can be stored or transmitted. There are multiple systems available for this purpose, and Apache Avro is one of them.
Avro is a language-independent, schema-based data serialization library. It uses a schema to perform serialization and deserialization. Moreover, Avro uses JSON to define its data structures, which makes schemas easy to read and interpret.
In this tutorial, we’ll explore more about Avro setup, the Java API to perform serialization and a comparison of Avro with other data serialization systems.
We’ll focus primarily on schema creation, which is the base of the whole system.
2. Apache Avro
Avro is a language-independent serialization library. To achieve this, Avro uses a schema, which is one of its core components. It stores the schema in a file for further data processing.
Avro is a good fit for Big Data processing. It’s quite popular in the Hadoop and Kafka world for its faster processing.
Avro creates a data file where it keeps the data along with the schema in its metadata section. Above all, it provides a rich data structure, which makes it more popular than other similar solutions.
To use Avro for serialization, we need to follow the steps mentioned below.
3. Problem Statement
Let’s start by defining a class called AvroHttpRequest that we’ll use for our examples. The class contains primitive as well as complex type attributes:
class AvroHttpRequest {
    private long requestTime;
    private ClientIdentifier clientIdentifier;
    private List<String> employeeNames;
    private Active active;
}
Here, requestTime is a primitive value. ClientIdentifier is another class that represents a complex type. We also have employeeNames, which is again a complex type. Active is an enum that describes whether the given list of employees is active or not.
Our objective is to serialize and de-serialize the AvroHttpRequest class using Apache Avro.
4. Avro Data Types
Before proceeding further, let’s discuss the data types supported by Avro.
Avro supports two types of data:
- Primitive types: Avro supports all primitive types. We use the primitive type name to define the type of a given field. For example, a value which holds a String should be declared as {"type": "string"} in the schema
- Complex types: Avro supports six kinds of complex types: records, enums, arrays, maps, unions and fixed
For example, in our problem statement, ClientIdentifier is a record.
In that case, the schema for ClientIdentifier should look like:
{
    "type":"record",
    "name":"ClientIdentifier",
    "namespace":"com.baeldung.avro",
    "fields":[
        {
            "name":"hostName",
            "type":"string"
        },
        {
            "name":"ipAddress",
            "type":"string"
        }
    ]
}
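For the remaining complex types in our problem statement, a hand-written sketch of the full AvroHttpRequest schema might look as follows. We’ll generate the actual schema with SchemaBuilder later, so treat the exact layout as illustrative; it combines a nested record, an array and an enum:

```json
{
    "type":"record",
    "name":"AvroHttpRequest",
    "namespace":"com.baeldung.avro",
    "fields":[
        {"name":"requestTime","type":"long"},
        {"name":"clientIdentifier","type":{
            "type":"record",
            "name":"ClientIdentifier",
            "fields":[
                {"name":"hostName","type":"string"},
                {"name":"ipAddress","type":"string"}
            ]
        }},
        {"name":"employeeNames","type":{"type":"array","items":"string"}},
        {"name":"active","type":{"type":"enum","name":"Active","symbols":["YES","NO"]}}
    ]
}
```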
5. Using Avro
To start with, let’s add the Maven dependencies we’ll need to our pom.xml file.
We should include the following dependencies:
- Apache Avro – core components
- Compiler – Apache Avro compilers for Avro IDL and Avro-specific Java APIs
- Tools – which includes Apache Avro command-line tools and utilities
- Apache Avro Maven Plugin for Maven projects
We’re using version 1.8.2 for this tutorial.
However, it’s always advised to find the latest version on Maven Central:
<dependency>
    <groupId>org.apache.avro</groupId>
    <artifactId>avro-compiler</artifactId>
    <version>1.8.2</version>
</dependency>
<dependency>
    <groupId>org.apache.avro</groupId>
    <artifactId>avro-maven-plugin</artifactId>
    <version>1.8.2</version>
</dependency>
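The snippet above covers the compiler and plugin. The core components from the first bullet arrive transitively via avro-compiler, but if we want them declared explicitly, a sketch of that dependency (assuming the same version) would be:

```xml
<dependency>
    <groupId>org.apache.avro</groupId>
    <artifactId>avro</artifactId>
    <version>1.8.2</version>
</dependency>
```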
After adding the Maven dependencies, the next steps will be:
- Schema creation
- Reading the schema in our program
- Serializing our data using Avro
- Finally, de-serialize the data
6. Schema Creation
Avro describes its schema using a JSON format. There are mainly four attributes for a given Avro schema:
- Type – which describes the type of the schema, whether it’s a complex type or a primitive value
- Namespace – which describes the namespace to which the given schema belongs
- Name – the name of the schema
- Fields – which tells about the fields associated with a given schema. Fields can be of primitive as well as complex type.
One way of creating the schema is to write the JSON representation, as we saw in the previous sections.
We can also create a schema using SchemaBuilder, which is arguably a better and less error-prone way to create it.
6.1. SchemaBuilder Utility
The class org.apache.avro.SchemaBuilder is useful for creating the Schema.
First of all, let’s create the schema for ClientIdentifier:
Schema clientIdentifier = SchemaBuilder.record("ClientIdentifier")
  .namespace("com.baeldung.avro")
  .fields().requiredString("hostName").requiredString("ipAddress")
  .endRecord();
Now, let’s use this for creating an avroHttpRequest schema:
Schema avroHttpRequest = SchemaBuilder.record("AvroHttpRequest")
  .namespace("com.baeldung.avro")
  .fields().requiredLong("requestTime")
  .name("clientIdentifier")
    .type(clientIdentifier)
    .noDefault()
  .name("employeeNames")
    .type()
    .array()
    .items()
    .stringType()
    .arrayDefault(null)
  .name("active")
    .type()
    .enumeration("Active")
    .symbols("YES","NO")
    .noDefault()
  .endRecord();
It’s important to note here that we’ve assigned clientIdentifier as the type for the clientIdentifier field. In this case, the clientIdentifier used to define the type is the same schema we created before.
Later, we can apply the toString method to get the JSON representation of the schema.
Schema files are saved using the .avsc extension. Let’s save our generated schema to the “src/main/resources/avroHttpRequest-schema.avsc” file.
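As a small sketch of that last step (the helper method name here is our own, and we assume the avroHttpRequest Schema object built above is passed in), toString(true) pretty-prints the schema as JSON before we write it out:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

import org.apache.avro.Schema;

public class SchemaWriter {

    // Pretty-prints the schema as JSON and saves it under the
    // .avsc path used in this tutorial.
    public static void writeSchema(Schema schema) throws IOException {
        Files.write(Paths.get("src/main/resources/avroHttpRequest-schema.avsc"),
          schema.toString(true).getBytes(StandardCharsets.UTF_8));
    }
}
```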
7. Reading the Schema
Reading a schema is more or less about creating Avro classes for the given schema. Once Avro classes are created we can use them to serialize and deserialize objects.
There are two ways to create Avro classes:
- Programmatically generating Avro classes: classes can be generated using SchemaCompiler. There are a couple of APIs we can use to generate Java classes. We can find the code for generating classes on GitHub.
- Using Maven to generate classes
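For the first (programmatic) option, a minimal sketch using the SpecificCompiler class from the avro-compiler artifact could look like this, reading the schema file we saved earlier and emitting Java sources:

```java
import java.io.File;
import java.io.IOException;

import org.apache.avro.compiler.specific.SpecificCompiler;

public class GenerateClasses {

    public static void main(String[] args) throws IOException {
        // Reads the .avsc schema file and writes the generated
        // .java sources under the destination directory.
        SpecificCompiler.compileSchema(
          new File("src/main/resources/avroHttpRequest-schema.avsc"),
          new File("src/main/java"));
    }
}
```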
We have a Maven plugin that does the job well. We need to include the plugin and run mvn clean install.
Let’s add the plugin to our pom.xml file:
<plugin>
    <groupId>org.apache.avro</groupId>
    <artifactId>avro-maven-plugin</artifactId>
    <version>${avro.version}</version>
    <executions>
        <execution>
            <id>schemas</id>
            <phase>generate-sources</phase>
            <goals>
                <goal>schema</goal>
                <goal>protocol</goal>
                <goal>idl-protocol</goal>
            </goals>
            <configuration>
                <sourceDirectory>${project.basedir}/src/main/resources/</sourceDirectory>
                <outputDirectory>${project.basedir}/src/main/java/</outputDirectory>
            </configuration>
        </execution>
    </executions>
</plugin>
8. Serialization and Deserialization with Avro
As we’re done with generating the schema, let’s continue exploring the serialization part.
There are two data serialization formats which Avro supports: JSON format and Binary format.
First, we’ll focus on the JSON format and then we’ll discuss the Binary format.
Before proceeding further, we should go through a few key interfaces. We can use the interfaces and classes below for serialization:
- DatumWriter: writes Java objects that conform to a given schema
- Encoder: used for defining the format, as previously mentioned. EncoderFactory provides two types of encoders: a binary encoder and a JSON encoder
- DatumReader: reads serialized data back into Java objects for a given schema
- Decoder: used while de-serializing the data. DecoderFactory provides two types of decoders: a binary decoder and a JSON decoder
Next, let’s see how serialization and de-serialization happen in Avro.
8.1. Serialization
We’ll take the example of AvroHttpRequest class and try to serialize it using Avro.
First of all, let’s serialize it in JSON format:
public byte[] serializeAvroHttpRequestJSON(AvroHttpRequest request) {
    DatumWriter<AvroHttpRequest> writer = new SpecificDatumWriter<>(
      AvroHttpRequest.class);
    byte[] data = new byte[0];
    ByteArrayOutputStream stream = new ByteArrayOutputStream();
    Encoder jsonEncoder = null;
    try {
        jsonEncoder = EncoderFactory.get().jsonEncoder(
          AvroHttpRequest.getClassSchema(), stream);
        writer.write(request, jsonEncoder);
        jsonEncoder.flush();
        data = stream.toByteArray();
    } catch (IOException e) {
        logger.error("Serialization error: " + e.getMessage());
    }
    return data;
}
Let’s have a look at a test case for this method:
@Test
public void whenSerialized_UsingJSONEncoder_ObjectGetsSerialized(){
    byte[] data = serializer.serializeAvroHttpRequestJSON(request);
    assertTrue(Objects.nonNull(data));
    assertTrue(data.length > 0);
}
Here we’ve used the jsonEncoder method, passing the schema to it.
If we wanted to use a binary encoder, we’d replace the jsonEncoder() method with binaryEncoder():
Encoder jsonEncoder = EncoderFactory.get().binaryEncoder(stream, null);
8.2. Deserialization
To do this, we’ll be using the above-mentioned DatumReader and Decoder interfaces.
As we used EncoderFactory to get an Encoder, similarly we’ll use DecoderFactory to get a Decoder object.
Let’s de-serialize the data using JSON format:
public AvroHttpRequest deSerializeAvroHttpRequestJSON(byte[] data) {
    DatumReader<AvroHttpRequest> reader
      = new SpecificDatumReader<>(AvroHttpRequest.class);
    Decoder decoder = null;
    try {
        decoder = DecoderFactory.get().jsonDecoder(
          AvroHttpRequest.getClassSchema(), new String(data));
        return reader.read(null, decoder);
    } catch (IOException e) {
        logger.error("Deserialization error: " + e.getMessage());
        return null;
    }
}
And let’s see the test case:
@Test
public void whenDeserializeUsingJSONDecoder_thenActualAndExpectedObjectsAreEqual(){
    byte[] data = serializer.serializeAvroHttpRequestJSON(request);
    AvroHttpRequest actualRequest = deSerializer
      .deSerializeAvroHttpRequestJSON(data);
    assertEquals(actualRequest, request);
    assertTrue(actualRequest.getRequestTime()
      .equals(request.getRequestTime()));
}
Similarly, we can use a binary decoder:
Decoder decoder = DecoderFactory.get().binaryDecoder(data, null);
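Putting the binary encoder and decoder together, a full binary round trip might look like the sketch below. It assumes the generated AvroHttpRequest class from the earlier sections, and the method names are our own:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;

import org.apache.avro.io.BinaryDecoder;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.DatumReader;
import org.apache.avro.io.DatumWriter;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.io.EncoderFactory;
import org.apache.avro.specific.SpecificDatumReader;
import org.apache.avro.specific.SpecificDatumWriter;

public class BinaryRoundTrip {

    // Serializes the request into Avro's compact binary format.
    public byte[] serializeBinary(AvroHttpRequest request) throws IOException {
        DatumWriter<AvroHttpRequest> writer
          = new SpecificDatumWriter<>(AvroHttpRequest.class);
        ByteArrayOutputStream stream = new ByteArrayOutputStream();
        BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(stream, null);
        writer.write(request, encoder);
        encoder.flush();
        return stream.toByteArray();
    }

    // Reads the binary data back into an AvroHttpRequest instance.
    public AvroHttpRequest deserializeBinary(byte[] data) throws IOException {
        DatumReader<AvroHttpRequest> reader
          = new SpecificDatumReader<>(AvroHttpRequest.class);
        BinaryDecoder decoder = DecoderFactory.get().binaryDecoder(data, null);
        return reader.read(null, decoder);
    }
}
```

Note that, unlike the JSON decoder, the binary decoder needs no schema argument here because the SpecificDatumReader derives it from the generated class.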
9. Conclusion
Apache Avro is especially useful when dealing with big data. It offers data serialization in binary as well as JSON format, which can be chosen per use case.
The Avro serialization process is faster, and it’s space efficient as well. Avro doesn’t keep field type information with each field; instead, it stores that metadata once in the schema.
Last but not least, Avro has great bindings for a wide range of programming languages, which gives it an edge.
As always, the code can be found over on GitHub.