1. Overview
In Java, exceptions are generally considered expensive and shouldn’t be used for flow control. This tutorial will prove that this perception is correct and pinpoint what causes the performance issue.
2. Setting Up Environment
Before writing code to evaluate the performance cost, we need to set up a benchmarking environment.
2.1. Java Microbenchmark Harness
Measuring exception overhead isn’t as easy as executing a method in a simple loop and taking note of the total time.
The reason is that the just-in-time compiler can get in the way and optimize the code. Such optimization may make the code perform better than it would in a production environment; in other words, it might yield misleadingly optimistic results.
To create a controlled environment that can mitigate JVM optimization, we’ll use Java Microbenchmark Harness, or JMH for short.
The following subsections will walk through setting up a benchmarking environment without going into the details of JMH. For more information about this tool, please check out our Microbenchmarking with Java tutorial.
2.2. Obtaining JMH Artifacts
To get JMH artifacts, add these two dependencies to the POM:
<dependency>
    <groupId>org.openjdk.jmh</groupId>
    <artifactId>jmh-core</artifactId>
    <version>1.37</version>
</dependency>
<dependency>
    <groupId>org.openjdk.jmh</groupId>
    <artifactId>jmh-generator-annprocess</artifactId>
    <version>1.37</version>
</dependency>
Please refer to Maven Central for the latest versions of JMH Core and JMH Annotation Processor.
2.3. Benchmark Class
We’ll need a class to hold benchmarks:
@Fork(1)
@Warmup(iterations = 2)
@Measurement(iterations = 10)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
public class ExceptionBenchmark {
    private static final int LIMIT = 10_000;
    // benchmarks go here
}
Let’s go through the JMH annotations shown above:
- @Fork: specifies the number of times JMH must spawn a new process to run the benchmarks. We set its value to 1 to spawn only a single process, so we don’t have to wait too long for the results
- @Warmup: carries the warm-up parameters. An iterations value of 2 means the first two iterations only warm up the JVM and aren’t included in the result
- @Measurement: carries the measurement parameters. An iterations value of 10 indicates JMH will measure each method over 10 iterations
- @BenchmarkMode: tells JMH how to collect the execution results. The value AverageTime makes JMH report the average time a method needs to complete its operations
- @OutputTimeUnit: indicates the output time unit, which is the millisecond in this case
Additionally, there’s a static field inside the class body, namely LIMIT. This is the number of loop iterations in each benchmark method.
2.4. Executing Benchmarks
To execute benchmarks, we need a main method:
public class MappingFrameworksPerformance {
    public static void main(String[] args) throws Exception {
        org.openjdk.jmh.Main.main(args);
    }
}
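Alternatively, if we’d rather not pass arguments to the JMH launcher, we can configure the run programmatically through JMH’s Runner and OptionsBuilder API. This is just an optional sketch (the runner class name is our own), limited to the benchmarks in ExceptionBenchmark:
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public class ExceptionBenchmarkRunner {
    public static void main(String[] args) throws Exception {
        // run only the benchmarks declared in ExceptionBenchmark
        Options options = new OptionsBuilder()
          .include(ExceptionBenchmark.class.getSimpleName())
          .build();
        new Runner(options).run();
    }
}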
We can package the project into a JAR file and run it from the command line. Doing so now will, of course, produce empty output, as we haven’t added any benchmarking methods yet.
For convenience, we can add the maven-jar-plugin to the POM. By declaring the main class in the JAR manifest, this plugin lets us execute the benchmarks straight from the packaged JAR:
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-jar-plugin</artifactId>
    <version>3.2.0</version>
    <configuration>
        <archive>
            <manifest>
                <mainClass>com.baeldung.performancetests.MappingFrameworksPerformance</mainClass>
            </manifest>
        </archive>
    </configuration>
</plugin>
The latest version of maven-jar-plugin can be found on Maven Central.
3. Performance Measurement
It’s time to have some benchmarking methods to measure performance. Each of these methods must carry the @Benchmark annotation.
3.1. Method Returning Normally
Let’s start with a method returning normally; that is, a method that doesn’t throw an exception:
@Benchmark
public void doNotThrowException(Blackhole blackhole) {
    for (int i = 0; i < LIMIT; i++) {
        blackhole.consume(new Object());
    }
}
The blackhole parameter references an instance of Blackhole. This is a JMH class that helps prevent dead code elimination, an optimization a just-in-time compiler may perform.
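To see why this matters, consider a hypothetical variant (not part of our benchmark suite) whose loop never publishes its result. Without the Blackhole, the JIT compiler is free to treat the body as dead code and skip the work, producing misleadingly fast numbers:
@Benchmark
public void mightBeOptimizedAway() {
    for (int i = 0; i < LIMIT; i++) {
        // the result is never used, so the JIT may eliminate the allocation
        // and the loop ends up measuring almost nothing
        new Object();
    }
}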
The benchmark, in this case, doesn’t throw any exception. In fact, we’ll use it as a reference to evaluate the performance of those that do throw exceptions.
Executing the main method will give us a report:
Benchmark                               Mode  Cnt  Score   Error  Units
ExceptionBenchmark.doNotThrowException  avgt   10  0.049 ± 0.006  ms/op
There’s nothing special in this result. The average execution time of the benchmark is 0.049 milliseconds, which tells us little on its own; it simply serves as a baseline for the benchmarks that follow.
3.2. Creating and Throwing an Exception
Here’s another benchmark that throws and catches exceptions:
@Benchmark
public void throwAndCatchException(Blackhole blackhole) {
    for (int i = 0; i < LIMIT; i++) {
        try {
            throw new Exception();
        } catch (Exception e) {
            blackhole.consume(e);
        }
    }
}
Let’s have a look at the output:
Benchmark                                  Mode  Cnt   Score   Error  Units
ExceptionBenchmark.doNotThrowException     avgt   10   0.048 ± 0.003  ms/op
ExceptionBenchmark.throwAndCatchException  avgt   10  17.942 ± 0.846  ms/op
The small change in the execution time of doNotThrowException isn’t important; it’s just normal fluctuation in the state of the underlying OS and the JVM. The key takeaway is that throwing an exception makes the method run hundreds of times slower.
The next few subsections will investigate what exactly leads to such a dramatic difference.
3.3. Creating an Exception Without Throwing It
Instead of creating, throwing, and catching an exception, we’ll just create it:
@Benchmark
public void createExceptionWithoutThrowingIt(Blackhole blackhole) {
    for (int i = 0; i < LIMIT; i++) {
        blackhole.consume(new Exception());
    }
}
Now, let’s execute the three benchmarks we’ve declared:
Benchmark                                             Mode  Cnt   Score   Error  Units
ExceptionBenchmark.createExceptionWithoutThrowingIt   avgt   10  17.601 ± 3.152  ms/op
ExceptionBenchmark.doNotThrowException                avgt   10   0.054 ± 0.014  ms/op
ExceptionBenchmark.throwAndCatchException             avgt   10  17.174 ± 0.474  ms/op
The result may come as a surprise: the execution times of the first and third methods are nearly the same, while that of the second is substantially smaller.
At this point, it’s clear that the throw and catch statements themselves are fairly cheap. The creation of exceptions, on the other hand, produces high overhead.
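Since it’s the creation that dominates, a known technique, which the JDK itself uses internally in a few hot paths, is to pre-build a single exception and rethrow it. The benchmark below is only a hypothetical sketch we haven’t measured in this article, but it illustrates that once the construction cost is paid up front, only the cheap throw/catch mechanics remain:
private static final Exception CACHED_EXCEPTION = new Exception("pre-built, reused instance");

@Benchmark
public void throwCachedException(Blackhole blackhole) {
    for (int i = 0; i < LIMIT; i++) {
        try {
            // no exception is constructed here; only the throw/catch mechanics run
            throw CACHED_EXCEPTION;
        } catch (Exception e) {
            blackhole.consume(e);
        }
    }
}
The obvious trade-off is that the reused instance carries a stale stack trace, so this only makes sense where the trace is irrelevant.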
3.4. Throwing an Exception Without Adding the Stack Trace
Let’s figure out why constructing an exception is so much more expensive than creating an ordinary object:
@Benchmark
@Fork(value = 1, jvmArgs = "-XX:-StackTraceInThrowable")
public void throwExceptionWithoutAddingStackTrace(Blackhole blackhole) {
    for (int i = 0; i < LIMIT; i++) {
        try {
            throw new Exception();
        } catch (Exception e) {
            blackhole.consume(e);
        }
    }
}
The only difference between this method and the one in subsection 3.2 is the jvmArgs element. Its value, -XX:-StackTraceInThrowable, is a JVM option that keeps the stack trace from being added to the exception.
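As an aside, we don’t have to rely on a global JVM flag. Since Java 7, Throwable has offered a protected constructor with a writableStackTrace parameter, so an application can opt out of stack trace capture for a specific exception type. A minimal sketch (the class name is our own):
public class NoStackTraceException extends Exception {

    public NoStackTraceException(String message) {
        // enableSuppression = false, writableStackTrace = false:
        // the JVM never captures a stack trace for this exception type
        super(message, null, false, false);
    }
}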
Let’s run the benchmarks again:
Benchmark                                                  Mode  Cnt   Score   Error  Units
ExceptionBenchmark.createExceptionWithoutThrowingIt        avgt   10  17.874 ± 3.199  ms/op
ExceptionBenchmark.doNotThrowException                     avgt   10   0.046 ± 0.003  ms/op
ExceptionBenchmark.throwAndCatchException                  avgt   10  16.268 ± 0.239  ms/op
ExceptionBenchmark.throwExceptionWithoutAddingStackTrace   avgt   10   1.174 ± 0.014  ms/op
By not populating the exception with the stack trace, we reduced the execution time by more than a factor of ten. Apparently, walking the stack and adding its frames to the exception is what brings about the sluggishness we’ve seen.
3.5. Throwing an Exception and Unwinding Its Stack Trace
Finally, let’s see what happens if we throw an exception and unwind the stack trace when catching it:
@Benchmark
public void throwExceptionAndUnwindStackTrace(Blackhole blackhole) {
    for (int i = 0; i < LIMIT; i++) {
        try {
            throw new Exception();
        } catch (Exception e) {
            blackhole.consume(e.getStackTrace());
        }
    }
}
Here’s the outcome:
Benchmark                                                  Mode  Cnt    Score   Error  Units
ExceptionBenchmark.createExceptionWithoutThrowingIt        avgt   10   16.605 ± 0.988  ms/op
ExceptionBenchmark.doNotThrowException                     avgt   10    0.047 ± 0.006  ms/op
ExceptionBenchmark.throwAndCatchException                  avgt   10   16.449 ± 0.304  ms/op
ExceptionBenchmark.throwExceptionAndUnwindStackTrace       avgt   10  326.560 ± 4.991  ms/op
ExceptionBenchmark.throwExceptionWithoutAddingStackTrace   avgt   10    1.185 ± 0.015  ms/op
Just by unwinding the stack trace, we see the execution time increase roughly 20-fold. Put another way, the performance is far worse if we extract the stack trace from an exception in addition to throwing it.
4. Conclusion
In this tutorial, we analyzed the performance effects of exceptions. Specifically, we found that most of the performance cost lies in adding the stack trace to the exception. If that stack trace is unwound afterward, the overhead becomes much larger still.
Since throwing and handling exceptions is expensive, we shouldn’t use them for normal program flow. Instead, as their name implies, exceptions should only be used for exceptional cases.
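For instance, when a lookup that fails is an expected, everyday outcome rather than a genuine error, returning an Optional keeps exceptions out of the hot path. A small illustrative sketch, not taken from the benchmark project:
import java.util.Map;
import java.util.Optional;

public class LookupExample {

    private final Map<String, String> settings = Map.of("mode", "fast");

    // a missing key is an expected, everyday outcome, so we model it as a value
    // instead of throwing an exception the caller must catch
    public Optional<String> setting(String key) {
        return Optional.ofNullable(settings.get(key));
    }
}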
The complete source code can be found over on GitHub.