1. Introduction

In this article, we’ll examine Kotlin’s new K2 compiler. First, we’ll discuss the problems the K2 compiler aims to solve, and then we’ll walk through the improvements it brings.

2. Motivation Behind the K2 Compiler

We’ll start from the beginning. To fully grasp why the K2 compiler exists, it helps to understand a bit of history first.

2.1. The Birth of Kotlin

Kotlin has been around for quite a while now: its development started in 2010. Kotlin exists because, back then, there was a strong feeling that Java had stagnated. The language evolved slowly, and new features appeared rarely, especially in comparison to C#, where language evolution progressed significantly faster.

Kotlin aimed to fill this gap. It was initially offered simply as a better Java. Since it runs on top of the JVM, it wasn’t intended to outperform or replace Java, but rather to offer additional features.

And there were plenty of them: null safety, the Elvis operator, refined generics, type variance, extension functions, and much, much more. All of this is incredibly useful for everyday development.

2.2. Kotlin Struggles

Yet, we have to understand that the more features a language has, the more complex it is, and therefore, the more work the compiler has to do to compile a project. At present, Kotlin has 30+ hard keywords and nearly 20 soft keywords, not to mention dozens of modifiers. The Go programming language, for instance, has just 25 keywords because it’s deliberately designed to be as simple as possible in terms of features. That’s also one of the reasons Go compiles incredibly fast compared to other languages.

But what about Kotlin? Because Kotlin is so feature-rich and its type system is so complex, compilation time has always been a weak spot: in general, Kotlin compiles significantly slower than Java. Exact numbers are debatable and highly depend on the configuration of our build pipeline, the complexity of the Kotlin code we’re writing, and so on. The difference in compilation speed was especially noticeable in earlier Kotlin versions.

Experienced developers probably remember how slow code completion for Kotlin used to be in IntelliJ IDEA. It improved over time, of course, but the problem remained.

3. K2 Compiler Improvements

Finally, starting with the Kotlin 2.0.0 language version, the K2 compiler is enabled by default. That means both the Maven and Gradle Kotlin plugins compile Kotlin code via the K2 compiler. Let’s discuss the improvements made to the compilation process, starting with the main one: performance.
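For example, in a Gradle build that uses the Kotlin DSL, applying a Kotlin plugin of version 2.0.0 or newer is all it takes to compile with K2. Here’s a minimal sketch (the project setup itself is hypothetical):

// build.gradle.kts: a minimal JVM project that compiles with the K2 compiler
plugins {
    // any Kotlin Gradle plugin from 2.0.0 onward uses K2 by default
    kotlin("jvm") version "2.0.0"
}

repositories {
    mavenCentral()
}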

3.1. Compilation Speed

Naturally, the developers of the language understood this problem as well, and in Kotlin 1.7.0, the Alpha version of the K2 compiler was released. Its main focus was to improve the compilation speed of Kotlin programs and to reduce the overall complexity of the Kotlin compiler. According to JetBrains, the compilation speed improvements can reach 200% or even more. It’s important to realize that these benchmarks come from particular JetBrains projects; results for our own projects may differ.

3.2. Smart Casts

Because the K2 compiler has a completely rewritten front end, smart casts now work better. We’ll discuss what exactly the K2 compiler changed under the hood later; for now, let’s focus on the user-facing side and look at an example:

import java.lang.reflect.InvocationTargetException

fun translateExecution(exception: Throwable?): Throwable? {
    val isInvocationTargetException = exception is InvocationTargetException

    if (isInvocationTargetException) {
        return (exception as InvocationTargetException).targetException
    }
    return exception
}

This code is simple enough, so we won’t dwell on it. It compiles with both the new and the old Kotlin compiler. The problem here is that we have to make an explicit cast inside the if block. The cast is redundant since, at this point, we already know that the exception variable is of type InvocationTargetException. However, before K2, the Kotlin compiler couldn’t propagate the type check stored in the isInvocationTargetException local variable, so there was no smart cast here.

With the K2 compiler, however, this explicit cast inside the if block is unnecessary:

fun translateExecutionV2(exception: Throwable?): Throwable? {
    val isInvocationTargetException = exception is InvocationTargetException

    if (isInvocationTargetException) {
        return exception.targetException
    }

    return exception
}

This code compiles successfully with the K2 compiler. However, it’s important to note that the following version:

fun translateExecutionV2(exception: Throwable?): Throwable? {
    if (isInvocationTargetException(exception)) {
        return exception.targetException
    }

    return exception
}

private fun isInvocationTargetException(exception: Throwable?) = exception is InvocationTargetException

It still wouldn’t compile. The K2 compiler doesn’t track checks that happen inside other functions. To address that, we can use the Kotlin contracts API.
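For illustration, here’s a minimal sketch (not part of the original example) of how the contracts API could bring the smart cast back: the contract tells the compiler that a true result implies the argument’s type.

import java.lang.reflect.InvocationTargetException
import kotlin.contracts.ExperimentalContracts
import kotlin.contracts.contract

@OptIn(ExperimentalContracts::class)
private fun isInvocationTargetException(exception: Throwable?): Boolean {
    // promise to the compiler: if this function returns true,
    // then exception is an InvocationTargetException
    contract {
        returns(true) implies (exception is InvocationTargetException)
    }
    return exception is InvocationTargetException
}

With this contract in place, a caller that delegates the check to isInvocationTargetException() can access targetException without an explicit cast.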

There are many other improvements in the smart-cast area, but they all boil down to one simple thing: the K2 compiler now understands our code better, so explicit casts are needed less often.

4. Main Differences

Let’s discuss which changes led to such a performance boost. To understand that, we first need to understand which part of the compiler changed: it’s mostly the front end of the Kotlin compiler. The front end is responsible for building the PSI (Program Structure Interface), which is a concrete syntax tree and the initial data structure the front end assembles.

Later, the front end derives the FIR (Front-end Intermediate Representation) from it, which is an abstract syntax tree (AST).

4.1. PSI vs. FIR

The first problem with the old compiler was that it relied too heavily on the PSI, which is far larger and more complex than the FIR by design. That’s because the PSI contains all the information present in the source file, while the FIR is much more compact. So working with the FIR is simply faster in general.

Another problem with the old compiler was the BindingContext. It’s a huge collection of hash maps that stores all the binding information. For instance, to look up the variable referenced in a string interpolation, the old compiler needed to perform two map lookups. The K2 compiler doesn’t do that; instead, it relies on the tree structure of the FIR. Accessing a member of a tree node is faster than performing two lookups in huge hash maps.
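As a trivial, made-up illustration of that string interpolation case:

fun greet() {
    val user = "K2"
    // resolving $user: the old compiler consulted the BindingContext maps,
    // while the K2 compiler reads the resolved symbol straight from the FIR node
    println("Welcome, $user!")
}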

4.2. Jump Reduction

Finally, the K2 compiler significantly reduced the number of jumps required during resolution, for instance, when resolving a function’s return type.

To understand what a jump is, let’s look at the following example:

// in ChildClass.kt
class ChildClass : ParentClass() {
    fun greeting(s: String) = hello(s)
}

// in ParentClass.kt
open class ParentClass {
    internal fun hello(s: String) = println("Hello, $s!")
}

Here, the greeting() function’s return type equals the return type of the hello() function. So, to resolve the return type of greeting(), we first need to resolve the return type of hello(). But we don’t know which hello() function is meant here, so we need to find it. It might be declared locally (in the same class, for instance), or it might live in a parent class, which is most likely in another .kt file, so we need to perform a jump into the parent .kt file. And it may not even be there.

Furthermore, it might be a top-level function brought in via a star import, so we’d need to search for hello() in those files, and so on. These are all jumps to other source files.

Jumps are relatively expensive to make during the resolution stage. The old Kotlin compiler performed a lot of them in almost every stage of the compilation process. In contrast, the new K2 compiler has only two stages that involve jumps: supertype resolution (as in the example above) and the resolution of implicit type declarations.

5. Multiplatform

There were also two notable changes introduced for Kotlin Multiplatform.

5.1. Separation Of Modules

In the past, the Kotlin compiler required both common and platform code to be available at compile time. In some scenarios, this could lead to cases where Kotlin common code would invoke platform code.

Now, it’s possible to separate those during compilation. This approach is less error-prone and more predictable in terms of the code’s behavior on different platforms.

5.2. Visibility Extension

Now, we can also give an actual declaration a different visibility modifier than its expect counterpart. However, it’s important to note that we can only make the actual platform declaration more visible, not less. Previously, the compiler demanded the same visibility modifier for both the expect member and the actual implementation:

// Common module
expect fun getPlatform(): Platform

// Android Platform
actual fun getPlatform(): Platform = AndroidPlatform()

In the example above, both functions are implicitly public. Now, it is possible to do this:

// Common module
internal expect fun getPlatform(): Platform

// Android Platform
actual fun getPlatform(): Platform = AndroidPlatform()

Here, the expect declaration has the internal visibility modifier, while the actual implementation for the Android platform has the implicit public modifier.
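For completeness, the snippets above assume some Platform and AndroidPlatform declarations that aren’t shown; a hypothetical minimal version could look like this:

// common module: a hypothetical Platform abstraction used by the examples above
interface Platform {
    val name: String
}

// Android source set: a hypothetical implementation returned by the actual function
class AndroidPlatform : Platform {
    override val name: String = "Android"
}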

6. Conclusion

Kotlin has existed for a decent amount of time and has accumulated many features. Since Kotlin eventually compiles to Java bytecode (leaving Kotlin Multiplatform and Kotlin/Native aside), the compiler has to implement all of these features on top of the JVM.

As a result, the compilation process became very complex, which caused Kotlin projects to compile slowly. The K2 compiler addresses exactly that: it aims to simplify the compilation process and make it considerably faster. Along the way, it also improved smart casts and allowed the separation of common and platform code in Kotlin Multiplatform.

As always, the source code for the article is available over on GitHub.