1. Introduction

Simply put, shared mutable state very easily leads to problems when concurrency is involved. If access to shared mutable objects is not managed properly, applications can quickly become prone to hard-to-detect concurrency errors.

In this article, we’ll revisit the use of locks to handle concurrent access, explore some of the disadvantages associated with locks, and finally, introduce atomic variables as an alternative.

2. Locks

Let’s have a look at the following class:

public class Counter {
    int counter; 
 
    public void increment() {
        counter++;
    }
}

In a single-threaded environment, this works perfectly; however, as soon as we allow more than one thread to write, we start getting inconsistent results.

This is because the simple increment operation (counter++) may look like an atomic operation, but is in fact a combination of three operations: obtaining the value, incrementing it, and writing the updated value back.

If two threads try to get and update the value at the same time, it may result in lost updates.
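
To make this concrete, the lost updates can be reproduced with a small test harness. The LostUpdateDemo class below is a hypothetical example (not part of the original code) that runs four threads against the Counter above, assuming both classes live in the same package; the printed total is usually less than the expected 40,000:

import java.util.ArrayList;
import java.util.List;

public class LostUpdateDemo {
    public static void main(String[] args) throws InterruptedException {
        Counter counter = new Counter();
        List<Thread> threads = new ArrayList<>();

        // four threads, each incrementing the shared counter 10,000 times
        for (int i = 0; i < 4; i++) {
            Thread t = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) {
                    counter.increment();
                }
            });
            threads.add(t);
            t.start();
        }
        for (Thread t : threads) {
            t.join();
        }

        // expected 40000, but lost updates usually leave a smaller value
        System.out.println(counter.counter);
    }
}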

One of the ways to manage access to an object is to use locks. This can be achieved by using the synchronized keyword in the increment method signature. The synchronized keyword ensures that only one thread can enter the method at a time (to learn more about locking and synchronization, refer to the Guide to Synchronized Keyword in Java):

public class SafeCounterWithLock {
    private int counter;
 
    public synchronized void increment() {
        counter++;
    }
}

Using locks solves the problem. However, performance takes a hit.

When multiple threads attempt to acquire a lock, one of them wins, while the rest of the threads are either blocked or suspended.

The process of suspending and then resuming a thread is very expensive and affects the overall efficiency of the system.

In a small program such as this counter, the time spent on context switching may far exceed the time spent executing the actual code, greatly reducing overall efficiency.

3. Atomic Operations

There is a branch of research focused on creating non-blocking algorithms for concurrent environments. These algorithms exploit low-level atomic machine instructions such as compare-and-swap (CAS) to ensure data integrity.

A typical CAS operation works on three operands:

  1. The memory location on which to operate (M)
  2. The existing expected value (A) of the variable
  3. The new value (B) which needs to be set

The CAS operation atomically updates the value in M to B, but only if the existing value in M matches A; otherwise, no action is taken.

In both cases, the existing value in M is returned. This combines three steps (getting the value, comparing it, and updating it) into a single machine-level operation.
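
In Java, CAS is exposed through methods such as AtomicInteger's compareAndSet(expectedValue, newValue). Note that, unlike the general description above, compareAndSet() returns a boolean rather than the existing value (since Java 9, compareAndExchange() returns the witnessed value instead). Here's a minimal sketch:

import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger value = new AtomicInteger(5);

        // succeeds: the current value (5) matches the expected value
        boolean first = value.compareAndSet(5, 6);  // true, value is now 6

        // fails: the current value (6) no longer matches the expected value (5)
        boolean second = value.compareAndSet(5, 7); // false, value stays 6

        System.out.println(first + " " + second + " " + value.get()); // true false 6
    }
}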

When multiple threads attempt to update the same value through CAS, one of them wins and updates the value. However, unlike in the case of locks, no other thread gets suspended; instead, they’re simply informed that they did not manage to update the value. The threads can then proceed to do further work and context switches are completely avoided.

One other consequence is that the core program logic becomes more complex. This is because we have to handle the scenario when the CAS operation didn’t succeed. We can retry it again and again until it succeeds, or we can do nothing and move on, depending on the use case, as the sketch below illustrates.
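
The usual retry approach looks like the following sketch: read the current value, compute the new one, and attempt the CAS, looping until it succeeds. This is only an illustration of the pattern; conceptually, it's what methods like incrementAndGet() already do for us:

import java.util.concurrent.atomic.AtomicInteger;

public class CasRetryCounter {
    private final AtomicInteger counter = new AtomicInteger(0);

    public int increment() {
        int current;
        int next;
        do {
            current = counter.get(); // read the current value
            next = current + 1;      // compute the new value
        } while (!counter.compareAndSet(current, next)); // retry if another thread got there first
        return next;
    }
}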

4. Atomic Variables in Java

The most commonly used atomic variable classes in Java are AtomicInteger, AtomicLong, AtomicBoolean, and AtomicReference. These classes represent an int, long, boolean, and object reference, respectively, which can be updated atomically. The main methods exposed by these classes are listed below (a short usage sketch follows the list):

  • get() – gets the value from the memory, so that changes made by other threads are visible; equivalent to reading a volatile variable
  • incrementAndGet() – atomically increments the current value by one
  • set() – writes the value to memory, so that the change is visible to other threads; equivalent to writing a volatile variable
  • lazySet() – eventually writes the value to memory, possibly reordered with subsequent relevant memory operations. One use case is nullifying a reference that will never be accessed again, for the sake of garbage collection. In this case, better performance is achieved by delaying the null volatile write
  • compareAndSet() – same as described in section 3, returns true when it succeeds, else false
  • weakCompareAndSet() – same as described in section 3, but weaker in the sense that it does not create happens-before orderings. This means it may not necessarily see updates made to other variables. As of Java 9, this method has been deprecated in all atomic implementations in favor of weakCompareAndSetPlain(). The memory effects of weakCompareAndSet() were plain, but its name implied volatile memory effects. To avoid this confusion, the JDK deprecated this method and added four methods with different memory effects, such as weakCompareAndSetPlain() and weakCompareAndSetVolatile()
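
The following sketch exercises a few of these methods on an AtomicInteger (the resulting values are shown in comments):

import java.util.concurrent.atomic.AtomicInteger;

public class AtomicMethodsDemo {
    public static void main(String[] args) {
        AtomicInteger atomic = new AtomicInteger(10);

        int initial = atomic.get();                     // 10, read with volatile semantics
        atomic.set(20);                                 // volatile write, visible to other threads
        int incremented = atomic.incrementAndGet();     // 21, performed atomically

        boolean swapped = atomic.compareAndSet(21, 42); // true, value is now 42
        boolean failed = atomic.compareAndSet(21, 99);  // false, value stays 42

        System.out.println(initial + " " + incremented + " " + swapped + " " + failed + " " + atomic.get());
        // 10 21 true false 42
    }
}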

A thread-safe counter implemented with AtomicInteger is shown in the example below:

import java.util.concurrent.atomic.AtomicInteger;

public class SafeCounterWithoutLock {
    private final AtomicInteger counter = new AtomicInteger(0);
    
    int getValue() {
        return counter.get();
    }
    
    void increment() {
        counter.incrementAndGet();
    }
}

As you can see, we use incrementAndGet(), which atomically gets the current value, increments it by one, and stores the new value back in memory, all as a single operation and without taking any lock.
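
For instance, running a multi-threaded harness similar to the one from the lost-updates example against SafeCounterWithoutLock consistently prints the expected total. The SafeCounterDemo class below is a hypothetical example, assuming it lives in the same package as SafeCounterWithoutLock so that the package-private methods are accessible:

import java.util.ArrayList;
import java.util.List;

public class SafeCounterDemo {
    public static void main(String[] args) throws InterruptedException {
        SafeCounterWithoutLock counter = new SafeCounterWithoutLock();
        List<Thread> threads = new ArrayList<>();

        // the same workload that produced lost updates with the unsafe counter
        for (int i = 0; i < 4; i++) {
            Thread t = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) {
                    counter.increment();
                }
            });
            threads.add(t);
            t.start();
        }
        for (Thread t : threads) {
            t.join();
        }

        System.out.println(counter.getValue()); // always 40000
    }
}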

5. Conclusion

In this quick tutorial, we described an alternative way of handling concurrency that avoids the disadvantages associated with locking. We also looked at the main methods exposed by the atomic variable classes in Java.

As always, the examples are all available over on GitHub.

To explore more classes that internally use non-blocking algorithms, refer to the guide to ConcurrentMap.