Tuesday 14 January 2014

Race Condition, Synchronization, Atomic Operations and the Volatile Keyword.

In these times, when multi-threading is an essential feature of almost all systems, it is very important for programmers to handle race conditions. All the concepts mentioned in the title of this post - race conditions, synchronization, atomic operations and the volatile keyword - are very much related to each other, and understanding them together will give you a better and bigger picture of concurrency.

Race Condition

Consider a very simple increment (++) operation. There is a common misconception that incrementing is an atomic operation. If you are wondering what atomic operations are, be patient - we will discuss them in more detail later. For now you can think of an atomic operation as an operation that takes only one CPU cycle.

But it is not. Incrementing is not an atomic operation. If you think a bit deeper, you can imagine the increment operation as the following sequence of operations.
  1. Read the value from memory.
  2. Add one to it.
  3. Write the changed value back to memory.
Individual operations may or may not be atomic in nature (we will cover this in the volatile section). But for our current discussion let's say the counter is of type int and that reads/writes for it are atomic. That means, as far as our discussion of race conditions using increment is concerned, each of the sub-operations mentioned above happens in a single CPU cycle.
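
In code, the decomposition of counter++ looks roughly like this (a sketch, assuming counter is a shared int field):

    int temp = counter;   // 1. read the value from memory
    temp = temp + 1;      // 2. add one to it
    counter = temp;       // 3. write the changed value back to memory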

Now let's see where the problem arises. Say we have a single counter instance and its value is 17, and we have two threads, T1 and T2. Both perform the increment operation on the same counter instance, so the result we expect at the end is 19. Say T1 reads the value of the counter, which is 17, and then a context switch happens. Then T2 reads the value, which is again 17. Now T2 increments the value to 18 and stores it back in memory. Then T1 increments the value it has (17), so that its result is also 18, and stores it back in memory, which overwrites T2's value. So at the end your value is 18 when you expected it to be 19. This problem is called a race condition. You can visualize the problem with the following diagram -


So yes, this is a problem that every programmer faces while programming in a multi-threaded environment. So what is the way out? One could say make a machine-level increment operation that happens in a single CPU cycle (or in other words, make the increment operation atomic). But we cannot make those machine-level changes, can we? There are other alternatives, or workarounds, that will help us solve the problem. And the first one of them is called - Synchronization.
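
Here is a minimal sketch that reproduces the lost update (class and field names are made up for illustration). Run it a few times and the final count will often be less than the expected 20000:

    public class RaceDemo {
        private static int counter = 0;   // shared, unsynchronized counter

        public static void main(String[] args) throws InterruptedException {
            Runnable task = new Runnable() {
                public void run() {
                    for (int i = 0; i < 10000; i++) {
                        counter++;        // read-modify-write, not atomic
                    }
                }
            };
            Thread t1 = new Thread(task);
            Thread t2 = new Thread(task);
            t1.start();
            t2.start();
            t1.join();
            t2.join();
            // Expected 20000, but some increments are usually lost to the race
            System.out.println("Counter = " + counter);
        }
    }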

Synchronization

To avoid race conditions we can use synchronization. By synchronizing a part of the code we ensure that only one thread at a time can execute that part of the code. In Java you can have either synchronized methods or synchronized blocks.

Now even these methods or blocks can be further categorized into - 
  1. instance methods/blocks
  2. static/class methods/blocks
To understand the difference between the above two it is essential to understand how the synchronization mechanism works internally. Java is an object-oriented programming language and everything in it is essentially an Object. Each Object is associated with a monitor. Synchronization involves a thread acquiring a lock on this monitor. If one thread holds the lock on a monitor, no other thread can acquire that lock, and consequently no other thread can access the area governed by that monitor.

Now, for instance methods the lock acquired is on the monitor of the instance itself. So, for example, if you have a getData() method on the Employee class then the lock obtained is on the individual Employee instance.

On the other hand, for static methods the lock obtained is on the Class object of the Employee class. You can get this Class object by using ClassName.class or classInstance.getClass(). And this Class instance is the same for all the individual Employee instances.

Refer : Interview Question #13 synchronization on static function/method.
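
A small sketch to make the distinction concrete (the Employee class and its methods here are hypothetical, for illustration only) -

    public class Employee {
        private String data;

        // Instance method: locks the monitor of the individual Employee instance (this)
        public synchronized String getData() {
            return data;
        }

        // Static method: locks the monitor of Employee.class, shared by all instances
        public static synchronized void resetCount() {
            // ... class-wide state changes go here
        }

        // Equivalent block forms of the two locks above
        public void update(String newData) {
            synchronized (this) {            // same lock as the synchronized instance method
                data = newData;
            }
            synchronized (Employee.class) {  // same lock as the synchronized static method
                // ... class-wide state changes go here
            }
        }
    }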

So for the above counter problem we can synchronize the increment operation with a synchronized function or a block. E.g.

    public synchronized void incrementCounter(){
        counter++;
    }


This involves obtaining the lock on the instance (this) and then incrementing. So as long as one thread is carrying out the increment operation, no other thread can enter this function. The analogous synchronized block would be

    public void incrementCounter(){
        synchronized (this) {
            counter++;
        }
    }


That is all about the basics of synchronization. We may go into a more detailed discussion in another post, but the basic concept of what synchronization is and why it is used should be clear by now.

Note :

The Java synchronized keyword is re-entrant in nature: if a synchronized method calls another synchronized method which requires the same lock, then the current thread, which is already holding the lock, can enter that method without acquiring the lock again. (More on Synchronization)
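
A quick sketch of re-entrancy (a hypothetical class, for illustration only) - increment() can call log() even though both are synchronized on the same instance:

    public class ReentrantCounter {
        private int counter = 0;

        public synchronized void increment() {
            counter++;
            log();   // already holding the lock on 'this', so this call does not block
        }

        public synchronized void log() {
            System.out.println("Counter is now " + counter);
        }
    }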


Synchronization is an alternative for operations that are not atomic. So now let's see what atomic operations are.

Atomic operations

As mentioned previously, each higher-level language operation may require one or more CPU cycles to complete. It is very much possible that before all the cycles needed for a particular operation are completed there is a context switch, and some other process or thread takes control of the CPU, resulting in the race condition we discussed above. Though we cannot actually manipulate processor instructions, there are workarounds to make an operation atomic.

If you have encountered this topic before, you must have read -

"The Java language specification guarantees that reading or writing a variable is an atomic operation(unless the variable is of type long or double). Operations variables of type long or double are only atomic if they declared with the volatile keyword."

Even if not, let's see what it means. It means reads and writes of primitive variables are atomic in nature. The exception to this is long and double.

So why the exception?
Answer : It is not atomic because it is a multiple-step operation at the machine code level. That is, longs and doubles are longer than a 32-bit processor's word length. So on a machine where the processor's word length is greater than or equal to the long/double size (64 bits), it is possible for the corresponding read/write operations to be atomic.

Note :   Also note that it says only read and write. So do not confuse this with any other operation like incrementing; as mentioned above, the increment operation itself is composed of the 3 sub-operations listed earlier.

As per JLS

For the purposes of the Java programming language memory model, a single write to a non-volatile long or double value is treated as two separate writes: one to each 32-bit half. This can result in a situation where a thread sees the first 32 bits of a 64-bit value from one write, and the second 32 bits from another write.

Writes and reads of volatile long and double values are always atomic.

Writes to and reads of references are always atomic, regardless of whether they are implemented as 32-bit or 64-bit values. 


So another way to make reads/writes of long/double atomic is to declare them volatile. I have just mentioned it here since it comes under the atomic title, but will discuss it more in the next topic - Volatile variables.
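
For example (a small sketch, field names made up), declaring the 64-bit field volatile guarantees that its reads and writes are atomic, i.e. no thread can ever observe a "torn" value made of two different 32-bit halves:

    public class Stats {
        private volatile long totalBytes;   // reads/writes of this 64-bit value are atomic

        public void setTotal(long value) { totalBytes = value; }
        public long getTotal()           { return totalBytes; }
    }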

Java provides atomic classes specifically for purposes like this. For example, we have AtomicInteger and AtomicLong, which provide methods like getAndDecrement(), getAndIncrement() and getAndSet() that are atomic.

The AtomicInteger class uses CAS (compare-and-swap) low-level CPU operations (no synchronization needed!). They allow you to modify a particular variable only if its present value is equal to an expected value (and report whether the swap succeeded). So when you execute getAndIncrement() it actually runs in a loop (a simplified version of the real implementation):

    int current;
    do {
        current = get();
    } while (!compareAndSet(current, current + 1));
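
Using this for our counter problem might look like the following sketch (the class name is made up) - no synchronized keyword is needed:

    import java.util.concurrent.atomic.AtomicInteger;

    public class AtomicCounter {
        private final AtomicInteger counter = new AtomicInteger(0);

        public void incrementCounter() {
            counter.getAndIncrement();   // atomic read-modify-write via CAS
        }

        public int getCounter() {
            return counter.get();
        }
    }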


Volatile keyword

Synchronization solves the problem caused by race conditions, but there is another problem that we have missed, and that is - visibility.

Consider the following code -


    public synchronized Singleton getInstance(){
        if(_instance == null){   // race condition if two threads see _instance == null
            _instance = new Singleton();
        }
        return _instance;
    }


Here synchronization makes sure that only one thread enters the function, and no other thread is allowed to access it until the thread which holds the lock on its monitor completes its execution. But what about variable visibility?

Let's say thread T1 identifies that _instance is null. It acquires the lock and creates a new object, but before it comes out of the function, let's say there is a context switch. Though the new object has been created, some other thread in some other function may still see _instance as null. This is because each thread may cache the value of a variable, and the caches are only guaranteed to be synchronized with main memory when the synchronized method or block is completely executed. To overcome this problem we use the volatile keyword.

The volatile keyword in Java guarantees that the value of a volatile variable will always be read from main memory and not from the thread's local cache, thus solving the visibility issue.
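
In the singleton example above, following this reasoning, that would mean declaring the field volatile so that other threads reading _instance outside the synchronized method see the newly created object (a sketch of the idea, not a full double-checked-locking discussion):

    public class Singleton {
        // volatile: every thread reads the current value from main memory
        private static volatile Singleton _instance;

        public static synchronized Singleton getInstance() {
            if (_instance == null) {
                _instance = new Singleton();
            }
            return _instance;
        }
    }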

Now let's come to the long and double issue and how making them volatile makes their reads and writes atomic. There is another concept - the happens-before relationship - that the volatile keyword follows, which means that if a write to a volatile variable is followed by subsequent reads of that variable, those reads are guaranteed to see the result of that write (and of everything written before it).
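
A common illustration of this happens-before guarantee (a sketch, names made up): the writer publishes its data before setting the volatile flag, so a reader that sees the flag as true is also guaranteed to see the data.

    public class FlagExample {
        private int data = 0;                 // plain field
        private volatile boolean ready = false;

        public void writer() {
            data = 42;      // happens-before the volatile write below
            ready = true;   // volatile write
        }

        public void reader() {
            if (ready) {                      // volatile read
                System.out.println(data);     // guaranteed to print 42, not 0
            }
        }
    }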

Useful Links

This post is meant to clear up the basic concepts of concurrency in Java and give you a bigger picture of how the above topics are correlated. We may have individual posts covering each of them in more detail. In the meantime you can refer to the following links; they provide quite useful information.
I would recommend reading the following book for a good exposure to Java concurrency -
  • Java Concurrency in Practice - by Brian Goetz
