Friday 10 June 2016

Locking and Visibility in Java Multithreaded Programs

Background

In a previous post, Race Condition, Synchronization, atomic operations and Volatile keyword, we saw what a race condition is and how we can use synchronization to avoid it. We also saw what volatile variables are and what they are used for. Though that post covers multithreading issues and their solutions in detail, I am writing this post to give an even clearer perspective.

This post is more about memory visibility, i.e. about reading stale values, rather than about race conditions.

Issue 1 (Memory visibility) : Each thread has its own stack, and the values it works with may be cached (in registers or CPU caches) for faster access. Though this caching is a performance feature, it can lead to undesirable results. Say a mutable value is shared between two threads. If one thread modifies the value and the second thread does a subsequent read, the second thread is not guaranteed to see the modified value; it may read a stale, cached copy instead.



The JVM may also reorder reads and writes for optimization. Note that we are not talking about a race condition here at all; even if the operations were atomic, this issue would still occur. The problem is the memory visibility of a shared mutable value across threads, and the challenge is for a read that follows a write to see the latest, correct value.
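
To make the issue concrete, here is a minimal sketch of the classic visibility problem (the class and field names are illustrative, not from the original post). Without synchronization or volatile, the reader thread may never see ready become true, or it may even print 0 if the two writes are reordered.

// Minimal sketch of the visibility problem (illustrative names).
public class NoVisibilityDemo {
    private static boolean ready;   // shared mutable state, not volatile
    private static int number;

    public static void main(String[] args) {
        Thread reader = new Thread(() -> {
            while (!ready) {
                Thread.yield();     // may spin forever if the writer's update never becomes visible
            }
            System.out.println(number);  // may print 42, or even 0 if the writes were reordered
        });
        reader.start();

        number = 42;
        ready = true;               // the reader thread is not guaranteed to see this write
    }
}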

Issue 2 (Non-atomic 64-bit operations) : The Java memory model requires fetch and store operations to be atomic. The exception is non-volatile long and double data types, where the JVM is permitted to treat a 64-bit read or write as two separate 32-bit operations. So if reads and writes of such a variable happen in different threads, a read can return the high 32 bits of one value and the low 32 bits of another. [Solution : declare shared long and double variables as volatile or guard them with a lock]
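
A small sketch of the volatile fix for this (the class is hypothetical, just to show the idea). Marking the 64-bit field volatile makes its reads and writes atomic, so a reader can never observe a half-written value.

// Illustrative sketch: volatile makes 64-bit reads/writes atomic.
public class Counter64 {
    private volatile long value;    // without volatile, a read could see a torn 64-bit value

    public void set(long newValue) {
        value = newValue;
    }

    public long get() {
        return value;
    }
}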


Solution to the memory visibility issue

We saw the issue with memory visibility. Now let's see how we can resolve it.

Intrinsic locks guarantee that one thread will see the changes made by another thread in a predictable manner. Say thread T1 makes some changes inside a synchronized region and then thread T2 enters the same critical section (after T1 releases the lock, of course); T2 is then guaranteed to see the changes made by T1. So the issue we discussed above will not occur, i.e. no stale values.
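
Here is a minimal sketch of that guarantee (names are my own, for illustration). Both the writer and the reader synchronize on the same intrinsic lock (the instance), so a read that happens after a write is guaranteed to see the updated value.

// Illustrative sketch: reader and writer synchronize on a common lock.
public class SharedHolder {
    private int value;                              // shared mutable state

    public synchronized void setValue(int value) {  // T1 writes while holding the lock
        this.value = value;
    }

    public synchronized int getValue() {            // T2 reads while holding the same lock
        return value;                               // guaranteed to see T1's write
    }
}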

Summing it up : Locking is not just about atomicity, i.e. making compound operations atomic; it is also about memory visibility. To ensure all threads see the latest value of a shared mutable variable, the reading and writing threads must synchronize on a common lock.


Another solution, of course, is to make shared mutable variables volatile. I am not going to discuss volatile in detail here; you can refer to the previous post - Race Condition, Synchronization, atomic operations and Volatile keyword. [The volatile keyword guarantees that all reads of a volatile variable are read directly from main memory, and all writes to a volatile variable are written directly to main memory]
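
For completeness, a small sketch of the volatile alternative (the class and field names are hypothetical). Declaring the flag volatile guarantees that the worker thread sees the latest write without taking any lock.

// Illustrative sketch: a volatile flag gives visibility without locking.
public class ShutdownFlag {
    private volatile boolean shutdownRequested;

    public void requestShutdown() {
        shutdownRequested = true;       // write is immediately visible to other threads
    }

    public void doWork() {
        while (!shutdownRequested) {
            // ... do work; each iteration reads the latest value of the flag
        }
    }
}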



Related Links
