Author:

Florencia Yannuzzi

Published on:

July 14, 2020


Demystifying Thread Safety

Modern programming languages support easy, out-of-the-box multithreading, to the point where multitasking has become synonymous with efficiency. Multithreading is the ability to run multiple tasks, represented as threads, at the same time – and it can take different shapes.

Over the years, patterns have been identified, techniques conceived, and mechanisms exhaustively refined to simplify multi-threaded code and improve its performance.

 

Although it is a powerful feature, multithreading comes with some well-known disadvantages. The first is the inherent difficulty and complexity of the subject. It demands great attention to detail: it is very easy to get wrong, and mistakes lead to unpredictable and incorrect results. In the worst case, issues like the well-known deadlock can occur, and misuse can have a negative impact on software systems. Another disadvantage is the performance overhead – creating native threads carries a natural system cost.

As a software engineer, one of the most frustrating situations is having to accept a behaviour without being able to understand, replicate, and validate it. Because of this, I consider the difficulty of debugging and testing a major threat that multithreading carries.


The Mystery 

Multithreading issues happen often, in both single-node and distributed systems. They usually occur when these systems share mutable data – and that sharing can take many different shapes, including accessing databases and orchestrating processes between distributed instances.

Sometimes the simple absence of a good locking strategy is enough for race conditions to spread. Database systems already incorporate fine-grained, lock-based concurrency control mechanisms that keep data consistent. However, most of the time that doesn't rule out locking exceptions that some of us struggle to understand.
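
To make the idea of a race condition concrete, here is a minimal sketch (the class and field names are illustrative, not from the original article): two threads increment a shared counter with no locking, so the non-atomic read-modify-write of counter++ can interleave and lose updates.

public class RaceConditionSketch {
   private static int counter = 0; // shared mutable state, no synchronization

   public static void main(String[] args) throws InterruptedException {
      Runnable increment = () -> {
         for (int i = 0; i < 100_000; i++) {
            counter++; // not atomic: read, add, write can interleave between threads
         }
      };
      Thread t1 = new Thread(increment);
      Thread t2 = new Thread(increment);
      t1.start();
      t2.start();
      t1.join();
      t2.join();
      // Expected 200000, but typically prints a smaller, nondeterministic value
      System.out.println(counter);
   }
}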


Multithreading in Action 

For the sake of simplicity, let's analyse the well-known Singleton pattern implemented in Java.

class MySingleton {
   private static MySingleton mySingleton;

   // private constructor prevents direct instantiation from outside the class
   private MySingleton() { }

   // synchronized guarantees only one thread at a time can execute the getter
   public static synchronized MySingleton getMySingleton() {
      if (mySingleton == null) {
         mySingleton = new MySingleton();
      }
      return mySingleton;
   }
}

This is not the best example in terms of performance, but it is generally considered a reasonable way of obtaining a thread-safe singleton. It uses the synchronized keyword on the getter method to guarantee that only one thread at a time can access the singleton instance.
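
If the cost of synchronizing every call to the getter is a concern, one commonly cited alternative is the initialization-on-demand holder idiom, which relies on the JVM's lazy, thread-safe class initialization instead of explicit locking. The sketch below is illustrative and not part of the original example; the class name is hypothetical.

class MyHolderSingleton {
   private MyHolderSingleton() { }

   // The nested class is not initialized until getMySingleton() is first called;
   // the JVM guarantees this initialization happens exactly once, thread-safely.
   private static class Holder {
      static final MyHolderSingleton INSTANCE = new MyHolderSingleton();
   }

   public static MyHolderSingleton getMySingleton() {
      return Holder.INSTANCE;
   }
}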


Keeping it Simple 

In Java, threads can be created by extending the Thread class or by implementing the Runnable interface. The second option lacks the ability to return a value from the thread; the Callable interface, part of the java.util.concurrent package, exists to support this. A minimal sketch of the difference is shown below, followed by the native thread abstraction used in this article, which simply gets the instance of the singleton class.
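
For context, these two snippets are illustrative only: a Runnable's run method returns nothing, whereas a Callable's call method returns a value and may throw a checked exception.

import java.util.concurrent.Callable;

public class TaskExamples {
   // Runnable: performs work but cannot return a result
   static Runnable runnableTask = () -> System.out.println("working");

   // Callable: returns a value (and may throw a checked exception)
   static Callable<String> callableTask = () -> "result";
}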


import java.util.concurrent.Callable;
import java.util.concurrent.CountDownLatch;

public class MyThread implements Callable<MySingleton> {
   private final CountDownLatch latch;

   public MyThread(CountDownLatch latch) {
      this.latch = latch;
   }

   @Override
   public MySingleton call() throws InterruptedException {
      latch.await();                       // wait until the latch is released
      return MySingleton.getMySingleton(); // then return the singleton instance
   }
}

Please note the latch parameter. Restating the Javadocs, a CountDownLatch is a synchronization aid that allows one or more threads to wait until a set of operations being performed in other threads completes. It is obviously not necessary for getting the singleton instance, but it provides the synchronization needed in the next steps.


The call method simply obtains and returns the singleton instance.


The Truth is Worth It 

Consider the following unit test. It recreates a multithreading scenario where two thread instances are created and submitted to a thread pool.

@Test
public void testMySingleton() throws ExecutionException, InterruptedException {
   ExecutorService executorService = Executors.newFixedThreadPool(2);
   CountDownLatch latch = new CountDownLatch(1);

   // both tasks block on the latch until countDown() is invoked
   Future<MySingleton> singleton1 = executorService.submit(new MyThread(latch));
   Future<MySingleton> singleton2 = executorService.submit(new MyThread(latch));
   latch.countDown(); // release both threads at (roughly) the same time

   assertEquals(singleton1.get().hashCode(), singleton2.get().hashCode());
}

The CountDownLatch mentioned before is initialized with 1 and holds the execution of both threads; they are released simultaneously when countDown() is invoked. The strategy is deferred – the value returned by each thread is captured in a Future. The assertion is based on the objects' hashCode: strictly speaking, equal hashCodes do not guarantee the same instance (assertSame would be the more precise check), but for the default Object.hashCode it is a reasonable proxy for identity here.


Executing this test doesn't lead to surprising results. The singleton getter is thread-safe – the first thread to enter it acquires the lock, finds the static field null, and instantiates it. The second thread then simply returns the existing instance.


A Silver Lining 

A thread-safe scenario was considered from the beginning. Removing the synchronized keyword from the singleton getter method should, therefore, eliminate thread safety. Executing the same test with that change leads to nondeterministic failures because the singleton is sometimes created twice.


java.lang.AssertionError:
Expected :125993742
Actual   :1192108080

This is one of the reasons why the Singleton pattern is often considered an anti-pattern. Without proper synchronization, the implementation defeats its own purpose.
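
For completeness, another frequently cited remedy is double-checked locking, which only synchronizes on first initialization and requires the field to be volatile. The sketch below is illustrative and the class name is hypothetical, not part of the original example.

class MyDclSingleton {
   // volatile is essential: it prevents a thread from observing a partially constructed instance
   private static volatile MyDclSingleton instance;

   private MyDclSingleton() { }

   public static MyDclSingleton getInstance() {
      if (instance == null) {                  // first check, no locking
         synchronized (MyDclSingleton.class) {
            if (instance == null) {            // second check, under the lock
               instance = new MyDclSingleton();
            }
         }
      }
      return instance;
   }
}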


Conclusion

Thread safety is a difficult subject, but it doesn't have to be a mystery for any developer. While avoiding multithreading issues completely is a big challenge, being able to understand them a priori brings value. As shown in this text, possible issues and their impact can be mitigated during development, leading to more solid and scalable systems. You might think that the scenario presented is oversimplified and optimistic; however, the same rules still apply to more complex tasks. I always keep these pieces of code at hand.
