
A Peek into the Future: Transactional Memory

It is no secret now that the future is about multiprocessing! The Xbox360, which was released November 2005, is equipped with three hyper-threaded processors. The new PS3 is going to be equipped with the Cell processor having eight processing units (only seven are available to programmers). Even on the PC, we are seeing a rapid increase of dual core processors on both desktops and laptops. The happy days of exponential increase in processor clock speeds are over. Welcome to the world of multiprocessing!

Author Info:
By: Mohamed Saad
Rating: 5 stars / 4
June 19, 2006
  1. A Peek into the Future: Transactional Memory
  2. Previous synchronization mechanisms
  3. Transactional memory
  4. So why is it not being used everywhere now?


A Peek into the Future: Transactional Memory - Previous synchronization mechanisms
(Page 2 of 4)

Without wasting any time, let's jump to our first synchronization scheme… locks!

Locks
Locks are very popular among C/C++ programmers. Here is a simplified version of how they work. Each sensitive piece of data is guarded by a "lock." Before any thread accesses that data, it must "acquire" the lock. Let's track a hypothetical thread A trying to access a certain data structure. If thread A successfully acquires the lock, it proceeds to access the data without problems. If it fails, some other thread B currently holds the lock and is accessing that same data. In this case, thread A simply waits until the lock is released. Once it is, we know thread B is no longer accessing the data, so thread A can safely proceed.
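The acquire/wait/release cycle just described can be sketched in Python with threading.Lock (an illustrative choice; the article itself is language-neutral, and the names counter and increment are made up for the example):

```python
import threading

counter = 0
counter_lock = threading.Lock()  # the lock guarding `counter`

def increment():
    global counter
    with counter_lock:   # acquire the lock; blocks until it is free
        counter += 1     # only one thread runs this line at a time
    # the lock is released automatically when the `with` block exits

threads = [threading.Thread(target=increment) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 100
```

Without the lock, two threads could read the same old value of counter and one increment would be lost; with it, all 100 increments survive.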

Sounds easy, right? It is! Unfortunately, this simple scheme can cause all sorts of trouble. We will focus on one problem in particular: deadlocks. A deadlock happens when two threads A and B can never proceed. For example, suppose thread A has acquired lock X and is trying to acquire lock Y, while thread B has acquired lock Y and is trying to acquire lock X. Since neither is allowed to proceed, we can never move out of this situation. To make things worse, we sometimes end up with a chain deadlock, in which several threads acquire a set of locks in a pattern that leaves all of them stuck.

Do you think it is easy to detect deadlocks? Think again. Here is a very simple example; see if you can immediately spot the deadlock.

Assume we have two arrays. We need to move data frequently between them. Assume each array has a lock associated with it. Here is what we might write…

  Subroutine Move (Array A, Array B)
    Lock(Array A)
    Lock(Array B)
    Read X from A
    Write X to B
    Release(Array B)
    Release(Array A)

Believe it or not, this very simple code is deadlock prone! Try to discover it first before reading on.

Okay, here is how it happens. Assume we have two threads. One of them is trying to move an element from A to B. The other is trying to move an element from B to A. Both threads are executing at the same time. The first thread locks array A, and the second thread locks array B. Now, there is no way for them to proceed, and we are deadlocked. The same situation can happen if we have three or four arrays. For example, thread A tries to move an element from X to Y, thread B tries to move an element from Y to Z. Thread C tries to move an element from Z to X. This big mess ends up in a similar deadlock, even though we don't get two threads working in the opposite direction like the first example!

This example is, in fact, very simple. In a very large project, a deadlock can arise in far more elusive ways, and it can be very hard to detect. This is one of the huge problems with locks: you need to be very careful when using them.
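The classic defense against this kind of deadlock is to make every thread acquire its locks in one globally agreed order. Here is a sketch of the Move routine in Python; the language, the ordering-by-id() trick, and the helper names are illustrative assumptions, not the article's code:

```python
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()
array_a, array_b = [1, 2, 3], [4, 5]

def move(src, dst, src_lock, dst_lock):
    # Always acquire the two locks in one fixed global order (here, by
    # the locks' id()), so two threads moving in opposite directions
    # can never each hold the lock the other is waiting for.
    first, second = sorted((src_lock, dst_lock), key=id)
    with first:
        with second:
            dst.append(src.pop())

t1 = threading.Thread(target=move, args=(array_a, array_b, lock_a, lock_b))
t2 = threading.Thread(target=move, args=(array_b, array_a, lock_b, lock_a))
t1.start(); t2.start()
t1.join(); t2.join()
print(len(array_a) + len(array_b))  # 5: one element moved each way
```

If both threads instead locked their source array first, as in the article's pseudocode, this is exactly the A-locks-X / B-locks-Y stalemate described above.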

Oh, and by the way, while writing the Linux kernel, the developers decided to add a special routine that acquires two locks at the same time, just to avoid this problem! It is that tricky!

Now, let's jump to our second scheme.

Monitors
Monitors are the synchronization scheme used in Java, and they are very simple to use. In Java's version, each object is associated with a token. If you label a method as "synchronized," only one thread at a time may execute synchronized methods on that object. Before executing such a method, a thread tries to acquire the object's token. If it succeeds, it proceeds. If it fails, some other thread is currently executing a "synchronized" method on the same object; in that case, it goes to sleep and waits for the other thread to notify it when it is finished. A simple and elegant scheme, and if you have used Java for some time, you may already be familiar with it.
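For readers outside Java, the per-object token can be approximated in Python (a rough analog for illustration, not how the JVM actually implements monitors): one re-entrant lock per object, taken by every "synchronized" method.

```python
import threading

class Counter:
    """Rough Python analog of a Java object with synchronized methods:
    a single re-entrant lock plays the role of the object's token."""

    def __init__(self):
        self._monitor = threading.RLock()  # re-entrant, like Java's monitor
        self._value = 0

    def increment(self):   # think: public synchronized void increment()
        with self._monitor:
            self._value += 1

    def get(self):         # think: public synchronized int get()
        with self._monitor:
            return self._value

c = Counter()
workers = [threading.Thread(target=lambda: [c.increment() for _ in range(1000)])
           for _ in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(c.get())  # 4000
```

The lock is re-entrant (an RLock rather than a plain Lock) so that one synchronized method can safely call another on the same object, just as in Java.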

This scheme, however, is not without its problems! For example, it does not totally prevent deadlocks (think about what happens when a "synchronized" method calls another "synchronized" method on a different object!).

There is another major problem with both monitors and locks, and I saved it for last because it is the main motivation for transactional memory. Notice that when using locks or monitors, we typically prevent any other thread from accessing our data structure for as long as we are using it.

For example, consider what happens when we are accessing a large array. While one thread accesses it, no other thread can, even if it was going to touch a totally different portion of it. In other words, even when both threads could proceed without problems, one still has to wait, wasting a good opportunity for concurrency. This can significantly hurt performance. There are ways around it; for example, we can use a separate lock for each array element, but that typically wastes a lot of memory. Besides, doing the same with monitors is a really tricky business and can be very error prone.
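A common middle ground between one big lock and one lock per element is lock striping: a small fixed pool of locks, each guarding a slice of the array. A sketch in Python, where the stripe count and the add helper are my own illustrative choices, not from the article:

```python
import threading

# Lock striping: a small fixed pool of locks shared across a big array.
# Threads touching elements in different stripes proceed concurrently,
# and we pay for 16 locks instead of one per element.
NUM_STRIPES = 16
stripes = [threading.Lock() for _ in range(NUM_STRIPES)]
data = [0] * 1000

def add(index, amount):
    with stripes[index % NUM_STRIPES]:  # lock only this element's stripe
        data[index] += amount

threads = [threading.Thread(target=add, args=(i, 1)) for i in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sum(data))  # 100
```

Even so, striping only trades memory for contention; it still serializes threads that happen to hash to the same stripe, which is exactly the kind of compromise transactional memory is meant to eliminate.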

This is where transactional memory enters the picture. It aims to solve all those problems! Excited? Let's waste no time, and let's start with how it works.
