A Peek into the Future: Transactional Memory (Page 1 of 4)
It is no secret now that the future is all about multiprocessing! The Xbox 360, released in November 2005, is equipped with three hyper-threaded processor cores. The new PS3 will be equipped with the Cell processor, which has eight processing units (only seven of which are available to programmers). Even on the PC, we are seeing a rapid shift to dual-core processors on both desktops and laptops. The happy days of exponential increases in processor clock speed are over. Welcome to the world of multiprocessing!
Unfortunately, to make good use of parallel processors, programs must be written with parallelism in mind. If we run our usual old single-threaded programs on multi-core processors, we will probably see little to no performance gain. To make things worse, parallel programming is not exactly an easy art to master. Several constructs have been proposed to make it easier, but none of them has provided the ease of use we would hope for. Parallel programming is still a very tricky art.
This is where transactional memory comes in handy. Transactional memory is a proposed concurrency and synchronization scheme that is believed to make the whole art of parallel programming just that much easier.
In this article, we are going to discuss some previous attempts at writing parallel programs, and explain why they are very difficult to work with. Then, we will explore transactional memory, and describe why it makes life so much easier. Let's move on.
The one problem with writing concurrent programs
If you have been in the business of writing multithreaded programs, you are probably already familiar with the one big problem with parallelism. Please hold on for a second and make a guess as to what that might be…
Well… if you guessed "dependency," you are absolutely right. The major problem with writing concurrent programs is the dependencies in your program. For example, assume we have something as simple as this…
C = A + B
E = C + D
It is very unfortunate that we can't run the two statements in parallel. As you can see, the second statement has to wait for the value of C to be evaluated before it can proceed. Had we allowed both statements to execute at the same time, we would simply end up with erroneous results.
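To make the dependency concrete, here is a minimal sketch in Java (the variable names A, B, C, D, and E come from the statements above; the concrete values are made up for illustration). The second computation only produces the right answer if C has already been written; the last two lines show what happens if E reads C before the first statement has run:

```java
public class Dependency {
    public static void main(String[] args) {
        int a = 1, b = 2, d = 4;

        // Correct order: C is written before E reads it.
        int c = a + b;            // C = A + B
        int e = c + d;            // E = C + D
        System.out.println(e);    // prints 7

        // If the second statement ran before (or concurrently with)
        // the first, E would read a stale, unwritten C:
        int cStale = 0;               // C not yet computed
        int eWrong = cStale + d;      // E = C + D with stale C
        System.out.println(eWrong);   // prints 4, not 7
    }
}
```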
This is where synchronization schemes come in. Synchronization schemes are ways of enforcing order in this chaos. We can use synchronization schemes to avoid dependency problems (like the one above). We will see more examples soon.
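As a minimal sketch of what "enforcing order" means in practice, the example below runs the two statements on separate threads but uses `Thread.join()` (one of the simplest synchronization primitives in Java) to force the second computation to wait until C has been written. The values and class name are made up for illustration:

```java
public class Ordered {
    static int c;  // shared: written by one thread, read by another

    public static void main(String[] args) throws InterruptedException {
        int a = 1, b = 2, d = 4;

        // First statement runs on its own thread: C = A + B
        Thread first = new Thread(() -> { c = a + b; });
        first.start();

        // Synchronization: block until the writer thread finishes,
        // so the dependency on C is respected.
        first.join();

        int e = c + d;            // safe: C is guaranteed to be 3 here
        System.out.println(e);    // prints 7
    }
}
```

Without the `join()` call, the main thread could read `c` before the writer thread had stored its result, reproducing exactly the dependency problem described above.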
Several synchronization schemes have been proposed. In the next section we will look at two of them. One is widely used by C and C++ programmers (in fact, it is the synchronization scheme used throughout the Linux kernel). The other is widely used by Java programmers (and if you are a Java programmer, you have probably already guessed what it is). We will discuss the problems with each of the two schemes.
Finally, we will describe transactional memory, and explain why it is so much better than those two schemes. If you are already familiar with locks and monitors, you might want to skip ahead to the transactional memory section. The next section is a review of locks and monitors for those not very familiar with how they work.