Multithreading is growing in importance in modern programming for a variety of reasons, not the least of which is that Windows supports multithreading. While C++ does not feature built-in support for multithreading, it can be used to create multithreaded programs, which is the subject of this article. It is taken from chapter three of The Art of C++, written by Herbert Schildt (McGraw-Hill/Osborne, 2004; ISBN: 0072255129).
Multithreading in C++ - Priority Classes (Page 3 of 11)
By default, a process is given a priority class of normal, and most programs remain in the normal priority class throughout their execution lifetime. Although neither of the examples in this chapter changes the priority class, a brief overview of the thread priority classes is given here in the interest of completeness.
Windows defines six priority classes, which correspond to the values shown here, in order of highest to lowest priority:

REALTIME_PRIORITY_CLASS
HIGH_PRIORITY_CLASS
ABOVE_NORMAL_PRIORITY_CLASS
NORMAL_PRIORITY_CLASS
BELOW_NORMAL_PRIORITY_CLASS
IDLE_PRIORITY_CLASS
Programs are given the NORMAL_PRIORITY_CLASS by default. Usually, you won't need to alter the priority class of your program. In fact, changing a process's priority class can have negative consequences on the overall performance of the computer system. For example, if you increase a program's priority class to REALTIME_PRIORITY_CLASS, it will dominate the CPU. For some specialized applications, you may need to increase an application's priority class, but usually you won't. As mentioned, neither of the applications in this chapter changes the priority class.
In the event that you do want to change the priority class of a program, you can do so by calling SetPriorityClass( ). You can obtain the current priority class by calling GetPriorityClass( ). The prototypes for these functions are shown here:

DWORD GetPriorityClass(HANDLE hApp);
BOOL SetPriorityClass(HANDLE hApp, DWORD priority);
Here, hApp is the handle of the process. GetPriorityClass( ) returns the priority class of the application or zero on failure. For SetPriorityClass( ), priority specifies the process's new priority class.
For any given priority class, each individual thread's priority determines how much CPU time it receives within its process. When a thread is first created, it is given normal priority, but you can change a thread's priority, even while it is executing.
You can obtain a thread's priority setting by calling GetThreadPriority( ). You can increase or decrease a thread's priority using SetThreadPriority( ). The prototypes for these functions are shown here:
BOOL SetThreadPriority(HANDLE hThread, int priority);
int GetThreadPriority(HANDLE hThread);
For both functions, hThread is the handle of the thread. For SetThreadPriority( ), priority is the new priority setting. If an error occurs, SetThreadPriority( ) returns zero. It returns nonzero otherwise. For GetThreadPriority( ), the current priority setting is returned. The priority settings are shown here, in order of highest to lowest:

THREAD_PRIORITY_TIME_CRITICAL
THREAD_PRIORITY_HIGHEST
THREAD_PRIORITY_ABOVE_NORMAL
THREAD_PRIORITY_NORMAL
THREAD_PRIORITY_BELOW_NORMAL
THREAD_PRIORITY_LOWEST
THREAD_PRIORITY_IDLE
These values are increments or decrements that are applied relative to the priority class of the process. Through the combination of a process's priority class and thread priority, Windows supports 31 different priority settings for application programs.
GetThreadPriority( ) returns THREAD_PRIORITY_ERROR_RETURN if an error occurs.
For the most part, if a thread belongs to a process in the NORMAL_PRIORITY_CLASS, you can freely experiment with changing its priority setting without fear of catastrophically affecting overall system performance. As you will see, the thread control panel developed in the next section allows you to alter the priority setting of a thread within a process (but does not change its priority class).
Obtaining the Handle of the Main Thread
It is possible to control the execution of the main thread. To do so, you will need to acquire its handle. The easiest way to do this is to call GetCurrentThread( ), whose prototype is shown here:

HANDLE GetCurrentThread(void);
This function returns a pseudohandle to the current thread. It is called a pseudohandle because it is a predefined value that always refers to whatever thread uses it, rather than to one specific thread. It can, however, be used any place that a normal thread handle can.
Synchronization
When using multiple threads or processes, it is sometimes necessary to coordinate the activities of two or more of them. This process is called synchronization. The most common use of synchronization occurs when two or more threads need access to a shared resource that must be used by only one thread at a time. For example, when one thread is writing to a file, a second thread must be prevented from doing so at the same time. Another reason for synchronization is when one thread is waiting for an event that is caused by another thread. In this case, there must be some means by which the first thread is held in a suspended state until the event has occurred. Then the waiting thread must resume execution.
There are two general states that a task may be in. First, it may be executing (or ready to execute as soon as it obtains its time slice). Second, a task may be blocked, awaiting some resource or event, in which case its execution is suspended until the needed resource is available or the event occurs.
If you are not familiar with the synchronization problem or its most common solution, the semaphore, the next section discusses it.
Understanding the Synchronization Problem
Windows must provide special services that allow access to a shared resource to be synchronized, because without help from the operating system, there is no way for one process or thread to know that it has sole access to a resource. To understand this, imagine that you are writing programs for a multitasking operating system that does not provide any synchronization support. Further imagine that you have two concurrently executing threads, A and B, both of which, from time to time, require access to some resource R (such as a disk file) that must be accessed by only one thread at a time. As a means of preventing one thread from accessing R while the other is using it, you try the following solution. First, you establish a variable called flag that is initialized to zero and can be accessed by both threads. Then, before using each piece of code that accesses R, you wait for flag to be cleared, then set flag, access R, and finally, clear flag. That is, before either thread accesses R, it executes this piece of code:
while(flag) ; // wait for flag to be cleared
flag = 1;     // set flag
// ... access resource R ...
flag = 0;     // clear the flag
The idea behind this code is that neither thread will access R if flag is set. Conceptually, this approach is in the spirit of the correct solution. However, in actual fact it leaves much to be desired for one simple reason: it won't always work! Let's see why.
Using the code just given, it is possible for both processes to access R at the same time. The while loop is, in essence, performing repeated load and compare instructions on flag or, in other words, it is testing flag's value. When flag is cleared, the next line of code sets flag's value. The trouble is that it is possible for these two operations to be performed in two different time slices. Between the two time slices, the value of flag might have been accessed by the other thread, thus allowing R to be used by both threads at the same time. To understand this, imagine that thread A enters the while loop and finds that flag is zero, which is the green light to access R. However, before it can set flag to 1, its time slice expires and thread B resumes execution. If B executes its while, it too will find that flag is not set and assume that it is safe to access R. However, when A resumes it will also begin accessing R. The crucial aspect of the problem is that the testing and setting of flag do not comprise one uninterruptible operation. Rather, as just illustrated, they can be separated by a time slice. No matter how you try, there is no way, using only application-level code, that you can absolutely guarantee that one and only one thread will access R at one time.
The solution to the synchronization problem is as elegant as it is simple. The operating system (in this case Windows) provides a routine that, in one uninterrupted operation, tests and, if possible, sets a flag. In the language of operating systems engineers, this is called a test-and-set operation. For historical reasons, the flags used to control access to a shared resource and provide synchronization between threads (and processes) are called semaphores. The semaphore is at the core of the Windows synchronization system.