Fig. 10-4. (a) A system with five threads and three priority levels. (b) Three thread scheduling algorithms.
The second algorithm is round robin. Here the scheduler locates the highest-priority nonempty queue and runs each of its threads for a fixed quantum. If a thread blocks or exits before its quantum is up, it is (temporarily) removed from the queue system. If it uses up its entire quantum, it is suspended and placed at the end of its queue. In the middle example of Fig. 10-4(b), the threads A, B, and C will run alternately forever if they want to. The medium-priority threads D and E will never get a chance as long as one of the high-priority threads wants to run.
The third algorithm is the default algorithm. It runs the threads on all the queues using a time-sliced round-robin algorithm, but the higher the priority, the larger the quantum a thread gets. In this manner, all threads get to run and there is no starvation as in the second algorithm.
There is also a fourth algorithm that has variable-sized quanta, but with starvation. However, this one is not defined by POSIX, so it is not portable and should be avoided.
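To make the discussion concrete, here is a minimal sketch of how a thread can be given a scheduling algorithm and priority when it is created. It is written with the modern POSIX names (SCHED_FIFO, SCHED_RR, SCHED_OTHER, sched_param), which are the descendants of the DCE scheduling and priority calls rather than the DCE names themselves; SCHED_RR corresponds to the round-robin algorithm and SCHED_OTHER roughly to the default one.

```c
#include <pthread.h>
#include <sched.h>

/* Sketch: create a thread that is scheduled round robin at a given
 * priority.  Modern POSIX names are used; DCE exposes the same idea
 * through its own attr and priority calls. */
static int create_rr_thread(pthread_t *tid, int prio,
                            void *(*func)(void *), void *arg)
{
    pthread_attr_t attr;
    struct sched_param param;

    pthread_attr_init(&attr);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_RR);   /* round robin */
    param.sched_priority = prio;                    /* e.g., high, medium, low */
    pthread_attr_setschedparam(&attr, &param);

    int err = pthread_create(tid, &attr, func, arg);
    pthread_attr_destroy(&attr);
    return err;                                     /* 0 on success */
}
```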
10.2.3. Synchronization
DCE provides two ways for threads to synchronize: mutexes and condition variables. Mutexes are used when it is essential to prevent multiple threads from accessing the same resource at the same time. For example, when moving items around on a linked list, partway through the move, the list will be in an inconsistent state. To prevent disaster, when one thread is manipulating the list, all other threads must be kept away. By requiring a thread to first successfully lock the mutex associated with the list before touching the list (and unlock it afterward), correct operation can be ensured.
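As a small illustration (a sketch only; the list layout and the names list_head and list_lock are invented for the example), a thread inserting a node into a shared linked list brackets the update with lock and unlock calls:

```c
#include <pthread.h>

/* Sketch: a shared linked list protected by a mutex. */
struct node { int value; struct node *next; };

static struct node *list_head = NULL;
static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

void list_insert(struct node *n)
{
    pthread_mutex_lock(&list_lock);   /* keep all other threads away...   */
    n->next = list_head;              /* ...while the list is briefly in  */
    list_head = n;                    /* an inconsistent state            */
    pthread_mutex_unlock(&list_lock);
}
```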
Three kinds of mutexes are available, as shown in Fig. 10-5. They differ in the way they deal with nested locks. A fast mutex is analogous to a lock in a database system: if a thread tries to lock an unlocked mutex, it succeeds. However, if it tries to acquire the same lock a second time, it blocks waiting for the lock to be released, something that will never happen because the thread holding the lock is the one doing the waiting. Deadlock will occur.
Mutex type | Properties |
---|---|
Fast | Locking it a second time causes a deadlock |
Recursive | Locking it a second time is allowed |
Nonrecursive | Locking it a second time gives an error |
Fig. 10-5. Three kinds of mutexes supported by DCE.
A recursive mutex allows a thread to lock a mutex that it has already locked. The idea is this. Suppose that the main program of a thread locks a mutex, then calls a procedure that also locks the mutex. To avoid deadlock, the second lock is accepted. As long as the mutex is ultimately unlocked as many times as it is locked, the nesting can be arbitrarily deep. Although recursive mutexes are more user friendly, they are also considerably slower, so the programmer has to make a choice. As a compromise, DCE provides a third kind of mutex, one in which an attempt to lock a mutex that is already locked does not deadlock, but returns an error instead.
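The choice among the three kinds is made when the mutex is created. The sketch below expresses the three behaviors of Fig. 10-5 using the modern POSIX mutex types (NORMAL, RECURSIVE, ERRORCHECK), which play the same role as DCE's mutexattr_setkind_np call; the DCE constants themselves have different names.

```c
#include <pthread.h>

pthread_mutex_t fast_mtx, recursive_mtx, errorcheck_mtx;

void init_mutexes(void)
{
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);

    /* "Fast": a second lock by the same thread simply blocks (deadlock). */
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_NORMAL);
    pthread_mutex_init(&fast_mtx, &attr);

    /* Recursive: the owner may lock it again, as long as it eventually
     * unlocks it as many times as it locked it. */
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
    pthread_mutex_init(&recursive_mtx, &attr);

    /* Nonrecursive ("error check"): a second lock by the owner fails
     * with an error (EDEADLK) instead of blocking. */
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
    pthread_mutex_init(&errorcheck_mtx, &attr);

    pthread_mutexattr_destroy(&attr);
}
```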
Condition variables provide a second synchronization mechanism. These are used in conjunction with mutexes. Typically, when a thread needs some resource, it uses a mutex to gain exclusive access to a data structure that keeps track of the status of the resource. If the resource is not available, the thread waits on a condition variable, which atomically suspends the thread and releases the mutex. Later, when another thread signals the condition variable, the waiting thread is restarted.
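The standard pattern looks roughly like this (a sketch; the resource_free flag and the function names are invented for the example, and the calls are shown in their modern pthread form):

```c
#include <pthread.h>

static pthread_mutex_t res_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  res_cond = PTHREAD_COND_INITIALIZER;
static int resource_free = 1;

void acquire_resource(void)
{
    pthread_mutex_lock(&res_lock);
    while (!resource_free)                        /* recheck after every wakeup  */
        pthread_cond_wait(&res_cond, &res_lock);  /* atomically releases the mutex */
    resource_free = 0;
    pthread_mutex_unlock(&res_lock);
}

void release_resource(void)
{
    pthread_mutex_lock(&res_lock);
    resource_free = 1;
    pthread_cond_signal(&res_cond);               /* restart one waiting thread */
    pthread_mutex_unlock(&res_lock);
}
```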
10.2.4. Thread Calls
The DCE threads package has a total of 54 primitives (library procedures). Many of these are not strictly necessary but are provided for convenience only. This approach is somewhat analogous to a four-function pocket calculator that has keys not only for +, –, ×, and /, but also has keys for +1, –1, ×2, ×10, ×π, /2, and /10, on the grounds that these save the user time and effort. Due to the large number of calls, we will discuss only the most important ones (about half the total). Nevertheless, our treatment should give a reasonable impression of the available functionality.
Call | Description |
---|---|
Create | Create a new thread |
Exit | Called by a thread when it is finished |
Join | Like the WAIT system call in UNIX |
Detach | Make it unnecessary for parent thread to wait when caller exits |
Fig. 10-6. Selected DCE thread calls for managing threads. All the calls in this section are actually prefixed by pthread_ (i.e., pthread_create, not create), which we have omitted to save space.
For our discussion, it is convenient to group the calls into seven categories, each dealing with a different aspect of threads and their use. The first category, listed in Fig. 10-6, deals with thread management. These calls allow threads to be created and to exit when they are done. A parent thread can wait for a child using join, which is similar to the WAIT system call in UNIX. If a parent has no interest in a child and does not plan to wait for it, the parent can disown the child by calling detach. In this case, when the child thread exits, its storage is reclaimed immediately instead of waiting for the parent to call join.
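In modern pthread notation, which differs only cosmetically from the DCE calls, the create/join/detach pattern looks roughly like this (worker and the thread names are invented for the example):

```c
#include <pthread.h>
#include <stdio.h>

/* Sketch: a parent creating one child it waits for and one it disowns. */
static void *worker(void *arg)
{
    printf("child %s running\n", (const char *)arg);
    return NULL;                     /* same effect as calling pthread_exit */
}

int main(void)
{
    pthread_t waited, disowned;

    pthread_create(&waited,   NULL, worker, "A");
    pthread_create(&disowned, NULL, worker, "B");

    pthread_detach(disowned);        /* storage reclaimed as soon as B exits */
    pthread_join(waited, NULL);      /* like WAIT: block until A is done     */
    return 0;
}
```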
The DCE package allows the user to create, destroy, and manage templates for threads, mutexes, and condition variables. The templates can be set up to have appropriate initial values. When an object is created, one of the parameters to the create call is a pointer to a template. For example, a thread template can be created and given the attribute (property) that the stack size is 4K. Whenever a thread is created with that template as a parameter, it will get a 4K stack. The point of having templates is to eliminate the need for specifying all the options as separate parameters. As the package evolves, the create calls can remain the same; instead, new attributes can be added to the templates. Some of the template calls are listed in Fig. 10-7.
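A sketch of the idea in modern pthread terms (where templates are called attribute objects) is shown below; the names small_stack_attr and start_worker are invented for the example, and the requested size is rounded up to PTHREAD_STACK_MIN because many systems reject a bare 4K stack.

```c
#include <pthread.h>
#include <limits.h>

/* Sketch: a reusable "template" (attribute object) that gives every
 * thread created with it the same stack size. */
static pthread_attr_t small_stack_attr;

void init_template(void)
{
    pthread_attr_init(&small_stack_attr);
    pthread_attr_setstacksize(&small_stack_attr,
                              PTHREAD_STACK_MIN > 4096 ? PTHREAD_STACK_MIN
                                                       : 4096);
}

/* Every thread created with the template gets the attributes it holds. */
int start_worker(pthread_t *tid, void *(*func)(void *), void *arg)
{
    return pthread_create(tid, &small_stack_attr, func, arg);
}
```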
Call | Description |
---|---|
Attr_create | Create template for setting thread parameters |
Attr_delete | Delete template for threads |
Attr_setprio | Set the default scheduling priority in the template |
Attr_getprio | Read the default scheduling priority from the template |
Attr_setstacksize | Set the default stack size in the template |
Attr_getstacksize | Read the default stack size from the template |
Attr_mutexattr_create | Create template for mutex parameters |
Attr_mutexattr_delete | Delete template for mutexes |
Attr_mutexattr_setkind_np | Set the default mutex type in the template |
Attr_mutexattr_getkind_np | Read the default mutex type from the template |
Attr_condattr_create | Create template for condition variable parameters |
Attr_condattr_delete | Delete template for condition variables |