The most trivial example of process synchronization is exclusive execution, which prevents all but one process from executing. Technical means of achieving exclusive execution include disabling processor interrupts and raising process priority.
Disabling interrupts yields exclusive execution in an environment that uses interrupts to schedule multiple processes on a single processor, simply because when no interrupts arrive, no scheduling happens. Since interrupts are also used to service devices, disabling them can lead to failure in servicing devices. Disabling interrupts is therefore only permitted to privileged processes, which should restrict it to short periods of time. Disabling interrupts does not yield exclusive execution on systems with multiple processors.
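To make the technique concrete, the following is a minimal sketch assuming privileged execution on a single x86 processor, where the CLI and STI instructions can be issued directly; the function names are introduced here for illustration only, and a real kernel would typically save and restore the previous interrupt state rather than enable interrupts unconditionally.

// A minimal sketch assuming privileged execution on a single x86 processor
static inline void DisableInterrupts (void) { __asm__ volatile ("cli"); }
static inline void EnableInterrupts (void) { __asm__ volatile ("sti"); }

void UpdateSchedulerState (void)
{
  DisableInterrupts ();
  // Code that must not be interleaved with other
  // processes on this processor comes here ...
  EnableInterrupts ();
}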
Active waiting is an approach to process synchronization where a process that waits for a condition does so by repeatedly checking whether the condition is true. In the following, multiple solutions to the mutual exclusion problem based on active waiting are developed to illustrate the concept.
Assume availability of shared memory that can be atomically read and
written. A shared boolean variable bCriticalSectionBusy
can be used to track the availability of the critical section. A naive solution
to the mutual exclusion problem would consist of waiting for the variable to
become false before entering the critical section, setting it to true upon
entering, and setting it to false upon leaving.
while (bCriticalSectionBusy)
{
  // Active waiting cycle until the
  // bCriticalSectionBusy variable
  // becomes false
}
bCriticalSectionBusy = true;

// Code of critical section comes here ...

bCriticalSectionBusy = false;
The principal flaw of the naive solution can be revealed by considering what
would happen if two processes attempted to enter the critical section at exactly the
same time. Both processes would wait for the bCriticalSectionBusy variable
to become false, both would see it become false at exactly the same time, and both would
leave the active waiting cycle at exactly the same time, neither process noticing that
the other process is about to enter the critical section.
Staying with two processes, the flaw of the naive solution can be remedied
by splitting the bCriticalSectionBusy
variable into two, each
indicating the intent of one process to enter the critical section. A process first
indicates its intent to enter the critical section, then checks if the other process
indicates the same intent, and enters the critical section when alone or backs off
when not.
while (true)
{
  // Indicate the intent to enter the critical section
  bIWantToEnter = true;
  // Enter the critical section if the other
  // process does not indicate the same intent
  if (!bHeWantsToEnter) break;
  // Back off to give the other process
  // a chance and continue the active
  // waiting cycle
  bIWantToEnter = false;
}

// Code of critical section comes here ...

bIWantToEnter = false;
The solution is safe in that, unlike its naive predecessor, it never lets more than one process into the critical section. Unfortunately, a process waiting to enter the critical section can be overtaken infinitely many times, violating the fairness property. Additionally, both processes can keep indicating their intent and backing off in lockstep indefinitely, with neither ever entering the critical section, violating the liveness property.
A safe solution that also guarantees bounded waiting is known as the Dekker Algorithm.
// Indicate the intent to enter the critical section
bIWantToEnter = true;
while (bHeWantsToEnter)
{
  // If the other process indicates the same intent and
  // it is not our turn, back off to give the other
  // process a chance
  if (iWhoseTurn != MY_TURN)
  {
    bIWantToEnter = false;
    while (iWhoseTurn != MY_TURN) { }
    bIWantToEnter = true;
  }
}

// Code of critical section comes here ...

iWhoseTurn = HIS_TURN;
bIWantToEnter = false;
Another similar algorithm is the Peterson Algorithm.
// Indicate the intent to enter the critical section
bIWantToEnter = true;
// Be polite and act as if it is not our
// turn to enter the critical section
iWhoseTurn = HIS_TURN;
// Wait until the other process either does not
// intend to enter the critical section or
// acts as if it is our turn to enter
while (bHeWantsToEnter && (iWhoseTurn != MY_TURN)) { }

// Code of critical section comes here ...

bIWantToEnter = false;
Other variants of the two algorithms exist, supporting various numbers of processes and providing various fairness guarantees. When the only means for synchronization is a shared memory that supports atomic reads and writes, any fair deterministic solution of the mutual exclusion problem for N processes has been proven to need at least N shared variables.
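One well known variant that supports N processes over plain atomic reads and writes is the Lamport bakery algorithm. The following is a minimal sketch of it; the PROCESS_COUNT constant, the array names and the function names are assumptions introduced for this illustration.

// A minimal sketch of the Lamport bakery algorithm for N processes
bool abChoosing [PROCESS_COUNT]; // Initialized to false
int aiNumber [PROCESS_COUNT];    // Initialized to zero

void EnterCriticalSection (int iMe)
{
  // Draw a ticket one higher than any ticket already issued
  abChoosing [iMe] = true;
  int iMax = 0;
  for (int i = 0 ; i < PROCESS_COUNT ; i ++)
  {
    if (aiNumber [i] > iMax) iMax = aiNumber [i];
  }
  aiNumber [iMe] = iMax + 1;
  abChoosing [iMe] = false;

  // Wait until every process with a smaller ticket, or an equal
  // ticket and a smaller index, has left the critical section
  for (int i = 0 ; i < PROCESS_COUNT ; i ++)
  {
    while (abChoosing [i]) { }
    while (aiNumber [i] != 0 &&
           ((aiNumber [i] < aiNumber [iMe]) ||
            ((aiNumber [i] == aiNumber [iMe]) && (i < iMe)))) { }
  }
}

void LeaveCriticalSection (int iMe)
{
  aiNumber [iMe] = 0;
}

Note that the sketch uses two shared variables per process, which is consistent with the lower bound of N shared variables quoted above.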
From a practical point of view, our assumption that shared memory can only be atomically read and written is broadly correct but often too stringent. Many processors offer atomic operations such as test-and-set or compare-and-swap, which test whether a shared variable meets a condition and set its value only if it does. The utility of these operations can be illustrated by fixing the naive solution to the mutual exclusion problem, which is made safe by using the AtomicSwap operation. The operation atomically sets a new value of a shared variable and returns the previous value.
while (AtomicSwap (bCriticalSectionBusy, true))
{
  // Active waiting cycle until the
  // value of the bCriticalSectionBusy
  // variable has changed from false to true
}

// Code of critical section comes here ...

bCriticalSectionBusy = false;
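On real hardware, the hypothetical AtomicSwap operation maps to standard primitives. As a sketch, the same spin lock can be expressed with the C11 atomic_flag type; the variable and function names are assumptions introduced here.

#include <stdatomic.h>

// A spin lock equivalent to the AtomicSwap based solution,
// expressed with the standard C11 atomic_flag type
static atomic_flag oCriticalSectionBusy = ATOMIC_FLAG_INIT;

void EnterCriticalSection (void)
{
  // atomic_flag_test_and_set atomically sets the flag and
  // returns its previous value, much like AtomicSwap
  while (atomic_flag_test_and_set (&oCriticalSectionBusy))
  {
    // Active waiting cycle until the flag was observed clear
  }
}

void LeaveCriticalSection (void)
{
  atomic_flag_clear (&oCriticalSectionBusy);
}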
When the only means for synchronization is a shared memory that supports atomic compare-and-swap alongside atomic reads and writes, any fair deterministic solution of the mutual exclusion problem for N processes has been proven to need at least N/2 shared variables.
Active waiting is useful when the potential for contention is relatively small and the duration of waiting is relatively short. In such cases, the overhead of active waiting is smaller than the overhead of passive waiting, which necessarily includes context switching. Some situations also require the use of active waiting, for example when there is no other process that would wake up the passively waiting process.
Passive waiting is an approach to process synchronization where a process that waits for a condition does so by sleeping, to be woken by another process that has either caused or observed the condition to become true.
Consider the solution to the mutual exclusion problem using the AtomicSwap operation. A naive extension of the solution to support passive waiting uses the Sleep and Wake operations, which remove a process from the ready queue and return it there, together with the oWaitingProcesses shared queue variable, which keeps track of the waiting processes.
if (AtomicSwap (bCriticalSectionBusy, true))
{
  // The critical section is busy, put
  // the process into the waiting queue
  oWaitingProcesses.Put (GetCurrentProcess ());
  // Wait until somebody wakes the process
  Sleep ();
}

// Code of critical section comes here ...

// See if any process is waiting in the queue
oWaitingProcess = oWaitingProcesses.Get ();
if (oWaitingProcess)
{
  // A process was waiting, let it enter the critical section
  Wake (oWaitingProcess);
}
else
{
  // No process was waiting, mark the critical section as free
  bCriticalSectionBusy = false;
}
One major flaw of the naive solution is that the decision to wait and the subsequent adding of the process to the wait queue and removing of the process from the ready queue are not performed atomically. It is possible for a process to decide to wait just before the critical section becomes free, but to be added to the wait queue and removed from the ready queue only after the critical section has become free. Such a process would continue waiting even though the critical section would be free.
Another major flaw of the naive solution is that the access to the shared queue variable is not synchronized. The implementation of the shared queue variable would be a critical section in itself.
Both flaws of the naive solution can be remedied, for example by employing active waiting both to make the decision to wait and the subsequent queue operations atomic and to synchronize access to the shared queue variable. The complete solution is too long to be presented in one piece; a condensed sketch of its overall shape follows.
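The sketch below uses an AtomicSwap based spin lock to protect both the bCriticalSectionBusy variable and the waiting queue; the SleepAndUnlock operation, which atomically releases the spin lock and removes the process from the ready queue, is an assumption of this sketch, as are the function names.

// Shared state, protected by the bQueueLock spin lock
bool bQueueLock = false;
bool bCriticalSectionBusy = false;

void LockQueue (void) { while (AtomicSwap (bQueueLock, true)) { } }
void UnlockQueue (void) { bQueueLock = false; }

void EnterCriticalSection (void)
{
  LockQueue ();
  if (bCriticalSectionBusy)
  {
    // The decision to wait and the queue update are both made
    // while holding the spin lock, closing the window in which
    // a wake up could be missed
    oWaitingProcesses.Put (GetCurrentProcess ());
    // Atomically release the spin lock and remove the
    // process from the ready queue (an assumed operation)
    SleepAndUnlock (bQueueLock);
  }
  else
  {
    bCriticalSectionBusy = true;
    UnlockQueue ();
  }
}

void LeaveCriticalSection (void)
{
  LockQueue ();
  oWaitingProcess = oWaitingProcesses.Get ();
  if (oWaitingProcess)
  {
    // Hand the critical section over to the woken process,
    // bCriticalSectionBusy therefore stays true
    Wake (oWaitingProcess);
  }
  else
  {
    // No process was waiting, mark the critical section as free
    bCriticalSectionBusy = false;
  }
  UnlockQueue ();
}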
Passive waiting is useful when the potential for contention is relatively high or the duration of waiting is relatively long. Passive waiting also requires the existence of another process that will wake up the passively waiting process.
From a practical perspective, many synchronization problems include requirements that bound waiting. Besides the intuitive requirement of fairness, three categories of solutions to synchronization problems are defined:
A wait free solution guarantees every process will finish in a finite number of its own steps. This is the strongest category, where bounds on waiting always exist.
A lock free solution guarantees some process will finish in a finite number of its own steps. This is a somewhat weaker category, with the practical implication that starvation of all processes will not occur and progress will not stop should any single process block or crash; a sketch of a lock free counter follows after this list.
An obstruction free solution guarantees every process will finish in a finite number of its own steps provided other processes take no steps. This is an even weaker category, with the practical implication that progress will not stop should any single process block or crash.
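To make the lock free category concrete, the following is a minimal sketch of a lock free counter based on compare-and-swap; the names are assumptions introduced for this illustration. A failed compare-and-swap means some other process has updated the counter in the meantime, so some process always makes progress, but an individual process may retry indefinitely, which is why the counter is lock free rather than wait free.

#include <stdatomic.h>

// A shared counter updated without locks
static _Atomic int iSharedCounter = 0;

int AtomicIncrement (void)
{
  int iOld, iNew;
  do
  {
    iOld = atomic_load (&iSharedCounter);
    iNew = iOld + 1;
    // The compare and swap succeeds only if the counter
    // still holds the value read above, otherwise retry
  } while (!atomic_compare_exchange_weak (&iSharedCounter, &iOld, iNew));
  return iNew;
}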
To be done.
Wait free hierarchy based on consensus number: shared registers (1); test and set, swap, queue, stack (2); atomic assignment to N shared registers (2N-2); memory to memory move and swap, queue with peek, compare and swap (infinity).
Impossibility results. A visible object for N processes is an object for which any of the N processes can execute a sequence of operations that will necessarily be observed by some process, regardless of the operations executed by the other processes. An implementation of a visible object for N processes over shared registers takes at least N of those shared registers if the registers can do either read and write or conditional update, or at least N/2 of those shared registers if the registers can do both read and write and conditional update. The same bounds hold for starvation free mutual exclusion.
Randomized synchronization primitives.
References.
Maurice Herlihy: Wait-Free Synchronization.
Faith Fich, Danny Hendler, Nir Shavit: On the Inherent Weakness of Conditional Synchronization Primitives.