Concurrent processes or threads often need access to shared data and shared resources. If access to shared data is not controlled, the data can easily end up in an inconsistent state. Maintaining data consistency therefore requires mechanisms that ensure the orderly execution of cooperating processes, and various process synchronization methods are used for this purpose. In the producer-consumer problem discussed earlier, the solution allows at most one item less than the buffer size to be stored; providing a solution in which the buffer can use its entire capacity of N items is not as simple. The producer and consumer share the data structure ‘buffer’ and use the other variables shown below:
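A minimal sketch of these shared declarations in C might look as follows; the type item, the constant BUFFER_SIZE, the capacity of 10, and the index variables in and out are illustrative assumptions:

#define BUFFER_SIZE 10          /* capacity N of the buffer (assumed value) */

typedef int item;               /* type of a buffered item (assumed)        */

item buffer[BUFFER_SIZE];       /* shared circular buffer                   */
int  in  = 0;                   /* index of the next free slot (producer)   */
int  out = 0;                   /* index of the next full slot (consumer)   */
int  counter = 0;               /* number of items currently in the buffer  */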
The code for the producer process is:
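A sketch of the producer loop, assuming the shared declarations above and an illustrative local variable nextProduced:

item nextProduced;              /* most recently produced item (local) */

while (1) {
    /* produce an item and place it in nextProduced */
    while (counter == BUFFER_SIZE)
        ;                       /* busy-wait: the buffer is full */
    buffer[in] = nextProduced;  /* add the new item to the buffer */
    in = (in + 1) % BUFFER_SIZE;
    counter++;                  /* one more item in the buffer */
}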
The code for the consumer process is:
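Correspondingly, a sketch of the consumer loop, with an illustrative local variable nextConsumed:

item nextConsumed;              /* most recently removed item (local) */

while (1) {
    while (counter == 0)
        ;                       /* busy-wait: the buffer is empty */
    nextConsumed = buffer[out]; /* remove an item from the buffer */
    out = (out + 1) % BUFFER_SIZE;
    counter--;                  /* one less item in the buffer */
    /* consume the item in nextConsumed */
}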
The producer and consumer routines may not work correctly if they are executed concurrently. Suppose that the value of counter is currently 5, and that the producer and the consumer execute the statements counter++ and counter-- concurrently. Following the execution of these two statements, the value of counter may be 4, 5, or 6! The only correct result is counter == 5, which is generated if the producer and the consumer execute separately. Suppose counter++ is implemented in machine code as the following instructions:
MOV R1, counter
INC R1
MOV counter, R1
whereas counter-- may be implemented as:
MOV R2, counter
DEC R2
MOV counter, R2
If the producer and the consumer attempt to update counter concurrently, these machine language statements may get interleaved. The interleaving depends upon how the producer and consumer processes are scheduled. Assume counter is initially 5. One possible interleaving of the statements is:
producer: MOV R1, counter (R1 = 5)
INC R1 (R1 = 6)
consumer: MOV R2, counter (R2 = 5)
DEC R2 (R2 = 4)
producer: MOV counter, R1 (counter = 6)
consumer: MOV counter, R2 (counter = 4)
The value of counter will be 4, whereas the correct result should be 5. The value of counter could also be 6 if the producer executes MOV counter, R1 at the end. The reason for this incorrect state is that we allowed both processes to manipulate the variable counter concurrently.
A situation like this, where several processes access and manipulate the same data
concurrently and the outcome of the manipulation depends on the particular order in
which the access takes place, is called a race condition. To guard against such race
conditions, we require synchronization of processes.
Concurrent transactions in a bank or in an airline reservation (or travel agent) office are a couple of other examples that illustrate the critical section problem. Here we show the interleaving of two bank transactions, a deposit and a withdrawal; the code for the two transactions is shown in Figure 18.1.
Figure 18.1 Bank transactions—deposit and withdrawal
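A minimal C sketch of such deposit and withdrawal routines is given below; the names Balance, A, and B, and the three-step load/compute/store structure, are illustrative assumptions chosen to be consistent with the MOV Balance, A instruction mentioned in the discussion that follows.

int Balance;                 /* shared account balance (in rupees) */

void deposit(int amount)
{
    int A;                   /* corresponds to register A */
    A = Balance;             /* MOV A, Balance */
    A = A + amount;          /* ADD A, amount  */
    Balance = A;             /* MOV Balance, A */
}

void withdraw(int amount)
{
    int B;                   /* corresponds to register B */
    B = Balance;             /* MOV B, Balance */
    B = B - amount;          /* SUB B, amount  */
    Balance = B;             /* MOV Balance, B */
}

For example, with an assumed balance of Rs. 50,000, a deposit of Rs. 10,000, and a withdrawal of Rs. 5,000, an interleaving in which both transactions read the old balance and the withdrawal's final store executes last loses the deposit entirely; these amounts are assumptions consistent with the figures quoted in the next paragraph.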
Here is what may happen if the two transactions are allowed to execute concurrently, i.e., if their statements are allowed to interleave: the final balance may be Rs. 45,000, i.e., a loss of Rs. 5,000. If MOV Balance, A executes at the end instead, the result will be a gain of Rs. 5,000. In both cases, the final result is wrong.
Critical Section: A piece of code in a cooperating process in which the process may update shared data (a variable, file, database, etc.).
Critical Section Problem: Serialize executions of critical sections in cooperating
processes.
When a process executes code that manipulates shared data (or resource), we say that
the process is in its critical section (for that shared data). The execution of critical
sections must be mutually exclusive: at any time, only one process is allowed to execute
in its critical section (even with multiple processors). So each process must first request
permission to enter its critical section. The section of code implementing this request is called the entry section; the critical section is followed by an exit section, and the remaining code is the remainder section. The critical section problem is to design a protocol that the processes can use so that their actions will not depend on the order in which their execution is interleaved (possibly on many processors).
There can be three kinds of solutions to the critical section problem: software-based solutions, hardware-based solutions, and operating system-based solutions.
We discuss the software-based solutions first. Regardless of the type of solution, the general structure of a solution should be as follows; the entry and exit sections comprise the solution to the problem.
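A C-like sketch of this general structure for a cooperating process is shown below; the four sections are marked by comments, and the infinite loop simply indicates that the process enters and leaves its critical section repeatedly:

while (1) {
    /* entry section:     request permission to enter the critical section */

    /* critical section:  access and manipulate the shared data            */

    /* exit section:      announce that the critical section is now free   */

    /* remainder section: the rest of the process's code                   */
}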
A solution to the critical section problem must satisfy the following three requirements:
1. Mutual exclusion: if a process is executing in its critical section, then no other process can be executing in its critical section.
2. Progress: if no process is executing in its critical section and some processes wish to enter their critical sections, then the selection of the process that will enter next cannot be postponed indefinitely, and only processes that are not in their remainder sections may participate in the decision.
3. Bounded waiting: there is a bound on the number of times other processes are allowed to enter their critical sections after a process has made a request to enter its own critical section and before that request is granted.
While formulating a solution, we must keep the following assumptions in mind: each process executes at a nonzero speed, but no assumption can be made about the relative speeds of the processes.
In this section, algorithms that are applicable to two processes will be discussed. The processes are P0 and P1; when presenting Pi, we use Pj to denote the other process. We assume that basic machine language instructions such as load and store are executed atomically, that is, each such instruction completes in its entirety without interruption.
The first approach is to let the processes share a common integer variable turn, initialized to 0 or 1. If turn == i, then process Pi is allowed to execute in its critical section. The structure of process Pi is as follows:
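A minimal C-like sketch of this structure, where i is the index of the current process and j = 1 - i is the index of the other process:

int turn = 0;           /* shared: whose turn it is (0 or 1) */

/* code for process Pi */
do {
    while (turn != i)
        ;               /* busy-wait until it is Pi's turn         */

    /* critical section */

    turn = j;           /* hand the turn over to the other process */

    /* remainder section */
} while (1);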
This solution ensures mutual exclusion, that is, only one process at a time can be in its critical section. However, it does not satisfy the progress requirement, since it requires strict alternation of processes in the execution of the critical section. For example, if turn == 0 and P1 is ready to enter its critical section, P1 cannot do so even though P0 may be in its remainder section. The bounded waiting condition is satisfied, though, because there is an alternation between the turns of the two processes.
In the second algorithm, the variable turn is replaced with an array boolean flag[2] whose elements are initialized to false. If flag[i] is true, then process Pi is ready to enter its critical section. The structure of process Pi is shown below:
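Again as a C-like sketch, with i and j as before:

#include <stdbool.h>

bool flag[2] = { false, false };   /* shared: flag[i] is true when Pi wants to enter */

/* code for process Pi */
do {
    flag[i] = true;     /* Pi announces that it wants to enter       */
    while (flag[j])
        ;               /* busy-wait while Pj is also ready to enter */

    /* critical section */

    flag[i] = false;    /* Pi is done with its critical section      */

    /* remainder section */
} while (1);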
In this algorithm, Pi first sets flag[i] = true, signaling that it is ready to enter its critical section. Then Pi checks that process Pj is not also ready to enter its critical section. If Pj were ready, Pi would wait until Pj indicated that it no longer needs to be in the critical section (that is, until flag[j] becomes false). At this point, Pi would enter the critical section. On exiting the critical section, Pi would set flag[i] = false, allowing the other process to enter its critical section. In this solution, the mutual exclusion requirement is satisfied. Unfortunately, the progress requirement is not met; consider the following execution sequence:
T0: P0 sets flag[0] = true
T1: P1 sets flag[1] = true
Now both the processes are looping forever in their respective while statements.