Synchronization:
• Concurrency arises in three different contexts:
o Multiple applications - Multiple programs are allowed to dynamically
share processing time.
o Structured applications - Some applications can be effectively
programmed as a set of concurrent processes.
o Operating system structure - The OS itself is implemented as a set
of processes.
• Concurrent processes (or threads) often need access to shared data and
shared resources.
o Processes use and update shared data such as shared variables, files,
and databases.
• Writing must be mutually exclusive to prevent a condition leading to
inconsistent data views.
• Maintaining data consistency requires mechanisms to ensure the orderly
execution of cooperating processes.
Race Condition
• A race condition is a situation where several processes access (read/write)
shared data concurrently and the final value of the shared data depends upon
which process finishes last.
o The actions performed by concurrent processes will then depend on the
order in which their execution is interleaved.
• To prevent race conditions, concurrent processes must be coordinated or
synchronized.
o This means that neither process will proceed beyond a certain point in the
computation until both have reached their respective synchronization
points.
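A race of this kind can be reproduced deterministically by hand-interleaving the load/increment/store steps of two counter updates. This is a minimal single-threaded sketch of one bad interleaving, not a concurrent program; the helper name `simulate_lost_update` is ours, not from the notes:

```c
#include <assert.h>

/* Two processes each intend to increment `shared` once.  Each increment is
 * really three steps: load, add, store.  If the second process loads before
 * the first stores, one update is lost. */
int simulate_lost_update(void) {
    int shared = 0;
    int tmp_a = shared;   /* P0 loads 0                            */
    int tmp_b = shared;   /* P1 loads 0, before P0 stores          */
    shared = tmp_a + 1;   /* P0 stores 1                           */
    shared = tmp_b + 1;   /* P1 stores 1 -- P0's increment is lost */
    return shared;        /* 1, although two increments ran        */
}
```

Had the two increments been mutually exclusive, the result would be 2; this interleaving yields 1 instead, which is exactly the "final value depends on which process finishes last" behavior described above.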
Critical Section/Region
1. Consider a system consisting of n processes all competing to use some
shared data.
2. Each process has a code segment, called the critical section, in which the
shared data is accessed.
The Critical-Section Problem
1. The critical-section problem is to design a protocol that the processes can
use to cooperate. The protocol must ensure that when one process is executing in its
critical section, no other process is allowed to execute in its critical section.
2. The critical-section problem is to design a protocol that the processes can
use so that their actions will not depend on the order in which their execution is
interleaved (possibly on many processors).
Solution to Critical Section Problem - Requirements
• A solution to the critical-section problem must satisfy the following three
requirements:
1. Mutual Exclusion. If process Pi is executing in its critical section, then
no other processes can be executing in their critical sections.
▪ Implications:
▪ Critical sections should be focused and short.
▪ A process should not enter an infinite loop inside its critical section.
▪ If a process somehow halts/waits in its critical section, it
must not interfere with other processes.
2. Progress. If no process is executing in its critical section and there exist
some processes that wish to enter their critical sections, then the
selection of the process that will enter the critical section next cannot
be postponed indefinitely.
▪ If only one process wants to enter, it should be able to.
▪ If two or more want to enter, one of them should succeed.
3. Bounded Waiting. A bound must exist on the number of times that other
processes are allowed to enter their critical sections after a process has
made a request to enter its critical section and before that request is
granted.
▪ Assume that each process executes at a nonzero speed.
▪ No assumption is made concerning the relative speed of the n processes.
Types of Solutions
• Software solutions - Algorithms whose correctness does not rely on any other
assumptions.
• Hardware solutions - Rely on some special machine instructions.
• Operating System solutions - Provide some functions and data structures to the
programmer through system/library calls.
• Programming Language solutions - Linguistic constructs provided as part of a
language.
Initial Attempts to Solve the Problem
• The execution of critical sections must be mutually exclusive: at any time,
only one process is allowed to execute in its critical section (even with
multiple processors).
• So each process must first request permission to enter its critical section.
• The section of code implementing this request is called the Entry Section
(ES).
• The critical section (CS) might be followed by a Leave/Exit Section (LS).
• The remaining code is the Remainder Section (RS).
Software Solutions
• We consider first the case of 2 processes: Algorithms 1, 2, and 3.
• Then we generalize to n processes: the Bakery algorithm.
• Initial notation: only 2 processes, P0 and P1. When presenting process Pi,
Pj always denotes the other process (i != j).
• General structure of process Pi:
do {
    entry section
    critical section
    exit section
    remainder section
} while (1);
• Processes may share some common variables to synchronize/coordinate
their actions.
Algorithm 1
Shared variables:
int turn; // initially turn = 0
// turn == i means Pi can enter its critical section
Process Pi:
do {
    while (turn != i);
    critical section
    turn = j;
    remainder section
} while (1);
• Satisfies mutual exclusion, but not progress.
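The progress failure of strict alternation can be shown with a small single-threaded sketch of the `turn` variable (the helper names `can_enter` and `leave` are ours; the real algorithm runs as two concurrent processes):

```c
#include <assert.h>

static int turn = 0;               /* shared: whose turn it is */

/* Entry test of Algorithm 1 for process i (non-blocking probe). */
static int can_enter(int i) { return turn == i; }

/* Exit section: hand the turn to the other process j. */
static void leave(int j) { turn = j; }

/* Strict alternation: after P0 leaves, P0 cannot re-enter until P1 has
 * taken (and given back) its turn -- even if P1 never wants the CS. */
int algorithm1_progress_violated(void) {
    turn = 0;
    if (!can_enter(0)) return 0;   /* P0 enters its critical section      */
    leave(1);                      /* P0 exits; turn = 1                  */
    /* P1 stays in its remainder section forever; P0 wants in again:      */
    return !can_enter(0);          /* 1: P0 blocked although CS is free   */
}
```

This is exactly the progress violation: the critical section is free, only P0 wants to enter, yet the decision is postponed indefinitely because it depends on P1.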
Algorithm 2
Shared variables:
boolean flag[2]; // initially flag[0] = flag[1] = false
// flag[i] = true means that Pi is ready to enter its critical section
Process Pi:
do {
    flag[i] = true;
    while (flag[j]);
    critical section
    flag[i] = false;
    remainder section
} while (1);
• Satisfies mutual exclusion, but not the progress requirement.
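The failure mode here is the interleaving in which both processes set their flags before either reaches its while loop; each then waits on the other forever. A deterministic sketch of that interleaving (the helper name is ours):

```c
#include <assert.h>

static int flag[2] = {0, 0};   /* shared: flag[i] set when Pi wants the CS */

/* Both processes execute `flag[i] = true` before either reaches its
 * while loop; each then sees the other's flag set and would spin forever. */
int algorithm2_deadlocks(void) {
    flag[0] = 1;               /* P0: flag[i] = true                       */
    flag[1] = 1;               /* P1: flag[i] = true                       */
    /* P0's entry test is `while (flag[1]);` -- it would spin.             */
    /* P1's entry test is `while (flag[0]);` -- it would spin.             */
    return flag[1] && flag[0]; /* 1: both blocked, nobody enters           */
}
```

Mutual exclusion still holds (neither process enters), but progress fails: both want to enter and neither ever succeeds.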
Algorithm 3
• Combined shared variables of algorithms 1 and 2.
Process Pi:
do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j);
    critical section
    flag[i] = false;
    remainder section
} while (1);
• Meets all three requirements; solves the critical-section problem for two
processes.
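Algorithm 3 (Peterson's algorithm) can be checked mechanically under sequential consistency by exploring every interleaving of the two processes and verifying that they are never in their critical sections at the same time. The following hand-rolled model checker is our sketch, not part of the notes:

```c
#include <assert.h>
#include <string.h>

/* Global state of the two-process protocol plus a program counter per
 * process:
 *   pc 0: flag[i] = true        pc 1: turn = j
 *   pc 2: spin while (flag[j] && turn == j), else enter CS
 *   pc 3: in critical section   pc 4: done (flag[i] cleared)      */
typedef struct { int flag[2]; int turn; int pc[2]; } State;

static int visited[2][2][2][5][5];
static int violation;

static void explore(State s);

/* Execute one atomic step of process i from state s (s is a copy). */
static void step(State s, int i) {
    int j = 1 - i;
    switch (s.pc[i]) {
    case 0: s.flag[i] = 1; s.pc[i] = 1; break;
    case 1: s.turn = j;    s.pc[i] = 2; break;
    case 2: if (!(s.flag[j] && s.turn == j)) s.pc[i] = 3; break; /* spin */
    case 3: s.flag[i] = 0; s.pc[i] = 4; break;                   /* exit */
    default: return;                                             /* done */
    }
    explore(s);
}

/* Depth-first search over all interleavings; `visited` terminates it. */
static void explore(State s) {
    if (visited[s.flag[0]][s.flag[1]][s.turn][s.pc[0]][s.pc[1]]++)
        return;                        /* state already examined        */
    if (s.pc[0] == 3 && s.pc[1] == 3)
        violation = 1;                 /* both in the CS at once        */
    step(s, 0);                        /* interleaving: P0 moves next   */
    step(s, 1);                        /* interleaving: P1 moves next   */
}

/* Returns 1 iff no reachable interleaving puts both processes in
 * their critical sections simultaneously. */
int peterson_mutual_exclusion_holds(void) {
    State s = { {0, 0}, 0, {0, 0} };
    memset(visited, 0, sizeof visited);
    violation = 0;
    explore(s);
    return !violation;
}
```

The model assumes each statement executes atomically under sequential consistency, which is what the interleaving semantics in these notes assumes; on real hardware with relaxed memory ordering, Peterson's algorithm additionally needs memory barriers or atomic operations.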
Bakery Algorithm
• An algorithm that solves the critical-section problem for n processes.
• Before entering its critical section, a process receives a number. The holder of
the smallest number enters the critical section.
• If processes Pi and Pj receive the same number: if i < j, then Pi is served first;
else Pj is served first.
• The numbering scheme always generates numbers in non-decreasing order of
enumeration; e.g., 1,2,3,3,3,3,4,5...
• Notation: lexicographical order on (ticket #, process id #) pairs:
(a,b) < (c,d) if a < c, or if a = c and b < d.
max(a0, ..., an-1) is a number k such that k >= ai for i = 0, ..., n-1.
• Shared data:
boolean choosing[n]; // initialized to false
int number[n]; // initialized to zero
Process Pi:
do {
    choosing[i] = true;
    number[i] = max(number[0], ..., number[n-1]) + 1;
    choosing[i] = false;
    for (j = 0; j < n; j++) {
        while (choosing[j]);
        while ((number[j] != 0) && ((number[j], j) < (number[i], i)));
    }
    critical section
    number[i] = 0;
    remainder section
} while (1);
• To prove that the Bakery algorithm works, we have to show that if Pi is in its
critical section and Pk (k != i) has already chosen its number[k] != 0, then
(number[i], i) < (number[k], k).
• Consider that Pi is already in its critical section and Pk is trying to enter its
critical section. When process Pk executes the second while loop for j = i, it
finds that:
number[i] != 0
(number[i], i) < (number[k], k)
Thus Pk continues looping in the while loop until Pi leaves the critical section.
• Note that processes enter their critical sections on a first-come, first-served
basis.
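The two pieces of arithmetic the algorithm relies on, drawing a ticket and the lexicographic tie-break on (ticket, process id) pairs, can be written directly. A minimal sketch (the function names are ours):

```c
#include <assert.h>

/* (na, a) < (nb, b) in the Bakery algorithm's lexicographic order:
 * the smaller ticket wins; equal tickets fall back to the smaller id. */
int ticket_less(int na, int a, int nb, int b) {
    return na < nb || (na == nb && a < b);
}

/* max(a[0], ..., a[n-1]): used when drawing a new ticket. */
int ticket_max(const int *a, int n) {
    int m = a[0];
    for (int i = 1; i < n; i++)
        if (a[i] > m) m = a[i];
    return m;
}
```

A process i drawing a ticket executes `number[i] = ticket_max(number, n) + 1`; two processes that read the same maximum concurrently get equal tickets, and `ticket_less` then serves the one with the smaller process id first.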
What about Process Failure?
• If all 3 criteria (ME, progress, bounded waiting) are satisfied, then a valid
solution will provide robustness against failure of a process in its remainder
section (RS), since failure in the RS is just like having an infinitely long RS.
• However, no valid solution can provide robustness against a process failing in
its critical section (CS). A process Pi that fails in its CS does not signal that
fact to other processes: for them, Pi is still in its CS.
Drawback of Software Solutions
• Processes that are requesting to enter their critical section are busy waiting
(consuming processor time needlessly).
• If critical sections are long, it would be more efficient to block processes that
are waiting.
Mutual Exclusion - Hardware Support
• Interrupt disabling
o A process runs until it invokes an operating-system service or until it is
interrupted.
o Disabling interrupts guarantees mutual exclusion.
o The processor is limited in its ability to interleave programs.
o Multiprocessing
▪ Disabling interrupts on one processor will not guarantee mutual
exclusion.
• Special machine instructions
o Normally, access to a memory location excludes other access to that
same location.
o Extension: designers have proposed machine instructions that perform
2 actions atomically (indivisibly) on the same memory location (e.g.,
reading and writing).
o The execution of such an instruction is also mutually exclusive (even on
multiprocessors).
o They can be used to provide mutual exclusion but need to be
complemented by other mechanisms to satisfy the other 2 requirements
of the CS problem.
Test-and-Set Synchronization Hardware
• Test and modify the content of a word atomically.
boolean TestAndSet(boolean &target) {
    boolean rv = target;
    target = true;
    return rv;
}
Mutual Exclusion with Test-and-Set
Shared data: boolean lock = false;
Process Pi:
do {
    while (TestAndSet(lock));
    critical section
    lock = false;
    remainder section
} while (1);
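Modern C exposes this hardware primitive through `<stdatomic.h>`: `atomic_flag_test_and_set` atomically reads the old value and sets the flag, mirroring the `TestAndSet` pseudocode above. A minimal spin-lock sketch (the `spin_*` names are ours):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;   /* shared: initially clear */

/* Entry section: spin until the old value was false (lock was free). */
void spin_acquire(void) {
    while (atomic_flag_test_and_set(&lock))
        ;                                     /* busy wait */
}

/* Exit section: clear the flag so one waiter's test-and-set succeeds. */
void spin_release(void) {
    atomic_flag_clear(&lock);
}

/* Non-blocking probe: true iff the lock was free and is now ours. */
bool spin_try_acquire(void) {
    return !atomic_flag_test_and_set(&lock);
}
```

Note that this lock inherits the drawback discussed earlier: waiters busy-wait, so it only provides mutual exclusion, not the bounded-waiting guarantee.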