Process
Synchronization in
Operating Systems
Race Conditions
Definition
A situation where the result of concurrent execution depends on the unpredictable order in which processes access shared data.
Example Scenario
Two processes increment a shared variable counter = 0. Process A reads counter = 0, Process B reads counter = 0. Both increment and write back, so counter ends at 1 instead of 2; one increment is lost.
Solution
Enforce mutual exclusion in critical
sections to prevent simultaneous
access to shared resources.
Race conditions occur due to lack of synchronization when accessing shared resources. The key terms to understand
are critical section (code accessing shared resources) and race condition (incorrect output due to concurrent access).
Critical Section Problem
Definition
A protocol to ensure only one process executes its critical
section at a time, preventing race conditions and maintaining
data consistency.
Requirements
Mutual Exclusion: At most one process executes in its critical section at a time
Progress: If no process is in its critical section and some processes want to enter, the choice of which enters next cannot be postponed indefinitely
Bounded Waiting: There is a bound on how many times other processes may enter their critical sections after a process has requested entry
The structure of a process with a critical section follows this pattern:
do {
    entry section
    critical section
    exit section
    remainder section
} while (true);
The goal is to design entry and exit protocols that satisfy all requirements, ensuring safe concurrent execution.
Synchronization Mechanisms
Hardware-Based Solutions
Test-and-Set: an atomic instruction that sets a lock and returns its previous value. Swap: atomically exchanges the values of two variables. Both are simple and fast, but processes waiting for the lock busy-wait (spin), wasting CPU cycles.
Software-Based Solutions
Peterson's Algorithm: a two-process solution using a shared turn variable and per-process flag variables. Dekker's Algorithm: an earlier two-process solution that provides the same mutual-exclusion guarantee.
Higher-Level Solutions
Semaphores, Monitors, and Locks provide more abstract and easier-to-use synchronization primitives for developers.
The key focus is understanding the trade-offs between simplicity and efficiency in these different synchronization approaches.
Semaphores
Binary Semaphore
Values 0 or 1 only, similar to a
mutex lock. Used for simple
mutual exclusion scenarios.
Counting Semaphore
Can take any integer value; when negative, its magnitude equals the number of blocked processes. Used for managing multiple instances of a resource.
Atomic Operations
Wait (P): decrements the semaphore; the caller blocks if the value becomes negative. Signal (V): increments the semaphore and wakes one waiting process, if any.
A semaphore is a synchronization tool (integer variable) that manages access to shared resources. For example, with
Semaphore S = 1, Process 1 executes wait(S) making S = 0 and enters critical section. Process 2 then executes wait(S),
making S = -1 and blocks. When Process 1 executes signal(S), S becomes 0 and Process 2 unblocks.
Semaphores are particularly useful for controlling access to a fixed number of resources in a system.
Read More
1. What is process synchronization and why is it important in operating systems?
2. What are the common synchronization problems faced in operating systems?
3. What are semaphores, and how do they work in process synchronization?
4. What is the difference between a mutex and a semaphore?
5. How does deadlock occur in process synchronization, and what are its prevention strategies?