Concurrency & Synchronization | Operating System - Computer Science Engineering (CSE)

Process Synchronization | Introduction

On the basis of synchronization, processes are categorized as one of the following two types:

  • Independent Process : Execution of one process does not affect the execution of other processes.
  • Cooperative Process : Execution of one process affects the execution of other processes.

The process synchronization problem arises in the case of cooperative processes because resources are shared among them.
 
Critical Section Problem

A critical section is a code segment that can be accessed by only one process at a time. It contains shared variables that need to be synchronized to maintain the consistency of data.
 

In the entry section, the process requests entry into the critical section.
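
The general structure of a process that uses a critical section can be sketched as follows. This is only a skeleton; the entry and exit sections are filled in by a concrete solution such as Peterson's algorithm, TestAndSet, or semaphores:

/* Skeleton of a process Pi that uses a critical section. */
void process_i(void)
{
    while (1) {
        /* entry section: request permission to enter the critical section */

        /* critical section: access the shared variables */

        /* exit section: release the critical section */

        /* remainder section: the remaining work of the process */
    }
}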
 
Any solution to the critical section problem must satisfy three requirements:

  • Mutual Exclusion : If a process is executing in its critical section, then no other process is allowed to execute in the critical section.
  • Progress : If no process is executing in its critical section and other processes wish to enter, then the selection of the next process to enter cannot be postponed indefinitely by processes that are not competing for the critical section.
  • Bounded Waiting : A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

Peterson’s Solution

Peterson’s Solution is a classic software-based solution to the critical section problem.

In Peterson’s solution, we have two shared variables:

  • boolean flag[i] : initialized to FALSE; initially no process is interested in entering the critical section.
  • int turn : indicates whose turn it is to enter the critical section.
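
A minimal C sketch of Peterson's algorithm for two processes (numbered 0 and 1) using these two shared variables; the function names are illustrative:

#include <stdbool.h>

/* shared between the two processes/threads */
volatile bool flag[2] = { false, false };  /* flag[i]: process i wants to enter */
volatile int  turn = 0;                    /* whose turn it is to enter */

void enter_cs(int i)            /* i is 0 or 1 */
{
    int j = 1 - i;              /* the other process */
    flag[i] = true;             /* declare interest */
    turn = j;                   /* give the other process the turn */
    while (flag[j] && turn == j)
        ;                       /* busy-wait while the other process is
                                   interested and it is its turn */
}

void exit_cs(int i)
{
    flag[i] = false;            /* no longer interested */
}

Note that this is a conceptual sketch: on modern hardware, compilers and CPUs reorder memory operations, so a real implementation needs atomic operations or memory barriers.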
     

Peterson’s Solution preserves all three conditions:

  • Mutual Exclusion is assured, as only one process can access the critical section at any time.
  • Progress is also assured, as a process outside the critical section does not block other processes from entering the critical section.
  • Bounded Waiting is preserved, as every process gets a fair chance.

Disadvantages of Peterson’s Solution

  • It involves busy waiting.
  • It is limited to 2 processes.

TestAndSet

TestAndSet is a hardware solution to the synchronization problem. In TestAndSet, we have a shared lock variable which can take either of two values:

0 Unlock
1 Lock

Before entering the critical section, a process inquires about the lock. If it is locked, it keeps waiting until the lock becomes free; if it is not locked, it takes the lock and executes the critical section.

In TestAndSet, Mutual exclusion and progress are preserved but bounded waiting cannot be preserved.
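
A minimal C sketch of a TestAndSet-based lock. The test_and_set function below is written in plain C only to show its semantics; in reality it must be a single atomic hardware instruction:

/* Atomically set the lock to 1 and return its previous value
   (conceptual C; real hardware does this as one atomic instruction). */
int test_and_set(int *lock)
{
    int old = *lock;
    *lock = 1;
    return old;
}

int lock = 0;        /* shared lock variable: 0 = unlock, 1 = lock */

void enter_cs(void)
{
    while (test_and_set(&lock) == 1)
        ;            /* busy-wait until the previous value was 0 (unlocked) */
}

void leave_cs(void)
{
    lock = 0;        /* release the lock */
}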
 
Question : The enter_CS() and leave_CS() functions to implement critical section of a process are realized using test-and-set instruction as follows:

void enter_CS(X)
{
  while test-and-set(X) ;
}

void leave_CS(X)
{
  X = 0;
}

In the above solution, X is a memory location associated with the CS and is initialized to 0. Now, consider the following statements:
I. The above solution to CS problem is deadlock-free
II. The solution is starvation free.
III. The processes enter CS in FIFO order.
IV. More than one process can enter CS at the same time.
 
Which of the above statements is TRUE?
(A) I
(B) II and III
(C) II and IV
(D) IV

Semaphores

A semaphore is an integer variable that can be accessed only through two atomic operations, wait() and signal().
There are two types of semaphores: Binary Semaphores and Counting Semaphores.


    • Binary Semaphores : They can take only the values 0 and 1. They are also known as mutex locks, as they can provide mutual exclusion. All the processes share the same mutex semaphore, which is initialized to 1. A process performs wait() on the semaphore, which decrements it to 0, and enters its critical section; any other process that calls wait() must now wait. When the process completes its critical section, it performs signal(), resetting the semaphore to 1 so that some other process can enter its critical section.
    • Counting Semaphores : They can count beyond 1 and are not restricted to the values 0 and 1. They can be used to control access to a resource that has a limit on the number of simultaneous accesses. The semaphore is initialized to the number of instances of the resource. Whenever a process wants to use the resource, it checks that the number of remaining instances is greater than zero, i.e., an instance is available. The process then enters its critical section, decreasing the value of the counting semaphore by 1. When the process is done with its instance of the resource, it leaves the critical section, adding 1 back to the number of available instances.
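
As a concrete illustration, POSIX semaphores provide the operations that the text calls wait() and signal() as sem_wait() and sem_post(). Below is a sketch of a counting semaphore guarding N identical resource instances (names are illustrative); initializing the semaphore to 1 instead gives the binary-semaphore behaviour described above:

#include <semaphore.h>

sem_t instances;                 /* counting semaphore for the resource */

void setup(unsigned n)
{
    sem_init(&instances, 0, n);  /* initial value = number of instances */
}

void use_resource(void)
{
    sem_wait(&instances);        /* wait(): decrement; block if no instance is free */
    /* ... use one instance of the resource ... */
    sem_post(&instances);        /* signal(): increment; release the instance */
}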

Process Synchronization | Monitors

A monitor is one of the ways to achieve process synchronization. Monitors are supported by programming languages to achieve mutual exclusion between processes; for example, Java synchronized methods, together with the wait() and notify() constructs.

1. A monitor is a collection of condition variables and procedures combined together in a special kind of module or package.
2. Processes running outside the monitor cannot access the internal variables of the monitor, but they can call its procedures.
3. Only one process at a time can execute code inside the monitor.

Syntax of Monitor
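
C has no built-in monitor construct, but its layout can be approximated with POSIX threads: the internal variables, a mutex playing the role of the implicit monitor lock, and the condition variables are grouped together, and every monitor procedure acquires the lock on entry and releases it on exit. A rough sketch (names are illustrative):

#include <pthread.h>

/* "monitor" = internal variables + implicit lock + condition variables */
struct monitor {
    pthread_mutex_t lock;    /* ensures only one process runs inside at a time */
    pthread_cond_t  x, y;    /* condition variables */
    int data;                /* internal variables, not visible outside */
};
/* initialize with pthread_mutex_init() and pthread_cond_init() before use */

/* a monitor "procedure": lock on entry, unlock on exit */
void procedure_p1(struct monitor *m)
{
    pthread_mutex_lock(&m->lock);
    /* ... body operating on m->data ... */
    pthread_mutex_unlock(&m->lock);
}
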


Condition Variables

Two different operations are performed on the condition variables of the monitor.

  • wait
  • signal

Let us say we have two condition variables:
condition x, y;    // declaring condition variables

Wait operation

x.wait() : A process performing a wait operation on a condition variable is suspended. The suspended process is placed in the blocked queue of that condition variable.

Note: Each condition variable has its own blocked queue.

Signal operation

x.signal(): When a process performs a signal operation on a condition variable, one of the blocked processes (if any) is resumed.

if (x's blocked queue is empty)
    // ignore the signal
else
    // resume one process from the blocked queue
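
In the pthread-based sketch of a monitor shown earlier, x.wait() and x.signal() correspond to pthread_cond_wait() and pthread_cond_signal(); both are called while holding the monitor lock. A sketch:

/* called from inside a monitor procedure, i.e., with m->lock held */
void wait_on_x(struct monitor *m)
{
    /* x.wait(): suspend this process and place it in x's blocked queue;
       the monitor lock is released while waiting and re-acquired on wake-up.
       (Real code re-checks the waited-for condition in a loop, since
       POSIX allows spurious wake-ups.) */
    pthread_cond_wait(&m->x, &m->lock);
}

void signal_x(struct monitor *m)
{
    /* x.signal(): resume one process from x's blocked queue, if any;
       if the queue is empty, the signal is simply ignored */
    pthread_cond_signal(&m->x);
}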

Readers-Writers Problem | (Introduction and Readers Preference Solution)

Consider a situation where we have a file shared between many people.

  • If one of the people tries editing the file, no other person should be reading or writing at the same time, otherwise changes will not be visible to him/her.
  • However if some person is reading the file, then others may read it at the same time.

Precisely, in OS terms we call this situation the readers-writers problem.

Problem parameters:

  • One set of data is shared among a number of processes
  • Once a writer is ready, it performs its write. Only one writer may write at a time
  • If a process is writing, no other process can read it
  • If at least one reader is reading, no other process can write
  • Readers may only read; they may not write

Solution when Reader has the Priority over Writer

Here, priority means that no reader should wait if the shared file is currently open for reading.

Three variables are used to implement the solution: mutex, wrt, and readcnt.

  1. semaphore mutex, wrt;  // both initialized to 1; mutex ensures mutual exclusion when readcnt is updated, i.e., when any reader enters or exits the critical section, and wrt is used by both readers and writers
  2. int readcnt;  // the number of processes performing a read in the critical section, initially 0
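
With POSIX semaphores, these variables could be declared and initialized as in the following sketch (both semaphores start at 1):

#include <semaphore.h>

sem_t mutex;        /* protects updates to readcnt */
sem_t wrt;          /* exclusive access: writers, and the first/last reader */
int   readcnt = 0;  /* number of readers currently in the critical section */

void init(void)
{
    sem_init(&mutex, 0, 1);
    sem_init(&wrt,   0, 1);
}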

Functions for semaphores:

– wait() : decrements the semaphore value; blocks if the value is not positive.

– signal() : increments the semaphore value.

Writer process:

  1. The writer requests entry to the critical section.
  2. If allowed, i.e., wait(wrt) succeeds, it enters and performs the write. If not allowed, it keeps waiting (blocked on wrt).
  3. It exits the critical section.

do {
    // writer requests for critical section
    wait(wrt);  
   
    // performs the write

    // leaves the critical section
    signal(wrt);

} while(true);

Reader process:

  1. The reader requests entry to the critical section.
  2. If allowed:
    • it increments the count of readers inside the critical section. If this reader is the first one entering, it locks the wrt semaphore to restrict the entry of writers while any reader is inside.
    • It then signals mutex, so that other readers are allowed to enter while it is reading.
    • After performing the read, it exits the critical section. When exiting, it checks whether it is the last reader inside; if so, it signals the semaphore "wrt" so that writers can now enter the critical section.
  3. If not allowed, it keeps waiting.

do {
    
   // Reader wants to enter the critical section
   wait(mutex);

   // The number of readers has now increased by 1
   readcnt++;                          

   // there is at least one reader in the critical section
   // this ensures no writer can enter if there is even one reader
   // thus we give preference to readers here
   if (readcnt==1)     
      wait(wrt);                    

   // other readers can enter while this current reader is inside 
   // the critical section
   signal(mutex);                   

   // current reader performs reading here
   wait(mutex);   // a reader wants to leave

   readcnt--;

   // if no reader is left in the critical section
   if (readcnt == 0) 
       signal(wrt);         // writers can now enter

   signal(mutex); // reader leaves

} while(true);

In this way, the semaphore ‘wrt’ is used by both readers and writers, but preference is given to readers when writers are also waiting: no reader waits simply because a writer has requested to enter the critical section.

FAQs on Concurrency & Synchronization - Operating System - Computer Science Engineering (CSE)

1. What is concurrency in computer science engineering?
Ans. Concurrency in computer science engineering refers to the ability of a computer system to execute multiple tasks simultaneously. It allows different parts of a program to be executed out of order or in overlapping time intervals, resulting in improved efficiency and utilization of system resources.
2. What is synchronization in computer science engineering?
Ans. Synchronization in computer science engineering is the coordination of multiple concurrent threads, processes, or tasks to ensure their proper execution and avoid conflicts or race conditions. It involves the use of various synchronization mechanisms like locks, semaphores, and barriers to enforce mutual exclusion and order of execution.
3. What are the challenges of concurrency in computer science engineering?
Ans. Concurrency in computer science engineering brings several challenges, including:
  • Race Conditions: When multiple threads or processes access shared resources simultaneously, race conditions can occur, leading to unpredictable and incorrect results.
  • Deadlocks: Deadlocks happen when two or more threads or processes are stuck waiting for each other to release resources, causing the system to halt.
  • Starvation: Starvation occurs when a thread or process is unable to access a shared resource due to other threads or processes continuously acquiring it, leading to unfairness.
  • Synchronization Overhead: Implementing synchronization mechanisms incurs overhead in terms of performance, memory, and complexity.
  • Scalability Issues: Concurrent programs may face scalability issues as the number of threads or processes increases, resulting in diminishing performance gains.
4. What are the common synchronization mechanisms used in computer science engineering?
Ans. Common synchronization mechanisms used in computer science engineering include:
  • Locks: Locks are used to provide mutual exclusion, allowing only one thread or process to access a shared resource at a time.
  • Semaphores: Semaphores are used to control the access to a shared resource by maintaining a count of available resources.
  • Monitors: Monitors are high-level synchronization constructs that encapsulate shared data and the associated synchronization mechanisms.
  • Barriers: Barriers are synchronization points that ensure that a set of threads or processes wait for each other before proceeding to the next stage of execution.
  • Condition Variables: Condition variables are used to enable threads or processes to wait for a certain condition to become true before proceeding.
5. How can concurrency and synchronization impact the performance of a computer system?
Ans. Concurrency and synchronization can have both positive and negative impacts on the performance of a computer system. While they can improve efficiency and resource utilization by allowing tasks to execute simultaneously, they can also introduce overhead and potential issues such as race conditions and deadlocks. Poorly implemented or excessive synchronization can lead to decreased performance and scalability issues. Therefore, it is crucial to carefully design and optimize concurrent systems to strike a balance between parallelism and synchronization.