
All questions of Process Synchronization for Computer Science Engineering (CSE) Exam

Consider the methods used by processes P1 and P2 for accessing their critical sections whenever needed, as given below. The initial values of shared boolean variables S1 and S2 are randomly assigned.
Method Used by P1
while (S1 == S2) ;
Critical Section
S1 = S2;
Method Used by P2
while (S1 != S2) ;
Critical Section
S2 = not (S1);
Which one of the following statements describes the properties achieved?
  • a)
    Mutual exclusion but not progress
  • b)
    Progress but not mutual exclusion
  • c)
    Neither mutual exclusion nor progress
  • d)
    Both mutual exclusion and progress
Correct answer is option 'A'. Can you explain this answer?

Ravi Singh answered
Mutual exclusion is satisfied: P1 can enter its critical section only when S1 != S2, and P2 can enter only when S1 == S2, and the two conditions can never hold at the same time. Progress, however, is not satisfied. Suppose S1 = 1 and S2 = 0, P1 is not interested in entering its critical section, but P2 wants to enter. P2 cannot proceed: it waits until S1 == S2, which becomes true only after P1 executes its critical section and its exit statement S1 = S2. Progress is violated whenever a process that is not interested in entering the critical section can prevent an interested process from entering. A sketch of the two protocols appears below.
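
The following is a minimal C sketch of the two entry/exit protocols (the variable initialization and the checks in main are assumptions added for illustration, not part of the original question):
#include <stdio.h>
#include <stdbool.h>

volatile bool S1 = true, S2 = false;   /* arbitrary initial values */

void p1_protocol(void) {
    while (S1 == S2) ;   /* entry: spin until S1 != S2 */
    /* critical section */
    S1 = S2;             /* exit: makes S1 == S2, enabling P2 */
}

void p2_protocol(void) {
    while (S1 != S2) ;   /* entry: spin until S1 == S2 */
    /* critical section */
    S2 = !S1;            /* exit: makes S1 != S2, enabling P1 */
}

int main(void) {
    /* With S1 = 1, S2 = 0, P2's entry condition can only become true after
       P1 runs its whole protocol - a disinterested P1 blocks P2 forever. */
    printf("P1 may enter: %s\n", S1 != S2 ? "yes" : "no");
    printf("P2 may enter: %s\n", S1 == S2 ? "yes" : "no");
    return 0;
}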

Which of the following need not necessarily be saved on a Context Switch between processes?
  • a)
    General purpose registers
  • b)
    Translation look-aside buffer
  • c)
    Program counter
  • d)
    Stack pointer
Correct answer is option 'B'. Can you explain this answer?

Ravi Singh answered
The values stored in the general purpose registers, the stack pointer and the program counter are saved on a context switch between processes so that execution of the process can resume later. There is no need to save the contents of the TLB, as it is invalidated after each context switch. So, option (B) is correct.
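
As an illustration, the saved per-process context can be pictured as a structure like the following (the field names and layout are hypothetical; real kernels differ):
#include <stdio.h>

typedef struct {
    unsigned long gpr[16];   /* general purpose registers */
    unsigned long pc;        /* program counter */
    unsigned long sp;        /* stack pointer */
    /* deliberately no TLB contents: the TLB is invalidated on each switch
       and refilled on demand from the new process's page tables */
} saved_context;

int main(void) {
    printf("context save area: %zu bytes\n", sizeof(saved_context));
    return 0;
}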

In a certain operating system, deadlock prevention is attempted using the following scheme. Each process is assigned a unique timestamp, and is restarted with the same timestamp if killed. Let Ph be the process holding a resource R, Pr be a process requesting for the same resource R, and T(Ph) and T(Pr) be their timestamps respectively. The decision to wait or preempt one of the processes is based on the following algorithm.
 if T(Pr) < T(Ph)
then kill Pr
else wait
Which one of the following is TRUE?
  • a)
    The scheme is deadlock-free, but not starvation-free
  • b)
    The scheme is not deadlock-free, but starvation-free
  • c)
    The scheme is neither deadlock-free nor starvation-free
  • d)
    The scheme is both deadlock-free and starvation-free
Correct answer is option 'A'. Can you explain this answer?

Sanya Agarwal answered
  1. A process is allowed to wait only when its timestamp is at least that of the holder: whenever T(Pr) < T(Ph), the requester is killed. Every edge in the wait-for graph therefore points from a larger timestamp to a smaller one, and since timestamps are unique, a cycle is impossible. The scheme is deadlock-free.
  2. A killed process is restarted with the same timestamp, so its timestamp never grows. A process with a low timestamp that keeps requesting a resource held by higher-timestamp processes is killed on every attempt and may starve indefinitely.
From 1 and 2, the scheme is deadlock-free but not starvation-free, so the answer is (A). A sketch of the decision rule follows.
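
A minimal C sketch of the decision rule (kill_and_restart and wait_for are hypothetical stubs standing in for kernel actions):
#include <stdio.h>

typedef struct { int id; long timestamp; } process;

/* hypothetical stubs for the kernel actions */
static void kill_and_restart(process *p) {
    printf("kill P%d; it restarts with the SAME timestamp %ld\n",
           p->id, p->timestamp);
}
static void wait_for(process *p) {
    printf("wait on holder P%d\n", p->id);
}

void on_request(process *pr, process *ph) {
    if (pr->timestamp < ph->timestamp)
        kill_and_restart(pr);   /* lower timestamp: killed, never waits */
    else
        wait_for(ph);           /* wait edges always point to lower timestamps */
}

int main(void) {
    process holder = {1, 100}, requester = {2, 50};
    /* the requester loses, restarts with timestamp 50, and can lose again
       forever: deadlock-free, but starvation is possible */
    on_request(&requester, &holder);
    return 0;
}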

What is the name of the technique in which the operating system of a computer executes several programs concurrently by switching back and forth between them?
  • a)
    Partitioning
  • b)
    Multi-tasking
  • c)
    Windowing
  • d)
    Paging
Correct answer is option 'B'. Can you explain this answer?

Shanaya Chopra answered
Multi-tasking
Multi-tasking is the technique in which the operating system of a computer executes several programs concurrently by switching back and forth between them. This allows multiple applications to run simultaneously on a single computer.

Types of Multi-tasking
- Pre-emptive Multi-tasking: In this type, the operating system decides when to switch between different tasks based on priority levels or time slices. This ensures that all tasks get a fair share of the CPU's resources.
- Cooperative Multi-tasking: In this type, each program must voluntarily give up control to allow other programs to run. If one program hangs or crashes, it can affect the entire system.

Advantages of Multi-tasking
- Increased productivity: Users can work on multiple tasks simultaneously, improving efficiency and productivity.
- Resource utilization: Multi-tasking allows better utilization of system resources by running multiple programs concurrently.
- Improved user experience: Multi-tasking provides a seamless user experience by allowing users to switch between applications quickly.

Drawbacks of Multi-tasking
- Resource contention: Running multiple programs simultaneously can lead to resource contention, where programs compete for system resources like CPU, memory, and disk.
- Complexity: Managing multiple tasks concurrently can be complex, leading to potential issues like bottlenecks, deadlocks, and performance degradation.
In conclusion, multi-tasking is a fundamental feature of modern operating systems that enables users to run multiple programs concurrently, enhancing productivity and user experience.

Which of the following conditions does not hold good for a solution to a critical section problem ?
  • a)
    No assumptions may be made about speeds or the number of CPUs.
  • b)
    No two processes may be simultaneously inside their critical sections.
  • c)
    Processes running outside its critical section may block other processes.
  • d)
    Processes do not wait forever to enter its critical section.
Correct answer is option 'C'. Can you explain this answer?

In Critical section problem:
  • No assumptions may be made about speeds or the number of CPUs.
  • No two processes may be simultaneously inside their critical sections.
  • Processes running outside their critical sections can't block other processes from entering their critical sections.
  • Processes do not wait forever to enter its critical section.
Option (C) is correct.

Feedback queues
  • a)
    are very simple to implement
  • b)
    dispatch tasks according to execution characteristics
  • c)
    are used to favour real time tasks
  • d)
    require manual intervention to implement properly
Correct answer is option 'B'. Can you explain this answer?

Ravi Singh answered
Multilevel Feedback Queue scheduling (MLFQ) keeps analysing the execution behaviour (CPU-burst characteristics) of processes and moves them between queues accordingly, i.e. tasks are dispatched according to their execution characteristics. Option (B) is correct.

Process is:
  • a)
    A program in high level language kept on disk
  • b)
    Contents of main memory
  • c)
    A program in execution
  • d)
    A job in secondary memory
Correct answer is option 'C'. Can you explain this answer?

Sanya Agarwal answered
A process is a program in execution. Whenever a program needs to be executed, its process image is created in main memory and placed in the ready queue. Later, the CPU is assigned to the process and it starts executing. So, option (C) is correct.

The atomic fetch-and-set x, y instruction unconditionally sets the memory location x to 1 and fetches the old value of x in y without allowing any intervening access to the memory location x. Consider the following implementation of P and V functions on a binary semaphore.
void P (binary_semaphore *s) {
  unsigned y;
  unsigned *x = &(s->value);
  do {
     fetch-and-set x, y;
  } while (y);
}
void V (binary_semaphore *s) {
  s->value = 0;
}
Which one of the following is true?
  • a)
    The implementation may not work if context switching is disabled in P.
  • b)
    Instead of using fetch-and-set, a pair of normal load/store can be used
  • c)
    The implementation of V is wrong
  • d)
    The code does not implement a binary semaphore
Correct answer is option 'A'. Can you explain this answer?

Devanshi Desai answered
The P operation here is a spin lock: a process repeatedly executes fetch-and-set until the old value it reads back is 0, i.e. until the semaphore was free; V releases the semaphore by storing 0. On a uniprocessor, this only works if the process spinning in P can be preempted. If context switching is disabled in P, a process that enters the busy-wait loop while another process holds the semaphore spins forever: the holder never gets the CPU, never reaches V, and the semaphore is never released. Hence option (A): the implementation may not work if context switching is disabled in P.

The other options are false. The implementation of V is correct, since storing 0 marks the semaphore free. A pair of normal load/store instructions cannot replace fetch-and-set, because the test and the set must be atomic; otherwise two processes could both read 0 and both enter the critical section. And the code does implement a binary semaphore (value 0 = free, 1 = taken). A working sketch using C11 atomics is given below.
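
A minimal runnable sketch of the same idea using C11 atomics, where atomic_exchange plays the role of fetch-and-set (atomically store 1 and return the old value); the demo in main is an illustrative assumption:
#include <stdatomic.h>
#include <stdio.h>

typedef struct { atomic_uint value; } binary_semaphore;

void P(binary_semaphore *s) {
    /* atomically set value to 1 and fetch the old value;
       loop until the old value was 0 (the semaphore was free) */
    while (atomic_exchange(&s->value, 1u) == 1u)
        ;   /* busy wait: the holder must be scheduled so it can call V */
}

void V(binary_semaphore *s) {
    atomic_store(&s->value, 0u);   /* mark the semaphore free */
}

int main(void) {
    binary_semaphore s = { 0 };    /* 0 = free, 1 = taken */
    P(&s);                         /* acquire */
    puts("in critical section");
    V(&s);                         /* release */
    return 0;
}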

In indirect communication between processes P and Q _____
  • a)
    there is another process R to handle and pass on the messages between P and Q
  • b)
    there is another machine between the two processes to help communication
  • c)
    there is a mailbox to help communication between P and Q
  • d)
    none of the mentioned
Correct answer is option 'C'. Can you explain this answer?

In indirect communication between processes P and Q, there is a mailbox to help communication between them.

Explanation:
1. Indirect Communication:
- Indirect communication refers to a method of communication where processes interact with each other through an intermediary entity instead of directly communicating with each other.
- This intermediary entity can be a mailbox, message queue, or any other communication mechanism that allows processes to exchange messages indirectly.

2. Process Communication:
- In a computer system, processes may need to communicate with each other to exchange information, coordinate activities, or share resources.
- Direct communication involves processes communicating with each other directly, whereas indirect communication involves processes communicating through an intermediary entity.

3. Mailbox:
- In the context of process communication, a mailbox is a data structure or a system resource used to exchange messages between processes.
- Each process can send messages to a mailbox and receive messages from a mailbox.
- The mailbox acts as a temporary storage location for messages, allowing processes to communicate asynchronously.
- When a process sends a message to a mailbox, it does not need to wait for the recipient process to receive the message immediately. The recipient process can retrieve the message from the mailbox at its convenience.

4. Role of Mailbox in Indirect Communication:
- In indirect communication between processes P and Q, a mailbox is used as a communication channel.
- Process P can send a message to the mailbox associated with process Q, and process Q can retrieve the message from the mailbox when it wants to.
- The mailbox ensures that the message is stored safely until the recipient process is ready to receive it.
- This allows processes to communicate asynchronously, meaning they can continue their execution without waiting for immediate response or synchronization.

Hence, in indirect communication between processes P and Q, there is a mailbox to help communication between them.
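
A hedged sketch of mailbox-style indirect communication using POSIX message queues (the queue name "/pq_mailbox" and the message text are arbitrary choices; both roles are shown in one process for brevity; on Linux, compile with -lrt):
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
    /* P and Q each open the same named mailbox; they never address
       each other directly */
    mqd_t mq = mq_open("/pq_mailbox", O_CREAT | O_RDWR, 0600, &attr);
    if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

    const char *msg = "hello from P";
    mq_send(mq, msg, strlen(msg) + 1, 0);    /* P deposits a message */

    char buf[64];
    mq_receive(mq, buf, sizeof buf, NULL);   /* Q retrieves it later */
    printf("Q received: %s\n", buf);

    mq_close(mq);
    mq_unlink("/pq_mailbox");
    return 0;
}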

Consider the following code fragment:
if (fork() == 0)
{ a = a + 5; printf("%d, %d\n", a, &a); }
else { a = a - 5; printf("%d, %d\n", a, &a); }
Let u, v be the values printed by the parent process, and x, y be the values printed by the child process. Which one of the following is TRUE?
  • a)
    u = x + 10 and v = y
  • b)
    u = x + 10 and v != y
  • c)
    u + 10 = x and v = y
  • d)
    u + 10 = x and v != y
Correct answer is option 'C'. Can you explain this answer?

Yash Patel answered
fork() returns 0 in the child process and the child's process ID in the parent process. In the child, a = a + 5 (printed as x); in the parent, a = a - 5 (printed as u). Therefore x = u + 10, i.e. u + 10 = x. The physical addresses of 'a' in parent and child must be different, but the program prints virtual addresses (assuming an OS that uses virtual memory). The child process gets an exact copy of the parent's address space, so the virtual address of 'a' is the same in both, i.e. v = y. A complete, compilable version is sketched below.
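
A minimal compilable version of the fragment (the initial value of a and the use of %p for the address are assumptions added for illustration; the original exam code used %d):
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int a = 100;                        /* assumed initial value */
    if (fork() == 0) {                  /* child: fork() returns 0 */
        a = a + 5;
        printf("child:  %d, %p\n", a, (void *)&a);
    } else {                            /* parent: fork() returns child PID */
        a = a - 5;
        printf("parent: %d, %p\n", a, (void *)&a);
    }
    return 0;                           /* same virtual address in both */
}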

Three concurrent processes X, Y, and Z execute three different code segments that access and update certain shared variables. Process X executes the P operation (i.e., wait) on semaphores a, b and c; process Y executes the P operation on semaphores b, c and d; process Z executes the P operation on semaphores c, d, and a before entering the respective code segments. After completing the execution of its code segment, each process invokes the V operation (i.e., signal) on its three semaphores. All semaphores are binary semaphores initialized to one. Which one of the following represents a deadlock-free order of invoking the P operations by the processes?
  • a)
    X: P(a)P(b)P(c) Y:P(b)P(c)P(d) Z:P(c)P(d)P(a)
  • b)
    X: P(b)P(a)P(c) Y:P(b)P(c)P(d) Z:P(a)P(c)P(d)
  • c)
    X: P(b)P(a)P(c) Y:P(c)P(b)P(d) Z:P(a)P(c)P(d)
  • d)
    X: P(a)P(b)P(c) Y:P(c)P(b)P(d) Z:P(c)P(d)P(a)
Correct answer is option 'B'. Can you explain this answer?

Yash Patel answered
Option (A) can cause deadlock: imagine X has acquired a, Y has acquired b, and Z has acquired c. Then X blocks on b (held by Y), Y blocks on c (held by Z), and Z, after acquiring d, blocks on a (held by X) — a circular wait. Option (C) can also cause deadlock: X acquires b, Y acquires c, Z acquires a; then X blocks on a, Z blocks on c, and Y blocks on b — again a circular wait. Option (D) can also cause deadlock: X acquires a and b, Y acquires c; X blocks on c while Y blocks on b, so X and Y wait on each other.
Option (B) is deadlock-free: every process requests its semaphores in an order consistent with the single global ordering b, a, c, d (X: b, a, c; Y: b, c, d; Z: a, c, d), and acquiring resources in a fixed global order rules out circular wait. One possible completion order is Z, then X, then Y.

The link between two processes P and Q to send and receive messages is called __________
  • a)
    communication link
  • b)
    message-passing link
  • c)
    synchronization link
  • d)
    all of the mentioned
Correct answer is option 'A'. Can you explain this answer?

Sudhir Patel answered
The link between two processes P and Q used to send and receive messages is called a communication link. For two processes P and Q to communicate with each other, a communication link must exist between them so that both processes can send and receive messages over it.

A task in a blocked state
  • a)
    is executable
  • b)
    is running
  • c)
    must still be placed in the run queues
  • d)
    is waiting for some temporarily unavailable resources
Correct answer is option 'D'. Can you explain this answer?

Maulik Pillai answered


Blocked State in Task

Blocked state in a task refers to a situation where the task is waiting for some temporarily unavailable resources before it can proceed with its execution. This state occurs when a task cannot continue for some reason and needs to wait for the required resources to become available.

Explanation of the Correct Answer

The correct answer, option 'D', states that a task in a blocked state is waiting for some temporarily unavailable resources. This means that the task is unable to proceed with its execution because it is waiting for certain resources that are currently unavailable. These resources could include input/output operations, synchronization objects, or any other dependencies necessary for the task to continue.

Implications of a Task in Blocked State

When a task is in a blocked state, it cannot be executed or run until the required resources become available. The task must wait in this state until it can acquire the necessary resources to resume its execution. This can impact the overall performance and efficiency of the system, as the task is unable to progress until the blocking condition is resolved.

Resolution of Blocked State

In order to resolve the blocked state of a task, the necessary resources must be made available to the task. This could involve releasing resources held by other tasks, completing input/output operations, or resolving any dependencies that are causing the blocking condition. Once the resources become available, the task can transition out of the blocked state and continue with its execution.

There are three processes in the ready queue. When the currently running process requests for I/O how many process switches take place?
  • a)
    1
  • b)
    2
  • c)
    3
  • d)
    4
Correct answer is option 'A'. Can you explain this answer?

Yash Patel answered
A single process switch takes place: when the currently running process requests I/O, it is moved to the blocked list, and the first process in the ready queue is moved to the running state to start its execution. Option (A) is correct.

Bounded capacity and Unbounded capacity queues are referred to as __________
  • a)
    Programmed buffering
  • b)
    Automatic buffering
  • c)
    User defined buffering
  • d)
    No buffering
Correct answer is option 'B'. Can you explain this answer?

Sudhir Patel answered
Bounded capacity and unbounded capacity queues are referred to as automatic buffering. A bounded capacity queue has a buffer of finite length, while an unbounded capacity queue has a conceptually infinite buffer capacity.

Suppose we want to synchronize two concurrent processes P and Q using binary semaphores S and T. The code for the processes P and Q is shown below.
Process P:
while (1) {
W:
   print '0';
   print '0';
X:
}
    
Process Q:
while (1) {
Y:
   print '1';
   print '1';
Z:
}
Synchronization statements can be inserted only at points W, X, Y and Z. Which of the following will always lead to an output starting with '001100110011'?
  • a)
    P(S) at W, V(S) at X, P(T) at Y, V(T) at Z, S and T initially 1
  • b)
    P(S) at W, V(T) at X, P(T) at Y, V(S) at Z, S initially 1, and T initially 0
  • c)
    P(S) at W, V(T) at X, P(T) at Y, V(S) at Z, S and T initially 1
  • d)
    P(S) at W, V(S) at X, P(T) at Y, V(T) at Z, S initially 1, and T initially 0
Correct answer is option 'B'. Can you explain this answer?

Sanya Agarwal answered
P(S) means wait on semaphore S and V(S) means signal on semaphore S:
wait(S) { while (S <= 0) ; S--; }
signal(S) { S++; }
Initially, S = 1 and T = 0 (option B). Since S = 1, only process P can proceed at first: P(S) at W decrements S to 0 and P prints 00. Meanwhile, Q is stuck in P(T) at Y because T = 0, and it stays there until P executes V(T) at X, raising T to 1. Q then prints 11 and executes V(S) at Z, making S = 1 again. While Q runs, S = 0, so P cannot re-enter until Q signals. The two processes therefore strictly alternate, producing 00 11 00 11 ...

Thus, B is the correct choice. A runnable sketch using POSIX semaphores is given below.
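
A minimal runnable sketch of option (b) using POSIX semaphores and pthreads (an illustrative translation, not the exam's code; the loops are bounded at three iterations so the demo terminates; on Linux, compile with -pthread):
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t S, T;

void *p(void *arg) {
    for (int i = 0; i < 3; i++) {
        sem_wait(&S);      /* P(S) at W */
        printf("00");
        sem_post(&T);      /* V(T) at X */
    }
    return NULL;
}

void *q(void *arg) {
    for (int i = 0; i < 3; i++) {
        sem_wait(&T);      /* P(T) at Y */
        printf("11");
        sem_post(&S);      /* V(S) at Z */
    }
    return NULL;
}

int main(void) {
    sem_init(&S, 0, 1);    /* S initially 1 */
    sem_init(&T, 0, 0);    /* T initially 0 */
    pthread_t tp, tq;
    pthread_create(&tp, NULL, p, NULL);
    pthread_create(&tq, NULL, q, NULL);
    pthread_join(tp, NULL);
    pthread_join(tq, NULL);
    printf("\n");          /* output: 001100110011 */
    return 0;
}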

The following C program
main()
{
    fork() ; fork() ; printf ("yes");
}
If we execute this code segment, how many times will the string "yes" be printed?
  • a)
    Only once
  • b)
    2 times
  • c)
    4 times
  • d)
    8 times
Correct answer is option 'C'. Can you explain this answer?

Sanya Agarwal answered
The number of times "yes" is printed equals the number of processes created. Total number of processes = 2^n, where n is the number of fork() system calls. Here n = 2, so 2^2 = 4.
fork ();   // Line 1
fork ();   // Line 2
There are 4 processes in total (3 new child processes and the original process), so "yes" is printed 4 times. Option (C) is correct.
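
A complete version of the fragment, for reference (the newline is an added nicety; each of the four resulting processes prints once, and the output order may vary between runs):
#include <stdio.h>
#include <unistd.h>

int main(void) {
    fork();          /* 2 processes exist after this call */
    fork();          /* each of the 2 forks again: 4 processes */
    printf("yes\n"); /* executed once per process: printed 4 times */
    return 0;
}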

Which of the following does not interrupt a running process?
  • a)
    A device
  • b)
    Timer
  • c)
    Scheduler process
  • d)
    Power failure
Correct answer is option 'C'. Can you explain this answer?

Yash Patel answered
The scheduler process doesn't interrupt any process; its job is to select processes for the following three purposes:
- Long-term scheduler (job scheduler): selects which processes should be brought into the ready queue.
- Short-term scheduler (CPU scheduler): selects which process should be executed next and allocates the CPU.
- Mid-term scheduler (swapper): present in all systems with virtual memory; temporarily removes processes from main memory and places them on secondary memory (such as a disk drive), or vice versa. The mid-term scheduler may decide to swap out a process that has not been active for some time, has a low priority, is page faulting frequently, or is taking up a large amount of memory, in order to free up main memory for other processes, swapping the process back in later when more memory is available or when the process has been unblocked and is no longer waiting for a resource.

Fork is
  • a)
    the creation of a new job
  • b)
    the dispatching of a task
  • c)
    increasing the priority of a task
  • d)
    the creation of a new process
Correct answer is option 'D'. Can you explain this answer?

Sanya Agarwal answered
fork() creates a new process by duplicating the calling process. The new process, referred to as the child, is an exact duplicate of the calling process, referred to as the parent, except for the following:
- The child has its own unique process ID, and this PID does not match the ID of any existing process group.
- The child's parent process ID is the same as the parent's process ID.
- The child does not inherit its parent's memory locks and semaphore adjustments.
- The child does not inherit outstanding asynchronous I/O operations from its parent, nor does it inherit any asynchronous I/O contexts from its parent.
So, option (D) is correct.

Messages sent by a process __________
  • a)
    have to be of a fixed size
  • b)
    have to be a variable size
  • c)
    can be fixed or variable sized
  • d)
    none of the mentioned
Correct answer is option 'C'. Can you explain this answer?

Introduction:
In distributed systems, processes communicate with each other by sending and receiving messages. These messages can vary in size depending on the information being transmitted. The size of the message can be fixed or variable, depending on the requirements and design of the system.

Fixed-size messages:
Some systems require messages to always have a fixed size. This can simplify the communication process as each message is guaranteed to have a consistent format and length. The receiving process knows exactly how much data to expect and can easily parse the message accordingly. Fixed-size messages are commonly used in systems where efficiency and speed are critical, such as real-time applications or high-performance computing.

Variable-size messages:
In other cases, messages may have a variable size. This allows for more flexibility in the information being transmitted. Variable-size messages are useful when the size of the data being sent can vary significantly. For example, in a file transfer protocol, the size of the files being transmitted can vary greatly. Using fixed-size messages would lead to inefficiencies, as messages would need to be padded or split to fit the fixed size.

Advantages of fixed-size messages:
- Simplifies the communication process as each message has a consistent format and length.
- Allows for efficient processing and parsing of messages.
- Well-suited for real-time applications or high-performance computing where speed and efficiency are critical.

Advantages of variable-size messages:
- Provides flexibility in transmitting data of varying sizes.
- Eliminates the need for padding or splitting messages to fit a fixed size.
- Well-suited for applications where the size of the data being transmitted can vary significantly, such as file transfers.

Conclusion:
In conclusion, messages sent by a process in a distributed system can be of fixed or variable size. The choice between fixed or variable-size messages depends on the requirements and design of the system. Fixed-size messages offer simplicity and efficiency, while variable-size messages provide flexibility and adaptability to varying data sizes.

What is Interprocess communication?
  • a)
    allows processes to communicate and synchronize their actions when using the same address space
  • b)
    allows processes to communicate and synchronize their actions
  • c)
    allows the processes to only synchronize their actions without communication
  • d)
    none of the mentioned
Correct answer is option 'B'. Can you explain this answer?

Interprocess communication (IPC) is a mechanism that allows processes to communicate and synchronize their actions. It enables different processes running on the same or different systems to exchange data and coordinate their activities. IPC is essential in modern operating systems to facilitate collaboration and resource sharing among processes.

IPC provides a standardized way for processes to interact with each other, regardless of their location or architecture. It allows processes to send and receive messages, share data, and coordinate their actions to achieve a common goal. Here, we will explore the reasons why option 'B' is the correct answer.

Allows processes to communicate:
IPC provides a means for processes to communicate with each other. Processes can exchange information, such as messages or data, through various communication channels. These channels can be shared memory regions, pipes, sockets, or message queues. By sending and receiving messages, processes can convey information, request services, or notify each other about significant events.

Allows processes to synchronize their actions:
IPC also enables processes to synchronize their actions. Synchronization ensures that processes coordinate their activities and avoid conflicts or inconsistencies. Processes can use synchronization mechanisms, such as semaphores, locks, or condition variables, to control access to shared resources or coordinate their execution. By synchronizing their actions, processes can cooperate effectively and avoid race conditions or other concurrency issues.

Benefits of IPC:
- Collaboration: IPC allows processes to work together and exchange information, enabling collaboration and coordination among different components of a system.
- Resource sharing: Processes can share resources, such as memory, files, or devices, through IPC mechanisms. This enables efficient utilization of resources and avoids duplication.
- Modularity: IPC facilitates the development of modular systems, where different processes can be developed independently and communicate through well-defined interfaces.
- Fault isolation: IPC can help isolate faulty processes from the rest of the system. By using separate processes for different components, failures in one process do not affect the overall system.

In conclusion, IPC is a crucial mechanism that allows processes to communicate and synchronize their actions. By enabling collaboration and resource sharing, IPC plays a vital role in the efficient and coordinated operation of modern computer systems.

On a system using non-preemptive scheduling, processes with expected run times of 5, 18, 9 and 12 are in the ready queue. In what order should they be run to minimize wait time?
  • a)
    5, 12, 9, 18
  • b)
    5, 9, 12, 18
  • c)
    12, 18, 9, 5
  • d)
    9, 12, 18, 5
Correct answer is option 'B'. Can you explain this answer?

Ananya Shah answered
Explanation:

In non-preemptive scheduling, once a process starts running, it will continue to run until it completes or blocks. Therefore, the order in which the processes are scheduled can have an impact on the wait time.

To minimize the wait time, we need to consider the expected run times of the processes. The idea is to schedule the shorter processes first, so that they complete quickly and reduce the overall wait time for the longer processes.

Order of Processes:
1. Select the process with the shortest expected run time from the ready queue.
2. Schedule the selected process to run.
3. Repeat steps 1 and 2 until all processes are scheduled.

Based on this approach, let's evaluate the given options (with run times 5, 9, 12 and 18, each job's wait equals the sum of the run times of the jobs before it):

b) 5, 9, 12, 18
- This is the shortest-job-first order. The waits are 0, 5, 14 and 26, for a total of 45 time units (average 11.25). No other order does better, because at every step the shortest remaining job runs first.

a) 5, 12, 9, 18
- The job with run time 12 is scheduled before the shorter job with run time 9. The waits are 0, 5, 17 and 26, for a total of 48. Not optimal.

c) 12, 18, 9, 5
- The two longest jobs run first, so every later job waits longer. The waits are 0, 12, 30 and 39, for a total of 81, the worst of the given orders.

d) 9, 12, 18, 5
- The shortest job runs last and waits for all the others. The waits are 0, 9, 21 and 39, for a total of 69. Not optimal.

Conclusion:
The optimal order to minimize the wait time is option b) 5, 9, 12, 18. A small program to check these totals is sketched below.
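
A small sketch that computes the total waiting time of a given run order (a verification aid, not part of the original question):
#include <stdio.h>

/* each job waits for the combined run time of all jobs scheduled before it */
int total_wait(const int order[], int n) {
    int wait = 0, elapsed = 0;
    for (int i = 0; i < n; i++) {
        wait += elapsed;
        elapsed += order[i];
    }
    return wait;
}

int main(void) {
    int b[] = {5, 9, 12, 18};   /* option (b): shortest job first */
    int a[] = {5, 12, 9, 18};   /* option (a) */
    printf("b) total wait = %d\n", total_wait(b, 4));  /* 45 */
    printf("a) total wait = %d\n", total_wait(a, 4));  /* 48 */
    return 0;
}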

The performance of Round Robin algorithm depends heavily on
  • a)
    size of the process
  • b)
    the I/O bursts of the process
  • c)
    the CPU bursts of the process
  • d)
    the size of the time quantum
Correct answer is option 'D'. Can you explain this answer?

Introduction:
The Round Robin algorithm is a widely used CPU scheduling algorithm in operating systems. It is a preemptive algorithm that assigns a fixed time quantum to each process in a circular manner. When a process's time quantum expires, it is preempted and moved to the back of the ready queue.

Explanation:
The performance of the Round Robin algorithm depends heavily on the size of the time quantum. Let's understand why:

1. Fairness and Responsiveness:
The Round Robin algorithm aims to provide fairness and responsiveness to all processes. By assigning a fixed time quantum to each process, it ensures that no single process monopolizes the CPU for an extended period. This prevents any process from becoming starved and provides equal opportunities to all processes.

2. Context Switching Overhead:
Context switching is the process of saving and restoring the state of a process so that it can be resumed later. In Round Robin, context switching occurs whenever a process's time quantum expires. The smaller the time quantum, the more frequently context switches occur, leading to higher overhead due to the additional time required to save and restore the process state.

3. Throughput and Turnaround Time:
The time quantum affects the throughput and turnaround time of processes. A smaller time quantum allows for more frequent context switches and better response time, but it also increases the overhead due to context switching. On the other hand, a larger time quantum reduces the frequency of context switches but may lead to longer response times for interactive processes.

4. Overhead and Efficiency:
A smaller time quantum reduces the overhead due to context switching but may result in a higher number of context switches. This can lead to decreased overall efficiency as more time is spent on context switching rather than executing the actual processes. Conversely, a larger time quantum reduces the number of context switches but can result in lower responsiveness for processes.

Conclusion:
In conclusion, the performance of the Round Robin algorithm is heavily dependent on the size of the time quantum. It is crucial to strike a balance between fairness, responsiveness, overhead, throughput, and turnaround time when choosing an appropriate time quantum for a given system.

Four jobs to be executed on a single processor system arrive at time 0 in the order A, B, C, D. Their burst CPU time requirements are 4, 1, 8, 1 time units respectively. The completion time of A under round robin scheduling with a time slice of one time unit is
  • a)
    10
  • b)
    4
  • c)
    8
  • d)
    9
Correct answer is option 'D'. Can you explain this answer?

Ravi Singh answered
With all arrival times 0 and a time quantum of 1, the round robin execution order is A B C D A C A C A. A receives its fourth and final time slice in the ninth time unit, so A completes its execution at time 9. The correct option is (D). A small simulation is sketched below.
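
A minimal round-robin simulation (quantum = 1, all arrivals at time 0) that confirms A's completion time; the printing is an added illustration:
#include <stdio.h>

int main(void) {
    int burst[] = {4, 1, 8, 1};             /* A, B, C, D */
    const char name[] = {'A', 'B', 'C', 'D'};
    int t = 0, remaining = 4;
    /* with no later arrivals, round robin cycles through the live
       processes in order, one time unit each */
    while (remaining > 0) {
        for (int i = 0; i < 4; i++) {
            if (burst[i] > 0) {
                t++;                        /* run process i for one unit */
                if (--burst[i] == 0) {
                    printf("%c completes at t=%d\n", name[i], t);
                    remaining--;            /* prints A completing at t=9 */
                }
            }
        }
    }
    return 0;
}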

For switching from a CPU user mode to the supervisor mode following type of interrupt is most appropriate
  • a)
    Internal interrupts
  • b)
    External interrupts
  • c)
    Software interrupts
  • d)
    None of the above
Correct answer is option 'C'. Can you explain this answer?

Sanya Agarwal answered
For switching from CPU user mode to supervisor mode, a software interrupt is used. A software interrupt is an internal interrupt triggered by a software instruction, while an external interrupt is caused by a hardware module. Option (C) is correct.

In an operating system, indivisibility of operation means:
  • a)
    Operation is interruptable
  • b)
    Race condition may occur
  • c)
    Processor can not be pre-empted
  • d)
    All of the above
Correct answer is option 'C'. Can you explain this answer?

Yash Patel answered
In an operating system, indivisibility of an operation means the processor cannot be pre-empted: once a process starts executing the operation, it is not suspended or stopped inside the processor until the operation completes. So, option (C) is correct.

At a particular time of computation, the value of a counting semaphore is 7. Then 20 P operations and x V operations were completed on this semaphore. If the final value of the semaphore is 5, x will be
  • a)
    8
  • b)
    12
  • c)
    18
  • d)
    11
Correct answer is option 'C'. Can you explain this answer?

Gargi Menon answered
Understanding the scenario:
The initial value of the counting semaphore is 7. Then 20 P operations and x V operations are performed on it, and the final value of the semaphore is 5.

Calculating the total change in value:
- Each P operation decreases the semaphore value by 1.
- Each V operation increases the semaphore value by 1.
- Therefore, the total change due to 20 P operations is -20, and the total change due to x V operations is +x.

Calculating the final value of the semaphore:
Initial value + total change = final value
7 - 20 + x = 5
x = 5 + 20 - 7
x = 18
Therefore, the number of V operations performed on the semaphore is 18. Thus, the correct answer is option C.

Which of the following need not necessarily be saved on a context switch between processes?
  • a)
    General purpose registers
  • b)
    Translation look-aside buffer
  • c)
    Program counter
  • d)
    All of the above
Correct answer is option 'B'. Can you explain this answer?

Yash Patel answered
The values stored in registers, stack pointers and program counters are saved on a context switch between processes so that execution of the process can resume. There is no need to save the contents of the TLB, as it is invalidated after each context switch. So, option (B) is correct.

Which is the correct definition of a valid process transition in an operating system?
  • a)
    Wake up: ready → running
  • b)
    Dispatch: ready → running
  • c)
    Block: ready → running
  • d)
    Timer runout: ready → running
Correct answer is option 'B'. Can you explain this answer?

Sanya Agarwal answered
(The answer refers to the state-transition diagram of a process under preemptive scheduling.)
Option (a) Wake up: ready → running is incorrect; when a process wakes up, it moves from the blocked state to the ready state, not from ready to running. Option (b) Dispatch: ready → running is correct; the dispatcher assigns the CPU to one of the processes in the ready queue based on a well-defined algorithm. Option (c) Block: ready → running is incorrect; a process blocks when it is pre-empted or waits for an I/O operation, moving from the running state to the blocked state. Option (d) Timer runout: ready → running is incorrect; when a process's time slice expires, the timer interrupt moves it from the running state back to the ready queue. So, option (B) is correct.

What is the name of the technique in which the operating system of a computer executes several programs concurrently by switching back and forth between them?
  • a)
    Partitioning
  • b)
    Multi-tasking
  • c)
    Windowing
  • d)
    Paging
Correct answer is option 'B'. Can you explain this answer?

Yash Patel answered
In a multitasking system, a computer executes several programs concurrently by switching back and forth between them to increase user interactivity. Processes share the CPU and execute in an interleaved manner, which allows the user to run more than one program at a time. Option (B) is correct.

In a dot matrix printer the time to print a character is 6 m.sec., time to space in between characters is 2 m.sec., and the number of characters in a line are 200. The printing speed of the dot matrix printer in characters per second and the time to print a character line are given by which of the following options?
  • a)
    125 chars/second and 0.8 seconds
  • b)
    250 chars/second and 0.6 seconds
  • c)
    166 chars/second and 0.8 seconds
  • d)
    250 chars/second and 0.4 seconds
  • e)
    125 chars/second and 1.6 seconds
Correct answer is option 'E'. Can you explain this answer?

Yash Patel answered
Total number of characters = 200, each followed by one inter-character space. Time to print one character = 6 ms; time for one space = 2 ms. Character printing: 200 × 6 = 1200 ms; space printing: 200 × 2 = 400 ms; total printing time = 1200 + 400 = 1600 ms = 1.6 s. The printing speed of the dot matrix printer = 200 / 1.6 = 125 characters per second. So, option (E) is correct.

Consider a set of n tasks with known runtimes r1, r2....rn to be run on a uniprocessor machine. Which of the following processor scheduling algorithms will result in the maximum throughput?
  • a)
    Round Robin
  • b)
    Shortest job first
  • c)
    Highest response ratio next
  • d)
    first come first served
Correct answer is option 'B'. Can you explain this answer?

Ravi Singh answered
Throughput is the number of tasks completed per unit time. Shortest job first is a scheduling policy that selects the waiting process with the smallest execution time to execute next, so short jobs finish early, the average time each task spends in the system (waiting time plus burst time) is minimized, and the largest number of tasks complete per unit time among the given policies. Option (B) is correct.
