
All questions of CPU Scheduling for Computer Science Engineering (CSE) Exam

In a time-sharing operating system, when the time slot given to a process is completed, the process goes from the RUNNING state to the
  • a)
    BLOCKED state
  • b)
    READY state
  • c)
    SUSPENDED state
  • d)
    TERMINATED state
Correct answer is option 'B'. Can you explain this answer?

Yash Patel answered

In a time-sharing operating system (for example, under Round Robin), when the time slice given to a process expires, the process goes back to the READY state. It moves to the BLOCKED state only if it requests an I/O operation.

Consider three CPU-intensive processes, which require 10, 20 and 30 time units and arrive at times 0, 2 and 6, respectively. How many context switches are needed if the operating system implements a shortest remaining time first scheduling algorithm? Do not count the context switches at time zero and at the end.
  • a)
    1
  • b)
    2
  • c)
    3
  • d)
    4
Correct answer is option 'B'. Can you explain this answer?

Explanation:
Shortest remaining time, also known as shortest remaining time first (SRTF), is a scheduling method that is a pre-emptive version of shortest job next scheduling. In this scheduling algorithm, the process with the smallest amount of time remaining until completion is selected to execute. Since the currently executing process is the one with the shortest amount of time remaining by definition, and since that time should only reduce as execution progresses, processes will always run until they complete or a new process is added that requires a smaller amount of time.

Solution:
Let three process be P0, P1 and P2 with arrival times 0, 2 and 6 respectively and CPU burst times 10, 20 and 30 respectively. At time 0, P0 is the only available process so it runs. At time 2, P1 arrives, but P0 has the shortest remaining time, so it continues. At time 6, P2 also arrives, but P0 still has the shortest remaining time, so it continues. At time 10, P1 is scheduled as it is the shortest remaining time process. At time 30, P2 is scheduled. Only two context switches are needed. P0 to P1 and P1 to P2.
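The schedule above can be replayed with a small simulator. The sketch below (function name is my own) steps one time unit at a time and counts only switches between two different processes, as the question requires:

```python
# SRTF simulator that counts context switches, using the burst times
# (10, 20, 30) and arrival times (0, 2, 6) from the question.
def srtf_context_switches(arrivals, bursts):
    n = len(arrivals)
    remaining = list(bursts)
    time, running, switches = 0, None, 0
    while any(r > 0 for r in remaining):
        # pick the arrived, unfinished process with the shortest remaining time
        ready = [i for i in range(n) if arrivals[i] <= time and remaining[i] > 0]
        nxt = min(ready, key=lambda j: remaining[j])
        if running is not None and nxt != running:
            switches += 1          # the switch at time 0 is never counted
        running = nxt
        remaining[nxt] -= 1        # run for one time unit
        time += 1
    return switches                # loop ends before any final switch

print(srtf_context_switches([0, 2, 6], [10, 20, 30]))  # -> 2
```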

Consider a set of 5 processes whose arrival time, CPU time needed and the priority are given below:

Note: Smaller the number, higher the priority.
If the CPU scheduling policy is FCFS, the average waiting time will be 
  • a)
    12.8 ms
  • b)
    8 ms
  • c)
    16 ms
  • d)
    None of the above
Correct answer is option 'A'. Can you explain this answer?

Crack Gate answered
Under FCFS the processes run in order P1, P2, P3, P4, P5, so:
For P1, waiting time = 0; its processing time is 10.
For P2, waiting time = (completion time of P1) - (arrival time of P2) = 10 - 0 = 10.
For P3, waiting time = (processing time of P1 + P2) - (arrival time of P3) = (10 + 5) - 2 = 13.
Similarly, for P4, waiting time = 18 - 5 = 13.
Similarly, for P5, waiting time = 38 - 10 = 28.
So the average waiting time = (0 + 10 + 13 + 13 + 28) / 5
= 12.8 ms
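Since the question's table is not reproduced here, the sketch below uses arrival and burst values inferred from the arithmetic above (arrivals 0, 0, 2, 5, 10 and bursts 10, 5, 3, 20; P5's burst does not affect any waiting time, so it is set arbitrarily):

```python
# FCFS waiting-time calculator; processes are assumed listed in arrival order.
def fcfs_waiting_times(arrivals, bursts):
    time, waits = 0, []
    for arrival, burst in zip(arrivals, bursts):   # FCFS: run in list order
        start = max(time, arrival)
        waits.append(start - arrival)              # time spent in ready queue
        time = start + burst
    return waits

waits = fcfs_waiting_times([0, 0, 2, 5, 10], [10, 5, 3, 20, 2])
print(waits, sum(waits) / len(waits))  # -> [0, 10, 13, 13, 28] 12.8
```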

A system has 6 identical resources and N processes competing for them. Each process can request at most 2 resources. Which one of the following values of N could lead to a deadlock?
  • a)
    1
  • b)
    2
  • c)
    3
  • d)
    4
Correct answer is option 'D'. Can you explain this answer?

Ravi Singh answered
This appears to be a flawed GATE question. The worst case is shown below, and even there no deadlock arises, because P1 and P2 hold 2 resources each and can run to completion:
P1 --> 2
P2 --> 2
P3 --> 1
P4 --> 1
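The standard sufficient condition for deadlock freedom with a single resource type backs this up: deadlock is impossible whenever the sum of (maximum demand - 1) over all processes, plus one, does not exceed the total number of resources. A quick check (function name is illustrative):

```python
# Deadlock is possible only if every process could hold all-but-one of its
# maximum demand with no spare resource left: sum(d - 1) + 1 > total.
def deadlock_possible(total, demands):
    return sum(d - 1 for d in demands) + 1 > total

for n in (1, 2, 3, 4):
    print(n, deadlock_possible(6, [2] * n))  # every offered option is deadlock-free
```

With 6 resources and demand 2 per process, deadlock first becomes possible at N = 6, which is not among the options.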

Which of the following is FALSE about SJF (Shortest Job First Scheduling)?
S1: It causes minimum average waiting time
S2: It can cause starvation
  • a)
    Only S1
  • b)
    Only S2
  • c)
    Both S1 and S2
  • d)
    Neither S1 nor S2
Correct answer is option 'D'. Can you explain this answer?

Abhay Ghoshal answered
Shortest Job First Scheduling (SJF)
Shortest Job First Scheduling (SJF) is a scheduling algorithm that selects the process with the smallest execution time to execute next. Here, we will discuss the statements given in the question and determine which one is false.

Statement Analysis:

S1: It causes minimum average waiting time
SJF scheduling indeed minimizes the average waiting time because it prioritizes shorter jobs over longer ones. By executing the shortest job first, the waiting time for all processes is reduced, leading to a lower average waiting time. Therefore, this statement is true.

S2: It can cause starvation
Starvation occurs when a process is unable to progress because it is constantly being delayed by other processes with shorter execution times. In SJF scheduling, if shorter jobs constantly keep arriving, longer jobs might suffer from starvation as they may not get a chance to execute. Hence, this statement is also true.

Conclusion:
Based on the analysis of both statements:
- Statement S1: True (SJF minimizes average waiting time)
- Statement S2: True (SJF can cause starvation)

Final Verdict:
Since both statements are true, neither S1 nor S2 is false. Therefore the correct answer is option 'D' - Neither S1 nor S2.
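Statement S1 can be illustrated with a small sketch using made-up burst times (all jobs arriving at time 0): running the same jobs in SJF order yields a lower average waiting time than an arbitrary FCFS order.

```python
# With all jobs available at time 0, each job waits for the total burst time
# of every job scheduled before it.
def average_waiting(bursts):
    time, total = 0, 0
    for b in bursts:
        total += time
        time += b
    return total / len(bursts)

bursts = [8, 3, 10, 1]                  # illustrative values, not from the question
print(average_waiting(bursts))          # FCFS order -> 10.0
print(average_waiting(sorted(bursts)))  # SJF order  -> 4.25
```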

Which of the following statements are true?
I. Shortest remaining time first scheduling may cause starvation
II. Preemptive scheduling may cause starvation
III. Round robin is better than FCFS in terms of response time
  • a)
    I only
  • b)
    I and III only
  • c)
    II and III only
  • d)
    I, II and III
Correct answer is option 'D'. Can you explain this answer?

I) Shortest remaining time first scheduling is the pre-emptive version of shortest job first scheduling. In SRTF, the job with the shortest remaining CPU burst is scheduled first. Because of this, it may cause starvation: shorter processes may keep arriving, and a process with a long CPU burst may never get the CPU.
II) Pre-emptive means a process can be stopped before completing its execution, and can later resume from where it was stopped. Suppose process P1 is executing on the CPU and a higher-priority process P2 arrives in the ready queue: P1 is pre-empted and P2 is brought onto the CPU. If processes arriving in the ready queue always have higher priority than P1, then P1 is pre-empted every time and may suffer from starvation.
III) Round Robin gives better response time than FCFS. Under FCFS a running process executes for its complete burst time, whereas under Round Robin it executes only for one time quantum, so every process gets the CPU within a bounded time. Round Robin therefore improves response time. So I, II and III are all true, and option (D) is the correct answer.
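Claim (III) can be sketched by comparing when each process first gets the CPU under the two policies (burst times below are made up for illustration, all arriving at time 0):

```python
# First-response time = time from arrival (here 0) until the first CPU access.
def fcfs_response(bursts):
    time, resp = 0, []
    for b in bursts:
        resp.append(time)          # each job waits for all earlier full bursts
        time += b
    return resp

def rr_first_response(bursts, quantum):
    time, resp = 0, []
    for b in bursts:
        resp.append(time)          # each job waits at most one quantum per
        time += min(b, quantum)    # earlier job before its first slice
    return resp

bursts = [24, 3, 3]
print(fcfs_response(bursts))         # -> [0, 24, 27]
print(rr_first_response(bursts, 4))  # -> [0, 4, 7]
```

A long first job makes later jobs wait 24 and 27 units under FCFS, but only 4 and 7 under Round Robin with quantum 4.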

An operating system uses shortest remaining time first scheduling algorithm for pre-emptive scheduling of processes. Consider the following set of processes with their arrival times and CPU burst times (in milliseconds):
Process   Arrival Time    Burst Time
  P1           0                    12
  P2           2                     4
  P3           3                     6
  P4           8                     5
The average waiting time (in milliseconds) of the processes is _________.
  • a)
    4.5
  • b)
    5.0
  • c)
    5.5
  • d)
    6.5
Correct answer is option 'C'. Can you explain this answer?

Ravi Singh answered
Process   Arrival Time   Burst Time
  P1           0                   12
  P2           2                    4
  P3           3                    6
  P4           8                    5
Burst time is the total CPU time a process needs for its complete execution. Waiting time is the time a process spends in the ready queue waiting for its turn on the CPU. The Gantt chart for the above processes is:
P1 - 0 to 2 milliseconds
P2 - 2 to 6 milliseconds
P3 - 6 to 12 milliseconds
P4 - 12 to 17 milliseconds
P1 - 17 to 27 milliseconds 
Process P1 arrived at time 0, so the CPU started executing it. After 2 units of time P2 arrived; its burst time of 4 units was less than P1's remaining 10 units, so the CPU pre-empted P1 and started executing P2 (pre-emptive, shortest remaining time first). Having the largest remaining time, P1 was executed by the CPU last.
Now calculating the waiting time of each process:
P1 -> 17 - 2 = 15
P2 -> 0
P3 -> 6 - 3 = 3
P4 -> 12 - 8 = 4 
Hence total waiting time of all the processes is 
= 15+0+3+4=22
Total no of processes = 4
Average waiting time = 22 / 4 = 5.5
Hence C is the answer. 
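The whole computation can be checked with a minimal SRTF simulator over the table above; waiting time is derived as completion - arrival - burst for each process:

```python
# One-time-unit-step SRTF simulation returning each process's waiting time.
def srtf_waiting_times(arrivals, bursts):
    n = len(arrivals)
    remaining = list(bursts)
    completion = [0] * n
    time = 0
    while any(r > 0 for r in remaining):
        ready = [i for i in range(n) if arrivals[i] <= time and remaining[i] > 0]
        if not ready:
            time += 1
            continue
        i = min(ready, key=lambda j: remaining[j])   # shortest remaining time
        remaining[i] -= 1
        time += 1
        if remaining[i] == 0:
            completion[i] = time
    return [completion[i] - arrivals[i] - bursts[i] for i in range(n)]

waits = srtf_waiting_times([0, 2, 3, 8], [12, 4, 6, 5])
print(waits, sum(waits) / len(waits))  # -> [15, 0, 3, 4] 5.5
```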

Which of the following is not true with respect to deadlock prevention and deadlock avoidance schemes?
  • a)
    In deadlock prevention, the request for resources is always granted if resulting state is safe
  • b)
    In deadlock avoidance, the request for resources is always granted, if the resulting state is safe
  • c)
    Deadlock avoidance requires knowledge of resource requirements a priori
  • d)
    Deadlock prevention is more restrictive than deadlock avoidance
Correct answer is option 'A'. Can you explain this answer?

Abhijeet Mehta answered
Deadlock Prevention and Deadlock Avoidance Schemes

Deadlock prevention and deadlock avoidance are two different strategies for handling deadlock in operating systems. While both aim to keep the system free of deadlock, they have different approaches and requirements. Let's examine each option and determine which one is not true.

a) In deadlock prevention, the request for resources is always granted if the resulting state is safe

This statement is not true. Deadlock prevention works by structurally negating at least one of the four necessary conditions for deadlock (for example, by forbidding hold-and-wait or by imposing an ordering on resource acquisition). Under such a scheme, a request may be denied even when granting it would leave the system in a safe state.

b) In deadlock avoidance, the request for resources is always granted if the resulting state is safe

This statement is true. Deadlock avoidance, commonly implemented with the Banker's algorithm, uses a careful resource allocation strategy to keep the system in a safe state. If a resource request can be granted without the possibility of deadlock, it is allowed. This approach requires knowledge of resource requirements a priori and performs a safety check before granting each request.

c) Deadlock avoidance requires knowledge of resource requirements a priori

This statement is true. Deadlock avoidance requires the system to know in advance the maximum number of resources each process may need during its execution. This information is used to determine whether a resource request can be granted without risking deadlock; algorithms such as the Banker's algorithm allocate resources accordingly.

d) Deadlock prevention is more restrictive than deadlock avoidance

This statement is true. Deadlock prevention denies whole classes of resource requests up front, which can be overly restrictive and may result in resource underutilization. Deadlock avoidance, on the other hand, grants any request that keeps the system in a safe state, resulting in better resource utilization.

In conclusion, option 'a' is not true, because in deadlock prevention a request for resources is not always granted even if the resulting state is safe.

Concurrent processes are processes that
  • a)
    Do not overlap in time
  • b)
    Overlap in time
  • c)
    Are executed by a processor at the same time
  • d)
    None of the above
Correct answer is option 'B'. Can you explain this answer?

Roshni Kumar answered
Concurrent processes are processes that share the CPU and memory, and whose executions overlap in time. At any instant the CPU runs only one process, but it can switch to another process before the current one completes.

Processes P1 and P2 use critical_flag in the following routine to achieve mutual exclusion. Assume that critical_flag is initialized to FALSE in the main program.
get_exclusive_access() {
    if (critical_flag == FALSE) {
        critical_flag = TRUE;
        critical_region();
        critical_flag = FALSE;
    }
}
Consider the following statements.
i. It is possible for both P1 and P2 to access critical_region concurrently.
ii. This may lead to a deadlock.
Which of the following holds?
  • a)
    (i) is false and (ii) is true
  • b)
    Both (i) and (ii) are false
  • c)
    (i) is true and (ii) is false
  • d)
    Both (i) and (ii) are true
Correct answer is option 'C'. Can you explain this answer?

Sanya Agarwal answered
Say P1 starts first and executes statement 1 (the test of critical_flag); before it executes statement 2 (setting the flag), the system context-switches to P2. P2 also passes the if test, since the flag is still FALSE. So now both processes are in the critical section, and (i) is true. (ii) is false: the flag can never remain TRUE with no process inside the if clause, because any process that enters the critical section sets the flag back to FALSE on the way out. So there is no deadlock.
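The interleaving described above can be replayed deterministically by modelling each process as a Python generator that is "context-switched" exactly between the flag test and the flag assignment (a sketch of the scenario, not the original pseudo-C):

```python
# Deterministic replay of the race: both processes pass the flag test before
# either one sets the flag, so both get into the critical region.
critical_flag = False
in_critical = []

def process(name):
    global critical_flag
    if not critical_flag:          # statement 1: test the flag
        yield                      # <-- forced context switch happens here
        critical_flag = True       # statement 2: too late, both passed the test
        in_critical.append(name)   # inside critical_region()
        critical_flag = False

p1, p2 = process("P1"), process("P2")
next(p1)          # P1 passes the flag test, then is switched out
next(p2)          # P2 also passes the test, since the flag is still False
for p in (p1, p2):
    try:
        next(p)   # both complete their critical sections
    except StopIteration:
        pass

print(in_critical)  # -> ['P1', 'P2']: both entered the critical region
```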

An operating system uses Shortest Remaining Time first (SRT) process scheduling algorithm. Consider the arrival times and execution times for the following processes:
Process                   Execution time                      Arrival time
P1                              20                                              0
P2                              25                                              15
P3                              10                                              30
P4                              15                                              45
Q. What is the total waiting time for process P2?
  • a)
    5
  • b)
    15
  • c)
    40
  • d)
    55
Correct answer is option 'B'. Can you explain this answer?

Nidhi Desai answered
Shortest remaining time, also known as shortest remaining time first (SRTF), is a scheduling method that is a pre-emptive version of shortest job next scheduling. In this scheduling algorithm, the process with the smallest amount of time remaining until completion is selected to execute. Since the currently executing process is the one with the shortest amount of time remaining by definition, and since that time should only reduce as execution progresses, processes will always run until they complete or a new process is added that requires a smaller amount of time. The Gantt chart of execution of processes: 
At time 0, P1 is the only process; it runs for 15 time units. At time 15, P2 arrives, but P1 has the shortest remaining time, so P1 continues for 5 more time units and completes at time 20. At time 20, P2 is the only process, so it runs for 10 time units. At time 30, P3 arrives and becomes the shortest-remaining-time process, so it runs for 10 time units. At time 40, P2 runs again as the only process, for 5 time units. At time 45, P4 arrives, but P2 has the shortest remaining time, so P2 continues for 10 more time units. P2 completes its execution at time 55.
Turnaround time is the total time between submission of a process and its completion. Waiting time is the time a process spends in the ready queue, i.e. the difference between turnaround time and burst time. Total turnaround time for P2 = completion time - arrival time = 55 - 15 = 40. Total waiting time for P2 = turnaround time - burst time = 40 - 25 = 15.

Which of the following scheduling algorithms is non-preemptive?
  • a)
    Round Robin
  • b)
    First-In First-Out
  • c)
    Multilevel Queue Scheduling
  • d)
    Multilevel Queue Scheduling with Feedback
Correct answer is option 'B'. Can you explain this answer?

Ravi Singh answered
Round Robin - pre-emption takes place when the time quantum expires.
First In First Out - no pre-emption; a process, once started, completes before another process takes over.
Multilevel Queue Scheduling - pre-emption takes place when a process of higher priority arrives.
Multilevel Queue Scheduling with Feedback - pre-emption takes place when a process of higher priority arrives, or when the quantum of a high-priority queue expires and the process must be moved to a lower-priority queue.
So, B is the correct choice.

For the processes listed in the following table, which of the following scheduling schemes will give the lowest average turnaround time?
Process    Arrival Time    Processing Time
  A                  0                      3
  B                 1                       6
  C                 4                       4
  D                 6                       2
  • a)
    First Come First Serve
  • b)
    Non-preemptive Shortest Job First
  • c)
    Shortest Remaining Time
  • d)
    Round Robin with Quantum value two
Correct answer is option 'C'. Can you explain this answer?

Sanya Agarwal answered
Turnaround time is the total time between the submission of a process and its completion: Turnaround Time = Completion Time - Arrival Time. The resulting schedules are:
FCFS (First Come First Serve): A, B, C, D
Non-preemptive SJF (Shortest Job First): A, B, D, C
SRT (Shortest Remaining Time): A(3), B(1), C(4), D(2), B(5)
RR (Round Robin, quantum 2): A(2), B(2), A(1), C(2), B(2), D(2), C(2), B(2)
SRT gives the lowest average turnaround time: 25/4 = 6.25, versus 6.75 for non-preemptive SJF, 7.25 for FCFS and 8.25 for RR.

Consider a uniprocessor system executing three tasks T1, T2 and T3, each of which is composed of an infinite sequence of jobs (or instances) which arrive periodically at intervals of 3, 7 and 20 milliseconds, respectively. The priority of each task is the inverse of its period and the available tasks are scheduled in order of priority, with the highest priority task scheduled first. Each instance of T1, T2 and T3 requires an execution time of 1, 2 and 4 milliseconds, respectively. Given that all tasks initially arrive at the beginning of the 1st milliseconds and task preemptions are allowed, the first instance of T3 completes its execution at the end of ______________ milliseconds.
  • a)
    5
  • b)
    10
  • c)
    12
  • d)
    15
Correct answer is option 'C'. Can you explain this answer?

Periods of T1, T2 and T3 are 3 ms, 7 ms and 20 ms. Since priority is the inverse of the period, T1 has the highest priority, then T2, and finally T3. Every instance of T1 requires 1 ms of execution, every instance of T2 requires 2 ms, and every instance of T3 requires 4 ms. Initially T1, T2 and T3 are all ready, so T1 gets the processor first. The second instances of T1, T2 and T3 arrive at times 3, 7 and 20 respectively; the third instances arrive at times 6, 14 and 40 respectively.
Time Interval   Task
 0-1            T1
 1-2            T2
 2-3            T2
 3-4            T1   [second instance of T1 arrives]
 4-5            T3
 5-6            T3
 6-7            T1   [third instance of T1 arrives; T3 is preempted]
 7-8            T2   [second instance of T2 arrives]
 8-9            T2
 9-10           T1   [fourth instance of T1 arrives]
 10-11          T3
 11-12          T3   [first instance of T3 completes]
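The table above can be reproduced with a millisecond-step simulation of this fixed-priority (rate-monotonic) schedule; the sketch below assumes each newly arriving instance simply resets its task's remaining execution time:

```python
# Fixed priorities T1 > T2 > T3 (shorter period = higher priority),
# pre-emption allowed at every millisecond boundary.
periods = [3, 7, 20]
exec_times = [1, 2, 4]
remaining = [0, 0, 0]
t3_first_completion = None

for t in range(40):
    for i in range(3):
        if t % periods[i] == 0:          # a new instance of task i arrives
            remaining[i] = exec_times[i]
    for i in range(3):                   # run the highest-priority ready task
        if remaining[i] > 0:
            remaining[i] -= 1
            if i == 2 and remaining[2] == 0 and t3_first_completion is None:
                t3_first_completion = t + 1
            break

print(t3_first_completion)  # -> 12
```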

In Round Robin CPU scheduling, as the time quantum is increased, the average turn around time
  • a)
    Increases
  • b)
    Decreases
  • c)
    Remains constant
  • d)
    Varies irregularly
Correct answer is option 'D'. Can you explain this answer?

Round Robin scheduling degenerates when the time slice is very small or very large. When it is very large, Round Robin behaves like FCFS, which has no fixed trend in turnaround time: if the processes with small bursts arrive earlier, the average turnaround time is lower; otherwise it is higher. Hence the average turnaround time varies irregularly with the time quantum.

Which module gives control of the CPU to the process selected by the short-term scheduler?
  • a)
    Dispatcher
  • b)
    Interrupt
  • c)
    Scheduler
  • d)
    Threading
Correct answer is option 'A'. Can you explain this answer?

Shanaya Chopra answered
Dispatcher module in Operating System

The dispatcher module is a component of the operating system that is responsible for giving control of the CPU to the process selected by the short-term scheduler. It is a critical component of the operating system that manages the execution of processes and ensures that the CPU is utilized efficiently.

The dispatcher module performs the following functions:

1. Context Switching: It saves the context of the currently executing process and restores the context of the selected process.

2. Process State Transition: It changes the state of the process from running to ready, waiting, or terminated based on the events.

3. Resource Allocation: It allocates system resources like memory, I/O devices, etc. to the selected process.

4. Process Synchronization: It ensures that the processes do not interfere with each other and use the shared resources efficiently.

5. Interrupt Handling: It handles the interrupts generated by the hardware devices and the software.

The dispatcher module works closely with the scheduler modules of the operating system to ensure that the processes are executed efficiently. The scheduler module selects the process to be executed from the ready queue, and the dispatcher module gives control of the CPU to the selected process.

In summary, the dispatcher module is a critical component of the operating system that manages the execution of processes and ensures that the CPU is utilized efficiently. It works closely with the scheduler module to select the process to be executed and gives control of the CPU to the selected process.

Consider a set of n tasks with known runtimes r1, r2, .... rn to be run on a uniprocessor machine. Which of the following processor scheduling algorithms will result in the maximum throughput?
  • a)
    Round-Robin
  • b)
    Shortest-Job-First
  • c)
    Highest-Response-Ratio-Next
  • d)
    First-Come-First-Served
Correct answer is option 'B'. Can you explain this answer?

Throughput is the total number of tasks completed per unit time.
Shortest Job First is a scheduling policy that selects the waiting process with the smallest execution time to execute next.
Thus, under Shortest Job First, the shortest jobs finish earliest, so the largest number of tasks is completed in any given interval of time.
Thus, option (B) is correct.

What is the minimum number of resources required to ensure that deadlock will never occur, if there are currently three processes P1, P2 and P3 running in a system whose maximum demand for the resources of same type are 3, 4, and 5 respectively.
  • a)
    3
  • b)
    7
  • c)
    9
  • d)
    10
Correct answer is option 'D'. Can you explain this answer?

Ravi Singh answered
Let the resources needed by P1 be R1 = 3, by P2 be R2 = 4, and by P3 be R3 = 5. The minimum number of resources that guarantees deadlock never occurs is (R1 - 1) + (R2 - 1) + (R3 - 1) + 1 = (3 - 1) + (4 - 1) + (5 - 1) + 1 = 10. Option (D) is correct.
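The formula has a direct reading: give every process all but one of its maximum demand, then one extra resource lets some process reach its full demand and finish. As a tiny sketch (function name is my own):

```python
# Minimum resources so that some process can always obtain its full demand.
def min_resources_for_no_deadlock(demands):
    return sum(d - 1 for d in demands) + 1

print(min_resources_for_no_deadlock([3, 4, 5]))  # -> 10
```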

A counting semaphore was initialized to 10. Then 6 P (wait) operations and 4 V (signal) operations were completed on this semaphore. The resulting value of the semaphore is
  • a)
    0
  • b)
    8
  • c)
    10
  • d)
    12
Correct answer is option 'B'. Can you explain this answer?

Yash Patel answered
Initially the semaphore value is 10. Each P (wait) operation decreases the value by one, so after 6 P operations the value is 10 - 6 = 4. Each V (signal) operation increases the value by one, so after 4 V operations the value is 4 + 4 = 8. Option (B) is correct.
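The arithmetic can be replayed with a toy counting semaphore (just the counter, with no blocking logic, which the question does not exercise):

```python
# Minimal counting semaphore: P decrements, V increments.
class CountingSemaphore:
    def __init__(self, value):
        self.value = value
    def P(self):             # wait
        self.value -= 1
    def V(self):             # signal
        self.value += 1

s = CountingSemaphore(10)
for _ in range(6):
    s.P()
for _ in range(4):
    s.V()
print(s.value)  # -> 8
```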

A critical section is a program segment
  • a)
    which should run in a certain specified amount of time
  • b)
    which avoids deadlocks
  • c)
    where shared resources are accessed
  • d)
    which must be enclosed by a pair of semaphore operations, P and V
Correct answer is option 'C'. Can you explain this answer?

Anshu Mehta answered
A critical section is a specific section of a program where shared resources are accessed and modified by multiple threads or processes. It is important to ensure that only one thread or process can access the critical section at a time to avoid data inconsistencies or conflicts.

Shared Resources:
In a multi-threaded or multi-process environment, multiple threads or processes may need to access certain resources, such as variables, files, or devices. These resources are typically shared among the threads or processes. However, when multiple threads or processes try to access and modify the shared resources simultaneously, it can lead to data inconsistencies or conflicts.

Need for Synchronization:
To avoid such issues, synchronization mechanisms are used to control the access to shared resources. A critical section is a part of the program where shared resources are accessed, and it needs to be synchronized to ensure that only one thread or process can access it at a time.

Avoiding Data Inconsistencies:
By ensuring that only one thread or process can access the critical section at a time, we can prevent data inconsistencies. For example, if two threads try to increment a shared variable simultaneously, they may both read the variable's value, increment it, and write it back, resulting in a lost update. By synchronizing the critical section, we can ensure that only one thread increments the variable at a time, avoiding data inconsistencies.

Using Semaphores:
One way to implement synchronization in a critical section is by using semaphores. Semaphores are variables that can be used for signaling and controlling access to shared resources. In many cases, a pair of semaphore operations, usually referred to as P and V, are used to control access to the critical section.

The P operation, also known as wait or acquire, is used to acquire the semaphore and enter the critical section. If the semaphore value is greater than zero, it is decremented, and the thread or process can proceed to access the critical section. If the semaphore value is zero, the thread or process is blocked until the semaphore becomes available.

The V operation, also known as signal or release, is used to release the semaphore and exit the critical section. It increments the semaphore value, allowing another thread or process to acquire it and enter the critical section.

Conclusion:
In summary, a critical section is a program segment where shared resources are accessed and modified. It is important to synchronize the access to the critical section to avoid data inconsistencies or conflicts. Semaphore operations, such as P and V, can be used to control access to the critical section and ensure that only one thread or process can access it at a time.
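As a concrete sketch of the P/V pattern using Python's threading module (the counter, worker function and thread counts here are illustrative, not from the question): without the acquire/release pair, the read-increment-write on the shared variable could interleave and lose updates.

```python
import threading

counter = 0
sem = threading.Semaphore(1)   # binary semaphore: one thread in the CS at a time

def worker(iterations):
    global counter
    for _ in range(iterations):
        sem.acquire()          # P: enter the critical section
        counter += 1           # access to the shared resource
        sem.release()          # V: leave the critical section

threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # -> 40000: no updates were lost
```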

Consider the following proposed solution for the critical section problem. There are n processes: P0 ...Pn−1. In the code, function pmax returns an integer not smaller than any of its arguments. For all i, t[i] is initialized to zero.
Which one of the following is TRUE about the above solution?
  • a)
    At most one process can be in the critical section at any time
  • b)
    The bounded wait condition is satisfied
  • c)
    The progress condition is satisfied
  • d)
    It cannot cause a deadlock
Correct answer is option 'A'. Can you explain this answer?

Yash Patel answered
Mutual exclusion is satisfied:
Every process j that started before process i obtains a value t[j] smaller than t[i], since pmax() returns an integer not smaller than any of its arguments. If such a process j still has a positive t[j], process i waits at the while loop for as long as the condition t[j] > 0 && t[j] <= t[i] holds. When process j leaves its critical section it sets t[j] = 0, and the for loop moves on to the next process. So by the time process i reaches its critical section, none of the processes that started earlier is still inside its own critical section. This ensures that only one process executes its critical section at a time.
Deadlock and progress are not satisfied:
Because of the condition in while (t[j] != 0 && t[j] <= t[i]);, a deadlock is possible when two processes obtain equal values (t[j] == t[i]): each then waits for the other indefinitely. Since a deadlock is possible, the progress condition also fails, as no process can then make progress.
Bounded waiting is also not satisfied:
It fails for the same reason as deadlock: if t[j] == t[i] can occur, a process may wait forever, i.e. starvation is possible.

Consider an arbitrary set of CPU-bound processes with unequal CPU burst lengths submitted at the same time to a computer system. Which one of the following process scheduling algorithms would minimize the average waiting time in the ready queue?
  • a)
    Shortest remaining time first
  • b)
    Round-robin with time quantum less than the shortest CPU burst
  • c)
    Uniform random
  • d)
    Highest priority first with priority proportional to CPU burst length
Correct answer is option 'A'. Can you explain this answer?

Sanya Agarwal answered
Turnaround time is the total time between a process's submission and its completion, and waiting time is the time for which a process is ready to run but not executing on the CPU. Among CPU scheduling algorithms, shortest job first is optimal: it gives minimum turnaround time, minimum average waiting time and high throughput. Shortest remaining time first is the pre-emptive version of shortest job first. In general SRTF may lead to starvation, because if short processes keep arriving, the currently running long process keeps getting pre-empted; but here all processes arrive at the same time, so starvation cannot occur. So the answer is Shortest remaining time first, which is option (A).

In which of the following scheduling criteria, context switching will never take place ?
  • a)
    ROUND ROBIN
  • b)
    Preemptive SJF
  • c)
    Non-preemptive SJF
  • d)
    Preemptive priority
Correct answer is option 'C'. Can you explain this answer?

Sanya Agarwal answered
In non-preemptive algorithms, context switching does not take place while a process runs, because the CPU cannot be switched to another process until the current one completes. So, option (C) is correct.

In a system using single processor, a new process arrives at the rate of six processes per minute and each such process requires seven seconds of service time. What is the CPU utilization?
  • a)
    70%
  • b)
    30%
  • c)
    60%
  • d)
    64%
Correct answer is option 'A'. Can you explain this answer?

Sanya Agarwal answered
Number of processes per minute = 6
Burst time of each process = 7 s
CPU busy time within a minute = 6 × 7 = 42 s
% CPU utilization = (useful time / total time) × 100
= (42/60) × 100
= 70%
Option (A) is correct.
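The same arithmetic as a minimal sketch:

```python
# Sketch of the calculation above: 6 arrivals per minute,
# each process needing 7 seconds of CPU service.
arrivals_per_min = 6
service_time_s = 7

busy_s = arrivals_per_min * service_time_s     # 42 s of work per minute
utilization = busy_s * 100 / 60                # percent of each minute busy

print(utilization)  # 70.0
```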

A system shares 9 tape drives. The current allocation and maximum requirement of tape drives for 4 processes are shown below:
Process   Allocated   Maximum
P1        3           9
P2        1           6
P3        3           5
P4        0           10
Which of the following best describes the current state of the system?
  • a)
    Safe, Deadlocked
  • b)
    Safe, Not Deadlocked
  • c)
    Not Safe, Deadlocked
  • d)
    Not Safe, Not Deadlocked
Correct answer is option 'C'. Can you explain this answer?

Yash Patel answered
There are 9 tape drives in total and 3 + 1 + 3 + 0 = 7 are allocated, so 9 − 7 = 2 drives are available.
With 2 drives available we can satisfy only P3, which needs 2 more; when P3 finishes, it releases its drives and 5 become available. Next we can satisfy only P2, which needs 5 more; when it finishes, 6 drives are available. Then P1, which needs 6 more, can run, after which all 9 drives are available. But P4's remaining need is 10 drives, which exceeds the 9 drives in the entire system, so P4 can never be satisfied. The state is therefore not safe, and P4 is deadlocked.
Option (C) is correct.
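A minimal sketch of this safety check for a single resource type, with the allocation and remaining-need figures taken from the explanation above (a greedy Banker's-style pass):

```python
def find_safe_sequence(total, alloc, need):
    """Greedy single-resource safety check; returns a safe order or None."""
    avail = total - sum(alloc)
    done, order = [False] * len(alloc), []
    progressed = True
    while progressed:
        progressed = False
        for i in range(len(alloc)):
            if not done[i] and need[i] <= avail:
                avail += alloc[i]          # process finishes, releases its drives
                done[i], progressed = True, True
                order.append(f"P{i + 1}")
    return order if all(done) else None

alloc = [3, 1, 3, 0]          # drives currently held by P1..P4
need  = [6, 5, 2, 10]         # drives each process still needs
print(find_safe_sequence(9, alloc, need))  # None: no safe sequence exists
```

P3, P2 and P1 can be satisfied in turn, but P4's need of 10 can never be met with 9 drives, so the check reports no safe sequence.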

Which of the following is not a necessary condition for deadlock?
  • a)
    Mutual exclusion
  • b)
    Reentrancy
  • c)
    Hold and wait
  • d)
    No pre-emption
Correct answer is option 'B'. Can you explain this answer?

Yash Patel answered
There are 4 necessary conditions for Deadlock to occur:
1) Mutual Exclusion
2) No pre-emption
3) Hold and Wait
4) Circular wait
Option (B) is correct.

In a multiprogramming environment
  • a)
    The processor executes more than one process at a time
  • b)
    The programs are developed by more than one person
  • c)
    More than one process resides in the memory
  • d)
    A single user can execute many programs at the same time
Correct answer is option 'C'. Can you explain this answer?

Naina Sharma answered
Multiprogramming means the processor keeps multiple processes in progress by continuously switching between them; therefore, more than one process must reside in memory. A single processor, however, cannot execute more than one process at the same instant.

A counting semaphore was initialized to 10. Then 6 P (wait) operations and 4 V (signal) operations were completed on this semaphore. The resulting value of the semaphore is
  • a)
    0
  • b)
    8
  • c)
    10
  • d)
    12
Correct answer is option 'B'. Can you explain this answer?

Yash Patel answered
Initially the semaphore value is 10. Each P (wait) operation decreases the semaphore value by one, so after 6 P operations the value is 10 − 6 = 4. Each V (signal) operation increases the value by one, so after 4 V operations the value is 4 + 4 = 8. Option (B) is correct.
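The counting can be sketched as follows. Note that a real semaphore blocks a P operation when the value would go negative; this sketch only tracks the counter, which is all the question requires:

```python
class CountingSemaphore:
    """Counter-only sketch of a counting semaphore (no blocking)."""

    def __init__(self, value):
        self.value = value

    def P(self):           # wait: take one unit of the resource
        self.value -= 1

    def V(self):           # signal: return one unit
        self.value += 1

s = CountingSemaphore(10)
for _ in range(6):
    s.P()                  # 10 - 6 = 4
for _ in range(4):
    s.V()                  # 4 + 4 = 8
print(s.value)             # 8
```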

A total of 9 units of a resource type available, and given the safe state shown below, which of the following sequence will be a safe state?
  • a)
    (P4, P1, P3, P2)
  • b)
    (P4, P2, P1, P3)
  • c)
    (P4, P2, P3, P1)
  • d)
    (P3, P1, P2, P4)
Correct answer is option 'D'. Can you explain this answer?

Sanya Agarwal answered
Applying the Banker's Algorithm, the Need values (Max − Used) of the processes are:
Process   Used   Max   Need
P1        2      7     5
P2        1      6     5
P3        2      5     3
P4        1      4     3
Currently available resources = Total − Allocated = 9 − 6 = 3. If P4 is granted first, it releases at most 4 resources on completion, leaving only 4 available; if P1 or P2 is scheduled next, its remaining need of 5 cannot be met. That eliminates options (A), (B) and (C). In option (D), P3 (need 3) runs first and releases 5 resources, after which P1, P2 and P4 can each be satisfied in turn. Option (D) is correct.
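The elimination argument can be checked with a short sketch using the Used/Max values from the table (9 units in total):

```python
# Used and Max allocations for P1..P4, from the table above.
used = {"P1": 2, "P2": 1, "P3": 2, "P4": 1}
maxm = {"P1": 7, "P2": 6, "P3": 5, "P4": 4}

def is_safe_order(order, total=9):
    """True if each process's remaining need can be met in this order."""
    avail = total - sum(used.values())      # 9 - 6 = 3
    for p in order:
        if maxm[p] - used[p] > avail:       # remaining need cannot be satisfied
            return False
        avail += used[p]                    # p finishes, releases its units
    return True

for order in [("P4", "P1", "P3", "P2"), ("P4", "P2", "P1", "P3"),
              ("P4", "P2", "P3", "P1"), ("P3", "P1", "P2", "P4")]:
    print(order, is_safe_order(order))      # only the last order is safe
```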

Which of the following scheduling algorithms may cause starvation ? a. First-come-first-served b. Round Robin c. Priority d. Shortest process next e. Shortest remaining time first
  • a)
    a, c and e
  • b)
    c, d and e
  • c)
    b, d and e
  • d)
    b, c and d
Correct answer is option 'B'. Can you explain this answer?

Sanya Agarwal answered
  1. In First Come First Served (FCFS), if a process with a very large burst time arrives before other processes, the others must wait a long time, but each will eventually get its chance to execute, so FCFS does not cause starvation.
  2. In Round Robin there is a fixed time quantum and every process gets its turn, so there is no starvation.
  3. In priority-based scheduling, if higher-priority processes keep arriving, a low-priority process may starve.
  4. In Shortest Job First (SJF), if processes with short burst times keep arriving, a process with a large burst time may wait indefinitely and starve.
  5. In Shortest Remaining Time First (SRTF), the process with the shortest remaining burst time always executes first, so a process with a large burst time may starve.
So, option (B) is correct.

Consider three processes, all arriving at time zero, with total execution time of 10, 20 and 30 units, respectively. Each process spends the first 20% of execution time doing I/O, the next 70% of time doing computation, and the last 10% of time doing I/O again. The operating system uses a shortest remaining compute time first scheduling algorithm and schedules a new process either when the running process gets blocked on I/O or when the running process finishes its compute burst. Assume that all I/O operations can be overlapped as much as possible. For what percentage of time does the CPU remain idle?
  • a)
    0%
  • b)
    10.6%
  • c)
    30.0%
  • d)
    89.4%
Correct answer is option 'B'. Can you explain this answer?

Anirban Khanna answered
Let the three processes be p0, p1 and p2, with total execution times of 10, 20 and 30 respectively. p0 spends its first 2 time units in I/O, then 7 units of CPU time, and finally 1 unit in I/O. p1 spends 4 units in I/O, 14 units of CPU time, and 2 units in I/O. p2 spends 6 units in I/O, 21 units of CPU time, and 3 units in I/O.

CPU timeline:  idle (0-2) | p0 (2-9) | p1 (9-23) | p2 (23-44) | idle (44-47)

Total time spent = 47
Idle time = 2 + 3 = 5
Percentage of idle time = (5/47)*100 = 10.6 %
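The timeline arithmetic can be sketched as follows, with the phase durations (initial I/O, compute, final I/O) taken from the explanation above:

```python
# (initial I/O, compute, final I/O) for each process.
phases = {"p0": (2, 7, 1), "p1": (4, 14, 2), "p2": (6, 21, 3)}

cpu_busy = sum(c for _, c, _ in phases.values())   # 7 + 14 + 21 = 42
start_idle = phases["p0"][0]                       # 0..2: waiting for p0's first I/O
end_idle = phases["p2"][2]                         # 44..47: p2's final I/O
total = start_idle + cpu_busy + end_idle           # 47

idle_pct = (start_idle + end_idle) / total * 100
print(round(idle_pct, 1))                          # 10.6
```

All other I/O overlaps with computation, so only the initial wait for p0 and p2's final I/O leave the CPU idle.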

Three concurrent processes X, Y, and Z execute three different code segments that access and update certain shared variables. Process X executes the P operation (i.e., wait) on semaphores a, b and c; process Y executes the P operation on semaphores b, c and d; process Z executes the P operation on semaphores c, d, and a before entering the respective code segments. After completing the execution of its code segment, each process invokes the V operation (i.e., signal) on its three semaphores. All semaphores are binary semaphores initialized to one. Which one of the following represents a deadlockfree order of invoking the P operations by the processes?
  • a)
    X: P(a)P(b)P(c) Y:P(b)P(c)P(d) Z:P(c)P(d)P(a)
  • b)
    X: P(b)P(a)P(c) Y:P(b)P(c)P(d) Z:P(a)P(c)P(d)
  • c)
    X: P(b)P(a)P(c) Y:P(c)P(b)P(d) Z:P(a)P(c)P(d)
  • d)
    X: P(a)P(b)P(c) Y:P(c)P(b)P(d) Z:P(c)P(d)P(a)
Correct answer is option 'B'. Can you explain this answer?

Anshul Malik answered
Explanation:
- Initial Situation:
- Processes X, Y, and Z need to acquire semaphores a, b, c, and d in a deadlock-free manner to avoid any synchronization issues.
- All semaphores are initialized to one.
- Order of P operations:
- X: P(b)P(a)P(c)
- X first acquires semaphore b, then a, and finally c.
- Y: P(b)P(c)P(d)
- Y first acquires semaphore b, then c, and finally d.
- Z: P(a)P(c)P(d)
- Z first acquires semaphore a, then c, and finally d.
- Why option (B) is deadlock-free:
- In option (B), every process acquires its semaphores in an order consistent with a single global ordering b < a < c < d (X: b, a, c; Y: b, c, d; Z: a, c, d).
- When all processes acquire resources according to one global order, a circular wait can never form, so deadlock is impossible.
- In the other options, no such consistent ordering exists. For example, in option (A), X acquires a before c while Z acquires c before a; if X holds a waiting for c while Z holds c waiting for a, the processes deadlock.
- Conclusion:
- The order of invoking P operations in option (B) (X: P(b)P(a)P(c), Y: P(b)P(c)P(d), Z: P(a)P(c)P(d)) is a deadlock-free sequence for the processes.
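A deadlock-free ordering can be recognized by checking whether every process acquires its semaphores consistently with one global order; for option (B) the order b < a < c < d works (this ordering is inferred from the option, not stated in the question). A small sketch of that check:

```python
# Global acquisition order under which option (B) is consistent.
GLOBAL_ORDER = {"b": 0, "a": 1, "c": 2, "d": 3}

def respects_order(acquisitions):
    """True if the semaphores are acquired in non-decreasing global order."""
    ranks = [GLOBAL_ORDER[s] for s in acquisitions]
    return ranks == sorted(ranks)

option_b = {"X": ["b", "a", "c"], "Y": ["b", "c", "d"], "Z": ["a", "c", "d"]}
option_a = {"X": ["a", "b", "c"], "Y": ["b", "c", "d"], "Z": ["c", "d", "a"]}

print(all(respects_order(seq) for seq in option_b.values()))  # True
print(all(respects_order(seq) for seq in option_a.values()))  # False
```

Failing this particular order only shows option (A) is inconsistent with b < a < c < d; what actually breaks (A) is that X takes a before c while Z takes c before a, so no global order can exist and a circular wait is possible.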

Consider the following snapshot of a system running n processes. Process i is holding Xi instances of a resource R, 1 <= i <= n. currently, all instances of R are occupied. Further, for all i, process i has placed a request for an additional Yi instances while holding the Xi instances it already has. There are exactly two processes p and q such that Yp = Yq = 0. Which one of the following can serve as a necessary condition to guarantee that the system is not approaching a deadlock?
  • a)
    min (Xp, Xq) < max (Yk) where k != p and k != q
  • b)
    Xp + Xq >= min (Yk) where k != p and k != q
  • c)
    max (Xp, Xq) > 1
  • d)
    min (Xp, Xq) > 1
Correct answer is option 'B'. Can you explain this answer?

Ravi Singh answered
Deadlock is a condition in which two or more processes are each waiting for another to release a resource. A necessary condition to guarantee there is no deadlock is one without which deadlock becomes possible. Processes p and q have no additional requirements (Yp = Yq = 0), so both can finish and release Xp + Xq resources without requesting anything further. The system can then make progress only if the released resources are enough for at least one waiting process, i.e. Xp + Xq ≥ min(Yk) where k ≠ p and k ≠ q, which is condition (B). Note: option (B) only ensures that the system can proceed from the current state; it does not guarantee that there won't be a deadlock before all processes are finished.

Consider a system with 3 processes that share 4 instances of the same resource type. Each process can request a maximum of K instances. Resource instances can be requested and released only one at a time. The largest value of K that will always avoid deadlock is _______ .
Note: this was a numerical-type question.
  • a)
    1
  • b)
    2
  • c)
    3
  • d)
    4
Correct answer is option 'B'. Can you explain this answer?

Sanya Agarwal answered
Given: number of processes P = 3, number of resource instances R = 4. The deadlock-free condition is:
R ≥ P(K − 1) + 1
where R is the total number of resource instances, P is the number of processes, and K is the maximum need of each process.
4 ≥ 3(K − 1) + 1
3 ≥ 3(K − 1)
1 ≥ K − 1
K ≤ 2
Therefore, the largest value of K that will always avoid deadlock is 2. Option (B) is correct.
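The formula can be sketched as a small helper that finds the largest per-process maximum K keeping the state deadlock-free: even if every process holds K − 1 instances, one instance must remain so some process can finish.

```python
def largest_safe_max(processes, resources):
    """Largest max-need K with resources >= processes * (K - 1) + 1."""
    k = 0
    while resources >= processes * ((k + 1) - 1) + 1:
        k += 1
    return k

print(largest_safe_max(3, 4))   # 2, matching the answer above
```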

Which of the following process scheduling algorithm may lead to starvation
  • a)
    FIFO
  • b)
    Round Robin
  • c)
    Shortest Job Next
  • d)
    None of the above
Correct answer is option 'C'. Can you explain this answer?

Sanya Agarwal answered
Shortest job next may lead to process starvation for processes which will require a long time to complete if short processes are continually added.

Which of the following is FALSE about SJF (Shortest Job First Scheduling)?
S1: It causes minimum average waiting time
S2: It can cause starvation
  • a)
    Only S1
  • b)
    Only S2
  • c)
    Both S1 and S2
  • d)
    Neither S1 nor S2
Correct answer is option 'D'. Can you explain this answer?

  1. Both SJF and Shortest Remaining Time First may cause starvation: consider a long process waiting in the ready queue while shorter processes keep arriving.
  2. SJF is optimal in terms of average waiting time for a given set of processes; its practical problem is how to know or predict the length of the next CPU burst.

Chapter doubts & questions for CPU Scheduling - Operating System 2025 is part of Computer Science Engineering (CSE) exam preparation. The chapters have been prepared according to the Computer Science Engineering (CSE) exam syllabus. The Chapter doubts & questions, notes, tests & MCQs are made for Computer Science Engineering (CSE) 2025 Exam. Find important definitions, questions, notes, meanings, examples, exercises, MCQs and online tests here.
