
All questions of Process Management for Computer Science Engineering (CSE) Exam

A certain computation generates two arrays a and b such that a[i]=f(i) for 0 ≤ i < n and b[i]=g(a[i]) for 0 ≤ i < n. Suppose this computation is decomposed into two concurrent processes X and Y such that X computes the array a and Y computes the array b. The processes employ two binary semaphores R and S, both initialized to zero. The array a is shared by the two processes. The structures of the processes are shown below.
Process X:                        Process Y:
private i;                              private i;
for (i=0; i < n; i++) {             for (i=0; i < n; i++) {
     a[i] = f(i);                             EntryY(R, S);
     ExitX(R, S);                        b[i]=g(a[i]);
}                                            }
 
Q. Which one of the following represents the CORRECT implementations of ExitX and EntryY?
  • a)
    ExitX(R, S) {
        P(R);
        V(S);
    }
    EntryY (R, S) {
        P(S);
        V(R);
    }
  • b)
    ExitX(R, S) {
        V(R);
        V(S);
    }
    EntryY(R, S) {
        P(R);
        P(S);
    }
  • c)
    ExitX(R, S) {
       P(S);
       V(R);
    }
    EntryY(R, S) {
       V(S);
       P(R);
    }
  • d)
    ExitX(R, S) {
        V(R);
        P(S);
    }
    EntryY(R, S) {
        V(S);
        P(R);
    }
Correct answer is option 'C'. Can you explain this answer?

Bijoy Kapoor answered
The goal is twofold: deadlock must not occur, and neither binary semaphore may ever be raised above one.
Option (a) deadlocks immediately: both semaphores start at zero, and both ExitX and EntryY begin with a P operation, so X and Y block forever.
Option (b) performs only V operations in ExitX, so the semaphore values can grow anywhere between 1 and n.
Option (d) may raise the values of R and S to 2 in some interleavings.
Option (c) alternates correctly: Y's V(S) releases X's P(S), and X's V(R) releases Y's P(R), so neither semaphore ever exceeds one and no deadlock is possible.
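Option (c) can be exercised with Python threads standing in for processes X and Y (a sketch: threading.Semaphore plays the binary semaphores, and f and g are placeholder functions introduced here):

```python
import threading

n = 5
a = [None] * n
b = [None] * n

def f(i):            # placeholder for f (assumption)
    return i * i

def g(v):            # placeholder for g (assumption)
    return v + 1

R = threading.Semaphore(0)   # binary semaphore R, initialized to 0
S = threading.Semaphore(0)   # binary semaphore S, initialized to 0

def exit_x():                # option (c): ExitX = P(S); V(R)
    S.acquire()
    R.release()

def entry_y():               # option (c): EntryY = V(S); P(R)
    S.release()
    R.acquire()

def process_x():
    for i in range(n):
        a[i] = f(i)          # a[i] is written before ExitX signals
        exit_x()

def process_y():
    for i in range(n):
        entry_y()            # blocks until a[i] is available
        b[i] = g(a[i])

tx = threading.Thread(target=process_x)
ty = threading.Thread(target=process_y)
tx.start(); ty.start()
tx.join(); ty.join()
print(b)   # each b[i] equals g(f(i))
```

The handshake pairs Y's V(S) with X's P(S) and X's V(R) with Y's P(R), so Y can never read a[i] before X has written it.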

Which of the following DMA transfer modes and interrupt handling mechanisms will enable the highest I/O band-width?  
  • a)
    Transparent DMA and Polling interrupts
  • b)
    Cycle-stealing and Vectored interrupts
  • c)
    Block transfer and Vectored interrupts
  • d)
    Block transfer and Polling interrupts
Correct answer is option 'C'. Can you explain this answer?

Yash Patel answered
The CPU gets the highest bandwidth with transparent DMA and polling, but the question asks for I/O bandwidth, not CPU bandwidth, so option (a) is wrong.

With cycle stealing, the device transfers a small unit of data, then waits a few CPU cycles before it can transfer the next unit to memory, so option (b) is wrong.

With polling, the CPU takes the initiative, so the I/O bandwidth cannot be high; option (d) is wrong.

With block transfer, the device transfers an entire block in one go, so the amount of data moved per unit time is the highest. This makes option (c) correct.

The atomic fetch-and-set x, y instruction unconditionally sets the memory location x to 1 and fetches the old value of x in y without allowing any intervening access to the memory location x. Consider the following implementation of P and V functions on a binary semaphore.
void P (binary_semaphore *s) {
     unsigned y;
     unsigned *x = &(s->value);
     do {
           fetch-and-set x, y;
      } while (y);
}
void V (binary_semaphore *s) {
    s->value = 0;
}
 
Q. Which one of the following is true?
  • a)
    The implementation may not work if context switching is disabled in P.
  • b)
    Instead of using fetch-and-set, a pair of normal load/store can be used
  • c)
    The implementation of V is wrong
  • d)
    The code does not implement a binary semaphore
Correct answer is option 'A'. Can you explain this answer?

Option (b): if a pair of normal load/store instructions is used instead of fetch-and-set, more than one process may see s->value as 0 at the same time, and mutual exclusion would not be satisfied. So this option is wrong.
Option (c): V simply sets s->value to 0, which is correct, because P waits while s->value is 1. The implementation of V is fine, so this option is wrong.
Option (d): the code does implement a binary semaphore; only one process can be in the critical section at a time. So option (d) is wrong.

Answer: option (a). If context switching is disabled in P, a process spinning in the do-while loop can never yield the CPU to the process that would eventually execute V, so the spinning process waits forever. Context switching is essential for this implementation to work.
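The spin-wait P and the simple V can be sketched in Python, with the atomicity of fetch-and-set emulated by an internal lock (an assumption of the sketch; real hardware provides the atomic instruction):

```python
import threading

class BinarySemaphore:
    """Sketch of the P/V code above; the atomicity of fetch-and-set is
    emulated with an internal lock (an assumption of this sketch)."""
    def __init__(self):
        self.value = 0                   # 0 = free, 1 = taken
        self._atomic = threading.Lock()  # stands in for hardware atomicity

    def fetch_and_set(self):
        with self._atomic:               # no intervening access to value
            old = self.value
            self.value = 1
            return old

    def P(self):
        while self.fetch_and_set():      # spin until the old value was 0;
            pass                         # relies on context switching!

    def V(self):
        self.value = 0                   # exactly the V in the question

counter = 0
sem = BinarySemaphore()

def worker():
    global counter
    for _ in range(2000):
        sem.P()
        counter += 1                     # critical section
        sem.V()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 8000: mutual exclusion preserved
```

If a spinning thread were never preempted (the analogue of disabling context switching), the holder of the semaphore could never run V, which is exactly why option (a) is the true statement.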

Suppose we want to synchronize two concurrent processes P and Q using binary semaphores S and T. The code for the processes P and Q is shown below.
Process P:
while (1) {
W:
    print '0';
    print '0';
X:
}
Process Q:
while (1) {
Y:
     print '1';
     print '1';
Z:
}
 
Q. Synchronization statements can be inserted only at points W, X, Y and Z. Which of the following will ensure that the output string never contains a substring of the form 0 1^n 0 or 1 0^n 1, where n is odd?
  • a)
    P(S) at W, V(S) at X, P(T) at Y, V(T) at Z, S and T initially 1
  • b)
    P(S) at W, V(T) at X, P(T) at Y, V(S) at Z, S and T initially 1
  • c)
    P(S) at W, V(S) at X, P(S) at Y, V(S) at Z, S initially 1
  • d)
    V(S) at W, V(T) at X, P(S) at Y, P(T) at Z, S and T initially 1
Correct answer is option 'C'. Can you explain this answer?

P(S) means wait on semaphore 'S' and V(S) means signal on semaphore 'S'. The definitions of these functions are:
Wait(S) {
       while (S <= 0) ;
       S-- ;
}
Signal(S) {
       S++ ;
}
 
With option (c), a single semaphore S, initialized to 1, protects both print pairs. Whichever process completes P(S) first prints its two characters and only then executes V(S), so each pair '00' or '11' is emitted atomically. Every maximal run of 0s or 1s in the output therefore has even length, which means no substring 0 1^n 0 or 1 0^n 1 with n odd can ever appear. Strict alternation is not required; it is enough that a pair is never split.
 
Option (a) guards each process's pair with a different semaphore, so the two processes do not exclude each other: the prints can interleave as '0101...', which contains 0 1^1 0 with n = 1 odd. Options (b) and (d) likewise allow both processes inside their print sections at the same time, so a pair can be split. Hence option (c) is correct.
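Option (c)'s behaviour can be checked with a small Python simulation (a sketch; threading.Semaphore stands in for the binary semaphore, and the printed characters are collected in a list):

```python
import re
import threading

S = threading.Semaphore(1)   # option (c): a single semaphore, initially 1
out = []                     # collected output characters
ITER = 50

def proc(ch):
    for _ in range(ITER):
        S.acquire()          # P(S) at W (process P) / at Y (process Q)
        out.append(ch)
        out.append(ch)       # the pair is printed while holding S
        S.release()          # V(S) at X / at Z

t0 = threading.Thread(target=proc, args=('0',))
t1 = threading.Thread(target=proc, args=('1',))
t0.start(); t1.start()
t0.join(); t1.join()

s = ''.join(out)
# Every maximal run of '0's or '1's is built from whole pairs, so its
# length is even; hence no 0 1^n 0 or 1 0^n 1 with n odd can occur.
runs = [len(m.group()) for m in re.finditer(r'0+|1+', s)]
print(all(r % 2 == 0 for r in runs))   # True
```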

Consider a computer C1 that has n CPUs and k processes. Which of the following statements are false?
  • a)
    The maximum number of processes in the ready state is k.
  • b)
    The maximum number of processes in the blocked state is k.
  • c)
    The maximum number of processes in the running state is k.
  • d)
    The maximum number of processes in the running state is n.
Correct answer is option 'B,C'. Can you explain this answer?

Baishali Reddy answered
Understanding Process States in a Multi-CPU Environment
In a system with n CPUs and k processes, a process can be in one of several states: ready, blocked, or running. Let us analyze the statements to identify which are false.
Statement A: Maximum Processes in Ready State
- The ready state represents processes that are prepared to run but are not currently executing.
- Since all k processes can be waiting for a CPU at once, the maximum number of processes in the ready state is indeed k.
Statement B: Maximum Processes in Blocked State
- The blocked state comprises processes waiting for resources (such as I/O completion) to become available.
- Unlike the ready state, the count of blocked processes is not directly constrained by k in this way: processes block on events rather than compete for a CPU. On this reasoning the statement is false.
Statement C: Maximum Processes in Running State
- The running state indicates processes currently being executed by CPUs.
- With n CPUs, only n processes can be running simultaneously, so the maximum number of running processes is n, not k. This statement is false.
Statement D: Maximum Processes in Running State is n
- This statement accurately reflects the system's constraint: n CPUs can execute at most n processes at any given time.
Conclusion
- The false statements are B and C:
- B wrongly takes k as the bound characterizing the blocked state.
- C wrongly suggests that k processes can run simultaneously, when the number of CPUs, n, is the limit.
Understanding these distinctions is crucial in process management within operating systems.

Two processes, P1 and P2, need to access a critical section of code. Consider the following synchronization construct used by the processes. Here, wants1 and wants2 are shared variables, which are initialized to false. Which one of the following statements is TRUE about the above construct?
   /* P1 */
while (true) {
     wants1 = true;
     while (wants2 == true);
     /* Critical
         Section */
    wants1=false;
}
/* Remainder section */
/* P2 */
while (true) {
    wants2 = true;
    while (wants1==true);
     /* Critical
        Section */
     wants2 = false;
}
/* Remainder section */
  • a)
    It does not ensure mutual exclusion.
  • b)
    It does not ensure bounded waiting.
  • c)
    It requires that processes enter the critical section in strict alternation.
  • d)
    It does not prevent deadlocks, but ensures mutual exclusion.
Correct answer is option 'D'. Can you explain this answer?

Nidhi Tiwari answered
Bounded waiting: there exists a bound, or limit, on the number of times other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

Mutual exclusion prevents simultaneous access to a shared resource. The concept is used in concurrent programming with a critical section, a piece of code in which processes or threads access a shared resource.

Solution: wants1 and wants2 are shared variables initialized to false. When both wants1 and wants2 become true, both P1 and P2 spin in their while loops, each waiting for the other to finish. The loops then run indefinitely, which is a deadlock.

Now assume P1 is in its critical section (so wants1 = true; wants2 may be true or false). The guard ensures that P2 cannot enter its critical section, and vice versa, so mutual exclusion is satisfied. Bounded waiting is also satisfied, since there is a bound on how many times other processes can enter the critical section after a process has requested access. Hence option (d): the construct ensures mutual exclusion but does not prevent deadlock.
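The deadlock scenario can be replayed by hand in a single-threaded sketch (no real threads; each assignment models one process step):

```python
# Single-threaded replay of the deadlock interleaving; each assignment
# models one step of a process (no real threads needed for this sketch).
wants1 = wants2 = False

wants1 = True                 # P1 executes "wants1 = true", then is preempted
wants2 = True                 # P2 executes "wants2 = true"

# Both processes now reach their while loops:
p1_spins = (wants2 == True)   # P1 waits for wants2 to become false
p2_spins = (wants1 == True)   # P2 waits for wants1 to become false

# Neither process can reach the line that resets its own flag: deadlock.
print(p1_spins and p2_spins)  # True

# Mutual exclusion still holds: each process raises its flag *before*
# testing the other's, so both can never be inside the critical section.
```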

Consider the following statements about user level threads and kernel level threads. Which one of the following statement is FALSE?
  • a)
    Context switch time is longer for kernel level threads than for user level threads.
  • b)
    User level threads do not need any hardware support.
  • c)
    Related kernel level threads can be scheduled on different processors in a multi-processor system.
  • d)
    Blocking one kernel level thread blocks all related threads.
Correct answer is option 'D'. Can you explain this answer?

Kernel level threads are managed by the OS, so thread operations are implemented in kernel code. Kernel level threads can also exploit multiprocessor systems by scheduling a process's threads on different processors. If one kernel level thread blocks, it does not cause the entire process to block. Kernel level threads have disadvantages as well: they are slower than user level threads because of the management overhead, a kernel level context switch involves more steps than just saving a few registers, and they are not portable because the implementation is operating system dependent.
Option (a): Context switch time is longer for kernel level threads than for user level threads. True: user level threads are managed in user space while kernel level threads are managed by the OS, and kernel level thread management involves overheads that are absent at user level.
Option (b): User level threads do not need any hardware support. True: user level threads are implemented by libraries in user space, so no hardware support is needed.
Option (c): Related kernel level threads can be scheduled on different processors in a multiprocessor system. This is true.
Option (d): Blocking one kernel level thread blocks all related threads. False: since kernel level threads are managed by the operating system, one thread blocking does not cause all threads of the process to block. Hence (d) is the false statement.

The P and V operations on counting semaphores, where s is a counting semaphore, are defined as follows:
P(s) : s = s - 1;
   if (s < 0) then wait;
V(s) : s = s + 1;
    if (s <= 0) then wakeup a process waiting on s;
Assume that Pb and Vb the wait and signal operations on binary semaphores are provided. Two binary semaphores Xb and Yb are used to implement the semaphore operations P(s) and V(s) as follows:
P(s) : Pb(Xb);
    s = s - 1;
   if (s < 0) {
  Vb(Xb) ;
  Pb(Yb) ;
}
else Vb(Xb);

V(s) : Pb(Xb) ;
    s = s + 1;
    if (s <= 0) Vb(Yb) ;
Vb(Xb) ;

The initial values of Xb and Yb are respectively
  • a)
    0 and 0
  • b)
    0 and 1
  • c)
    1 and 0
  • d)
    1 and 1
Correct answer is option 'C'. Can you explain this answer?

Suppose Xb were initialized to 0. Then the very first Pb(Xb) in P(s) would block, and no process could ever proceed; so Xb must start at 1.
Now take Xb = 1 and suppose s = 2 (two processes may access the shared resource). The first P(s) performs Pb(Xb) (Xb becomes 0), decrements s to 1, and executes Vb(Xb) (Xb back to 1). The same sequence repeats for a second process, leaving Xb = 1 and s = 0.
If a third process now calls P(s), s becomes -1, so the branch Vb(Xb); Pb(Yb) executes and the process reaches Pb(Yb).
Case 1: Yb starts at 0. Pb(Yb) blocks the third process, which is correct: s = -1 means exactly one process should be waiting, and exactly one process is blocked.
Case 2: Yb starts at 1. Pb(Yb) succeeds and the third process enters as well, so three processes access a resource meant for two. This is wrong.
Hence the initial values are Xb = 1 and Yb = 0, which is option (c).
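The construction can be exercised directly in Python, using threading.Semaphore objects as the binary semaphores (a sketch; the class name and the thread timing are assumptions of the example):

```python
import threading

class CountingSemaphore:
    """Sketch of the construction above: Xb = 1 (mutex protecting s),
    Yb = 0 (where processes block when s goes negative)."""
    def __init__(self, init):
        self.s = init
        self.Xb = threading.Semaphore(1)   # initially 1
        self.Yb = threading.Semaphore(0)   # initially 0

    def P(self):
        self.Xb.acquire()          # Pb(Xb)
        self.s -= 1
        if self.s < 0:
            self.Xb.release()      # Vb(Xb)
            self.Yb.acquire()      # Pb(Yb) -- block here
        else:
            self.Xb.release()      # Vb(Xb)

    def V(self):
        self.Xb.acquire()          # Pb(Xb)
        self.s += 1
        if self.s <= 0:
            self.Yb.release()      # Vb(Yb) -- wake one waiter
        self.Xb.release()          # Vb(Xb)

sem = CountingSemaphore(2)
sem.P(); sem.P()                   # both succeed without blocking
print(sem.s)                       # 0

blocked = threading.Thread(target=sem.P)
blocked.start()                    # third P: s goes to -1, thread blocks
blocked.join(timeout=0.2)
print(blocked.is_alive())          # normally True: it is waiting on Yb

sem.V()                            # wakes the waiter, s returns to 0
blocked.join()
print(sem.s)                       # 0
```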

Consider two processes P1 and P2 accessing the shared variables X and Y protected by two binary semaphores SX and SY respectively, both initialized to 1. P and V denote the usual semaphone operators, where P decrements the semaphore value, and V increments the semaphore value. The pseudo-code of P1 and P2 is as follows : P1 :
While true do {
     L1 : ................
     L2 : ................
     X = X + 1;
     Y = Y - 1;
     V(SX);
     V(SY);
}
P2 :
While true do {
     L3 : ................
     L4 : ................
     Y = Y + 1;
     X = Y - 1;
     V(SY);
     V(SX);
}
 
Q. In order to avoid deadlock, the correct operators at L1, L2, L3 and L4 are respectively
  • a)
    P(SY), P(SX); P(SX), P(SY)
  • b)
    P(SX), P(SY); P(SY), P(SX)
  • c)
    P(SX), P(SX); P(SY), P(SY)
  • d)
    P(SX), P(SY); P(SX), P(SY)
Correct answer is option 'D'. Can you explain this answer?

Option A: at L1, P1 performs P(SY), and at L3, P2 performs P(SX). P1 can hold SY while waiting for SX, and P2 can hold SX while waiting for SY — a circular wait, hence deadlock is possible.
Option B: at L1, P1 performs P(SX) and then waits for SY at L2, while P2 performs P(SY) at L3 and then waits for SX at L4. Again a circular wait can arise, so deadlock is possible.
Option C: P1 performs P(SX) twice and P2 performs P(SY) twice. The second P on an already-held binary semaphore can never succeed, because only the process itself could release it — each process deadlocks with itself.
Option D: both processes request the semaphores in the same order, SX then SY, so no circular wait can form and deadlock is avoided. Hence option (d) is correct.
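Option (d)'s same-order discipline can be sketched in Python (the shared-variable updates are simplified to symmetric increments, an assumption, so the final values are easy to check):

```python
import threading

SX = threading.Semaphore(1)
SY = threading.Semaphore(1)
X = [0]   # shared variables, boxed in lists so threads can mutate them
Y = [0]

def proc(n_iter, dx, dy):
    for _ in range(n_iter):
        SX.acquire()          # L1 / L3 : P(SX) -- same order in both
        SY.acquire()          # L2 / L4 : P(SY)
        X[0] += dx
        Y[0] += dy
        SX.release()          # V(SX)
        SY.release()          # V(SY)

t1 = threading.Thread(target=proc, args=(1000, +1, -1))  # P1-like
t2 = threading.Thread(target=proc, args=(1000, -1, +1))  # P2-like
t1.start(); t2.start()
t1.join(); t2.join()
print(X[0], Y[0])   # 0 0: both loops completed, so no deadlock occurred
```

Because every thread requests SX before SY, the hold-and-wait cycle needed for deadlock can never form, and the program always terminates.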

Time taken to switch between user and kernel modes is _______ the time taken to switch between two processes.
  • a)
    More than
  • b)
    Independent of
  • c)
    Less than
  • d)
    Equal to
Correct answer is option 'C'. Can you explain this answer?

Crack Gate answered
  • Switching between kernel and user mode is a very fast operation; the OS only has to change a single bit at the hardware level.
  • Switching from one process to another is time consuming: the system must first enter kernel mode, then save the PCB and registers of the old process and restore those of the new one.
The time taken to switch between user and kernel modes is therefore less than the time taken to switch between two processes, so option (c) is the correct answer.

Which of the following need not necessarily be saved on a context switch between processes?
  • a)
    General purpose registers
  • b)
    Translation look aside buffer
  • c)
    Program counter
  • d)
    All of the above
Correct answer is option 'B'. Can you explain this answer?

Ravi Singh answered
In a process context switch, the state of the first process must be saved somehow, so that, when the scheduler gets back to the execution of the first process, it can restore this state and continue. 
The state of the process includes all the registers that the process may be using, especially the program counter, plus any other operating system specific data that may be necessary. 
A Translation lookaside buffer (TLB) is a CPU cache that memory management hardware uses to improve virtual address translation speed. A TLB has a fixed number of slots that contain page table entries, which map virtual addresses to physical addresses. On a context switch, some TLB entries can become invalid, since the virtual-to-physical mapping is different. The simplest strategy to deal with this is to completely flush the TLB. 

The semaphore variables full, empty and mutex are initialized to 0, n and 1, respectively. Process P1 repeatedly adds one item at a time to a buffer of size n, and process P2 repeatedly removes one item at a time from the same buffer using the programs given below. In the programs, K, L, M and N are unspecified statements.
P1:
while (1) {
    K;
    P(mutex);
    Add an item to the buffer;
    V(mutex);
    L;
}
P2:
while (1) {
    M;
    P(mutex);
    Remove an item from the buffer;
    V(mutex);
    N;
}
The statements K, L, M and N are respectively
  • a)
    P(full), V(empty), P(full), V(empty)
  • b)
    P(full), V(empty), P(empty), V(full)
  • c)
    P(empty), V(full), P(empty), V(full)
  • d)
    P(empty), V(full), P(full), V(empty)
Correct answer is option 'D'. Can you explain this answer?

Nilesh Saha answered
Process P1 is the producer and process P2 is the consumer. 
Semaphore 'full' is initialized to 0, meaning there is no item in the buffer. Semaphore 'empty' is initialized to n, meaning there is space for n items in the buffer.
In process P1, the wait on semaphore 'empty' ensures that if there is no space in the buffer, P1 cannot produce more items; the signal on semaphore 'full' signifies that one item has been added to the buffer.
In process P2, the wait on semaphore 'full' ensures that if the buffer is empty, the consumer cannot consume any item; the signal on semaphore 'empty' frees a slot in the buffer after an item is consumed.
 
Thus, option (D) is correct. 
 
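Option (d) can be sketched in Python, with threading.Semaphore for the counting semaphores (the item values and the deque buffer are illustrative assumptions):

```python
import threading
from collections import deque

n = 4
empty = threading.Semaphore(n)   # n free slots in the buffer
full = threading.Semaphore(0)    # no items initially
mutex = threading.Semaphore(1)
buf = deque()
consumed = []
ITEMS = 100

def producer():                  # process P1
    for i in range(ITEMS):
        empty.acquire()          # K: P(empty)
        mutex.acquire()
        buf.append(i)            # add an item to the buffer
        mutex.release()
        full.release()           # L: V(full)

def consumer():                  # process P2
    for _ in range(ITEMS):
        full.acquire()           # M: P(full)
        mutex.acquire()
        consumed.append(buf.popleft())
        mutex.release()
        empty.release()          # N: V(empty)

tp = threading.Thread(target=producer)
tc = threading.Thread(target=consumer)
tp.start(); tc.start()
tp.join(); tc.join()
print(len(buf), consumed == list(range(ITEMS)))   # 0 True
```

The producer can never overrun the n-slot buffer (it blocks on empty) and the consumer can never read an empty buffer (it blocks on full).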

Consider the methods used by processes P1 and P2 for accessing their critical sections whenever needed, as given below. The initial values of shared boolean variables S1 and S2 are randomly assigned.
Method Used by P1
while (S1 == S2) ;
Critical Section
S1 = S2;
Method Used by P2
while (S1 != S2) ;
Critical Section
S2 = not (S1);
 
Q. Which one of the following statements describes the properties achieved?
  • a)
    Mutual exclusion but not progress
  • b)
    Progress but not mutual exclusion
  • c)
    Neither mutual exclusion nor progress
  • d)
    Both mutual exclusion and progress
Correct answer is option 'A'. Can you explain this answer?

Explanation:

The given methods used by processes P1 and P2 are used for accessing their critical sections. Let's analyze the properties achieved by these methods.

Mutual Exclusion:
Mutual exclusion means that only one process can access the critical section at a time. Both processes spin in while loops on the shared boolean variables S1 and S2, but on complementary conditions:

- Process P1 spins while (S1 == S2), so it can enter the critical section only when S1 and S2 differ.
- Process P2 spins while (S1 != S2), so it can enter the critical section only when S1 and S2 are equal.

These two conditions can never hold at the same time, so the two processes can never be inside their critical sections simultaneously. Mutual exclusion is achieved.

Progress:
Progress means that if no process is in its critical section and some process wants to enter, the decision of which process enters next cannot be postponed indefinitely.

- Process P1 can enter only when S1 != S2, and on exit it sets S1 = S2.
- Process P2 can enter only when S1 == S2, and on exit it sets S2 = not(S1).

Each exit statement hands the turn to the other process, so the two processes are forced to alternate. If only one process wants to enter its critical section and it is not its turn, it spins in its while loop forever even though the critical section is free. Hence progress is not satisfied.

Conclusion:
The construct achieves mutual exclusion, because the entry conditions S1 != S2 and S1 == S2 are mutually exclusive, but it does not achieve progress, because a process may wait indefinitely on the other's turn. Therefore option (a) is correct.
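The two guard conditions can be checked exhaustively over all values of S1 and S2 (a small Python sketch):

```python
from itertools import product

# P1 exits its while loop (and may enter the CS) only when S1 != S2;
# P2 exits its loop only when S1 == S2. Check all four value pairs.
pairs = []
for S1, S2 in product([False, True], repeat=2):
    p1_may_enter = (S1 != S2)
    p2_may_enter = (S1 == S2)
    pairs.append((p1_may_enter, p2_may_enter))

# Mutual exclusion: the two guards are never true simultaneously.
print(any(p1 and p2 for p1, p2 in pairs))   # False

# But for every fixed (S1, S2) exactly one process is admitted, so a
# lone process on the "wrong" turn spins forever: progress fails.
print(all(p1 != p2 for p1, p2 in pairs))    # True
```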

A shared variable x, initialized to zero, is operated on by four concurrent processes W, X, Y, Z as follows. Each of the processes W and X reads x from memory, increments by one, stores it to memory, and then terminates. Each of the processes Y and Z reads x from memory, decrements by two, stores it to memory, and then terminates. Each process before reading x invokes the P operation (i.e., wait) on a counting semaphore S and invokes the V operation (i.e., signal) on the semaphore S after storing x to memory. Semaphore S is initialized to two. What is the maximum possible value of x after all processes complete execution?
  • a)
    -2
  • b)
    -1
  • c)
    1
  • d)
    2
Correct answer is option 'D'. Can you explain this answer?

Pallabi Sharma answered
Background: a critical section is code in which a process may be changing shared variables, updating a table, or writing a file. The requirement is that while one process executes in its critical section, no other process may execute in its own; each process must request permission to enter. A semaphore is a synchronization tool used to enforce this: the wait (P) operation decrements the semaphore (acquiring it) and the signal (V) operation increments it (releasing it).

Solution: since the semaphore is initialized to 2, two processes can be inside the critical section at the same time, and that is exactly what makes a larger final value possible. Say X and Y enter together. X reads x and computes x + 1 while Y reads x and stores x - 2. If Y stores first and X stores afterwards, X's write overwrites Y's, so the decrement is lost: x ends up 1 instead of -1, and the two signal operations restore the semaphore to 2. W and Z can now repeat the same pattern, raising x to 2, which is the maximum over all interleavings. (Had the semaphore been initialized to 1, the processes would execute one at a time and the final value would be -2.) Option (d) is correct.

Another way to see it: S is a counting semaphore initialized to 2, so two processes can be inside at once. W and X each read x, add 1 and write back; Y and Z each read x, subtract 2 and write back. To maximize the final value, let an incrementing process read x early and write back late, so that everything the decrementing processes write in between is overwritten — in effect the decrements never happen. One concrete schedule: W reads x = 0 and is preempted; Y runs completely (x = -2); Z runs completely (x = -4); W resumes and writes 0 + 1 = 1; finally X runs completely and writes 2.
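The maximizing schedule can be replayed deterministically (a single-threaded Python sketch; w_local models W's stale register copy of x):

```python
# Replaying, single-threaded, the schedule that yields x = 2. Each process
# is read -> modify -> write; the semaphore (value 2) lets two of them
# interleave between their read and their write.
x = 0

w_local = x          # W reads x = 0, then is preempted before writing
x = x - 2            # Y runs to completion: x = -2
x = x - 2            # Z runs to completion: x = -4
x = w_local + 1      # W resumes, writes its stale value + 1: x = 1
x = x + 1            # X runs to completion: x = 2

print(x)   # 2 -- the maximum possible final value
```

The semaphore count is respected throughout: at most two processes are between their P and V at any point in this schedule.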

A process executes the following code
for (i = 0; i < n; i++) fork();
The total number of child processes created is
  • a)
    n
  • b)
    2^n - 1
  • c)
    2^n
  • d)
    2^(n+1) - 1
Correct answer is option 'B'. Can you explain this answer?

Anshu Mehta answered
Each call to fork() duplicates the calling process, so every pass through the loop doubles the total number of processes. After n iterations there are 2^n processes in total, of which 2^n - 1 are children of the original process. Hence the answer is 2^n - 1.
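The doubling argument can be checked with a few lines of Python (a sketch; total_processes is a helper name introduced here):

```python
# Every existing process executes fork() in each iteration, so one pass
# through the loop doubles the number of processes.
def total_processes(n):
    procs = 1                 # the original process
    for _ in range(n):
        procs *= 2            # every live process forks once
    return procs

for n in range(6):
    children = total_processes(n) - 1   # exclude the original process
    print(n, children)        # 0->0, 1->1, 2->3, 3->7, 4->15, 5->31
```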

Fetch_And_Add(X,i) is an atomic Read-Modify-Write instruction that reads the value of memory location X, increments it by the value i, and returns the old value of X. It is used in the pseudocode shown below to implement a busy-wait lock. L is an unsigned integer shared variable initialized to 0. The value of 0 corresponds to lock being available, while any non-zero value corresponds to the lock being not available.
AcquireLock(L){
              while (Fetch_And_Add(L,1))
                         L = 1;
}
ReleaseLock(L){
                         L = 0;
}
 
This implementation
  • a)
    fails as L can overflow
  • b)
    fails as L can take on a non-zero value when the lock is actually available
  • c)
    works correctly but may starve some processes
  • d)
    works correctly without starvation
Correct answer is option 'B'. Can you explain this answer?

Mahesh Pillai answered
Take a closer look at the while loop below.
while (Fetch_And_Add(L,1))
               L = 1; // A waiting process can execute this just after
                      // the lock is released, making L = 1 again.
Consider a situation where a process has just released the lock, setting L = 0, while another process is waiting inside AcquireLock(). Just after L was made 0, suppose the waiting process executes its pending statement L = 1. Now the lock is actually available, yet L = 1, so that process (and any process arriving later) can never see Fetch_And_Add return 0 and cannot come out of the while loop. The lock thus takes a non-zero value while it is actually available — option (b). The problem can be resolved by changing AcquireLock() to the following.
AcquireLock(L){
             while (Fetch_And_Add(L,1))
                 ; // do nothing
}
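The failing interleaving can be replayed single-threaded (a sketch; each line is the next step taken by one of the processes):

```python
# Single-threaded replay of the failing interleaving; each line is the
# next step of one process (a sketch, not real concurrency).
L = 0

def fetch_and_add(delta):
    """Atomic read-modify-write; atomicity is trivial single-threaded."""
    global L
    old = L
    L += delta
    return old

assert fetch_and_add(1) == 0   # process A: old value 0, A takes the lock
assert fetch_and_add(1) == 1   # process B: non-zero, B stays in the loop
                               # ...B is preempted before its "L = 1"
L = 0                          # process A: ReleaseLock, lock is now free
L = 1                          # process B resumes the pending "L = 1"

print(L)   # 1: the lock looks taken although nobody holds it; every
           # later Fetch_And_Add returns non-zero, so no one can enter
```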

Three concurrent processes X, Y, and Z execute three different code segments that access and update certain shared variables. Process X executes the P operation (i.e., wait) on semaphores a, b and c; process Y executes the P operation on semaphores b, c and d; process Z executes the P operation on semaphores c, d, and a before entering the respective code segments. After completing the execution of its code segment, each process invokes the V operation (i.e., signal) on its three semaphores. All semaphores are binary semaphores initialized to one. Which one of the following represents a deadlockfree order of invoking the P operations by the processes? 
  • a)
    X: P(a)P(b)P(c) Y:P(b)P(c)P(d) Z:P(c)P(d)P(a)
  • b)
    X: P(b)P(a)P(c) Y:P(b)P(c)P(d) Z:P(a)P(c)P(d)
  • c)
    X: P(b)P(a)P(c) Y:P(c)P(b)P(d) Z:P(a)P(c)P(d)
  • d)
    X: P(a)P(b)P(c) Y:P(c)P(b)P(d) Z:P(c)P(d)P(a)
Correct answer is option 'B'. Can you explain this answer?

Arjun Unni answered
Option (a) can cause deadlock: suppose process X has acquired a, process Y has acquired b, and process Z has acquired c and d; X now waits for b, Y waits for c and Z waits for a — a circular wait. Option (c) can also cause deadlock: suppose X has acquired b, Y has acquired c and Z has acquired a; again the three wait on one another in a cycle. Option (d) can also cause deadlock: suppose X has acquired a and b while Y has acquired c; X and Y then wait for each other circularly.
To spell out (a): all three processes run concurrently, so X can take semaphore a, Y can take b and Z can take c; X then blocks on b, Y blocks on c, and Z takes d and blocks on a — deadlock. For option (b) no such cycle exists; one possible completion order is Z, then X, then Y. Hence option (b) is correct.

In the working-set strategy, which of the following is done by the operating system to prevent thrashing?
  1. It initiates another process if there are enough extra frames.
  2. It selects a process to suspend if the sum of the sizes of the working-sets exceeds the total number of available frames.
  • a)
    I only
  • b)
    II only
  • c)
    Neither I nor II
  • d)
    Both I and II
Correct answer is option 'D'. Can you explain this answer?

According to the concept of thrashing:
  • I is true because, to prevent thrashing, we must provide each process with as many frames as it really needs "right now". If there are enough extra frames, another process can be initiated.
  • II is true because the total demand, D, is the sum of the sizes of the working sets of all processes. If D exceeds the total number of available frames, then at least one process is thrashing, because there are not enough frames to satisfy its minimum working set, and the OS selects a process to suspend. If D is significantly less than the currently available frames, additional processes can be launched.

Suppose we want to synchronize two concurrent processes P and Q using binary semaphores S and T. The code for the processes P and Q is shown below.
Process P:
while (1) {
W:
    print '0';
    print '0';
X:
}
Process Q:
while (1) {
Y:
     print '1';
     print '1';
Z:
}
 
Q. Synchronization statements can be inserted only at points W, X, Y and Z. Which of the following will always lead to an output starting with '001100110011'?
  • a)
    P(S) at W, V(S) at X, P(T) at Y, V(T) at Z, S and T initially 1
  • b)
    P(S) at W, V(T) at X, P(T) at Y, V(S) at Z, S initially 1, and T initially 0
  • c)
    P(S) at W, V(T) at X, P(T) at Y, V(S) at Z, S and T initially 1
  • d)
    P(S) at W, V(S) at X, P(T) at Y, V(T) at Z, S initially 1, and T initially 0
Correct answer is option 'B'. Can you explain this answer?

P(S) means wait on semaphore ‘S’ and V(S) means signal on semaphore ‘S’. [sourcecode] Wait(S) { while (i <= 0) --S;} Signal(S) { S++; } [/sourcecode] Initially, we assume S = 1 and T = 0 to support mutual exclusion in process P and Q. Since S = 1, only process P will be executed and wait(S) will decrement the value of S. Therefore, S = 0. At the same instant, in process Q, value of T = 0. Therefore, in process Q, control will be stuck in while loop till the time process P prints 00 and increments the value of T by calling the function V(T). While the control is in process Q, semaphore S = 0 and process P would be stuck in while loop and would not execute till the time process Q prints 11 and makes the value of S = 1 by calling the function V(S). This whole process will repeat to give the output 00 11 00 11 … . 
 
Thus, B is the correct choice. 
 
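The behaviour of option (b) can be checked with a small sketch; Python's threading.Semaphore stands in for the binary semaphores, and the round count of 3 is an arbitrary choice:

```python
import threading

def run_rounds(rounds=3):
    """Option (b): P(S) at W, V(T) at X in P; P(T) at Y, V(S) at Z in Q,
    with S initialised to 1 and T to 0. The two processes strictly
    alternate, P printing '00' first in every round."""
    S = threading.Semaphore(1)   # S initially 1: P goes first
    T = threading.Semaphore(0)   # T initially 0: Q must wait for P
    out = []

    def proc_p():
        for _ in range(rounds):
            S.acquire()          # P(S) at W
            out.append('0'); out.append('0')
            T.release()          # V(T) at X

    def proc_q():
        for _ in range(rounds):
            T.acquire()          # P(T) at Y
            out.append('1'); out.append('1')
            S.release()          # V(S) at Z

    tp = threading.Thread(target=proc_p)
    tq = threading.Thread(target=proc_q)
    tp.start(); tq.start(); tp.join(); tq.join()
    return ''.join(out)

assert run_rounds(3) == '001100110011'
```

Because S and T are never both available at once, the interleaving is deterministic regardless of how the threads are scheduled.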

A process waiting to be assigned to a processor is considered to be in ___ state.
  • a)
    waiting 
  • b)
    ready 
  • c)
    terminated
  • d)
    running
Correct answer is option 'B'. Can you explain this answer?

Gate Gurus answered
Whenever a process executes, it goes through several phases or states, each with its own function:
New – The process is being created.
Running – Instructions are being executed.
Waiting – The process is waiting for some event to occur, such as I/O completion.
Ready – The process is waiting to be assigned to a processor.
Terminated – The process has finished execution.

A thread is usually defined as a "light weight process" because an operating system (OS) maintains smaller data structures for a thread than for a process. In relation to this, which of the following is TRUE?
  • a)
    On per-thread basis, the OS maintains only CPU register state
  • b)
    The OS does not maintain a separate stack for each thread
  • c)
    On per-thread basis, the OS does not maintain virtual memory state
  • d)
    On per-thread basis, the OS maintains only scheduling and accounting information
Correct answer is option 'C'. Can you explain this answer?

Naina Shah answered
Threads share the address space of their process; virtual memory state is maintained per process, not per thread. A thread is a basic unit of CPU utilization, consisting of a program counter, a stack, a set of registers, and a thread ID. A single-threaded process has one program counter and one sequence of instructions that can be carried out at any given time. A multi-threaded process contains multiple threads, each with its own program counter, stack, and set of registers, but all sharing common code, data, and certain resources such as open files.
Option (A): The OS maintains not only CPU register state per thread, but also a stack and scheduling information, so this option is incorrect.
Option (B): The OS does maintain a separate stack for each thread, so this option is incorrect.
Option (C): Correct. The OS does not maintain virtual memory state on a per-thread basis; that state belongs to the process.
Option (D): Incorrect. Besides scheduling and accounting information, the OS also maintains per-thread CPU registers, a stack, and a program counter.
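The point can be demonstrated directly: threads see the same process memory, while each keeps its own local (stack) variables. A small sketch:

```python
import threading

shared = {'count': 0}            # lives in the process's single address space
lock = threading.Lock()

def worker(n, results, idx):
    local = 0                    # local variable: kept on this thread's own stack
    for _ in range(n):
        local += 1
    with lock:
        shared['count'] += local # every thread sees the same 'shared' object
    results[idx] = local

results = [None, None]
threads = [threading.Thread(target=worker, args=(1000, results, i))
           for i in range(2)]
for t in threads: t.start()
for t in threads: t.join()

assert shared['count'] == 2000   # shared state is visible to all threads
assert results == [1000, 1000]   # each thread kept its own locals
```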

The atomic fetch-and-set x, y instruction unconditionally sets the memory location x to 1 and fetches the old value of x in y, without allowing any intervening access to the memory location x. Consider the following implementation of P and V functions on a binary semaphore S.
void P (binary_semaphore *s)
{
        unsigned y;
        unsigned *x = &(s->value);
        do
        {
                fetch-and-set x, y;
        }
        while (y);
}
void V (binary_semaphore *s)
{
        s->value = 0;
}
 
Q. Which one of the following is true?
  • a)
    The implementation may not work if context switching is disabled in P
  • b)
    Instead of using fetch-and-set, a pair of normal load/store can be used
  • c)
    The implementation of V is wrong
  • d)
    The code does not implement a binary semaphore
Correct answer is option 'A'. Can you explain this answer?

Shubham Sharma answered


Explanation:

Context Switching:
- The P function busy-waits: it spins in the do-while loop until fetch-and-set returns an old value of 0.
- Suppose a process enters P while the semaphore is already taken (s->value = 1) and context switching is disabled.
- The process that currently holds the semaphore can then never be scheduled to run V and release it.
- On a single processor, the waiting process therefore spins in P forever, so the implementation may not work if context switching is disabled.

Use of Fetch-and-Set:
- The fetch-and-set operation ensures atomicity in updating the semaphore value, which is essential for proper synchronization.
- Using a pair of normal load and store operations instead of fetch-and-set can lead to race conditions and data inconsistency.
- Therefore, the fetch-and-set operation is necessary for the correct functioning of the binary semaphore.

Implementation of V:
- The implementation of the V function sets the semaphore value to 0, indicating the release of the semaphore.
- This is a valid operation for the V function in the context of binary semaphores.
- Therefore, the implementation of the V function is correct.

Conclusion:
- The correct statement among the given options is that the implementation may not work if context switching is disabled in the P function.
- It is essential to ensure atomicity in semaphore operations to avoid race conditions and data inconsistency, which can be achieved through the fetch-and-set operation.
- Disabling context switching can potentially disrupt the atomicity of the semaphore operations, making the implementation unreliable.
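The same P/V scheme can be sketched in Python; there is no fetch-and-set instruction here, so a Lock models the hardware's atomicity guarantee (that substitution is an assumption of this sketch, not part of the original question):

```python
import threading

class BinarySemaphore:
    """Models the fetch-and-set based binary semaphore from the question:
    P spins until the old value was 0; V simply stores 0."""
    def __init__(self):
        self.value = 0                 # 0 = free, 1 = taken
        self._atomic = threading.Lock()

    def fetch_and_set(self):
        with self._atomic:             # models "no intervening access to x"
            old = self.value
            self.value = 1
            return old

    def P(self):
        while self.fetch_and_set():    # spin until the old value was 0
            pass

    def V(self):
        self.value = 0

counter = 0
s = BinarySemaphore()

def work():
    global counter
    for _ in range(2000):
        s.P()
        counter += 1                   # critical section
        s.V()

threads = [threading.Thread(target=work) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
assert counter == 8000                 # no update was lost
```

This works only because preemption is possible: a spinning thread eventually yields so the holder can run V, which is exactly why disabling context switching breaks P.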

 
Consider the following code fragment:
if (fork() == 0)
{ a = a + 5; printf("%d,%d", a, &a); }
else { a = a - 5; printf("%d, %d", a, &a); }
Let u, v be the values printed by the parent process, and x, y be the values printed by the child process. Which one of the following is TRUE?
  • a)
    u = x + 10 and v = y
  • b)
    u = x + 10 and v != y
  • c)
    u + 10 = x and v = y
  • d)
    u + 10 = x and v != y
Correct answer is option 'C'. Can you explain this answer?

Anirban Khanna answered
When a fork() system call is issued, a copy of all the pages corresponding to the parent process is created, loaded into a separate memory location by the OS for the child process. But this is not needed in certain cases. When the child is needed just to execute a command for the parent process, there is no need for copying the parent process’ pages, since exec replaces the address space of the process which invoked it with the command to be executed. In such cases, a technique called copy-on-write (COW) is used. With this technique, when a fork occurs, the parent process’s pages are not copied for the child process. Instead, the pages are shared between the child and the parent process. Whenever a process (parent or child) modifies a page, a separate copy of that particular page alone is made for that process (parent or child) which performed the modification. This process will then use the newly copied page rather than the shared one in all future references.

fork() returns 0 in child process and process ID of child process in parent process.
In Child (x), a = a + 5
In Parent (u), a = a – 5;

The child process will execute the if part and the parent process will execute the else part. Assume that the initial value of a is 6. Then the value of a printed by the child process will be 11, and the value printed by the parent process will be 1. Therefore u + 10 = x. Now for the second part: the answer is v = y.



We know that, the fork operation creates a separate address space for the child. But the child process has an exact copy of all the memory segments of the parent process. Hence the virtual addresses and the mapping (initially) will be the same for both parent process as well as child process.
PS: The virtual address is the same, but the two addresses exist in different processes' virtual address spaces; printing &a prints the virtual address. Hence the answer is v = y.
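The arithmetic can be reproduced with a short sketch using os.fork (POSIX only; the initial value 6 is an arbitrary choice, and a pipe is used here to get the child's value back to the parent):

```python
import os

def fork_demo(a=6):
    """Child adds 5 to its copy-on-write copy of a; parent subtracts 5.
    Returns (u, x): the parent's and the child's values of a."""
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:                       # child: fork() returned 0
        a = a + 5                      # modifies the child's private copy
        os.write(w, str(a).encode())
        os._exit(0)
    os.waitpid(pid, 0)                 # parent: fork() returned the child's pid
    a = a - 5
    x = int(os.read(r, 64).decode())
    return a, x

u, x = fork_demo(6)
assert u + 10 == x                     # e.g. parent gets 1, child gets 11
```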

Consider the following statements about process state transitions for a system using preemptive scheduling.
I. A running process can move to ready state.
II. A ready process can move to running state.
III. A blocked process can move to running state.
IV. A blocked process can move to ready state.
Which of the above statements are TRUE?
  • a)
    I, II, and III only
  • b)
    II and III only
  • c)
    I, II, and IV only
  • d)
    I, II, III, and IV
Correct answer is option 'C'. Can you explain this answer?

Gate Gurus answered
For a system using pre-emptive scheduling, the state transitions are as follows:
Statement I: TRUE
A process can move from the running state to the ready state on an interrupt or when its time quantum expires, that is, when it is pre-empted.
Statement II: TRUE
A ready process moves to the running state when it is dispatched.
Statement III: FALSE
A blocked process that is in waiting state can never move directly to running state. It must go to ready queue first.
Statement IV: TRUE.
A blocked or waiting process can move to ready state.
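The four statements can be encoded as a tiny transition table and checked directly:

```python
# Allowed transitions under pre-emptive scheduling (per the answer above)
ALLOWED = {
    ('running', 'ready'),    # I:  pre-emption (interrupt / time slice expiry)
    ('ready', 'running'),    # II: dispatch
    ('blocked', 'ready'),    # IV: the awaited event occurs
    ('running', 'blocked'),  # a running process requests I/O
}

def can_move(src, dst):
    return (src, dst) in ALLOWED

assert can_move('running', 'ready')          # I  is true
assert can_move('ready', 'running')          # II is true
assert not can_move('blocked', 'running')    # III is false: must go via ready
assert can_move('blocked', 'ready')          # IV is true
```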

Information about a process is maintained in a ______.
  • a)
    Stack
  • b)
    Translation Lookaside Buffer 
  • c)
    Process Control Block
  • d)
    Program Control Block
Correct answer is option 'C'. Can you explain this answer?

Process control block: It is a data structure that is maintained by the Operating System for every process. A process state is a condition of the process at a specific instant of time. Every process is represented in the operating system by a process control block, which is also called a task control block.
Hence the correct answer is process control block.
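A PCB can be pictured as a record holding per-process bookkeeping; the exact fields vary by operating system, so the sketch below is only illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class ProcessControlBlock:
    """Illustrative sketch of typical PCB fields; real operating systems
    store more (and OS-specific) information."""
    pid: int
    state: str = 'new'                 # new / ready / running / waiting / terminated
    program_counter: int = 0
    cpu_registers: dict = field(default_factory=dict)
    open_files: list = field(default_factory=list)
    priority: int = 0

pcb = ProcessControlBlock(pid=42)      # created in the 'new' state
pcb.state = 'ready'                    # admitted: loaded into main memory
assert pcb.pid == 42 and pcb.state == 'ready'
```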

The program in the operating system that does processor management is called _______.
  • a)
    traffic controller 
  • b)
    dispatcher
  • c)
    processor scheduler 
  • d)
    job scheduler 
Correct answer is option 'A'. Can you explain this answer?

Crack Gate answered
In a multiprogramming environment, the operating system determines which processes receive processor time and for how long; this function is called process scheduling. For processor management, an operating system performs the following tasks:
  • Coordinates and controls the activities of hardware and software competing for the CPU; this is the primary goal of the traffic controller.
  • Keeps track of the processor and of each process's state. The traffic controller is the program in charge of this task.
  • Allocates the processor (CPU) to a process.
  • De-allocates the processor when a process no longer requires it.
Major Points
  • The dispatcher takes over after the scheduler has finished: it gives control of the CPU to the process selected by the short-term scheduler.
  • Process scheduling is the process-management activity of removing a running process from the CPU and selecting another process on the basis of a particular strategy.
  • The long-term (job) scheduler brings new processes to the 'Ready' state. It regulates the degree of multiprogramming, i.e. the number of processes that are ready at any given time.

Consider the following two-process synchronization solution.
The shared variable turn is initialized to zero. Which one of the following is TRUE?
  • a)
    This is a correct two-process synchronization solution.
  • b)
    This solution violates mutual exclusion requirement.
  • c)
    This solution violates progress requirement.
  • d)
    This solution violates bounded wait requirement.
Correct answer is option 'C'. Can you explain this answer?

Maulik Iyer answered
It satisfies mutual exclusion: processes P0 and P1 cannot both pass their while tests at the same time, since the value of 'turn' is either 0 or 1 but never both. Suppose P0 is executing its while statement with the condition "turn == 1"; that condition holds as long as P1 is in its critical section. When P1 leaves its critical section, it sets 'turn' to 0 in its exit section, at which point P0 falls out of its while loop and enters its critical section. Therefore only one process can execute its critical section at a time.
It also satisfies bounded waiting: there is a limit on the number of times other processes may enter their critical sections after a process has requested entry and before that request is granted. If P0 wishes to enter its critical section, it will get the chance after at most one entry by P1, since P1 sets 'turn' to 0 when it exits its critical section (and vice versa, by strict alternation).
Progress is not satisfied: because of strict alternation, a process that does not wish to enter its critical section can still block the other. If it is P0's turn but P0 has no interest in entering, P1 must wait even though the critical section is free.
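The question's code is not reproduced above, but from the explanation it is the classic strict-alternation solution on a shared variable turn (an assumption of this sketch); the iteration count of 3 is arbitrary:

```python
import threading

turn = 0                      # shared variable, initialised to zero
log = []

def process(me, other, iterations):
    global turn
    for _ in range(iterations):
        while turn != me:     # entry section: busy-wait for my turn
            pass
        log.append(me)        # critical section
        turn = other          # exit section: hand over strictly

t0 = threading.Thread(target=process, args=(0, 1, 3))
t1 = threading.Thread(target=process, args=(1, 0, 3))
t0.start(); t1.start(); t0.join(); t1.join()

# Mutual exclusion and bounded waiting hold, but the processes are forced
# to alternate strictly: 0, 1, 0, 1, ... even if one never wants to enter.
assert log == [0, 1, 0, 1, 0, 1]
```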

Barrier is a synchronization construct where a set of processes synchronizes globally i.e. each process in the set arrives at the barrier and waits for all others to arrive and then all processes leave the barrier. Let the number of processes in the set be three and S be a binary semaphore with the usual P and V functions. Consider the following C implementation of a barrier with line numbers shown on left.
void barrier (void) {
1:   P(S);
2:   process_arrived++;
3.   V(S);
4:   while (process_arrived !=3);
5:   P(S);
6:   process_left++;
7:   if (process_left==3) {
8:      process_arrived = 0;
9:      process_left = 0;
10:  }
11:  V(S);
}
 
Q. The variables process_arrived and process_left are shared among all processes and are initialized to zero. In a concurrent program all the three processes call the barrier function when they need to synchronize globally. Which one of the following rectifies the problem in the implementation?
  • a)
    Lines 6 to 10 are simply replaced by process_arrived--
  • b)
    At the beginning of the barrier the first process to enter the barrier waits until process_arrived becomes zero before proceeding to execute P(S).
  • c)
    Context switch is disabled at the beginning of the barrier and re-enabled at the end.
  • d)
    The variable process_left is made private instead of shared
Correct answer is option 'B'. Can you explain this answer?

Line 2 must not be executed when a process re-enters the barrier until the other two processes have completed line 7 of the previous round; otherwise process_arrived can become greater than 3, after which the test at line 4 can never succeed and every process spins forever (deadlock).
The deadlock is avoided once both process_arrived and process_left have been reset to zero between rounds.
Thus, at the beginning of the barrier, the first process to enter the barrier must wait until process_arrived becomes zero before proceeding to execute P(S).
 
Thus, option (B) is correct. 
 
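A standard way to get a safely reusable barrier, as an alternative to patching the counter logic above, is a sense-reversing barrier; a sketch in Python (a Condition replaces the busy-wait, which is a deliberate simplification):

```python
import threading

class SenseBarrier:
    """Sense-reversing barrier: each round flips a shared 'sense' flag, so
    the counter can be reset safely and the barrier reused across rounds."""
    def __init__(self, n):
        self.n = n
        self.count = 0
        self.sense = False
        self.cond = threading.Condition()

    def wait(self):
        with self.cond:
            my_sense = not self.sense
            self.count += 1
            if self.count == self.n:   # last process to arrive ...
                self.count = 0         # ... resets the counter ...
                self.sense = my_sense  # ... and releases everyone
                self.cond.notify_all()
            else:
                while self.sense != my_sense:
                    self.cond.wait()

# Three processes, five rounds: after each wait(), all three must have arrived.
arrived = [0] * 5
lock = threading.Lock()
bar = SenseBarrier(3)
errors = []

def worker():
    for r in range(5):
        with lock:
            arrived[r] += 1
        bar.wait()
        if arrived[r] != 3:            # everyone must have arrived by now
            errors.append(r)

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()
assert not errors and arrived == [3, 3, 3, 3, 3]
```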

Consider the procedure below for the Producer-Consumer problem which uses semaphores:
 
Q. Which one of the following is TRUE?
  • a)
    The producer will be able to add an item to the buffer, but the consumer can never consume it.
  • b)
    The consumer will remove no more than one item from the buffer.
  • c)
    Deadlock occurs if the consumer succeeds in acquiring semaphore s when the buffer is empty.
  • d)
    The starting value for the semaphore n must be 1 and not 0 for deadlock-free operation.
Correct answer is option 'C'. Can you explain this answer?

Initially, there is no element in the buffer. 
Semaphore s = 1 and semaphore n = 0. 
We assume that initially control goes to the consumer when buffer is empty. 
semWait(s) decrements the value of semaphore 's', so s = 0, and semWait(n) decrements the value of semaphore 'n'. Since the value of semaphore 'n' becomes less than 0, control gets stuck in the while loop of semWait() and a deadlock arises.
 
Thus, deadlock occurs if the consumer succeeds in acquiring semaphore s when the buffer is empty. 
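The deadlock disappears if the consumer acquires n (items available) before s (mutual exclusion), so it never sleeps while holding s. The original procedure is not reproduced above, so the sketch below only illustrates the deadlock-free ordering (item values are arbitrary):

```python
import threading
from collections import deque

s = threading.Semaphore(1)   # mutual exclusion on the buffer
n = threading.Semaphore(0)   # counts items currently in the buffer
buffer = deque()
consumed = []

def producer(items):
    for item in items:
        s.acquire()
        buffer.append(item)
        s.release()
        n.release()          # signal: one more item is available

def consumer(count):
    for _ in range(count):
        n.acquire()          # wait for an item FIRST ...
        s.acquire()          # ... and only then lock the buffer
        consumed.append(buffer.popleft())
        s.release()

tp = threading.Thread(target=producer, args=([1, 2, 3],))
tc = threading.Thread(target=consumer, args=(3,))
tc.start(); tp.start(); tp.join(); tc.join()
assert consumed == [1, 2, 3]
```

If the consumer instead took s before n on an empty buffer, it would hold s while blocked on n, and the producer could never enter its critical section — exactly the deadlock described above.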

Three concurrent processes X, Y, and Z execute three different code segments that access and update certain shared variables. Process X executes the P operation (i.e., wait) on semaphores a, b and c; process Y executes the P operation on semaphores b, c and d; process Z executes the P operation on semaphores c, d, and a before entering the respective code segments. After completing the execution of its code segment, each process invokes the V operation (i.e., signal) on its three semaphores. All semaphores are binary semaphores initialized to one. Which one of the following represents a deadlock-free order of invoking the P operations by the processes?
  • a)
    X: P(a)P(b)P(c) Y: P(b)P(c)P(d) Z: P(c)P(d)P(a)
  • b)
    X: P(b)P(a)P(c) Y: P(b)P(c)P(d) Z: P(a)P(c)P(d)
  • c)
    X: P(b)P(a)P(c) Y: P(c)P(b)P(d) Z: P(a)P(c)P(d)
  • d)
    X: P(a)P(b)P(c) Y: P(c)P(b)P(d) Z: P(c)P(d)P(a)
Correct answer is option 'B'. Can you explain this answer?

Megha Yadav answered
Option (A) can cause deadlock: imagine that process X has acquired a, process Y has acquired b, and process Z has acquired c and d — there is now a circular wait. Option (C) can also cause deadlock: imagine that process X has acquired b, process Y has acquired c, and process Z has acquired a — again a circular wait. Option (D) can also cause deadlock: imagine that process X has acquired a and b while process Y has acquired c — X and Y are now circularly waiting for each other.
Taking option (A) as an example: since all three processes run concurrently, X gets semaphore a, Y gets b, and Z gets c; now X is blocked on b, Y is blocked on c, and Z gets d but is blocked on a, leading to deadlock. For option (B) one can verify that no such cycle arises; one possible completion order is Z, then X, then Y.
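Option (B)'s safety can be checked mechanically with the resource-ordering rule: if every process acquires the semaphores consistently with one global order (the "acquired earlier → acquired later" graph is acyclic), circular wait is impossible. Note this condition is sufficient, not necessary. A sketch:

```python
def acquisition_order_safe(orders):
    """Build the precedence graph implied by the P() sequences and report
    whether it is acyclic (a sufficient condition for deadlock freedom)."""
    edges = {}
    for order in orders:
        for i, u in enumerate(order):
            edges.setdefault(u, set()).update(order[i + 1:])
            for v in order[i + 1:]:
                edges.setdefault(v, set())

    state = {u: 0 for u in edges}      # 0 = unvisited, 1 = on stack, 2 = done

    def has_cycle(u):
        state[u] = 1
        for v in edges[u]:
            if state[v] == 1 or (state[v] == 0 and has_cycle(v)):
                return True
        state[u] = 2
        return False

    return not any(state[u] == 0 and has_cycle(u) for u in edges)

assert acquisition_order_safe(['bac', 'bcd', 'acd'])        # option (B): b<a<c<d works
assert not acquisition_order_safe(['abc', 'bcd', 'cda'])    # option (A): a<c and c<a
assert not acquisition_order_safe(['bac', 'cbd', 'acd'])    # option (C): b<c and c<b
assert not acquisition_order_safe(['abc', 'cbd', 'cda'])    # option (D): b<c and c<b
```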

Which of the following types of operating systems is non- interactive?
  • a)
    Multitasking operating system
  • b)
    Multi-user operating system
  • c)
    Batch processing operating system
  • d)
    Multiprogramming operating system
Correct answer is option 'C'. Can you explain this answer?

Crack Gate answered
A batch operating system does not interact with the computer directly. An operator takes similar jobs with the same requirements and groups them into batches; it is the operator's responsibility to sort jobs with similar needs.
Hence the correct answer is the Batch processing operating system.

With respect to operating systems, which of the following is NOT a valid process state?
  • a)
    Ready
  • b)
    Waiting
  • c)
    Running
  • d)
    Starving
Correct answer is option 'D'. Can you explain this answer?

Crack Gate answered
Process States:
Each process goes through different states in its life cycle,
  • New (Create) – The process is about to be created but does not yet exist; it is still a program residing in secondary memory.
  • Ready – After creation, the process enters the ready state, i.e. it is loaded into main memory and waits to be assigned a processor.
  • Run – The process has been chosen by the CPU for execution, and its instructions are executed by one of the available CPU cores.
  • Blocked or wait – Whenever the process requests I/O, needs input from the user, or needs access to a critical region, it enters the blocked or wait state.
  • Terminated or completed – The process is killed and its PCB is deleted.
Hence the correct answer is Starving.

What name is given to a program in execution?
  • a)
    Process
  • b)
    Data load
  • c)
    Program
  • d)
    Mutex
Correct answer is option 'A'. Can you explain this answer?

Gate Gurus answered
A process is the instance of a computer program that is being executed. It contains the program code and its activity. A process may be made up of multiple threads of execution that execute instructions concurrently.
Program in execution is called process. The process is an active entity.
Hence the correct answer is process.

A _________ process is moved to the ready state when its time allocation expires.
  • a)
    Blocked
  • b)
    New
  • c)
    Running
  • d)
    Suspended
Correct answer is option 'C'. Can you explain this answer?

Gate Gurus answered
When a process executes, it passes through different states. These stages may differ in different operating systems, and the names of these states are also not standardized.
In general, a process can have one of the following five states at a time.
1.Start
  • This is the initial state when a process is first started/created.
2. Ready
  • The process is waiting to be assigned to a processor. Ready processes are waiting for the operating system to allocate the processor to them so that they can run. A process may enter this state after the Start state, or while running, when it is interrupted by the scheduler so that the CPU can be assigned to some other process.
3. Running
  • Once the process has been assigned to a processor by the OS scheduler, the process state is set to running and the processor executes its instructions.
  • A running process is moved to the ready state when its time allocation expires (quantum time).
  • A running process is moved to the terminated state when its execution is complete.
4. Waiting
  • Process moves into the waiting state if it needs to wait for a resource, such as waiting for user input, or waiting for a file to become available.
5. Terminated or Exit
  • Once the process finishes its execution, or it is terminated by the operating system, it is moved to the terminated state where it waits to be removed from main memory.

Chapter doubts & questions for Process Management - Operating System 2025 is part of Computer Science Engineering (CSE) exam preparation. The chapters have been prepared according to the Computer Science Engineering (CSE) exam syllabus. The Chapter doubts & questions, notes, tests & MCQs are made for Computer Science Engineering (CSE) 2025 Exam. Find important definitions, questions, notes, meanings, examples, exercises, MCQs and online tests here.
