
All questions of Threads for Computer Science Engineering (CSE) Exam

Which of the following strategies is employed to overcome the priority inversion problem?
  • a)
    Temporarily raise the priority of lower priority level process
  • b)
    Have a fixed priority level scheme
  • c)
    Implement kernel pre-emption scheme
  • d)
    Allow lower priority process to complete its job
Correct answer is option 'A'. Can you explain this answer?

Ravi Singh answered
Priority inversion is a scheduling scenario in which a higher priority process is indirectly preempted by a lower priority process, thereby inverting the relative priorities of the processes: the low-priority process holds a resource the high-priority process needs, while medium-priority processes keep preempting the low-priority holder. The standard remedy (priority inheritance) is to temporarily raise the priority of the lower priority process to that of the highest-priority waiter, so that medium-priority processes cannot preempt it and it can quickly finish with the resource and release it.
Option (A) is correct.
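As a concrete illustration, POSIX threads expose exactly this strategy as a mutex protocol. The following is a minimal sketch, assuming a POSIX system that supports PTHREAD_PRIO_INHERIT; it is one way to apply the technique, not part of the original question.

#include <pthread.h>

pthread_mutex_t lock;

/* Initialize a mutex that uses the priority-inheritance protocol.
   While a thread holds this lock, it inherits the priority of the
   highest-priority thread blocked on it, so medium-priority threads
   cannot preempt it and cause priority inversion. */
void init_pi_mutex(void) {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    pthread_mutex_init(&lock, &attr);
    pthread_mutexattr_destroy(&attr);
}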

User level threads are threads that are visible to the programmer and are unknown to the kernel. The operating system kernel supports and manages kernel level threads. Three different models relate user and kernel level threads. Which of the following statements is/are true?
(a)
(i) The Many - to - one model maps many user threads to one kernel thread
(ii) The one - to - one model maps one user thread to one kernel thread
(iii) The many - to - many model maps many user threads to a smaller or equal number of kernel threads
(b)
(i) Many - to - one model maps many kernel threads to one user thread
(ii) One - to - one model maps one kernel thread to one user thread
(iii) Many - to - many model maps many kernel threads to a smaller or equal number of user threads
  • a)
    (a) is true; (b) is false
  • b)
    (a) is false; (b) is true
  • c)
    Both (a) and (b) are true
  • d)
    Both (a) and (b) are false
Correct answer is option 'A'. Can you explain this answer?

Explanation:

User-level threads and kernel-level threads are two different types of threads used in operating systems. The operating system kernel supports and manages kernel-level threads, while user-level threads are managed by the programmer.

User-Level Threads:
- User-level threads are threads that are visible to the programmer and are unknown to the kernel.
- These threads are created and managed by the application or programming language without any intervention from the operating system.
- User-level threads are lightweight and have low overhead since they do not require kernel involvement for thread management.
- However, user-level threads are limited by the fact that if one thread makes a blocking system call, the kernel blocks the entire process, so all the other threads in the process are blocked as well.

Kernel-Level Threads:
- Kernel-level threads are threads that are managed and supported by the operating system kernel.
- The kernel provides system calls and services for creating, scheduling, and managing kernel-level threads.
- Kernel-level threads have the advantage of being able to run in parallel on multiple processors or cores.
- However, they have higher overhead compared to user-level threads due to the involvement of the kernel in thread management.

Models of User and Kernel Level Threads:
There are three different models that relate user-level threads to kernel-level threads:

1. Many-to-One Model:
- In this model, many user threads are mapped to a single kernel thread.
- All user-level threads of a process share the same kernel-level thread.
- This model has low overhead and is easy to implement but lacks the ability to run threads in parallel on multiple processors or cores.

2. One-to-One Model:
- In this model, each user thread is mapped to a separate kernel thread.
- Each user-level thread has a corresponding kernel-level thread.
- This model allows threads to run in parallel on multiple processors or cores, but it has higher overhead compared to the many-to-one model.

3. Many-to-Many Model:
- In this model, many user threads are multiplexed onto a smaller or equal number of kernel threads.
- The mapping between user and kernel threads can be dynamic and change over time.
- This model provides a balance between the flexibility of the one-to-one model and the efficiency of the many-to-one model.

Correct Answer:
The correct answer is option (a) - (a) is true; (b) is false.
- The many-to-one model maps many user threads to one kernel thread.
- The one-to-one model maps one user thread to one kernel thread.
- The many-to-many model maps many user threads to a smaller or equal number of kernel threads.
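For a concrete example of the one-to-one model: on Linux, the NPTL pthreads implementation backs each user thread with its own kernel thread. This is a minimal sketch, not tied to the question itself.

#include <pthread.h>
#include <stdio.h>

/* Each call to pthread_create makes one user thread that, under a
   one-to-one implementation such as Linux NPTL, is backed by its own
   kernel thread and can run in parallel with the others. */
void *worker(void *arg) {
    printf("hello from thread %ld\n", (long)arg);
    return NULL;
}

int main(void) {
    pthread_t t[3];
    for (long i = 0; i < 3; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 3; i++)
        pthread_join(t[i], NULL);
    return 0;
}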

A disk drive has 100 cylinders, numbered 0 to 99. Disk requests come to the disk driver for cylinders 12, 26, 24, 4, 42, 8 and 50 in that order. The driver is currently serving a request at cylinder 24. A seek takes 6 msec per cylinder moved. How much seek time is needed for the shortest seek time first (SSTF) algorithm?
  • a)
    0.984 sec
  • b)
    0.396 sec
  • c)
    0.738 sec
  • d)
    0.42 sec
Correct answer is option 'D'. Can you explain this answer?

Preethi Iyer answered
Given information:
- Disk drive has 100 cylinders, numbered 0 to 99.
- Disk requests come for cylinders 12, 26, 24, 4, 42, 8, and 50.
- The driver is currently serving a request at cylinder 24.
- A seek takes 6 msec per cylinder moved.

Shortest Seek Time First (SSTF) algorithm:
The SSTF algorithm chooses the request that is closest to the current position of the disk head. It minimizes the seek time by selecting the request that requires the shortest distance to be traveled by the disk head.

Calculating the seek time:
Starting from cylinder 24 (the request currently being served), the pending requests are 12, 26, 4, 42, 8 and 50. At each step, SSTF services the pending request closest to the current head position:

1. Initial position: cylinder 24
2. Closest request: cylinder 26 (distance = 2)
- Seek time: 2 * 6 msec = 12 msec
3. From 26, closest request: cylinder 12 (distance = 14; cylinder 42 would be 16 away)
- Seek time: 14 * 6 msec = 84 msec
4. From 12, closest request: cylinder 8 (distance = 4)
- Seek time: 4 * 6 msec = 24 msec
5. From 8, closest request: cylinder 4 (distance = 4)
- Seek time: 4 * 6 msec = 24 msec
6. From 4, closest request: cylinder 42 (distance = 38)
- Seek time: 38 * 6 msec = 228 msec
7. From 42, closest request: cylinder 50 (distance = 8)
- Seek time: 8 * 6 msec = 48 msec

Total seek time:
The total head movement is 2 + 14 + 4 + 4 + 38 + 8 = 70 cylinders, so the total seek time is
12 + 84 + 24 + 24 + 228 + 48 = 420 msec.

Conversion to seconds:
420 msec / 1000 = 0.42 sec

Therefore, the seek time needed for the SSTF algorithm is 0.42 seconds, which matches option (D).
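The calculation can also be checked mechanically. Below is a small, self-contained sketch using the numbers from the question; it greedily services the closest pending request and totals the head movement.

#include <stdio.h>
#include <stdlib.h>

/* SSTF: repeatedly service the pending request closest to the current
   head position and accumulate the movement.  Head starts at 24 (the
   request being served), and a seek costs 6 msec per cylinder. */
int main(void) {
    int req[] = {12, 26, 4, 42, 8, 50};   /* 24 is already being served */
    int n = 6, head = 24, moved = 0;
    int done[6] = {0};

    for (int served = 0; served < n; served++) {
        int best = -1, bestdist = 1 << 30;
        for (int i = 0; i < n; i++) {
            if (!done[i] && abs(req[i] - head) < bestdist) {
                bestdist = abs(req[i] - head);
                best = i;
            }
        }
        moved += bestdist;
        head = req[best];
        done[best] = 1;
    }
    printf("cylinders moved = %d, seek time = %d msec\n", moved, moved * 6);
    /* prints: cylinders moved = 70, seek time = 420 msec */
    return 0;
}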

A thread is usually defined as a "light weight process" because an operating system (OS) maintains smaller data structures for a thread than for a process. In relation to this, which of the following is TRUE?
  • a)
    On per-thread basis, the OS maintains only CPU register state
  • b)
    The OS does not maintain a separate stack for each thread
  • c)
    On per-thread basis, the OS does not maintain virtual memory state
  • d)
    On per-thread basis, the OS maintains only scheduling and accounting information
Correct answer is option 'C'. Can you explain this answer?

Sanya Agarwal answered
Threads share the address space of their process: virtual memory state is maintained per process, not per thread. A thread is a basic unit of CPU utilization, consisting of a thread ID, a program counter, a register set, and a stack. In a multi-threaded application there are multiple threads within a single process, each with its own program counter, stack and register set, but sharing common code, data, and certain resources such as open files.
Option (A): incorrect. The OS maintains not only the CPU register state per thread, but also a stack and scheduling information.
Option (B): incorrect. A separate stack is maintained for each thread.
Option (C): correct. The OS does not maintain virtual memory state per thread; the address space belongs to the process and is shared by all of its threads.
Option (D): incorrect. Besides scheduling and accounting information, the OS also maintains the CPU register state and a stack for each thread.
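A rough sketch of the bookkeeping involved, with purely illustrative field names (not taken from any particular kernel): per-thread state is small (registers and a stack), while the virtual-memory state lives in the process and is shared.

/* Hypothetical structures; field names are illustrative only. */
struct thread {
    int tid;
    unsigned char *stack;      /* each thread has its own stack */
    unsigned long regs[32];    /* private register save area, incl. PC */
};

struct process {
    int pid;
    unsigned long *page_table; /* virtual memory state: one per process,
                                  shared by all of its threads */
    struct thread *threads;    /* the process's threads */
    int nthreads;
};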

Barrier is a synchronization construct where a set of processes synchronizes globally i.e. each process in the set arrives at the barrier and waits for all others to arrive and then all processes leave the barrier. Let the number of processes in the set be three and S be a binary semaphore with the usual P and V functions. Consider the following C implementation of a barrier with line numbers shown on left.
void barrier (void) {
1:   P(S);
2:   process_arrived++;
3:   V(S);
4:   while (process_arrived !=3);
5:   P(S);
6:   process_left++;
7:   if (process_left==3) {
8:      process_arrived = 0;
9:      process_left = 0;
10:  }
11:  V(S);
}
The variables process_arrived and process_left are shared among all processes and are initialized to zero. In a concurrent program all the three processes call the barrier function when they need to synchronize globally. The above implementation of barrier is incorrect. Which one of the following is true?
  • a)
    The barrier implementation is wrong due to the use of binary semaphore S
  • b)
The barrier implementation may lead to a deadlock if two barrier invocations are used in immediate succession
  • c)
    Lines 6 to 10 need not be inside a critical section
  • d)
    The barrier implementation is correct if there are only two processes instead of three
Correct answer is option 'B'. Can you explain this answer?

Yash Patel answered
The implementation can deadlock when two barrier invocations are used in immediate succession. After all three processes arrive, process_arrived equals 3 and they begin to leave. Suppose one fast process leaves (incrementing process_left) and immediately calls barrier() again, incrementing process_arrived to 4 before the last process of the first round has reset the counters. When that last process finally sets process_arrived and process_left back to 0, the fast process's arrival at the second barrier is wiped out. At the second barrier, process_arrived can then only reach 2, the condition process_arrived != 3 never becomes false, and all processes spin forever: deadlock. Hence option (B).
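One common way to repair the barrier, sketched below with the same P/V interface and shared variables, is to stop a process from starting a new round until the previous round has fully drained (process_left back to 0). This keeps process_arrived from ever exceeding 3 before it is reset; it is one possible fix, not the only one.

void barrier (void) {
    P(S);
    while (process_left != 0) { /* previous round not fully drained */
        V(S);                   /* release S so others can progress */
        P(S);
    }
    process_arrived++;
    V(S);
    while (process_arrived != 3);
    P(S);
    process_left++;
    if (process_left == 3) {
        process_arrived = 0;
        process_left = 0;
    }
    V(S);
}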

Consider a system having ‘m’ resources of the same type. These resources are shared by three processes P1, P2 and P3, which have peak demands of 2, 5 and 7 resources respectively. For what value of ‘m’ will deadlock not occur?
  • a)
    70
  • b)
    14
  • c)
    13
  • d)
    7
Correct answer is option 'B'. Can you explain this answer?

Ananya Shah answered
For a single resource type, deadlock is avoided as long as the resources cannot all be exhausted while every process is still short of its peak demand. In the worst case each process holds one less than its peak, i.e. (2 - 1) + (5 - 1) + (7 - 1) = 11 resources; if m ≥ 12, at least one process can always obtain its final resource, complete, and release everything it holds, so no deadlock is possible. With m = 14 = 2 + 5 + 7, the sum of the peak demands, every process can hold its peak demand simultaneously, so deadlock certainly cannot occur.
So, option (B) is correct.

Three concurrent processes X, Y, and Z execute three different code segments that access and update certain shared variables. Process X executes the P operation (i.e., wait) on semaphores a, b and c; process Y executes the P operation on semaphores b, c and d; process Z executes the P operation on semaphores c, d, and a before entering the respective code segments. After completing the execution of its code segment, each process invokes the V operation (i.e., signal) on its three semaphores. All semaphores are binary semaphores initialized to one. Which one of the following represents a deadlock-free order of invoking the P operations by the processes?
  • a)
    X: P(a)P(b)P(c) Y: P(b)P(c)P(d) Z: P(c)P(d)P(a)
  • b)
    X: P(b)P(a)P(c) Y: P(b)P(c)P(d) Z: P(a)P(c)P(d)
  • c)
    X: P(b)P(a)P(c) Y: P(c)P(b)P(d) Z: P(a)P(c)P(d)
  • d)
    X: P(a)P(b)P(c) Y: P(c)P(b)P(d) Z: P(c)P(d)P(a)
Correct answer is option 'B'. Can you explain this answer?

Sanya Agarwal answered
Option (A) can cause deadlock: since all three processes run concurrently, X may acquire a, Y may acquire b, and Z may acquire c and then d. Now X is blocked on b (held by Y), Y is blocked on c (held by Z), and Z is blocked on a (held by X): a circular wait.
Option (C) can also cause deadlock: suppose X has acquired b, Y has acquired c and Z has acquired a. Then X waits for a (held by Z), Z waits for c (held by Y), and Y waits for b (held by X): again a circular wait.
Option (D) can also cause deadlock: suppose X has acquired a and b while Y has acquired c. Then X waits for c and Y waits for b, so X and Y wait on each other circularly.
In option (B) every process requests its semaphores in an order consistent with the single global order b, a, c, d, so a circular wait can never form and the order is deadlock-free. For example, if all three start together, whichever of X and Y gets b first proceeds; one possible completion order is Z, then X, then Y.
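The reason option (B) is safe is classic resource ordering. A minimal sketch with POSIX unnamed semaphores, with sem_wait standing in for P (initialization with sem_init and the V operations are omitted for brevity): every process acquires in an order consistent with the global order b, a, c, d, so a cycle in the wait-for graph cannot form.

#include <semaphore.h>

sem_t a, b, c, d;   /* binary semaphores, each initialized to 1 */

/* All acquisition sequences respect the order b -> a -> c -> d. */
void proc_X(void) { sem_wait(&b); sem_wait(&a); sem_wait(&c); /* CS */ }
void proc_Y(void) { sem_wait(&b); sem_wait(&c); sem_wait(&d); /* CS */ }
void proc_Z(void) { sem_wait(&a); sem_wait(&c); sem_wait(&d); /* CS */ }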

Which of the following actions is/are typically not performed by the operating system when switching context from process A to process B?
  • a)
    Saving current register values and restoring saved register values for process B.
  • b)
    Changing address translation tables.
  • c)
    Swapping out the memory image of process A to the disk.
  • d)
    Invalidating the translation look-aside buffer.
Correct answer is option 'C'. Can you explain this answer?

Yash Patel answered
During a context switch, processes are not swapped out from memory to disk; a process's memory image is swapped to disk only when the process is suspended (medium-term scheduling), not on every switch. The OS does save and restore register state, may change the address-translation tables, and generally invalidates the TLB so that its entries correspond to the newly scheduled process. So action (C) is not performed during a context switch, making option (C) the answer.

A shared variable x, initialized to zero, is operated on by four concurrent processes W, X, Y, Z as follows. Each of the processes W and X reads x from memory, increments by one, stores it to memory, and then terminates. Each of the processes Y and Z reads x from memory, decrements by two, stores it to memory, and then terminates. Each process before reading x invokes the P operation (i.e., wait) on a counting semaphore S and invokes the V operation (i.e., signal) on the semaphore S after storing x to memory. Semaphore S is initialized to two. What is the maximum possible value of x after all processes complete execution? (GATE CS 2013)
  • a)
    -2
  • b)
    -1
  • c)
    1
  • d)
    2
Correct answer is option 'D'. Can you explain this answer?

Given information:
- Four concurrent processes W, X, Y, Z operate on a shared variable x.
- Each process reads x from memory, performs an operation, and stores the result back to memory.
- Processes W and X increment x by 1.
- Processes Y and Z decrement x by 2.
- Each process invokes the P operation (wait) on semaphore S before reading x, and invokes the V operation (signal) on semaphore S after storing x to memory.
- Semaphore S is initialized to 2.
- The initial value of x is 0.

Analysis:
- Semaphore S is initialized to 2, so up to two processes can be between P(S) and V(S) at the same time. The read-modify-write of x is therefore not atomic across processes, and updates can be lost.
- To maximize x, both decrements (by Y and Z) must be lost and both increments (by W and X) must survive.

Solution (one interleaving that achieves the maximum):
- W does P(S) and reads x = 0, then is preempted before storing.
- Y does P(S), reads 0, writes x = -2, and does V(S).
- Z does P(S), reads -2, writes x = -4, and does V(S).
- W resumes and stores its stale value plus one: x = 0 + 1 = 1, then does V(S). Both decrements are now lost.
- X does P(S), reads 1, writes x = 2, and does V(S).

Conclusion:
The maximum possible value of x after all processes complete execution is 2 (option D).
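The setup can be sketched in C with POSIX semaphores (sem_wait/sem_post standing in for P/V). The point of the sketch is that S initialized to 2 admits two processes between P(S) and V(S) at once, so the separate read, modify and store steps below can interleave and lose updates exactly as in the scenario above.

#include <semaphore.h>

sem_t S;        /* counting semaphore, set up with sem_init(&S, 0, 2) */
int x = 0;      /* shared variable */

void incr(void) {          /* body of processes W and X */
    sem_wait(&S);          /* P(S) */
    int t = x;             /* read x */
    t = t + 1;             /* increment by one */
    x = t;                 /* store back; another process may have
                              read the old value in the meantime */
    sem_post(&S);          /* V(S) */
}

void decr(void) {          /* body of processes Y and Z */
    sem_wait(&S);
    int t = x;
    t = t - 2;             /* decrement by two */
    x = t;
    sem_post(&S);
}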

One of the disadvantages of user level threads compared to Kernel level threads is
  • a)
    If a user-level thread of a process executes a system call, all threads in that process are blocked.
  • b)
    Scheduling is application dependent.
  • c)
    Thread switching doesn’t require kernel mode privileges.
  • d)
    The library procedures invoked for thread management in user level threads are local procedures.
Correct answer is option 'A'. Can you explain this answer?

Sanya Agarwal answered
Advantages of user-level threads:
1- Scheduling is application dependent.
2- Thread switching doesn’t require kernel mode privileges.
3- The library procedures invoked for thread management are local procedures.
4- User-level threads are fast to create and manage.
5- User-level threads can run on any operating system.
Disadvantages of user-level threads:
1- If one thread makes a blocking system call, the kernel blocks the entire process, so all threads of the process are blocked.
2- Threads of the same process cannot run in parallel on multiple processors.
So, option (A) is correct.

The state of a process after it encounters an I/O instruction is
  • a)
    ready
  • b)
    blocked
  • c)
    idle
  • d)
    running
Correct answer is option 'B'. Can you explain this answer?

Yash Patel answered
Whenever a process is created, it is placed in the ready queue. When it is dispatched to the CPU it is in the running state, and as soon as it issues an input/output request it is moved to the blocked state, where it waits until the I/O completes.

Consider the methods used by processes P1 and P2 for accessing their critical sections whenever needed, as given below. The initial values of shared boolean variables S1 and S2 are randomly assigned.
Method Used by P1
while (S1 == S2) ;
Critical Section
S1 = S2;
Method Used by P2
while (S1 != S2) ;
Critical Section
S2 = not (S1);
Which one of the following statements describes the properties achieved?
  • a)
    Mutual exclusion but not progress
  • b)
    Progress but not mutual exclusion
  • c)
    Neither mutual exclusion nor progress
  • d)
    Both mutual exclusion and progress
Correct answer is option 'A'. Can you explain this answer?

Yash Patel answered
Mutual Exclusion: a way of making sure that if one process is using shared modifiable data, all other processes are excluded from doing the same thing at the same time. While one process executes in its critical section, every other process desiring to do so must wait; when it finishes, one of the waiting processes is allowed to proceed. In this fashion, each process accessing the shared data excludes all others from doing so simultaneously.
Progress Requirement: if no process is executing in its critical section and some processes wish to enter their critical sections, then the selection of the process that will enter next cannot be postponed indefinitely.
Solution: Mutual exclusion is satisfied: P1 can enter its critical section only when S1 != S2, and P2 only when S1 == S2, so both can never be inside simultaneously. Progress, however, is not satisfied, because the exit code forces strict alternation. Suppose S1 = 1 and S2 = 0, P1 is not interested in entering its critical section, but P2 wants to enter: P2 cannot proceed until P1 enters and exits (only then does S1 == S2 hold). A process that does not wish to enter its critical section thus blocks another interested process, which violates progress.
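The strict alternation is easy to see when the two methods are written as entry/exit code. A minimal sketch (S1 and S2 shared, initial values arbitrary): P1's exit makes S1 == S2, which is exactly P2's entry condition, and P2's exit restores S1 != S2, which is P1's.

int S1, S2;   /* shared boolean variables; initial values randomly assigned */

void P1_enter(void) { while (S1 == S2) ; }  /* spin until S1 != S2 */
void P1_exit(void)  { S1 = S2; }            /* now S1 == S2: only P2 may enter */

void P2_enter(void) { while (S1 != S2) ; }  /* spin until S1 == S2 */
void P2_exit(void)  { S2 = !S1; }           /* now S1 != S2: only P1 may enter */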

Suppose there are four processes in execution with 12 instances of a Resource R in a system. The maximum need of each process and current allocation are given below:
Process:            P1  P2  P3  P4
Maximum need:        8   9   5   3
Current allocation:  3   4   2   1
With reference to the current allocation, is the system safe? If so, what is the safe sequence?
  • a)
    No
  • b)
    Yes, P1P2P3P4
  • c)
    Yes, P4P3P1P2
  • d)
    Yes, P2P1P3P4
Correct answer is option 'C'. Can you explain this answer?

Ravi Singh answered
Current allocations of P1, P2, P3, P4 are 3, 4, 2, 1, which is 10 in total. There are 12 resources, of which 10 are allocated, so only 2 are available. The remaining needs of P1, P2, P3, P4 are 5, 5, 3, 2 respectively. P4 can run first (its remaining need of 2 fits in the 2 available) and frees 3 resources after execution, which is sufficient for P3; P3 then executes and frees 5 resources. Now P1 and P2 each need 5, so either can run next; taking P1 first gives the execution order P4 P3 P1 P2.
So, option (C) is correct.
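The same check can be run mechanically with a simplified, single-resource Banker's safety test. A minimal sketch using the numbers above (remaining need = maximum need - current allocation):

#include <stdio.h>

/* Safety check for one resource type: repeatedly pick any process whose
   remaining need fits in the available resources, let it finish, and
   reclaim its allocation. */
int main(void) {
    int alloc[4] = {3, 4, 2, 1};      /* current allocation of P1..P4 */
    int need[4]  = {5, 5, 3, 2};      /* remaining need = max - alloc */
    int avail = 12 - (3 + 4 + 2 + 1); /* = 2 free instances */
    int done[4] = {0};

    for (int finished = 0; finished < 4; ) {
        int progressed = 0;
        for (int i = 0; i < 4; i++) {
            if (!done[i] && need[i] <= avail) {
                avail += alloc[i];    /* Pi completes, releases everything */
                done[i] = 1;
                finished++;
                progressed = 1;
                printf("P%d ", i + 1);
            }
        }
        if (!progressed) { printf("unsafe\n"); return 1; }
    }
    printf("<- a safe sequence exists\n"); /* prints: P4 P3 P1 P2 */
    return 0;
}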

A starvation free job scheduling policy guarantees that no job indefinitely waits for a service. Which of the following job scheduling policies is starvation free?
  • a)
    Priority queuing
  • b)
    Shortest Job First
  • c)
    Youngest Job First
  • d)
    Round robin
Correct answer is option 'D'. Can you explain this answer?

Ravi Singh answered
Round Robin is a starvation-free scheduling algorithm because it imposes a strict bound on the response time of each process: in a system with n processes running round robin with time quantum tq, no process waits more than (n - 1) * tq time units for its next CPU turn. For example, with n = 5 and tq = 10 ms, the bound is 40 ms. Priority queuing and Shortest Job First, by contrast, can starve low-priority or long jobs. Option (D) is correct.

An Operating System (OS) crashes on the average once in 30 days, that is, the Mean Time Between Failures (MTBF) = 30 days. When this happens, it takes 10 minutes to recover the OS, that is, the Mean Time To Repair (MTTR) = 10 minutes. The availability of the OS with these reliability figures is approximately :
  • a)
    96.97%
  • b)
    97.97%
  • c)
    99.009%
  • d)
    99.97%
Correct answer is option 'D'. Can you explain this answer?

Sanya Agarwal answered
The system crashes once every 30 days (MTBF = 30 days) and needs 10 minutes to be repaired (MTTR = 10 minutes). Converting to the same unit: 30 days = 30 * 24 * 60 = 43,200 minutes.
Availability = MTBF / (MTBF + MTTR) = 43,200 / 43,210 ≈ 0.9998, i.e. about 99.97%.
Equivalently, the fraction of time the system is unavailable is (10 / 43,200) * 100 ≈ 0.023%, so availability ≈ 100 - 0.023 = 99.97%.
So, option (D) is correct.

A CPU scheduling algorithm determines an order for the execution of its scheduled processes. Given 'n' processes to be scheduled on one processor, how many possible different schedules are there?
  • a)
    n
  • b)
    n²
  • c)
    n!
  • d)
    2ⁿ
Correct answer is option 'C'. Can you explain this answer?

Yash Patel answered
For 'n' processes to be scheduled on one processor, there can be n! different schedules possible. Example: Suppose an OS has 4 processes to schedule P1, P2, P3 and P4. For scheduling the first process, it has 4 choices, then from the remaining three processes it can take 3 choices, and so on. So, total schedules possible are 4*3*2*1 = 4!
Option (C) is correct.

System calls are usually invoked by using:
  • a)
    A privileged instruction
  • b)
    An indirect jump
  • c)
    A software interrupt
  • d)
    Polling
Correct answer is option 'C'. Can you explain this answer?

Sanya Agarwal answered
  • System calls are usually invoked by using a software interrupt.
  • Polling is the process where the computer or controlling device waits for an external device to check for its readiness or state, often with low-level hardware.
  • Privileged instruction is an instruction (usually in machine code) that can be executed only by the operating system in a specific mode.
  • In an indirect jump, the target address is taken from a register or memory location at run time, whereas in a direct jump the target address (i.e. its relative offset value) is encoded into the jump instruction itself. Neither mechanism by itself performs the privilege-mode switch a system call requires.
So, option (C) is correct.

In an operating system, indivisibility of operation means:
  • a)
    Operation is interruptable
  • b)
    Race - condition may occur
  • c)
    Processor can not be pre-empted
  • d)
    All of the above
Correct answer is option 'C'. Can you explain this answer?

Sanya Agarwal answered
In an operating system, indivisibility of an operation means the operation is atomic: once it starts executing, it runs to completion on the processor without being pre-empted or interrupted.
So, option (C) is correct.

Let the time taken to switch between user and kernel modes of execution be t1, and the time taken to switch between two processes be t2. Which of the following is TRUE?
  • a)
    t1 > t2
  • b)
    t1 = t2
  • c)
    t1 < t2
  • d)
    nothing can be said about the relation between t1 and t2
Correct answer is option 'C'. Can you explain this answer?

Sagar Saha answered
Explanation:

Difference in time taken between user and kernel modes (t1) and two processes (t2):
- When a switch occurs between user and kernel modes of execution, it involves a change in privilege levels and requires the processor to switch from user mode to kernel mode or vice versa. This switch typically involves saving and restoring the current state of the process. This process incurs a certain overhead, denoted by t1.
- On the other hand, when a switch occurs between two processes, it involves switching the entire context of one process with another, including saving and restoring registers, program counters, and other relevant information. This switch between processes incurs a higher overhead compared to switching between user and kernel modes, denoted by t2.

Relationship between t1 and t2:
- As explained above, the overhead involved in switching between two processes (t2) is generally higher than the overhead involved in switching between user and kernel modes (t1). This is because switching between processes requires more context switching and data transfer.
- Therefore, it can be concluded that t1 < t2, indicating that the time taken to switch between user and kernel modes is typically less than the time taken to switch between two processes.
Therefore, the correct answer is option (C): t1 < t2.

Which of the following does not interrupt a running process?
  • a)
    A device
  • b)
    Timer
  • c)
    Scheduler process
  • d)
    Power failure
Correct answer is option 'C'. Can you explain this answer?

Sanya Agarwal answered
The scheduler process doesn’t interrupt any process; its job is to select processes for the following three purposes:
- Long-term scheduler (job scheduler): selects which processes should be brought into the ready queue.
- Short-term scheduler (CPU scheduler): selects which process should be executed next and allocates the CPU.
- Medium-term scheduler (swapper): present in systems with virtual memory; temporarily removes processes from main memory and places them on secondary storage (such as a disk drive), or vice versa. It may decide to swap out a process that has been inactive for some time, has a low priority, is page-faulting frequently, or is taking up a large amount of memory, swapping it back in later when more memory is available or when the process is unblocked and no longer waiting for a resource.
Devices, timers and power failures, by contrast, do raise interrupts. Option (C) is correct.

Consider three CPU intensive processes, which require 10, 20, 30 units and arrive at times 0, 2, 6 respectively. How many context switches are needed if shortest remaining time first is implemented? Context switch at 0 is included but context switch at end is ignored
  • a)
    1
  • b)
    2
  • c)
    3
  • d)
    4
Correct answer is option 'C'. Can you explain this answer?

Yash Patel answered
Let the three processes be P0, P1 and P2 with arrival times 0, 2 and 6 and CPU burst times 10, 20 and 30 respectively. At time 0, P0 is the only available process, so it runs (the context switch at time 0). At time 2, P1 arrives, but P0 has the shortest remaining time (8 vs 20), so P0 continues. At time 6, P2 also arrives, but P0 still has the shortest remaining time (4), so it continues. At time 10, P0 completes and P1, now the shortest-remaining-time process, is scheduled (second switch). At time 30, P1 completes and P2 is scheduled (third switch). Counting the switch at time 0 and ignoring the one at the end, 3 context switches are needed.
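The schedule is simple enough to verify with a tiny simulation. A minimal sketch that advances time one unit at a time, always runs the arrived process with the shortest remaining time, and counts changes of the running process (including the one at time 0, excluding the end):

#include <stdio.h>

int main(void) {
    int arrive[3] = {0, 2, 6};
    int remain[3] = {10, 20, 30};
    int running = -1, switches = 0;

    for (int t = 0; t < 60; t++) {         /* total burst time is 60 */
        int best = -1;
        for (int i = 0; i < 3; i++)        /* pick shortest remaining time */
            if (arrive[i] <= t && remain[i] > 0 &&
                (best == -1 || remain[i] < remain[best]))
                best = i;
        if (best != running) { switches++; running = best; }
        if (best >= 0) remain[best]--;
    }
    printf("context switches = %d\n", switches);  /* prints 3 */
    return 0;
}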

At a particular time of computation, the value of a counting semaphore is 7. Then 20 P operations and x V operations were performed on this semaphore. If the new value of the semaphore is 5, x will be
  • a)
    18
  • b)
    22
  • c)
    15
  • d)
    13
Correct answer is option 'A'. Can you explain this answer?

Yash Patel answered
P operation: decrements the value of the semaphore by 1. V operation: increments the value of the semaphore by 1.
Initially, value of semaphore = 7.
After 20 P operations, value of semaphore = 7 - 20 = -13.
After x V operations the value must be 5: -13 + x = 5, so x = 5 + 13 = 18.
So, option (A) is correct.

Consider the following statements about user level threads and kernel level threads. Which one of the following statement is FALSE?
  • a)
    Context switch time is longer for kernel level threads than for user level threads.
  • b)
    User level threads do not need any hardware support.
  • c)
    Related kernel level threads can be scheduled on different processors in a multi-processor system.
  • d)
    Blocking one kernel level thread blocks all related threads.
Correct answer is option 'D'. Can you explain this answer?

Yash Patel answered
Kernel level threads are managed by the OS, therefore, thread operations are implemented in the kernel code. Kernel level threads can also utilize multiprocessor systems by splitting threads on different processors. If one thread blocks it does not cause the entire process to block. Kernel level threads have disadvantages as well. They are slower than user level threads due to the management overhead. Kernel level context switch involves more steps than just saving some registers. Finally, they are not portable because the implementation is operating system dependent.
Option (A): Context switch time is longer for kernel-level threads than for user-level threads. True: user-level threads are managed in user space while kernel-level threads are managed by the OS, and the management overheads involved make a kernel-level context switch longer.
Option (B): User-level threads do not need any hardware support. True: user-level threads are implemented by libraries in user space.
Option (C): Related kernel-level threads can be scheduled on different processors in a multi-processor system. True.
Option (D): Blocking one kernel-level thread blocks all related threads. False: since kernel-level threads are managed by the operating system, if one thread blocks, it does not cause all threads or the entire process to block. This is the FALSE statement, so option (D) is the answer.

At a particular time, the value of a counting semaphore is 10. It will become 7 after: (a) 3 V operations (b) 3 P operations (c) 5 V operations and 2 P operations (d) 2 V operations and 5 P operations. Which of the following options is correct?
  • a)
    Only (b)
  • b)
    Only(d)
  • c)
    Both (b) and (d)
  • d)
    None of these
Correct answer is option 'C'. Can you explain this answer?

Ravi Singh answered
P: the wait operation decrements the value of the counting semaphore by 1. V: the signal operation increments it by 1. The current value of the counting semaphore is 10.
(b) After 3 P operations: value of semaphore = 10 - 3 = 7.
(d) After 2 V operations and 5 P operations: value of semaphore = 10 + 2 - 5 = 7.
Both (b) and (d) give 7, hence option (C) is correct.

Names of some of the Operating Systems are given below:
(a) MS-DOS
(b) XENIX
(c) OS/2
In the above list, which operating systems did not provide a multi-user facility?
  • a)
    (a) only
  • b)
    (a) and (b) only
  • c)
    (b) and (c) only
  • d)
    (a), (b) and (c)
Correct answer is option 'D'. Can you explain this answer?

Sanya Agarwal answered
MS-DOS is an operating system for x86-based personal computers mostly developed by Microsoft. It doesn't provide multi-user facility. XENIX is a discontinued version of the Unix operating system for various microcomputer platforms, licensed by Microsoft from AT&T Corporation. It doesn't provide multi-user facility. OS/2 is a series of computer operating systems, initially created by Microsoft and IBM. It doesn't provide multi-user facility. So, Option (D) is correct.

The performance of Round Robin algorithm depends heavily on
  • a)
    size of the process
  • b)
    the I/O bursts of the process
  • c)
    the CPU bursts of the process
  • d)
    the size of the time quantum
Correct answer is option 'D'. Can you explain this answer?

Ravi Singh answered
In the round robin algorithm, the size of the time quantum plays a very important role: if the quantum is too small, context switches become frequent and the switching overhead lowers CPU utilization; if the quantum is too large, round robin degenerates into the FCFS scheduling algorithm. So, option (D) is correct.
