All questions of Operating System for Computer Science Engineering (CSE) Exam

In a time-sharing operating system, when the time slot given to a process is completed, the process goes from the RUNNING state to the
  • a)
    BLOCKED state
  • b)
    READY state
  • c)
    SUSPENDED state
  • d)
    TERMINATED state
Correct answer is option 'B'. Can you explain this answer?

Yash Patel answered

In a time-sharing operating system (for example, one using Round-Robin scheduling), whenever the time slot given to a process expires, the process goes back to the READY state. It moves to the BLOCKED state only if it requests an I/O operation.

Consider a set of 5 processes whose arrival time, CPU time needed and the priority are given below:

Note: the smaller the number, the higher the priority.
If the CPU scheduling policy is FCFS, the average waiting time will be 
  • a)
    12.8 ms
  • b)
    8 ms
  • c)
    16 ms
  • d)
    None of the above
Correct answer is option 'A'. Can you explain this answer?

Crack Gate answered
Under FCFS the processes run in arrival order: P1, P2, P3, P4, P5.
For P1, waiting time = 0 (it runs first; burst time 10).
For P2, waiting time = completion time of P1 − arrival time of P2 = 10 − 0 = 10.
For P3, waiting time = (10 + 5) − 2 = 13.
For P4, waiting time = 18 − 5 = 13.
For P5, waiting time = 38 − 10 = 28.
Average waiting time = (0 + 10 + 13 + 13 + 28) / 5 = 12.8 ms.
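The arithmetic above can be checked with a short simulation. The question's table is missing, so the arrival and CPU times below are inferred from the worked solution; P5's burst time is an assumption (it does not affect any waiting time).

```python
# FCFS waiting-time check. Arrival and CPU times inferred from the
# worked solution above; P5's burst is an assumption and does not
# affect any waiting time.
processes = [  # (name, arrival_time, burst_time)
    ("P1", 0, 10),
    ("P2", 0, 5),
    ("P3", 2, 3),
    ("P4", 5, 20),
    ("P5", 10, 2),  # assumed burst
]

def fcfs_average_waiting_time(procs):
    # Processes are assumed to be listed in FCFS (arrival) order.
    time = 0
    total_wait = 0
    for _name, arrival, burst in procs:
        start = max(time, arrival)
        total_wait += start - arrival  # waiting = start - arrival
        time = start + burst
    return total_wait / len(procs)

print(fcfs_average_waiting_time(processes))  # 12.8
```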

Concurrent processes are processes that
  • a)
    Do not overlap in time
  • b)
    Overlap in time
  • c)
    Are executed by a processor at the same time
  • d)
    None of the above
Correct answer is option 'B'. Can you explain this answer?

Roshni Kumar answered
Concurrent processes are processes that share the CPU and memory, and their executions overlap in time. At any instant the CPU runs only one process, but it can switch to another before the current one finishes.

Suppose we have a system in which processes are in a hold-and-wait condition. Which of the following approaches prevents deadlock?
  • a)
    Request all resources initially
  • b)
    Spool everything
  • c)
    Take resources away
  • d)
    Order resources numerically
Correct answer is option 'A'. Can you explain this answer?

Ruchi Sengupta answered
Preventing Deadlock by Requesting All Resources Initially

Deadlock is a situation in which two or more processes are unable to proceed because each is waiting for the other to release resources. It can occur in a system with limited resources and processes that are holding resources while waiting for other resources to be released.

One approach to prevent deadlock is to request all resources initially. This means that when a process needs to execute, it requests all the resources it will need for its entire execution before it starts. This approach can help prevent deadlock by ensuring that a process has all the necessary resources before it begins execution.

Advantages of Requesting All Resources Initially:
-Preventing Resource Deadlock: By requesting all resources initially, a process can ensure that it has all the resources it needs for its execution. This prevents the situation where a process holds some resources and waits for others, leading to deadlock.
-Efficient Resource Allocation: Requesting all resources initially allows the system to allocate resources more efficiently. Since a process requests all the resources it needs at once, the system can determine if there are enough resources available to satisfy the request. If there are not enough resources, the system can allocate them to other processes that can make progress, avoiding resource wastage.

Disadvantages of Requesting All Resources Initially:
-Resource Overallocation: Requesting all resources initially may lead to resource overallocation, where a process requests more resources than it actually needs. This can result in resource wastage and inefficient resource utilization.
-Low Concurrency: Requesting all resources initially can reduce the level of concurrency in the system. Since a process needs to wait until it acquires all the resources it needs, other processes may have to wait for a long time before they can execute, leading to decreased system performance.

Overall, requesting all resources initially can be an effective approach to prevent deadlock in a system. However, it is important to carefully consider the resource requirements of each process and balance the need for preventing deadlock with the need for efficient resource utilization and system performance.
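The all-or-nothing acquisition this answer describes can be sketched as follows. The `ResourceManager` class and the resource names are hypothetical, for illustration only: a process blocks until every resource it needs is free, then takes them all in one step, so it never holds some resources while waiting for others.

```python
import threading

class ResourceManager:
    """Sketch: all-or-nothing allocation. A process blocks until every
    resource it needs is free, then takes them all at once, so the
    hold-and-wait condition can never arise."""
    def __init__(self, resources):
        self._free = set(resources)
        self._cond = threading.Condition()

    def acquire_all(self, needed):
        with self._cond:
            # Wait until the entire request can be granted at once.
            while not set(needed) <= self._free:
                self._cond.wait()
            self._free -= set(needed)

    def release_all(self, held):
        with self._cond:
            self._free |= set(held)
            self._cond.notify_all()

mgr = ResourceManager({"printer", "disk", "tape"})
mgr.acquire_all({"printer", "disk"})   # granted together or not at all
mgr.release_all({"printer", "disk"})
print(sorted(mgr._free))  # ['disk', 'printer', 'tape']
```

The price, as noted above, is reduced concurrency: resources sit idle from the moment they are granted until the process actually uses them.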

In Round Robin CPU scheduling, as the time quantum is increased, the average turn around time
  • a)
    Increases
  • b)
    Decreases
  • c)
    Remains constant
  • d)
    Varies irregularly
Correct answer is option 'D'. Can you explain this answer?

Round Robin scheduling loses its significance if the time slice is very small or very large. When it is very large it behaves like FCFS, and there is no fixed relationship between the quantum and the turnaround time: if processes with small bursts arrive earlier, turnaround time will be less; otherwise it will be more. Hence the average turnaround time varies irregularly as the quantum is increased.
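The irregular behaviour can be seen in a small simulation. The burst times are illustrative, and all processes are assumed to arrive at time 0:

```python
from collections import deque

def rr_avg_turnaround(bursts, quantum):
    # Round-Robin with all processes arriving at time 0;
    # returns the average turnaround time.
    remaining = list(bursts)
    turnaround = [0] * len(bursts)
    queue = deque(range(len(bursts)))
    time = 0
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        time += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)       # not finished: back of the ready queue
        else:
            turnaround[i] = time  # finished at the current time
    return sum(turnaround) / len(bursts)

# Average turnaround is not monotonic in the quantum:
for q in (1, 2, 4, 8):
    print(q, round(rr_avg_turnaround([5, 3, 8], q), 2))
```

For this workload the averages rise and then fall as the quantum grows, which is exactly the "varies irregularly" of option D.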

Page fault occurs when
  • a)
    The page is corrupted by application software
  • b)
    The page is in main memory
  • c)
    The page is not in main memory
  • d)
The process tries to divide a number by 0
Correct answer is option 'C'. Can you explain this answer?

A page fault occurs when the required page is not found in main memory. When the page is found in main memory it is called a page hit; otherwise, it is a miss (a page fault).

Dijkstra’s banking algorithm in an operating system performs
  • a)
    Deadlock avoidance
  • b)
    Deadlock recovery
  • c)
    Mutual exclusion
  • d)
    Context switching
Correct answer is option 'A'. Can you explain this answer?

In the Banker's algorithm, resources are allocated only if the system will remain in a SAFE state (deadlock cannot occur while the system is in a SAFE state), so it performs DEADLOCK AVOIDANCE.
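As a sketch of how that safety decision is made, here is the standard Banker's-algorithm safety check. The matrices are the classic textbook example, not values from this question:

```python
def is_safe(available, max_need, allocation):
    # Banker's safety check: True if some completion order lets every
    # process finish, i.e. the system is in a SAFE state.
    n = len(max_need)
    work = list(available)
    finished = [False] * n
    need = [[m - a for m, a in zip(max_need[i], allocation[i])]
            for i in range(n)]
    while True:
        progressed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j]
                                       for j in range(len(work))):
                # Process i can run to completion, then releases
                # everything it holds back into the free pool.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progressed = True
        if not progressed:
            return all(finished)

# Classic textbook instance (illustrative values):
available = [3, 3, 2]
max_need = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
print(is_safe(available, max_need, allocation))  # True
```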

The link between two processes P and Q to send and receive messages is called __________
  • a)
    communication link
  • b)
    message-passing link
  • c)
    synchronization link
  • d)
    all of the mentioned
Correct answer is option 'A'. Can you explain this answer?

Communication Link between Processes

In a computer system, multiple processes may need to communicate with each other. The link between two processes that allows them to send and receive messages is called a communication link.

Characteristics of a Communication Link:
- It is a logical connection between two processes.
- It facilitates the exchange of messages between processes.
- It can be established using various communication protocols.
- It can be implemented using different hardware and software components.

Importance of Communication Link:
- It enables processes to cooperate and coordinate with each other.
- It facilitates the sharing of information and resources between processes.
- It enables the implementation of distributed systems and parallel computing.

Types of Communication Links:
- Direct Link: Processes communicate directly with each other without any intermediaries.
- Indirect Link: Processes communicate via an intermediary, such as a message-passing system or a shared memory.

Conclusion:
The communication link between processes plays a crucial role in enabling communication, cooperation, and coordination between processes in a computer system. It is a fundamental component of distributed systems and parallel computing.

‘Aging’ is
  • a)
    Keeping track of cache contents.
  • b)
    Keeping track of what pages are currently residing in the memory.
  • c)
    Keeping track of how many times a given page is referenced.
  • d)
    Increasing the priority of jobs to ensure termination in a finite time.
Correct answer is option 'D'. Can you explain this answer?

‘Aging’ is a technique that gradually increases the priority of a job in proportion to the time it has been waiting for the CPU. A process with a longer waiting time thus gains priority over processes that have just arrived. This ensures there is no STARVATION and that every process terminates in a finite time.

Semaphores are used to solve the problem of
1. Race condition
2. Process synchronization
3. Mutual exclusion
  • a)
    1 and 2
  • b)
    2 and 3
  • c)
    All of the above
  • d)
    None of the above
Correct answer is option 'B'. Can you explain this answer?

Pranav Patel answered
Semaphores are synchronization primitives used during interprocess communication. They are used to solve the problems of process synchronization and mutual exclusion among processes (options 2 and 3).
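A minimal Python sketch of both uses: a binary semaphore enforcing mutual exclusion around a shared counter, and a counting semaphore synchronizing the main thread with the workers (thread and iteration counts are illustrative):

```python
import threading

counter = 0
mutex = threading.Semaphore(1)   # binary semaphore: mutual exclusion
done = threading.Semaphore(0)    # counting semaphore: synchronization

def worker():
    global counter
    for _ in range(10000):
        mutex.acquire()          # P (wait)
        counter += 1             # critical section
        mutex.release()          # V (signal)
    done.release()               # signal completion to the main thread

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for _ in threads:
    done.acquire()               # synchronize: wait for all four signals
for t in threads:
    t.join()
print(counter)  # 40000: no increments were lost
```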

Which system call can be used by a parent process to determine the termination of child process?
  • a)
    wait
  • b)
    exit
  • c)
    fork
  • d)
    get
Correct answer is option 'A'. Can you explain this answer?

Janani Joshi answered
Explanation:

When a parent process creates a child process using the `fork()` system call, it may need to wait for the child process to terminate before continuing its execution. The parent process can determine the termination of the child process by using the `wait()` system call.

wait() System Call:

The `wait()` system call allows a parent process to suspend its execution until one of its child processes terminates. It returns the PID of the terminated child and stores the child's termination status in the location passed as its argument. (To wait for a specific child, `waitpid()` takes the child's PID.)

Usage:

The parent process can use the `wait()` system call in the following way to determine the termination of a child process:

1. The parent process creates a child process using the `fork()` system call.
2. After forking, the parent process can use the `wait()` system call to wait for the termination of the child process.
3. The `wait()` system call suspends the execution of the parent process until the child process terminates.
4. Once the child process terminates, the `wait()` system call returns the termination status of the child process to the parent process.
5. The parent process can then continue its execution and use the termination status of the child process for further processing or decision-making.

Alternative System Calls:

The other options mentioned in the question are not suitable for the parent process to determine the termination of a child process:

- `exit()`: The `exit()` system call is used by a process (both parent and child) to terminate itself, not to determine the termination of another process.
- `fork()`: The `fork()` system call is used to create a new process (child process) from an existing process (parent process), but it does not provide a way for the parent process to determine the termination of the child process.
- `get()`: There is no specific system call named `get()` in the context of process termination.

Therefore, the correct system call that can be used by a parent process to determine the termination of a child process is the `wait()` system call.
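The steps above can be sketched in Python on a POSIX system (Linux/macOS), where `os.fork()` and `os.wait()` wrap the same system calls; the exit code 7 is arbitrary:

```python
import os

pid = os.fork()                       # POSIX only
if pid == 0:
    # Child: terminate immediately with a distinctive exit code.
    os._exit(7)
else:
    # Parent: wait() blocks until a child terminates, then returns
    # the child's PID and its encoded termination status.
    child_pid, status = os.wait()
    code = os.WEXITSTATUS(status)     # decode the exit code
    print(child_pid == pid, code)     # True 7
```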

In a multiprogramming environment
  • a)
    The processor executes more than one process at a time
  • b)
    The programs are developed by more than one person
  • c)
    More than one process resides in the memory
  • d)
    A single user can execute many programs at the same time
Correct answer is option 'C'. Can you explain this answer?

Naina Sharma answered
A multiprogramming environment means the processor executes multiple processes by continuously switching among them. Therefore, more than one process must reside in memory. The processor, however, cannot execute more than one process at the same instant.

Thrashing
  • a)
    Reduces page I/O
  • b)
    Decreases the degree of multiprogramming
  • c)
    Implies excessive page I/O
  • d)
    Improves the system performance
Correct answer is option 'C'. Can you explain this answer?

Rohan Shah answered
When the degree of multiprogramming is increased, many processes are brought into memory. Then no process gets enough memory space, and the processor must keep bringing new pages into memory to satisfy requests (many page faults occur). This condition of excessive page replacement is called thrashing, which in turn reduces overall performance.

Which of the following is NOT true of deadlock prevention and deadlock avoidance schemes?
  • a)
In deadlock prevention, the request for resources is always granted if the resulting state is safe.
  • b)
    In deadlock avoidance, the request for resources is always granted if the resulting state is safe.
  • c)
    Deadlock avoidance is less restrictive than deadlock prevention. 
  • d)
    Deadlock avoidance requires knowledge of resource requirements a priori.
Correct answer is option 'A'. Can you explain this answer?

Deadlock Prevention and Deadlock Avoidance

Deadlock prevention and deadlock avoidance are two different strategies used to handle the problem of deadlocks in a computer system.

Deadlock Prevention

In deadlock prevention, the system tries to prevent deadlocks from occurring by ensuring that at least one of the four necessary conditions for deadlock (mutual exclusion, hold and wait, no preemption, and circular wait) can never hold. Because requests may be denied even when granting them would leave the system in a safe state, deadlock prevention is considered a more restrictive approach than deadlock avoidance.

Deadlock Avoidance

In deadlock avoidance, the system tries to avoid deadlocks by predicting and preventing the occurrence of a deadlock before it actually happens. This is done by using techniques such as resource allocation graphs or banker's algorithm. In this approach, the request for resources is always granted if the resulting state is safe. Deadlock avoidance is considered to be less restrictive than deadlock prevention.

The Answer

The statement that is not true of deadlock prevention and deadlock avoidance schemes is option A, which states that in deadlock prevention, the request for resources is always granted if the resulting state is safe. This statement is incorrect because in deadlock prevention, the request for resources is not always granted even if the resulting state is safe. Deadlock prevention may require the system to deny the request for resources in order to prevent the occurrence of a deadlock.

In a paged memory, the page hit ratio is 0.35. The time required to access a page in secondary memory is equal to 100 ns. The time required to access a page in primary memory is 10 ns. The average time required to access a page is
  • a)
    3.0 ns
  • b)
    68.0 ns
  • c)
    68.5 ns
  • d)
    78.5 ns
Correct answer is option 'C'. Can you explain this answer?

Ishaan Saini answered
Hit ratio = 0.35
Time (secondary memory) = 100 ns
T(main memory) = 10 ns
Average access time = h × Tm + (1 − h) × Ts
= 0.35 × 10 + 0.65 × 100
= 3.5 + 65
= 68.5 ns

Hence option (C) is correct
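The same arithmetic as a quick check, under the simple model used above (a hit costs only the primary-memory time, a miss only the secondary-memory time):

```python
# Effective access time: T_avg = h * T_primary + (1 - h) * T_secondary
h = 0.35          # page hit ratio
t_primary = 10    # ns, primary memory
t_secondary = 100 # ns, secondary memory

t_avg = h * t_primary + (1 - h) * t_secondary
print(t_avg)  # 68.5
```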


The main function of shared memory is to
  • a)
    Use primary memory efficiently
  • b)
    Do intra process communication
  • c)
    Do inter process communication
  • d)
    None of these
Correct answer is option 'C'. Can you explain this answer?

Atharva Das answered
Shared memory is that memory that can be simultaneously accessed by multiple programs with an intent to provide communication among them or to avoid redundant copies. Shared memory is a way of interchanging data between programs.

In Unix, Which system call creates the new process?
  • a)
    fork
  • b)
    create
  • c)
    new
  • d)
    none of the mentioned
Correct answer is option 'A'. Can you explain this answer?

Shail Kulkarni answered
Introduction:
In Unix, the system call 'fork' is used to create a new process. The 'fork' system call creates an exact copy of the currently running process, known as the parent process. The new process created is called the child process. The 'fork' system call is one of the fundamental features of Unix-based operating systems and is used extensively for process creation.

Explanation:
The 'fork' system call creates a new process by duplicating the existing process. When 'fork' is invoked, a new process is created that is a near-exact copy of the calling process, including its memory image, file descriptors, and other attributes. Parent and child each have their own distinct (positive) process ID; what distinguishes them in code is fork's return value: the parent receives the child's PID, while the child receives 0.

The 'fork' system call follows the Copy-on-Write (COW) mechanism. This means that initially, both the parent and child processes share the same memory space. However, if either process modifies any memory location, a separate copy of that memory page is created for that process. This mechanism ensures efficient memory utilization and reduces the overhead of memory duplication.

Steps involved in the 'fork' system call:

1. The 'fork' system call is invoked by the parent process.
2. The operating system creates a new process by duplicating the entire address space of the parent process.
3. The new process, known as the child process, is assigned a unique PID.
4. Both the parent and child processes continue execution from the same point after the 'fork' system call.
5. The return value of the 'fork' system call is different for the parent and child processes. The parent process receives the PID of the newly created child process, while the child process receives a return value of 0.
6. The parent and child processes can now execute different code paths or perform different tasks based on the return value of the 'fork' system call.

Conclusion:
The 'fork' system call creates a new process in Unix-based operating systems by making a near-exact copy of the calling (parent) process; the copy is the child process, which is assigned its own unique PID. The 'fork' system call is a fundamental feature of Unix and is essential for process creation and multitasking.
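A POSIX-only sketch of these semantics using Python's `os.fork` (the string values are illustrative). Both processes continue from the fork point, they are told apart by the return value, and the child's write affects only its own copy of the variable:

```python
import os

x = "before fork"
pid = os.fork()           # POSIX only (Linux/macOS)
if pid == 0:
    # Child: fork returned 0. Writing triggers a private copy of the
    # page (copy-on-write); the parent's x is untouched.
    x = "child copy"
    os._exit(0)
else:
    # Parent: fork returned the child's PID.
    os.waitpid(pid, 0)
    print(x)  # before fork
```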

If the property of locality of reference is well pronounced a program:
1. The number of page faults will be more.
2. The number of page faults will be less.
3. The number of page faults will remain the same.
4. Execution will be faster.
  • a)
    1 and 2
  • b)
    2 and 4
  • c)
    3 and 4
  • d)
    None of the above
Correct answer is option 'B'. Can you explain this answer?

Nisha Das answered
If the property of locality of reference is well pronounced, the page to be accessed is more likely to be found in memory already, so page faults will be fewer. Also, since most accesses are then satisfied by the upper level of the memory hierarchy, execution will be faster.
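The effect can be illustrated with a small LRU page-replacement simulation: a reference string with strong locality incurs far fewer faults than a scattered one (the page counts, frame count, and reference strings are illustrative):

```python
from collections import OrderedDict
import random

def lru_faults(refs, frames):
    # Count page faults under LRU replacement with the given frame count.
    cache = OrderedDict()
    faults = 0
    for page in refs:
        if page in cache:
            cache.move_to_end(page)       # refresh recency on a hit
        else:
            faults += 1                   # miss: page fault
            if len(cache) == frames:
                cache.popitem(last=False) # evict least recently used
            cache[page] = None
    return faults

random.seed(0)
local = [i // 100 for i in range(1000)]            # strong locality
scattered = [random.randrange(10) for _ in range(1000)]  # weak locality
print(lru_faults(local, 4), lru_faults(scattered, 4))
```

With strong locality each of the 10 pages faults exactly once; the scattered string faults hundreds of times over the same pages and frames.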

A system has 3 processes sharing 4 resources. If each process needs a maximum of 2 units then, deadlock
  • a)
    Can never occur
  • b)
    May occur
  • c)
    Has to occur
  • d)
    None of the above
Correct answer is option 'A'. Can you explain this answer?

Abhijeet Unni answered
With 3 processes and 4 resources, even if all the processes demand resources simultaneously, at least one process will be holding 2 resources (its maximum). That process can run to completion and release both resources, so any possible deadlock is avoided.
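The claim can be checked exhaustively over every possible allocation, a sketch under the stated assumptions (3 processes, 4 identical units, maximum need 2 each, and every unsatisfied process assumed to be requesting one more unit):

```python
from itertools import product

def deadlocked(alloc, total=4, max_need=2):
    # A deadlock state requires every process to be blocked: each holds
    # fewer than its maximum (so it still wants a unit) while the free
    # pool is empty (so no request can ever be granted).
    free = total - sum(alloc)
    return all(a < max_need for a in alloc) and free == 0

# All feasible allocations of at most 4 units, 0..2 per process:
states = [a for a in product(range(3), repeat=3) if sum(a) <= 4]
print(any(deadlocked(a) for a in states))  # False
```

Every fully blocked configuration has each process holding at most 1 unit, i.e. at most 3 units in total, so the free pool is never empty: deadlock can never occur.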

A set of processes is deadlock if __________
  • a)
    each process is blocked and will remain so forever
  • b)
    each process is terminated
  • c)
    all processes are trying to kill each other
  • d)
    none of the mentioned
Correct answer is option 'A'. Can you explain this answer?

Sudhir Patel answered
Deadlock is a situation in which process A holds one resource while waiting for another. If, at the same time, process B demands the resource held by A while holding the resource A is waiting for, then B remains in the waiting state unless and until A releases its resource, and vice versa. Each process is blocked and will remain so forever.

Consider a system with 1 K pages and 512 frames, where each page is of size 2 KB. How many bits are required to represent the virtual address space?
  • a)
    20 bits
  • b)
    21 bits
  • c)
    11 bits
  • d)
    None of these
Correct answer is option 'B'. Can you explain this answer?

Arpita Mehta answered
Virtual address space consists of pages.
Given:
Number of pages = 1 K = 2^10
Page size = 2 KB = 2^11 B
Hence virtual address space = 2^11 × 2^10 = 2^21 B
∴ 21 bits are required to represent the virtual address space.
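The same computation as a quick check: the virtual address must index every byte of the virtual address space, so its width is log2(number of pages × page size).

```python
from math import log2

num_pages = 1024        # 1 K pages
page_size = 2 * 1024    # 2 KB per page, in bytes

# Virtual address space = 2^10 pages * 2^11 B = 2^21 B -> 21 bits.
bits = int(log2(num_pages * page_size))
print(bits)  # 21
```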

Message passing system allows processes to __________
  • a)
    communicate with each other without sharing same address space
  • b)
    communicate with one another by resorting to shared data
  • c)
    share data
  • d)
    name the recipient or sender of the message
Correct answer is option 'A'. Can you explain this answer?

Introduction

Message passing is a communication mechanism that allows processes to exchange data and communicate with each other. It is a method of interprocess communication (IPC) where processes can send and receive messages.

Explanation

Option A: Communicate with each other without sharing the same address space
- In a message passing system, processes can communicate with each other without the need to share the same address space.
- Address space refers to the memory space allocated to a process, which contains its code, data, and stack.
- Processes in a message passing system may not even be running on the same machine or have direct access to each other's memory.

Option B: Communicate with one another by resorting to shared data
- This option is incorrect because message passing does not rely on shared data.
- Shared data refers to a data segment that is accessible by multiple processes, allowing them to read and write data.
- Message passing, on the other hand, involves sending and receiving messages between processes without directly accessing shared data.

Option C: Share data
- This option is incorrect because message passing does not involve sharing data between processes.
- Instead, it focuses on exchanging messages, which can contain information or instructions.

Option D: Name the recipient or sender of the message
- This option is incorrect because message passing does not necessarily require naming the recipient or sender of the message.
- In some message passing systems, processes may communicate anonymously, without explicitly identifying the sender or receiver.

Conclusion

The correct option is A: Communicate with each other without sharing the same address space. Message passing allows processes to communicate by sending and receiving messages, without the need for shared data or explicitly naming the sender or receiver. It enables interprocess communication even when processes are running on different machines or have separate memory spaces.

'm' processes share 'n' resources of the same type. The maximum need of each process does not exceed 'n', and the sum of all their maximum needs is always less than m + n. In this setup, deadlock
  • a)
    Has to occur
  • b)
    May occur
  • c)
    Can never occur
  • d)
None of these
Correct answer is option 'C'. Can you explain this answer?

Rajesh Malik answered
Assume m = 5 and n = 10.
That means 5 processes share 10 resources. For a deadlock, every process must be holding some resources while requesting one more, with no request satisfiable. In the worst case each process would hold 2 resources and request 1 more, making the total demand 15, which is exactly m + n (5 + 10).
However, since the sum of the maximum needs is always strictly less than m + n, at least one process can always obtain its full maximum, run to completion, and release its resources. Hence there will never be a deadlock.
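The argument can be checked by brute force for identical resources (the maximum-need vectors below are illustrative): if every process holds less than its maximum (so all are still requesting) and the free pool is empty, the system is deadlocked; the condition sum of maxima < m + n makes such a state unreachable.

```python
from itertools import product

def can_deadlock(max_needs, n):
    # Identical resources. A deadlock state: every process holds fewer
    # units than its maximum (so each still wants one more) while all
    # n units are allocated (so no request can ever be granted).
    for alloc in product(*(range(mx) for mx in max_needs)):
        if sum(alloc) == n:
            return True
    return False

# m = 5 processes, n = 10 resources:
print(can_deadlock([3, 3, 3, 3, 2], 10))  # False: sum of maxima 14 < 15
print(can_deadlock([3, 3, 3, 3, 3], 10))  # True: sum of maxima 15 = m + n
```

With the sum of maxima below m + n, the processes can jointly hold at most n − 1 units while all blocked, so at least one unit is always free and someone can make progress.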

Chapter doubts & questions for Operating System - Question Bank for GATE Computer Science Engineering 2025 is part of Computer Science Engineering (CSE) exam preparation. The chapters have been prepared according to the Computer Science Engineering (CSE) exam syllabus. The Chapter doubts & questions, notes, tests & MCQs are made for Computer Science Engineering (CSE) 2025 Exam. Find important definitions, questions, notes, meanings, examples, exercises, MCQs and online tests here.
