A process in an operating system requests and uses resources in a standard cycle of actions: it requests the resource (blocking if the request cannot be granted immediately), uses the resource, and releases it when finished.
Deadlock is a situation in which a set of processes are blocked because each process is holding at least one resource and waiting to acquire a resource held by some other process. Once the involved processes are blocked in this way, none can proceed.
Consider the simple analogy of two trains travelling towards each other on a single-track section. If each train occupies one end of the single track segment and neither can reverse or move off the track, both remain stuck indefinitely. The same situation may occur in a computer system when processes hold resources and wait for resources held by others.

Deadlock can arise only if the following four conditions hold simultaneously. These are necessary conditions; if any one of them is absent, deadlock cannot occur.

- Mutual exclusion: at least one resource is held in a non-shareable mode, so only one process can use it at a time.
- Hold and wait: a process holds at least one resource while waiting to acquire additional resources held by other processes.
- No preemption: a resource can be released only voluntarily by the process holding it; the system cannot forcibly take it away.
- Circular wait: a set of processes {P0, P1, ..., Pn} exists such that P0 waits for a resource held by P1, P1 waits for one held by P2, and so on, with Pn waiting for a resource held by P0.
There are three principal approaches to handling deadlock in operating systems:

- Prevention or avoidance: design the system so that deadlock can never occur (prevention), or grant resource requests only when the resulting state is known to be safe (avoidance).
- Detection and recovery: allow deadlocks to occur, detect them when they do, and recover.
- Ignorance: assume deadlocks never occur and leave recovery to the user or a restart (sometimes called the "ostrich" approach).
The four conditions that characterise deadlock were listed above. To prevent deadlock one must ensure that the system design or resource-allocation policy negates at least one of these conditions.
In principle, mutual exclusion could be removed by making resources shareable. In practice, many resources (for example, physical devices such as tape drives and printers) are inherently non-shareable. Therefore this condition cannot always be negated.
Two common policies to eliminate the hold-and-wait condition are:

- Request everything up front: a process must request all the resources it will ever need before it begins execution, so it never waits while holding anything.
- Release before requesting: a process may request resources only when it holds none, so it must release everything it holds before making a new request.

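The all-or-nothing flavour of the first policy can be sketched in a few lines. The sketch below is illustrative only (the lock objects and the helper name are assumptions, not part of any standard API): a caller either obtains every lock at once or releases whatever it managed to take, so it never waits while holding a partial set of resources.

```python
import threading

def acquire_all_or_none(locks):
    """Acquire every lock or none of them.

    Negates hold-and-wait: on any failure the partial
    holdings are rolled back and the caller retries later.
    """
    held = []
    for lock in locks:
        if lock.acquire(blocking=False):
            held.append(lock)
        else:
            for h in held:      # roll back the partial acquisition
                h.release()
            return False
    return True

printer, tape = threading.Lock(), threading.Lock()
acquire_all_or_none([printer, tape])   # True: both were free
acquire_all_or_none([printer, tape])   # False: already held, nothing leaked
```

Note that the second call fails cleanly: because nothing is held on failure, no other process can be blocked waiting on a half-acquired set.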
If preemption is allowed, the operating system can forcibly take a resource away from a process and give it to another. Practical difficulties arise: the process may be left in an inconsistent state, and not all resources can be preempted (for example, I/O in progress).
One common scheme to prevent circular wait is to impose a total ordering on all resource types and require that each process requests resources in strictly increasing (or strictly decreasing) order of enumeration. For example, assign numeric IDs to resources. If a process has resource R5, it may request only resources with numbers greater than 5; requests for lower-numbered resources must be denied. This ordering prevents cycles in the resource-wait graph.
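The ordering rule can be enforced by a small bookkeeping layer. The class below is a hypothetical helper, not a real OS interface; it simply records which resource IDs each process holds and refuses any request that is not strictly greater than the highest ID already held.

```python
class OrderedLockManager:
    """Grant resources only in strictly increasing ID order.

    Because every process climbs the same numbering, the
    resource-wait graph can never contain a cycle.
    """
    def __init__(self):
        self.held = {}   # process id -> set of resource ids held

    def may_request(self, pid, resource_id):
        held = self.held.get(pid, set())
        return not held or resource_id > max(held)

    def grant(self, pid, resource_id):
        if not self.may_request(pid, resource_id):
            raise RuntimeError("out-of-order request denied")
        self.held.setdefault(pid, set()).add(resource_id)

mgr = OrderedLockManager()
mgr.grant("P1", 5)
mgr.may_request("P1", 7)   # True: 7 > 5
mgr.may_request("P1", 3)   # False: would violate the ordering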
Banker's Algorithm is a deadlock-avoidance algorithm, originally proposed by Edsger W. Dijkstra. It assumes that the system knows, in advance, the maximum number of instances of each resource type that each process may request. The algorithm grants a request only if doing so leaves the system in a safe state. A safe state is one in which there exists some sequence of process completions (a safe sequence) such that each process can obtain its maximum required resources and complete.
Inputs to the Banker's Algorithm are:

- Available: a vector giving the number of free instances of each resource type.
- Max: a matrix giving each process's declared maximum demand for each resource type.
- Allocation: a matrix giving the number of instances of each resource type currently allocated to each process.
- Need: a matrix derived as Need = Max - Allocation, giving each process's remaining possible demand.
Request granting policy: a request from process Pi for some resources is granted only when both of the following hold:

- Request_i <= Need_i (Pi has not exceeded its declared maximum claim); and
- Request_i <= Available (the requested instances are currently free).
If these two checks pass, the system tentatively allocates the resources to Pi and runs the safety algorithm to determine whether the resulting state is safe. If the state is safe, the allocation is made permanent; otherwise, the request is denied and Pi must wait.
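The two admission checks are straightforward vector comparisons. A minimal sketch (the function name is an assumption; each argument is a per-resource-type vector):

```python
def request_ok(request, need, available):
    """Banker's two admission checks for a request from process Pi."""
    # Check 1: Pi must not exceed its declared maximum claim.
    if any(r > n for r, n in zip(request, need)):
        raise ValueError("process exceeded its declared maximum claim")
    # Check 2: the requested instances must currently be free.
    return all(r <= a for r, a in zip(request, available))

request_ok([1, 0, 2], [1, 2, 2], [3, 3, 2])   # True: within claim and stock
request_ok([0, 2, 0], [1, 2, 2], [0, 1, 0])   # False: only 1 instance free
```

Only after both checks pass does the system tentatively allocate and run the safety algorithm on the resulting state.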
A worked example lists, in tables: the total resources in the system, the currently available resources, each process's currently allocated resources (Allocation), each process's maximum demand (Max), and each process's remaining need, computed as Need = Maximum - Allocation.

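The safety check itself can be sketched as follows. The matrices below are hypothetical, chosen only to illustrate Need = Maximum - Allocation and the search for a safe sequence; they are not data from a particular system.

```python
def safe_sequence(available, allocation, maximum):
    """Banker's safety algorithm: return a safe completion order, or None."""
    n = len(allocation)
    # Need = Maximum - Allocation, row by row.
    need = [[m - a for m, a in zip(maximum[i], allocation[i])] for i in range(n)]
    work = list(available)          # instances free right now
    finished = [False] * n
    order = []
    while len(order) < n:
        progressed = False
        for i in range(n):
            if not finished[i] and all(x <= w for x, w in zip(need[i], work)):
                # Pi can run to completion and release everything it holds.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                order.append(i)
                progressed = True
        if not progressed:
            return None             # no process can finish: state is unsafe
    return order

allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maximum    = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
safe_sequence([3, 3, 2], allocation, maximum)   # → [1, 3, 4, 0, 2]
```

The returned order (P1, P3, P4, P0, P2) is a safe sequence: granting each process its full remaining need in that order always stays within the free pool.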
Note: Deadlock prevention is more restrictive than deadlock avoidance. Prevention eliminates the possibility of deadlock by ruling out one of the necessary conditions, whereas avoidance permits more concurrency by requiring additional knowledge and checking for safe states.
A resource-allocation graph (RAG) is a directed graph used to model the allocation of resources to processes. It contains two kinds of nodes: process nodes (P1, P2, ...) and resource nodes (R1, R2, ...). There are two kinds of directed edges:

- Request edge Pi → Rj: process Pi has requested an instance of resource Rj and is waiting for it.
- Assignment edge Rj → Pi: an instance of resource Rj is currently allocated to process Pi.
If the RAG contains a cycle and every resource in the cycle has exactly one instance, then a deadlock exists. If a resource type has multiple instances, a cycle in the RAG is a necessary but not sufficient condition for deadlock.
A wait-for graph is a simplified graph where resource nodes are removed and an edge Pi → Pj indicates that process Pi is waiting for a resource currently held by process Pj. The wait-for graph is convenient for deadlock detection in the single-instance case because detecting a cycle in the wait-for graph is equivalent to detecting deadlock.
Deadlock detection methods differ depending on whether resource types have single or multiple instances.

For example, if R1 and R2 each have a single instance and the RAG contains the cycle R1 → P1 → R2 → P2 → R1 (P1 holds R1 and waits for R2, while P2 holds R2 and waits for R1), a deadlock exists.
Once a deadlock is detected, the system must recover. Recovery techniques include:

- Process termination: abort all deadlocked processes, or abort them one at a time until the cycle is broken, choosing victims by priority, work already done, or resources held.
- Resource preemption: take resources away from some processes and give them to others. This raises three issues: selecting a victim, rolling the preempted process back to a consistent state, and ensuring the same victim is not chosen repeatedly (starvation).
Traditional desktop operating systems commonly favour the "ignore" approach or simple restart policies because comprehensive detection and recovery can be expensive in time and complexity. Real-time and high-availability systems, however, often implement detection and recovery strategies.
When designing or configuring a system, choose a deadlock strategy based on expected workload and system goals:

- Prevention suits systems where correctness matters more than resource utilisation, since negating a necessary condition is simple but restrictive.
- Avoidance (for example, the Banker's Algorithm) suits systems where maximum resource needs can be declared in advance.
- Detection and recovery suits systems where deadlocks are rare and the occasional recovery cost is acceptable.
- Ignoring deadlock suits general-purpose systems where deadlocks are very unlikely and a restart is tolerable.
Deadlock is an important concept in operating systems that arises from resource contention under certain necessary conditions. Understanding the four necessary conditions, the structure of resource-allocation and wait-for graphs, and the differences between prevention, avoidance and detection/recovery is essential for designing robust systems. Practical systems balance safety, performance and complexity when choosing how to handle deadlocks.
Review questions:

1. What is a deadlock in operating systems?
2. What are the necessary conditions for a deadlock to occur?
3. How can deadlocks be prevented in an operating system?
4. What is the difference between deadlock avoidance and deadlock detection?
5. How does the Banker's algorithm help in deadlock avoidance?