The operating system contains a portion of kernel code dedicated to managing input/output (I/O). This code improves the reliability and performance of the system by coordinating requests from processes, device drivers, and hardware controllers. A typical computer system contains one or more CPUs and several device controllers connected by a system bus. Device controllers present an abstraction to the OS; device drivers implement that abstraction and provide the software interface to I/O devices. The OS and device drivers mediate access to shared resources such as device registers and main memory, and they handle data transfer, buffering and error reporting.
[Figure: a typical computer system, with CPUs and device controllers attached to a common bus]

I/O requests in operating systems
An I/O operation usually follows this logical flow:

1. A user process issues an I/O request via a system call.
2. The OS validates the request and hands it to the appropriate device driver.
3. The driver queues the request and, when the device is free, programs the controller to start the transfer.
4. The controller performs the transfer and raises an interrupt on completion.
5. The interrupt handler records the outcome, and the OS wakes the waiting process.
To manage many concurrent requests and devices efficiently, the OS organises I/O management into cooperating components. Typical divisions are the I/O traffic controller, the I/O scheduler, and the I/O device handler. Each performs a distinct role:
I/O scheduling organises the order in which pending I/O requests are dispatched to devices. The goals of I/O scheduling typically include:

- minimising mechanical movement (seek time) on rotating disks;
- maximising throughput, i.e. requests serviced per unit time;
- keeping average and worst-case response times low;
- ensuring fairness, so that no request is starved indefinitely.
I/O scheduling differs from CPU scheduling in one important practical way: once a channel program or device transfer has been started, it is usually not preempted. Device-level operations tend to be short (tens to a few hundreds of milliseconds for traditional disks) and the hardware completes the transfer; the scheduler normally selects the next request only when the device becomes free. Modern systems, however, may support prioritisation (so higher-priority I/O requests can be served before lower-priority ones when possible) and sophisticated queues for different classes of requests.
The traffic controller is responsible for mapping requests onto physical paths and for managing shared communication resources. Its main tasks are:

- keeping track of the status of each device, control unit and channel;
- selecting a free path from memory to the device for each request;
- resolving conflicts when several requests compete for the same path or channel.
The I/O scheduler maintains queues of pending requests and implements policies to decide service order. Under heavy load the scheduler must choose which request to serve next so as to optimise the chosen metric (seek time, latency, fairness, etc.). Typical techniques used by schedulers include multiple queues (for priority levels or device classes), request merging (combine nearby requests to reduce movement), and reordering seeks to reduce head travel for rotating disks.
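Request merging can be sketched with a few lines of code. The representation below, where each pending request is a (start_block, length) pair, is a simplified illustrative model, not a real kernel structure:

```python
def merge_requests(pending):
    """Coalesce requests whose block ranges touch or overlap, so that one
    larger transfer replaces several small ones. Requests are (start, length)
    pairs; returns the merged list sorted by start block."""
    merged = []
    for start, length in sorted(pending):
        if merged and start <= merged[-1][0] + merged[-1][1]:
            # This request begins inside or right after the previous range:
            # extend the previous range instead of keeping a separate request.
            last_start, last_len = merged[-1]
            merged[-1] = (last_start, max(last_len, start + length - last_start))
        else:
            merged.append((start, length))
    return merged

print(merge_requests([(16, 8), (0, 4), (4, 4)]))  # [(0, 8), (16, 8)]
```

Here the requests for blocks 0-3 and 4-7 are adjacent and become one transfer of 8 blocks, while the request at block 16 stays separate.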
The device handler interfaces with hardware interrupts and the device driver to start and finish transfers. It coordinates the actual movement of data, updates device and request status, and invokes the scheduler when a device becomes free. The choice of scheduling algorithm is central to the device handler's behaviour for block devices such as disks.
Common I/O scheduling algorithms
Each algorithm below attempts to improve one or more of the scheduling goals (seek length, response time, fairness). A short description, typical behaviour and common trade-offs are given for each.
First-Come, First-Served (FCFS)

Requests are served in arrival order. It is simple to implement and fair in the sense of preserving arrival order. FCFS can produce high seek times if requests are distributed widely on the disk; overall response time may be poor under heavy or adversarial workloads.
Advantages: simplicity and fairness. Disadvantages: potentially large average seek and response times.
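The total head movement under FCFS can be computed with a short simulation. The request queue and starting head position below are illustrative sample values, not taken from the text:

```python
def fcfs(requests, head):
    """Serve requests in arrival order; return (service order, total head movement)."""
    total, pos = 0, head
    for r in requests:
        total += abs(r - pos)  # cylinders travelled to reach this request
        pos = r
    return list(requests), total

# Sample workload: head at cylinder 53, eight pending requests.
order, movement = fcfs([98, 183, 37, 122, 14, 124, 65, 67], head=53)
print(order, movement)  # total movement is 640 cylinders
```

The large total comes from the wide jumps (e.g. 183 to 37) that FCFS makes no attempt to avoid.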
Shortest Seek Time First (SSTF)

SSTF chooses the pending request whose target track is closest to the current head position. The effect is analogous to Shortest Job First in CPU scheduling: requests that are close are favoured, reducing total arm movement.
Advantages: typically lower total seek than FCFS. Disadvantages: can cause starvation for requests far from frequently accessed regions (those "out of the way").
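A minimal SSTF simulation, using the same illustrative workload as above (head at cylinder 53):

```python
def sstf(requests, head):
    """Repeatedly pick the pending request closest to the current head position."""
    pending, order, total, pos = list(requests), [], 0, head
    while pending:
        nxt = min(pending, key=lambda r: abs(r - pos))  # shortest seek next
        pending.remove(nxt)
        total += abs(nxt - pos)
        pos = nxt
        order.append(nxt)
    return order, total

order, movement = sstf([98, 183, 37, 122, 14, 124, 65, 67], head=53)
print(order, movement)  # order starts 65, 67, 37, ...; total is 236 cylinders
```

Note how the head drifts to whichever side currently has close requests; a steady stream of nearby arrivals could postpone the request at 183 indefinitely, which is the starvation risk described above.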
SCAN (the elevator algorithm)

The head moves in one direction servicing all requests until it reaches the end, then reverses direction and services requests on the return trip. The head behaves like an elevator: it sweeps back and forth across the disk.
Advantages: better overall fairness than SSTF and reduced variance in response time; it bounds wait times in practice. Disadvantages: can still produce variable wait times depending on request distribution.
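A sketch of SCAN with the head initially moving toward track 0, again on the illustrative workload used above:

```python
def scan_down(requests, head, low_track=0):
    """SCAN, head initially moving toward low_track: service requests at or
    below the head in descending order, sweep to the innermost track, then
    reverse and service the remaining requests in ascending order."""
    down = sorted((r for r in requests if r <= head), reverse=True)
    up = sorted(r for r in requests if r > head)
    # Head travels to low_track, then back up to the farthest pending request.
    total = (head - low_track) + ((up[-1] - low_track) if up else 0)
    return down + up, total

order, movement = scan_down([98, 183, 37, 122, 14, 124, 65, 67], head=53)
print(order, movement)  # 37, 14 first, then 65..183; total 53 + 183 = 236
```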
LOOK

LOOK is a refinement of SCAN. Instead of going all the way to the outermost tracks, the head reverses direction at the last pending request in its path; it "looks" ahead and only travels as far as needed.
Advantages: avoids unnecessary travel to empty regions of the disk, typically reducing service time versus SCAN. Disadvantages: similar to SCAN in other respects; still possible to favour certain regions.
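The difference from SCAN is only the reversal point, as this sketch shows (same illustrative workload, head moving toward track 0 first):

```python
def look_down(requests, head):
    """LOOK: like SCAN, but reverse at the last pending request instead of
    travelling to the physical end of the disk."""
    down = sorted((r for r in requests if r <= head), reverse=True)
    up = sorted(r for r in requests if r > head)
    turn = down[-1] if down else head          # reversal point: lowest request
    total = (head - turn) + ((up[-1] - turn) if up else 0)
    return down + up, total

order, movement = look_down([98, 183, 37, 122, 14, 124, 65, 67], head=53)
print(order, movement)  # reverses at 14: (53-14) + (183-14) = 208 cylinders
```

Compared with SCAN's 236 cylinders on the same data, skipping the empty stretch between track 14 and track 0 saves 28 cylinders of travel.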
N-step SCAN

Pending requests are partitioned into groups of at most N requests. The scheduler services one group at a time using SCAN; new requests arriving during a group's service are placed into the following group. This reduces the chance of starvation by bounding how long newly arriving requests must wait before being considered.
C-SCAN (circular SCAN)

C-SCAN services requests in one direction only. When the head reaches the last track in that direction, it quickly returns to the first track without servicing requests on the return; service then resumes in the same direction. This provides a more uniform wait time across tracks because every request sees at most one full sweep's worth of delay.
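A C-SCAN sketch on the same illustrative workload; whether the return jump counts toward head movement varies between presentations, and it is counted here:

```python
def c_scan(requests, head, max_track=199):
    """C-SCAN: sweep upward servicing requests, travel to the last track,
    jump back to track 0, and continue upward. The return jump is included
    in total movement here; some treatments omit it."""
    up = sorted(r for r in requests if r >= head)
    wrapped = sorted(r for r in requests if r < head)
    total = max_track - head                        # sweep up to the top track
    if wrapped:
        total += max_track                          # jump back to track 0
        total += wrapped[-1]                        # sweep up to last wrapped request
    return up + wrapped, total

order, movement = c_scan([98, 183, 37, 122, 14, 124, 65, 67], head=53)
print(order, movement)  # 146 + 199 + 37 = 382 cylinders including the jump
```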
C-LOOK

C-LOOK is an optimised version of C-SCAN. The head services requests in one direction and, instead of travelling to the physical end of the disk, jumps back only to the lowest-numbered pending request and resumes. This avoids unnecessary travel across empty track ranges and reduces total movement compared with C-SCAN.
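And the corresponding C-LOOK sketch, again counting the return jump in the total (same illustrative workload):

```python
def c_look(requests, head):
    """C-LOOK: sweep upward only as far as the highest pending request, then
    jump back to the lowest pending request and continue upward. The return
    jump is included in total movement here."""
    up = sorted(r for r in requests if r >= head)
    wrapped = sorted(r for r in requests if r < head)
    total = (up[-1] - head) if up else 0            # sweep to highest request
    if wrapped:
        if up:
            total += up[-1] - wrapped[0]            # jump back to lowest request
        total += wrapped[-1] - wrapped[0]           # finish the wrapped group
    return up + wrapped, total

order, movement = c_look([98, 183, 37, 122, 14, 124, 65, 67], head=53)
print(order, movement)  # (183-53) + (183-14) + (37-14) = 322 cylinders
```

Against C-SCAN's 382 cylinders on the same data, stopping at track 183 and jumping back only to track 14 saves 60 cylinders of travel.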
When selecting an I/O scheduling policy the OS designer balances several criteria:

- expected seek and rotational delays for the device type;
- aggregate throughput versus per-request response time;
- fairness and avoidance of starvation;
- implementation complexity and CPU overhead of the scheduler itself.
Modern storage hardware and system designs change how scheduling is applied:

- SSDs have no moving head, so seek-minimising algorithms offer little benefit; simple FIFO-style scheduling is often sufficient.
- NVMe devices expose multiple hardware queues, allowing the device itself to reorder and parallelise requests.
- Disk controllers and RAID arrays may reorder requests internally, reducing the value of fine-grained ordering by the OS.
- General-purpose OS schedulers (for example Linux's mq-deadline, BFQ and kyber) focus on fairness and latency guarantees across processes rather than pure seek minimisation.
The OS manages I/O by coordinating device drivers, controllers and schedulers. I/O scheduling aims to reduce mechanical movement (for disks), improve throughput and response time, and provide fairness. A range of algorithms exists to meet different workload needs: FCFS, SSTF, SCAN, LOOK and their variants such as C-SCAN and C-LOOK. The choice of algorithm depends on device characteristics and application requirements: what is optimal for a rotating disk may be unnecessary for flash storage. Understanding these algorithms and their trade-offs is essential for designing and tuning system performance.
1. What's the difference between FCFS and SSTF disk scheduling algorithms?
2. Why does the elevator algorithm reduce head movement compared to other I/O scheduling methods?
3. How do I/O scheduling algorithms affect overall system performance and throughput?
4. What causes starvation in I/O scheduling and which algorithms prevent it?
5. What's the practical difference between SCAN and C-SCAN disk scheduling for real applications?