
All questions of File Systems for Computer Science Engineering (CSE) Exam

Let's say there are seven requests: 16, 24, 43, 82, 140, 170, and 190. Additionally, the disc arm is supposed to travel "towards the larger number" while the read/write arm is at 50. What is the total seek time?
    Correct answer is '332'. Can you explain this answer?

    Nishanth Mehta answered
With SCAN (elevator) scheduling, the disk arm services all requests in one direction first and reverses only after reaching the end of the disk. Here the read/write head starts at track 50 and is required to move towards the larger track numbers.

Assuming the disk has 200 tracks numbered 0 to 199, the order of service and the head movement at each step are:

1. Start at track 50 (movement: 0).
2. Move to request 82: 32 tracks.
3. Move to request 140: 58 tracks.
4. Move to request 170: 30 tracks.
5. Move to request 190: 20 tracks.
6. Continue to the end of the disk at track 199: 9 tracks.
7. Reverse direction and move to request 43: 156 tracks.
8. Move to request 24: 19 tracks.
9. Move to request 16: 8 tracks.

Total head movement = (199 - 50) + (199 - 16) = 149 + 183 = 332.

Hence the total seek time is 332.
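A minimal Python sketch of this SCAN calculation (a rough illustration only; it assumes the tracks are numbered 0 to 199, which the question does not state explicitly):

def scan_seek(head, requests, max_track):
    # SCAN moving toward larger track numbers first, then sweeping back.
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    total, pos = 0, head
    for r in up:                      # service requests on the upward sweep
        total += r - pos
        pos = r
    if down:                          # go to the end of the disk, then reverse
        total += max_track - pos
        pos = max_track
        for r in down:
            total += pos - r
            pos = r
    return total

print(scan_seek(50, [16, 24, 43, 82, 140, 170, 190], 199))   # prints 332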

Consider two file systems A and B that use contiguous allocation and linked allocation, respectively. A file of size 100 blocks is already stored in A and also in B. Now consider inserting a new block in the middle of the file (between the 50th and 51st blocks), whose data is already available in memory. Assume that there are enough free blocks at the end of the file and that the file control blocks are already in memory. Let the number of disk accesses required to insert a block in the middle of the file in A and B be nA and nB respectively; then the value of nA + nB is_________.
      Correct answer is '153'. Can you explain this answer?

      Sudhir Patel answered
Contiguous allocation:
In contiguous allocation, all the logical blocks of a file are stored in consecutive physical blocks on the disk.
For contiguous allocation, the file has 100 blocks. To make room at position 51, we first move the 100th block to position 101 (one read and one write), then the 99th block to position 100 (one read and one write), and so on down to the 51st block. Once position 51 is free, the new block is written there (one write).
Total accesses = 50 read operations + 50 write operations + 1 operation to write the newly inserted block.
Total accesses = 101 operations (nA).
Linked allocation:
In this scheme, each file is a linked list of disk blocks, which may be scattered anywhere on the disk. The directory entry holds pointers to the first and last blocks of the file, and each block carries a pointer to the next block of the file.
For linked allocation, we must follow the chain through blocks 1 to 50 to reach the insertion point (50 reads), write the new block with its pointer set to block 51 (1 write), and rewrite block 50 so that its pointer refers to the new block (1 write).
Total accesses = 50 read operations + 1 operation to write the new block + 1 operation to update the pointer of the 50th block.
Total accesses = 52 operations (nB).
Total operations required = nA + nB = 101 + 52 = 153.
Hence the correct answer is 153.
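A small Python sketch that counts the disk accesses described above (a simplified counting model, not a real file-system implementation; the function names are just for illustration):

def contiguous_insert_accesses(file_blocks, insert_after):
    # Each block after the insertion point is read and written back one
    # position later, then the new block is written once.
    shifted = file_blocks - insert_after          # blocks 51..100 -> 50 blocks
    return 2 * shifted + 1                        # 50 reads + 50 writes + 1 write

def linked_insert_accesses(insert_after):
    # Follow the chain up to the insertion point, write the new block,
    # then rewrite the predecessor's next-pointer.
    return insert_after + 1 + 1                   # 50 reads + 1 write + 1 write

nA = contiguous_insert_accesses(100, 50)          # 101
nB = linked_insert_accesses(50)                   # 52
print(nA, nB, nA + nB)                            # 101 52 153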

      A programmer handles the file allocation in such a way that n+1 blocks are utilized if a file needs n blocks, with the first block including index information. Which file allocation method is used?
      • a)
        Contiguous file allocation
      • b)
        Linked file allocation
      • c)
        Chained file allocation
      • d)
        Indexed file allocation
      Correct answer is option 'D'. Can you explain this answer?

      Sudhir Patel answered
      Indexed file allocation:
In indexed file allocation, if a file requires n blocks then n+1 blocks are used, where the first block contains the index information (pointers to the data blocks).
      • The Indexed file allocation table contains a separate one-level index for each file.
      • The index has one entry for each portion allocated to the file.
      • The file allocation table contains the block number for the index.
      Hence the correct answer is Indexed file allocation.

      There are 200 tracks on a disk platter and the pending requests have come in the order - 36, 69, 167, 76, 42, 51, 126, 12, and 199, Assume the arm is located at the 100th track and moving towards track 200. If the sequence of disc access is 126, 167, 199, 12, 36, 42, 51, 69, and 76 then which disc access scheduling policy is used?
      • a)
        Elevator
      • b)
        Shorter Seek time First
      • c)
        C-SCAN
      • d)
        First Come First Served
      Correct answer is option 'C'. Can you explain this answer?

      Kalyan Menon answered
      Explanation:
      The given sequence of disc access is 126, 167, 199, 12, 36, 42, 51, 69, and 76. Based on this sequence, we need to determine the disk access scheduling policy used.

Elevator Scheduling (SCAN):
- The elevator algorithm (SCAN) moves the arm in one direction, servicing requests, until it reaches the end of the disk, and then reverses direction and services the remaining requests on the way back.
- Starting at track 100 and moving towards track 200, SCAN would service 126, 167 and 199 and then, on the return sweep, service 76, 69, 51, 42, 36 and 12 in decreasing order. The given sequence instead jumps from 199 down to 12 and then increases, so the elevator algorithm is not being used.

Shortest Seek Time First (SSTF):
- The SSTF algorithm always selects the pending request with the shortest seek distance from the current track.
- From track 100 the closest pending request is 76 (distance 24), not 126 (distance 26), so SSTF would begin with 76. The given sequence begins with 126, so SSTF is not being used.

C-SCAN:
- C-SCAN (circular SCAN) is a variant of SCAN that services requests while moving in one direction only; when the arm reaches the end of the disk, it jumps back to the other end without servicing any requests on the return trip and then continues in the same direction.
- The given sequence matches this pattern exactly: the arm starts at track 100 and moves towards track 200, servicing 126, 167 and 199 along the way; it then jumps back to the beginning of the disk and, moving in the same direction again, services 12, 36, 42, 51, 69 and 76.
- Therefore, the disc access scheduling policy used is C-SCAN.

First Come First Served (FCFS):
- FCFS is the simplest disk scheduling algorithm: requests are serviced in the order in which they arrive (36, 69, 167, 76, 42, 51, 126, 12, 199).
- The given sequence does not follow this arrival order, so FCFS is not being used.

      Hence, the correct answer is option C) C-SCAN.
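A short Python sketch (a hypothetical helper, assuming the head is at track 100 and moves toward the higher-numbered tracks) that reproduces the C-SCAN service order for comparison with the given sequence:

def cscan_order(head, requests):
    # C-SCAN: service requests at or above the head in increasing order, then
    # jump to the low end and service the remaining requests, still moving up.
    up = sorted(r for r in requests if r >= head)
    wrapped = sorted(r for r in requests if r < head)
    return up + wrapped

pending = [36, 69, 167, 76, 42, 51, 126, 12, 199]
print(cscan_order(100, pending))
# [126, 167, 199, 12, 36, 42, 51, 69, 76] -- matches the given access sequence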

      The total time to prepare a disk drive mechanism for a block of data to be read from it is
      • a)
        seek time
      • b)
        latency
      • c)
        latency plus seek time
      • d)
        transmission time
      Correct answer is option 'C'. Can you explain this answer?

      Ravi Singh answered
Seek time is the time taken by the head to move to the correct track, and rotational latency is the time taken by the disk to rotate until the desired block is under the head. So the total time to prepare a disk drive mechanism for a block of data to be read from it is the sum of the seek time and the latency. Option (C) is correct.

      Consider a file currently consisting of 50 blocks. Assume that the file control block and the index block is already in memory. If a block is added at the end (and the block information to be added is stored in memory), then how many disk I/O operations are required for indexed (single-level) allocation strategy?
      • a)
        1
      • b)
        101
      • c)
        27
      • d)
        0
      Correct answer is option 'A'. Can you explain this answer?

      Jay Basu answered
      Indexed (Single-Level) Allocation Strategy

      In indexed allocation, a separate index block is used to store the addresses of the data blocks. Each data block is assigned a unique index in the index block, allowing for direct access to the data blocks.

      Given:
      - File consists of 50 blocks
      - File control block (FCB) and index block are already in memory
      - A block is added at the end and the block information is stored in memory

Calculation:
To determine the number of disk I/O operations required, note the following:

1. The index block is already in memory, so no disk read is needed to fetch it.
2. The index block is updated in memory with the address of the new data block, which by itself requires no disk I/O.
3. The new data block, whose contents are already in memory, must be written to disk: 1 disk I/O operation.

Therefore, the only disk access needed is the single write of the new data block, and the total number of disk I/O operations required for indexed (single-level) allocation in this scenario is 1.

      Answer:
      The correct answer is option A) 1.

      A device with data transfer rate 10 KB/sec is connected to a CPU. Data is transferred byte-wise. Let the interrupt overhead be 4 microsec. The byte transfer time between the device interface register and CPU or memory is negligible. What is the minimum performance gain of operating the device under interrupt mode over operating it under program controlled mode?
      • a)
        15
      • b)
        25
      • c)
        35
      • d)
        45
      Correct answer is option 'B'. Can you explain this answer?

      Sanya Agarwal answered
In programmed I/O the CPU does continuous polling: to transfer 1 byte the CPU polls for 10^-4 sec = 100 microsec of processing. In interrupt mode the CPU is interrupted on completion of the I/O, so to transfer 1 byte the CPU spends only 4 microsec of processing (since the transfer time between the device interface register and CPU or memory is negligible). Gain = 100 / 4 = 25.
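The same arithmetic as a tiny Python check (taking 10 KB/sec as 10^4 bytes per second, as in the answer above):

transfer_rate = 10_000                     # bytes per second
poll_time_per_byte = 1 / transfer_rate     # 1e-4 s = 100 microsec of polling per byte
interrupt_overhead = 4e-6                  # 4 microsec of processing per byte
print(poll_time_per_byte / interrupt_overhead)   # 25.0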

Which of the following addressing modes facilitates access to an operand whose location is defined relative to the beginning of the data structure in which it appears?
      • a)
        Ascending
      • b)
        Sorting
      • c)
        Index
      • d)
        Indirect
      Correct answer is option 'C'. Can you explain this answer?

      Sudhir Patel answered
Index addressing mode:
The index addressing mode facilitates access to an operand whose location is defined relative to the beginning of the data structure in which it appears.
• In indexed addressing, the content of an index register is added to the address field of the instruction to obtain the effective address.
• The index register is a CPU register that holds an index value, while the address field of the instruction specifies the starting address of the data array in memory.
• The index addressing mode therefore gives the flexibility to reference several memory locations with a single instruction by changing only the index value.
Hence the correct answer is Index.
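A minimal Python sketch of the idea (illustrative only; real indexed addressing is performed by the CPU's address-generation hardware): the effective address is the base address from the instruction plus the contents of an index register.

def effective_address(base_address, index_register, element_size=1):
    # Indexed addressing: EA = base address (from the instruction) + index value.
    return base_address + index_register * element_size

# Accessing the element at index 5 of an array of 4-byte words starting at 0x1000:
print(hex(effective_address(0x1000, 5, element_size=4)))   # 0x1014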

      Thrashing
      • a)
        Reduces page I/O
      • b)
        Decreases the degree of multiprogramming
      • c)
        Implies excessive page I/O
      • d)
        Improves the system performance
      Correct answer is option 'C'. Can you explain this answer?

      Rohan Shah answered
When the degree of multiprogramming is pushed too high, a large number of processes are brought into memory at once. No process then gets enough frames, so the processor must keep bringing new pages into memory to satisfy references and a lot of page faults occur. This state of excessive page replacement (excessive page I/O) is called thrashing, and it reduces overall system performance.

      In __________ disk scheduling algorithm, the disk head moves from one end to other end of the disk, serving the requests along the way. When the head reaches the other end, it immediately returns to the beginning of the disk without serving any requests on the return trip.
      • a)
        LOOK
      • b)
        SCAN
      • c)
        C-LOOK
      • d)
        C-SCAN
      Correct answer is option 'D'. Can you explain this answer?

      Snehal Desai answered

      Explanation:

      C-SCAN (Circular SCAN):

      C-SCAN is a disk scheduling algorithm where the disk head moves from one end to the other end of the disk, serving the requests along the way. However, when the head reaches the other end, it immediately returns to the beginning of the disk without serving any requests on the return trip.

      Key points about C-SCAN algorithm:
      - C-SCAN is a variant of the SCAN algorithm.
      - In the C-SCAN algorithm, the disk head moves in a circular manner.
      - The head serves all requests in one direction until it reaches the end of the disk, then it returns to the beginning without serving any requests on the way back.
- This approach treats the cylinders as a circular list and provides a more uniform wait time for requests than SCAN does.

Advantages of C-SCAN algorithm:
- C-SCAN ensures a more predictable and consistent service time for requests, since no request waits longer than roughly one full sweep of the disk.
- Requests near the edges of the disk are not penalised the way they can be under SCAN, because the head always services requests while travelling in the same direction.
- C-SCAN is particularly useful in scenarios where there are a large number of requests and the disk is heavily loaded.

Conclusion:

In conclusion, the C-SCAN disk scheduling algorithm moves the disk head in a circular manner, serving requests in one direction and returning to the beginning without serving any requests on the way back. This provides uniform wait times and consistent performance in disk scheduling.

      The primary purpose of an operating system is
      • a)
        To make most efficient use of the computer hardware
      • b)
        To allow people to use the computer
      • c)
        To keep systems programmers employed
      • d)
        To make computers easier to use
      Correct answer is option 'A'. Can you explain this answer?

      Hridoy Datta answered
      Introduction:
      The primary purpose of an operating system is to make the most efficient use of the computer hardware. It serves as an intermediary between the user and the computer hardware, enabling the user to interact with the computer system and providing an environment for executing applications.

      Efficient use of computer hardware:
      The operating system is responsible for managing the computer's resources and ensuring their efficient utilization. It allocates CPU time, memory, and other hardware resources to different processes and applications running on the system. By efficiently managing these resources, the operating system ensures that the computer system operates at its optimal performance.

      Resource allocation:
      The operating system uses various scheduling algorithms to allocate CPU time to different processes based on their priority and requirements. It also manages the memory by allocating and deallocating memory space to different processes as needed. Additionally, the operating system handles input/output operations, manages storage devices, and controls the execution of programs.

      Hardware abstraction:
      One of the key functions of an operating system is to provide hardware abstraction. It hides the underlying complexities of the hardware and presents a simplified interface to the user and application programs. This abstraction allows users to interact with the computer system without having to understand the intricate details of the hardware. For example, the operating system provides a user-friendly graphical user interface (GUI) that allows users to interact with applications using icons, menus, and windows.

      Device management:
      The operating system manages various input/output devices such as keyboards, mice, printers, and network interfaces. It provides device drivers that enable these devices to communicate with the computer system. The operating system handles device interrupts, manages device queues, and ensures smooth data transfer between devices and memory.

      Conclusion:
      In summary, the primary purpose of an operating system is to make the most efficient use of the computer hardware. It achieves this by managing system resources, providing hardware abstraction, and facilitating user interaction. By efficiently utilizing the computer's resources, the operating system ensures optimal performance and enables users to effectively use the computer system.

      In Magnetic Disks Structures, The surface of a platter is logically divided into circular______
      • a)
        sectors
      • b)
        arms
      • c)
        tracks
      • d)
        spindle
      Correct answer is option 'C'. Can you explain this answer?

      Magnetic Disks Structures:

      A magnetic disk is a storage device that uses magnetization to store and retrieve digital data. It consists of one or more rotating disks coated with magnetic material and read/write heads to access the data. The surface of a platter in magnetic disks structures is logically divided into circular tracks.

      Circular Tracks:

      The surface of a platter is logically divided into circular tracks. These tracks are concentric circles on the surface of the disk. The tracks are numbered from the outermost track to the innermost track. Each track is further divided into sectors, which are pie-shaped wedges on the surface of the disk.

      Sectors:

      Each sector on a track stores a fixed amount of data. The number of sectors on a track can vary depending on the disk's format and capacity. Sectors are numbered sequentially starting from 0 to the maximum number of sectors on a track.

      Conclusion:

      In conclusion, the surface of a platter in magnetic disks structures is logically divided into circular tracks, which are concentric circles on the surface of the disk. Each track is further divided into sectors, which are pie-shaped wedges on the surface of the disk. Sectors are numbered sequentially starting from 0 to the maximum number of sectors on a track. By dividing the surface of the platter into circular tracks and sectors, it becomes easier to organize and access the data stored on the disk.

      The address sequence generated by tracing a particular program executing in a pure demand paging system with 100 records per page, with 1 free main memory frame is recorded as follows. What is the number of page faults?
      0100, 0200, 0430, 0499, 0510, 0530, 0560, 0120, 0220, 0240, 0260, 0320, 0370 
      • a)
        13
      • b)
        8
      • c)
        7
      • d)
        10
      Correct answer is option 'C'. Can you explain this answer?

      Sonal Nair answered
When the program tries to access 0100, a page fault occurs because memory is empty, so the page containing addresses 0100-0199 is loaded into the single free frame. Accessing 0200 causes another page fault, and the page holding addresses 0200-0299 replaces it. Accessing 0430 causes a further page fault. Proceeding this way, the accesses to 0510, 0120, 0220 and 0320 also cause page faults, while all the remaining references fall within the page already in the frame. Altogether there are 7 page faults.
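A small Python sketch of the same counting (pure demand paging with a single frame and 100 records per page, as in the question):

def page_faults_one_frame(addresses, page_size=100):
    faults = 0
    loaded_page = None                    # the single memory frame starts empty
    for addr in addresses:
        page = addr // page_size
        if page != loaded_page:           # page fault: replace the only frame
            faults += 1
            loaded_page = page
    return faults

trace = [100, 200, 430, 499, 510, 530, 560, 120, 220, 240, 260, 320, 370]
print(page_faults_one_frame(trace))       # prints 7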

Consider a disk pack with 16 surfaces, 128 tracks per surface and 256 sectors per track. 512 bytes of data are stored in a bit-serial manner in a sector. The capacity of the disk pack and the number of bits required to specify a particular sector in the disk are respectively
      • a)
        256 Mbyte, 19 bits
      • b)
        256 Mbyte, 28 bit
      • c)
        512 Mbyte, 20 bits
      • d)
        64 Gbyte, 28 bits
      Correct answer is option 'A'. Can you explain this answer?

      Anshu Mehta answered
      Capacity of the Disk Pack:

      Given:
      - Number of surfaces = 16
      - Number of tracks per surface = 128
      - Number of sectors per track = 256
      - Data stored in a sector = 512 bytes

      To calculate the capacity of the disk pack, we need to multiply the number of surfaces, tracks per surface, sectors per track, and the data stored in a sector.

      Capacity of the disk pack = Number of surfaces × Number of tracks per surface × Number of sectors per track × Data stored in a sector

      Capacity of the disk pack = 16 × 128 × 256 × 512 bytes

      To convert bytes to megabytes, we divide the result by 1,048,576 (1 MB = 1,048,576 bytes).

      Capacity of the disk pack = (16 × 128 × 256 × 512) / 1,048,576 MB

      Calculating this equation gives us:

      Capacity of the disk pack = 256 MB

Number of Bits to Specify a Sector:

To address a particular sector anywhere on the disk pack we must identify the surface, the track on that surface, and the sector on that track:

- Surfaces: 16 = 2^4, so 4 bits
- Tracks per surface: 128 = 2^7, so 7 bits
- Sectors per track: 256 = 2^8, so 8 bits

Number of bits = log2(16 × 128 × 256) = 4 + 7 + 8 = 19 bits

Conclusion:

The capacity of the disk pack is 256 Mbyte and the number of bits required to specify a particular sector on the disk is 19 bits. Therefore, the correct answer is option A: 256 Mbyte, 19 bits.
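The same numbers in a short Python check:

import math

surfaces, tracks, sectors, bytes_per_sector = 16, 128, 256, 512

capacity_bytes = surfaces * tracks * sectors * bytes_per_sector
print(capacity_bytes // 2**20, "MB")                 # 256 MB

total_sectors = surfaces * tracks * sectors
print(math.ceil(math.log2(total_sectors)), "bits")   # 19 bits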

      There are 200 tracks on a disc platter and the pending requests have come in the order - 36, 69, 167, 76, 42, 51, 126, 12 and 199. Assume the arm is located at the 100 track and moving towards track 199. If sequence of disc access is 126, 167, 199, 12, 36, 42, 51, 69 and 76 then which disc access scheduling policy is used?
      • a)
        Elevator
      • b)
        Shortest seek-time first
      • c)
        C-SCAN
      • d)
        First Come First Served
      Correct answer is option 'C'. Can you explain this answer?

      Nandini Khanna answered
      The disc access scheduling policy used is C-SCAN.

      Explanation:
      The C-SCAN (Circular SCAN) algorithm is a disk scheduling algorithm that works by moving the disk arm only in one direction and servicing the requests in a circular manner. When the arm reaches the end of the disk, it wraps around to the beginning and continues servicing requests in the same direction.

      Given that the arm is initially located at track 100 and moving towards track 199, let's analyze the sequence of disc access to determine the scheduling policy used:

1. 126: The arm moves from track 100 towards track 200 and services the first pending request it meets, track 126.

2. 167: Continuing in the same direction, the arm services track 167.

3. 199: The arm continues upward and services track 199, the largest pending request.

4. 12: Having reached the high end of the disk, the arm wraps around to the beginning and services track 12, without servicing anything on the return trip.

5. 36: The arm continues moving in the same (increasing) direction from track 12 to track 36.

6. 42: The arm moves from track 36 to track 42.

7. 51: The arm moves from track 42 to track 51.

8. 69: The arm moves from track 51 to track 69.

9. 76: The arm moves from track 69 to track 76.

      Conclusion:
      As we can see, the sequence of disc access follows the circular pattern of the C-SCAN algorithm. The arm moves in one direction, servicing the requests in a circular manner, and wraps around to the beginning when it reaches the end of the disk. Therefore, the disc access scheduling policy used is C-SCAN.

      Disk requests are received by a disk drive for cylinder 5, 25, 18, 3, 39, 8 and 35 in that order. A seek takes 5 msec per cylinder moved. How much seek time is needed to serve these requests for a Shortest Seek First (SSF) algorithm? Assume that the arm is at cylinder 20 when the last of these requests is made with none of the requests yet served
      • a)
        125 msec
      • b)
        295 msec
      • c)
        575 msec
      • d)
        750 msec
      Correct answer is option 'B'. Can you explain this answer?

      Dhruba Goyal answered
Shortest Seek First (SSF) Algorithm

The Shortest Seek First (SSF, also called SSTF) algorithm always services the pending request whose cylinder is closest to the current position of the disk arm.

Given Information:
- Disk requests: 5, 25, 18, 3, 39, 8, 35
- Initial position of the disk arm: cylinder 20
- Seek time per cylinder moved: 5 msec

Service order and head movement:
1. From 20, the closest request is 18: movement = 2 cylinders.
2. From 18, the closest request is 25: movement = 7 cylinders.
3. From 25, the closest request is 35: movement = 10 cylinders.
4. From 35, the closest request is 39: movement = 4 cylinders.
5. From 39, the closest request is 8: movement = 31 cylinders.
6. From 8, the closest request is 5: movement = 3 cylinders.
7. From 5, the closest request is 3: movement = 2 cylinders.

Total head movement = 2 + 7 + 10 + 4 + 31 + 3 + 2 = 59 cylinders.

Total seek time = 59 cylinders × 5 msec/cylinder = 295 msec.

Final Answer:
Therefore, the seek time needed to serve the given requests using the Shortest Seek First (SSF) algorithm is 295 msec. Hence, option B is the correct answer.
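A minimal Python sketch of the SSF/SSTF trace (5 msec per cylinder, head starting at cylinder 20):

def sstf_seek_time(head, requests, ms_per_cylinder):
    pending = list(requests)
    total_cylinders = 0
    while pending:
        # Shortest Seek First: pick the pending request nearest to the head.
        nearest = min(pending, key=lambda c: abs(c - head))
        total_cylinders += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total_cylinders * ms_per_cylinder

print(sstf_seek_time(20, [5, 25, 18, 3, 39, 8, 35], 5))   # prints 295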

      Using a larger block size in a fixed block size file system leads to :
      • a)
        better disk throughput but poorer disk space utilization
      • b)
        better disk throughput and better disk space utilization
      • c)
        poorer disk throughput but better disk space utilization
      • d)
        poorer disk throughput and poorer disk space utilization
      Correct answer is option 'A'. Can you explain this answer?

      Sanya Agarwal answered
Using a larger block size makes disk space utilization poorer, because more space is wasted when a block holds only a small amount of data. It can make throughput better, because the number of blocks to fetch decreases. A larger block size means that more data from a single file can be read or written at a time without moving the disk's head to another spot on the disk; the less time spent moving the heads across the disk, the more continuous reads/writes per second. Conversely, the smaller the block size, the more frequently the head must move before a read/write can occur. So a larger block size gives better throughput (fewer blocks to fetch) but poorer space utilization (space is wasted when only a small amount of data is stored in a block).

      Consider the process track request are 98, 183, 37, 122, 14, 124, 14, 124, 65, 67 and the initial seek is 53. What is the total number of seeks for the C-Look algorithm?
        Correct answer is '322'. Can you explain this answer?

        Sudhir Patel answered
The given data:
The track requests are 98, 183, 37, 122, 14, 124, 14, 124, 65, 67 (the repeated requests for 14 and 124 add no extra head movement).
The initial head position = 53.
C-LOOK algorithm:
Assuming the head moves towards the larger track numbers, it services 65, 67, 98, 122, 124 and 183, then jumps back to the lowest pending request (14) and continues upward to 37.
Total number of seeks = (183 - 53) + (183 - 14) + (37 - 14) = 130 + 169 + 23
Total number of seeks = 322
Hence the correct answer is 322.
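A short Python sketch of this C-LOOK calculation (it assumes, as above, that the head moves toward the larger track numbers and that the jump back to the lowest pending request counts as head movement):

def clook_seek(head, requests):
    up = sorted(r for r in requests if r >= head)
    low = sorted(r for r in requests if r < head)
    total = up[-1] - head                  # sweep up to the largest request
    if low:
        total += up[-1] - low[0]           # jump back to the lowest pending request
        total += low[-1] - low[0]          # continue upward through the rest
    return total

reqs = [98, 183, 37, 122, 14, 124, 14, 124, 65, 67]   # duplicates add no movement
print(clook_seek(53, reqs))                # (183-53) + (183-14) + (37-14) = 322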

        Consider a disk that has 209 cylinders, starting from index 0. At some time, the disk arm is at the 57th cylinder from the end moving from largest to innermost cylinder. There is a queue of disk access requests for cylinders 5, 40, 75, 195, 145, 203, 106 and 122. The R/W head of the disk uses the Circular Scan algorithm to reach the disk and access the data. The disk arm requires 1 msec to move between adjacent cylinders and 9 msec to move between the largest and innermost cylinder. The total seek time is _______.
          Correct answer is '2036'. Can you explain this answer?

          Nisha Gupta answered
          Given:
          - Disk has 209 cylinders, starting from index 0.
          - Disk arm is at the 57th cylinder from the end.
          - Disk arm is moving from the largest to innermost cylinder.
          - Disk access requests queue: 5, 40, 75, 195, 145, 203, 106, 122.
          - Disk arm requires 1 msec to move between adjacent cylinders.
          - Disk arm requires 9 msec to move between the largest and innermost cylinder.

          To Find:
          Total seek time.

Solution:
Taking the 57th cylinder from the end to be cylinder 208 - 56 = 152 (the cylinders are numbered 0 to 208), the arm starts at cylinder 152 moving towards the innermost cylinder (cylinder 0).

1. Moving inward from 152, the arm services the requests 145, 122, 106, 75, 40 and 5 on the way and continues to the innermost cylinder 0: movement = 152 cylinders = 152 msec.

2. The arm then jumps from the innermost cylinder back to the largest cylinder (208); this end-to-end move takes 9 msec and no requests are serviced during it.

3. Continuing inward from 208, the arm services the remaining requests 203 and 195: movement = 208 - 195 = 13 cylinders = 13 msec.

By this standard C-SCAN calculation, the total seek time is 152 + 9 + 13 = 174 msec.

          Which of the following statements about synchronous and asynchronous I/O is NOT true?
          • a)
            An ISR is invoked on completion of I/O in synchronous I/O but not in asynchronous I/O
          • b)
            In both synchronous and asynchronous I/O, an ISR (Interrupt Service Routine) is invoked after completion of the I/O
          • c)
            A process making a synchronous I/O call waits until I/O is complete, but a process making an asynchronous I/O call does not wait for completion of the I/O
          • d)
            In the case of synchronous I/O, the process waiting for the completion of I/O is woken up by the ISR that is invoked after the completion of I/O
          Correct answer is option 'B'. Can you explain this answer?

          Sanya Agarwal answered
An interrupt service routine (ISR) is invoked after the completion of the I/O operation, and it moves the process from the blocked state to the ready state, because in synchronous I/O the process performing the I/O is placed in the blocked state until the operation completes. In asynchronous I/O, however, the process is not placed in the blocked state and continues to execute its remaining instructions: a handler function is registered when the I/O is issued, and when the operation completes a signal mechanism is used to notify the process that the data is available. So, option (B) is false.

          The data blocks of a very large file in the Unix file system are allocated using
          • a)
            contiguous allocation
          • b)
            linked allocation
          • c)
            indexed allocation
          • d)
            an extension of indexed allocation
          Correct answer is option 'D'. Can you explain this answer?

          Sanya Agarwal answered
The Unix file system uses an extension of indexed allocation: the inode holds a number of direct block pointers together with single indirect, double indirect and triple indirect block pointers, so the data blocks of very large files are reached through the indirect blocks.

Consider the following five disk access requests of the form (request id, cylinder number) that are present in the disk scheduler queue at a given time.
          (P, 155), (Q, 85), (R, 110), (S, 30), (T, 115)
          Assume the head is positioned at cylinder 100. The scheduler follows Shortest Seek Time First scheduling to service the requests. Which one of the following statements is FALSE ?
          • a)
            T is serviced before P
          • b)
            Q is serviced after S, but before T
          • c)
            The head reverses its direction of movement between servicing of Q and P
          • d)
            R is serviced before P
          Correct answer is option 'B'. Can you explain this answer?

          Shortest Seek Time First (SSTF) Scheduling

          SSTF is a disk scheduling algorithm that selects the disk request with the shortest seek time from the current head position. The seek time is the time taken by the disk arm to move from its current position to the requested cylinder.

          Given Information:
          - Disk access requests: (P, 155), (Q, 85), (R, 110), (S, 30), (T, 115)
          - Current head position: Cylinder 100

          Step 1: Calculating Seek Time
          - Calculate the seek time for each request by finding the absolute difference between the current head position and the requested cylinder number.
          - Seek Time for each request:
          - P: |155 - 100| = 55
          - Q: |85 - 100| = 15
          - R: |110 - 100| = 10
          - S: |30 - 100| = 70
          - T: |115 - 100| = 15

          Step 2: Servicing Requests
          - The disk scheduler services the request with the shortest seek time first.
          - Initially, the head is at cylinder 100.
          - The request with the shortest seek time is R with a seek time of 10.
          - The head moves to cylinder 110.

          Step 3: Updating Seek Time
          - Recalculate the seek time for each request based on the new head position.
          - Seek Time for each request:
          - P: |155 - 110| = 45
          - Q: |85 - 110| = 25
          - S: |30 - 110| = 80
          - T: |115 - 110| = 5

          Step 4: Servicing Requests
          - The request with the shortest seek time is T with a seek time of 5.
          - The head moves to cylinder 115.

          Step 5: Updating Seek Time
          - Recalculate the seek time for each request based on the new head position.
          - Seek Time for each request:
          - P: |155 - 115| = 40
          - Q: |85 - 115| = 30
          - S: |30 - 115| = 85

          Step 6: Servicing Requests
          - The request with the shortest seek time is Q with a seek time of 30.
          - The head moves to cylinder 85.

          Step 7: Updating Seek Time
          - Recalculate the seek time for each request based on the new head position.
          - Seek Time for each request:
          - P: |155 - 85| = 70
          - S: |30 - 85| = 55

          Step 8: Servicing Requests
          - The request with the shortest seek time is S with a seek time of 55.
          - The head moves to cylinder 30.

          Step 9: Updating Seek Time
          - Recalculate the seek time for the remaining request based on the new head position.
          - Seek Time for the remaining request:
          - P: |155 - 30| = 125

Step 10: Servicing the Final Request
- The only remaining request is P; the head moves from cylinder 30 to cylinder 155.

Final service order: R, T, Q, S, P. So T is serviced before P, R is serviced before P, and the head reverses its direction of movement between servicing Q (while moving down towards 85 and then 30) and servicing P (moving back up to 155). However, Q is serviced after T and before S, so the statement that Q is serviced after S but before T is FALSE. Hence the correct answer is option B.

Let disk requests come to a disk driver for cylinders in the order 10, 22, 20, 2, 40, 6 and 38, at the time when the disk driver is reading from cylinder 20. If the seek time is 6 ms per cylinder, then the total seek time for the FCFS (First Come First Served) disk arm scheduling algorithm is:
          • a)
            360 ms
          • b)
            212 ms
          • c)
            848 ms
          • d)
            876 ms
          Correct answer is option 'D'. Can you explain this answer?

          Ruchi Sengupta answered
          Problem: Find the total seek time for the disk arm scheduling time algorithm (FCFS - First Come First Serve) for disk requests 10, 22, 20, 2, 40, 6, and 38, starting from cylinder 20 with a seek time of 6 ms per cylinder.

Solution:

FCFS Algorithm: In the FCFS scheduling algorithm, the disk arm moves from its current position to the first requested cylinder, then to the second requested cylinder, and so on, in arrival order.

Steps:

1. Starting position: cylinder 20

2. Request 1: cylinder 10 (distance moved: |20 - 10| = 10)

3. Request 2: cylinder 22 (distance moved: |10 - 22| = 12)

4. Request 3: cylinder 20 (distance moved: |22 - 20| = 2)

5. Request 4: cylinder 2 (distance moved: |20 - 2| = 18)

6. Request 5: cylinder 40 (distance moved: |2 - 40| = 38)

7. Request 6: cylinder 6 (distance moved: |40 - 6| = 34)

8. Request 7: cylinder 38 (distance moved: |6 - 38| = 32)

Total seek time:
Total seek time = (sum of the distances between consecutive cylinders) × seek time per cylinder
Total seek time = (10 + 12 + 2 + 18 + 38 + 34 + 32) × 6
Total seek time = 146 × 6 = 876 ms

          Answer: The total seek time for the disk arm scheduling time algorithm (FCFS - First Come First Serve) is 876 ms.
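The same FCFS arithmetic as a tiny Python check:

def fcfs_seek_time(head, requests, ms_per_cylinder):
    total = 0
    for cyl in requests:                   # serve requests strictly in arrival order
        total += abs(cyl - head)
        head = cyl
    return total * ms_per_cylinder

print(fcfs_seek_time(20, [10, 22, 20, 2, 40, 6, 38], 6))   # prints 876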

          Dak bearing a security grading confidential, secret etc is known as?
          • a)
            Secret dak
          • b)
            Classified dak
          • c)
            Closed Dak
          • d)
            Desk Dak
          Correct answer is option 'B'. Can you explain this answer?

          Sudhir Patel answered
          Dak:
          All communication received/issued by an office/department through dak.
          Classified dak:  
          Dak bearing security grading confidential, secret, etc. is called Classified dak.

          Which of the following page replacement algorithms suffers from Belady’s anomaly?
          • a)
            Shortest job first
          • b)
            Round robin
          • c)
            First-come-first-serve
          • d)
            Elevator
          Correct answer is option 'C'. Can you explain this answer?

          Nabanita Basak answered
Page replacement algorithms are broadly of two types: stack-based and non-stack-based. It is the non-stack-based algorithms that can suffer from Belady's anomaly. FIFO, which replaces pages on a first-come-first-serve basis, is non-stack-based and is the classic algorithm that suffers from Belady's anomaly.


          The address <400,16,29> corresponds to sector number:
          • a)
            505035
          • b)
            505036
          • c)
            505037
          • d)
            505038
          Correct answer is option 'C'. Can you explain this answer?

          Sanya Agarwal answered
The data on a disk is ordered in the following way. It is first stored in the sectors of the first track of the first surface of the first cylinder, sector after sector, until all the sectors on that track are exhausted. It then moves on to the first sector of the next surface of the same cylinder, and so on, exhausting all the surfaces of the cylinder before moving on to the next cylinder. Consequently, for an address of the form <cylinder, surface, sector>, the sector number is cylinder × (surfaces per cylinder × sectors per track) + surface × sectors per track + sector, which with the disk geometry given in the original question evaluates to 505037 for <400, 16, 29>.

          Suppose a disk has 201 cylinders, numbered from 0 to 200. At some time the disk arm is at cylinder 100, and there is a queue of disk access requests for cylinders 30, 85, 90, 100, 105, 110, 135 and 145. If Shortest-Seek Time First (SSTF) is being used for scheduling the disk access, the request for cylinder 90 is serviced after servicing ____________ number of requests.
          • a)
            1
          • b)
            2
          • c)
            3
          • d)
            4
          Correct answer is option 'C'. Can you explain this answer?

          Sanya Agarwal answered
In the Shortest-Seek-Time-First algorithm, the request closest to the current position of the disk arm and head is handled first. In this question the arm is at cylinder 100, and the queue holds requests for cylinders 30, 85, 90, 100, 105, 110, 135 and 145. The first request serviced is the one for cylinder 100 (the arm is already positioned there), then 105, then 110, and only then does the arm come back to service the request for cylinder 90. Hence, before servicing the request for cylinder 90, the disk has serviced 3 requests. Hence option C.

          A file system with 300 GByte disk uses a file descriptor with 8 direct block addresses, 1 indirect block address and 1 doubly indirect block address. The size of each disk block is 128 Bytes and the size of each disk block address is 8 Bytes. The maximum possible file size in this file system is
          • a)
            3 Kbytes
          • b)
            35 Kbytes
          • c)
            280 Bytes
          • d)
            Dependent on the size of the disk
          Correct answer is option 'B'. Can you explain this answer?

          Sanya Agarwal answered
          Total number of possible addresses stored in a disk block = 128/8 = 16
          Maximum number of addressable bytes due to direct address block = 8*128
          Maximum number of addressable bytes due to 1 single indirect address block = 16*128
          Maximum number of addressable bytes due to 1 double indirect address block = 16*16*128
          The maximum possible file size = 8*128 + 16*128 + 16*16*128 = 35KB
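The same calculation in a few lines of Python:

block_size = 128                                     # bytes per disk block
addr_size = 8                                        # bytes per disk block address
addrs_per_block = block_size // addr_size            # 16 addresses fit in one block

direct = 8 * block_size                              # 8 direct data blocks
single_indirect = addrs_per_block * block_size       # 16 data blocks
double_indirect = addrs_per_block ** 2 * block_size  # 256 data blocks

max_file_size = direct + single_indirect + double_indirect
print(max_file_size // 1024, "KB")                   # 35 KB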

          Disk requests come to a disk driver for cylinders in the order 10, 22, 20, 2, 40, 6, and 38 at a time when the disk drive is reading from cylinder 20. The seek time is 6 ms/cylinder. The total seek time, if the disk arm scheduling algorithms is first-come-first-served is
          • a)
            360 ms
          • b)
            850 ms
          • c)
            900 ms
          • d)
            None of the above
          Correct answer is option 'D'. Can you explain this answer?

          Palak Saini answered
          Total Seek Time Calculation
          To calculate the total seek time for the disk requests using the First-Come-First-Served (FCFS) scheduling algorithm, we need to analyze the sequence of cylinder requests and their distances from the current position of the disk arm.
          Initial Conditions
          - Current cylinder: 20
          - Sequence of requests: 10, 22, 20, 2, 40, 6, 38
          - Seek time: 6 ms per cylinder
          Request Sequence Analysis
          The requests will be processed in the order they arrive. Starting from cylinder 20, let's calculate the movement for each request:
          - From 20 to 10:
          - Distance = |20 - 10| = 10 cylinders
          - Seek time = 10 * 6 ms = 60 ms
          - From 10 to 22:
          - Distance = |10 - 22| = 12 cylinders
          - Seek time = 12 * 6 ms = 72 ms
          - From 22 to 20:
          - Distance = |22 - 20| = 2 cylinders
          - Seek time = 2 * 6 ms = 12 ms
          - From 20 to 2:
          - Distance = |20 - 2| = 18 cylinders
          - Seek time = 18 * 6 ms = 108 ms
          - From 2 to 40:
          - Distance = |2 - 40| = 38 cylinders
          - Seek time = 38 * 6 ms = 228 ms
          - From 40 to 6:
          - Distance = |40 - 6| = 34 cylinders
          - Seek time = 34 * 6 ms = 204 ms
          - From 6 to 38:
          - Distance = |6 - 38| = 32 cylinders
          - Seek time = 32 * 6 ms = 192 ms
          Total Seek Time Calculation
          Now, summing all these seek times:
          - Total Seek Time = 60 ms + 72 ms + 12 ms + 108 ms + 228 ms + 204 ms + 192 ms = 876 ms
          Thus, the total seek time is 876 ms, which does not match any of the options provided, confirming that the correct answer is option 'D' (None of the above).

A process refers to 5 pages, A, B, C, D and E in the order - A, B, C, D, A, B, E, A, B, C, D, E. If the page replacement algorithm is FIFO, the number of page transfers with an empty internal store of 3 frames is
          • a)
            8
          • b)
            10
          • c)
            9
          • d)
            7
          Correct answer is option 'C'. Can you explain this answer?

          Shounak Yadav answered
The first 3 references A, B, C fill the internal store with A, B, C in 3 page transfers. The next reference, D, results in a page fault, so page A (the oldest) is removed and D takes its place after a page transfer; the internal store now holds D, B and C. The next reference, A, results in another page fault, so a page transfer takes place and A replaces B. Continuing this way, we find that a total of 9 page transfers are necessary.
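A minimal Python sketch of FIFO page replacement with 3 frames for this reference string:

from collections import deque

def fifo_page_transfers(reference_string, frames):
    memory = deque()                       # oldest page sits at the left end
    transfers = 0
    for page in reference_string:
        if page not in memory:             # page fault -> one page transfer
            transfers += 1
            if len(memory) == frames:
                memory.popleft()           # evict the oldest page (FIFO)
            memory.append(page)
    return transfers

print(fifo_page_transfers("ABCDABEABCDE", 3))   # prints 9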

          Disk requests come to a disk driver for cylinders in the order 10,22,20,2,40,6 and 38, at a time when the disk drive is reading from cylinder 20. The seek time is 6 ms per cylinder. The total seek time, if the disk arm scheduling algorithm is first-come-first-served is
          • a)
            360 ms
          • b)
            850 ms
          • c)
            900 ms
          • d)
            None of these
          Correct answer is option 'D'. Can you explain this answer?

          Samridhi Joshi answered
          Explanation:

          In the First-Come-First-Served (FCFS) disk scheduling algorithm, the disk requests are serviced in the order they arrive. So, the seek time for each request is calculated based on the difference between the current and next cylinder.

          Given:
          Initial cylinder: 20
          Cylinder requests: 10, 22, 20, 2, 40, 6, 38

          To calculate the total seek time, we need to calculate the seek time for each request and sum them up.

          Seek Time Calculation:

          1. From cylinder 20 to cylinder 10: 20 - 10 = 10
          2. From cylinder 10 to cylinder 22: 22 - 10 = 12
          3. From cylinder 22 to cylinder 20: 22 - 20 = 2
          4. From cylinder 20 to cylinder 2: 20 - 2 = 18
          5. From cylinder 2 to cylinder 40: 40 - 2 = 38
          6. From cylinder 40 to cylinder 6: 40 - 6 = 34
          7. From cylinder 6 to cylinder 38: 38 - 6 = 32

Total Seek Distance Calculation:

Total seek distance = 10 + 12 + 2 + 18 + 38 + 34 + 32 = 146 cylinders

Conversion to Milliseconds:

The seek time is 6 ms per cylinder, so the total seek distance is multiplied by 6:

Total seek time = 146 × 6 = 876 ms

          Therefore, the total seek time for the given disk requests using the First-Come-First-Served (FCFS) disk scheduling algorithm is 876 milliseconds.

          A read bit can be read
          • a)
            and written by CPU
          • b)
            and written by peripheral
          • c)
            by peripheral and written by CPU
          • d)
            by CPU and written by the peripheral
          Correct answer is option 'D'. Can you explain this answer?

          Puja Bajaj answered
Introduction:
A device interface contains status and data registers shared between the CPU and the peripheral. A "read bit" (a ready/status flag) in such a register indicates that the peripheral has completed an operation or has data available for the CPU.

Explanation:
The question asks who reads this bit and who writes (sets) it. Let's analyse each option.

a) Read bit can be read and written by CPU:
The CPU only inspects (reads or polls) the bit; it is the peripheral, not the CPU, that sets the bit when data becomes available, so this option is not accurate.

b) Read bit can be read and written by peripheral:
The peripheral sets the bit, but it is the CPU that reads it to decide whether a transfer can proceed, so this option is also not accurate.

c) Read bit can be read by peripheral and written by CPU:
This reverses the actual roles of the CPU and the peripheral.

d) Read bit can be read by CPU and written by the peripheral:
This option accurately describes the functionality of the read bit. The peripheral writes (sets) the bit when it has data ready or has completed an operation, and the CPU reads (polls or tests) the bit before carrying out the data transfer.

Conclusion:
Among the given options, option 'D' correctly states that a read bit can be read by the CPU and written by the peripheral.

          A memory page containing a heavily used variable that was initialized very early and is in constant use is removed, when the page replacement algorithm used is
          • a)
            LRU
          • b)
            FIFO
          • c)
            LFU
          • d)
            None of these
          Correct answer is option 'B'. Can you explain this answer?

          Aditi Gupta answered
Since the page is used heavily (frequently), it cannot be removed if LFU is used, and since it is in constant (recent) use, it cannot be removed if LRU is used. Only FIFO, which considers just the order in which pages were brought into memory, would remove such a page.

Which of the following disk scheduling policies results in the minimum amount of head movement?
          • a)
            FCFS
          • b)
            Circular SCAN
          • c)
            Elevator
          Correct answer is option 'C'. Can you explain this answer?

          Sanya Agarwal answered
Circular SCAN works much like the elevator algorithm: it sweeps in one direction, servicing requests, until it reaches the end of the disk, but instead of reversing it then jumps to the other end and continues moving in the same direction. Circular SCAN therefore has more head movement than SCAN (elevator), because the circular jump back to the other end also counts as head movement. SCAN (elevator) is the best choice here.

Compaction refers to
          • a)
            a technique for overcoming internal fragmentation
          • b)
            a paging technique
          • c)
            a technique for overcoming external fragmentation
          • d)
            a technique for compressing the data
          Correct answer is option 'C'. Can you explain this answer?

          Krithika Gupta answered
          Compaction refers to a technique for overcoming external fragmentation in computer memory management. It involves rearranging the memory blocks to reduce or eliminate the fragmentation and make efficient use of available memory.

          External fragmentation occurs when free memory is scattered throughout the memory space in small, non-contiguous blocks. This can happen due to the allocation and deallocation of variable-sized memory blocks. As a result, even though there may be enough free memory to satisfy a memory allocation request, it may not be available in a contiguous block, leading to inefficient memory utilization.

          Compaction is a process of rearranging memory blocks to create larger contiguous blocks of free memory. It involves moving allocated blocks and adjusting memory addresses to eliminate the gaps between them. The goal is to create a single large block of free memory, which can then be used to satisfy larger memory allocation requests.

          Here is a detailed explanation of how compaction works:

          1. Identify fragmented memory: The first step is to identify the areas of memory that are fragmented and contain small free blocks.

          2. Relocate allocated blocks: The next step is to move the allocated blocks towards one end of the memory space, typically towards the lower addresses. This involves updating memory addresses in the process control blocks (PCBs) and any pointers or references to the memory blocks.

          3. Adjust memory addresses: After relocating the allocated blocks, the memory addresses need to be adjusted to reflect the new positions. This may involve updating pointers, references, and any other data structures that store memory addresses.

          4. Create a large block of free memory: As the allocated blocks are moved, the gaps between them are eliminated, creating a larger contiguous block of free memory.

          5. Update memory management data structures: Finally, the memory management data structures, such as the free block list or bitmap, need to be updated to reflect the changes in the memory layout.

          By compacting the memory and eliminating external fragmentation, compaction improves memory utilization and reduces the likelihood of memory allocation failures due to insufficient contiguous free memory. However, compaction can be a costly operation in terms of time and computational resources, especially if there are many allocated blocks that need to be moved. Therefore, it is typically used in situations where external fragmentation becomes a significant problem and memory compaction can be performed efficiently.
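A toy Python sketch of the idea (a deliberately simplified model with fixed-size slots, not a real memory manager): allocated blocks are slid toward one end so that the free space becomes a single contiguous region.

def compact(memory):
    # memory is a list of slots: a process id string, or None if the slot is free.
    allocated = [slot for slot in memory if slot is not None]   # relocated, order kept
    free_slots = len(memory) - len(allocated)
    return allocated + [None] * free_slots      # one contiguous free region at the end

fragmented = ["P1", None, "P2", None, None, "P3", None, "P4"]
print(compact(fragmented))
# ['P1', 'P2', 'P3', 'P4', None, None, None, None]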

Which of the following is not a type of file allocation system?
          • a)
            Contiguous system allocation
          • b)
            Non-contiguous system allocation
          • c)
            Linked allocation
          • d)
            Indexed allocation
          Correct answer is option 'B'. Can you explain this answer?

          Palak Khanna answered
          Introduction:
          A file allocation system is a method used by an operating system to allocate space on a storage device for files. There are several types of file allocation systems, each with its own advantages and disadvantages. The question asks for the type of file allocation system that is not among the given options.

          Explanation:
          The four types of file allocation systems mentioned in the options are:
          a) Contiguous system allocation
          b) Non-contiguous system allocation
          c) Linked allocation
          d) Indexed allocation

          Contiguous System Allocation:
          In a contiguous system allocation, each file is stored as a contiguous block of data on the storage device. This means that the entire file is stored in a single continuous section of the storage space. This type of allocation provides fast access to the file and is easy to implement. However, it suffers from the limitation that files cannot grow or shrink dynamically, and there may be fragmentation issues.

Non-contiguous System Allocation:
Non-contiguous allocation is the general approach in which the blocks of a file need not be adjacent on the storage device; linked allocation and indexed allocation are its concrete forms. It allows files to grow and shrink dynamically and avoids the external fragmentation problems of contiguous allocation, but it is a category of methods rather than a specific allocation scheme in its own right.

          Linked Allocation:
          Linked allocation is a type of non-contiguous system allocation where each file is divided into blocks that can be scattered across the storage device. Each block contains a pointer to the next block in the file. This method allows files to grow and shrink dynamically, but it can result in slower access times due to the need to traverse the linked blocks.

          Indexed Allocation:
          Indexed allocation is a type of non-contiguous system allocation where a separate index block is used to store the addresses of the blocks that make up a file. The index block contains pointers to the actual data blocks. This method allows fast access to the file, but it requires additional space for the index block.

Conclusion:
Based on the explanations above, the correct answer is option 'B' - Non-contiguous system allocation. Contiguous, linked and indexed allocation are the specific file allocation methods; "non-contiguous allocation" is only the general category to which linked and indexed allocation belong, so it is not itself listed as a type of file allocation system.
