
All questions of Virtual Memory for Computer Science Engineering (CSE) Exam

_____ contains the swap space.
  • a)
    RAM
  • b)
    Disk
  • c)
    ROM
  • d)
    On-chip cache
Correct answer is option 'B'. Can you explain this answer?

Sudhir Patel answered
Concept:
Swap space is the area on a hard disk that is part of the Virtual Memory of one's machine, which is a combination of accessible physical memory (RAM) and the swap space.
Swap space is used in various ways by different operating systems, depending on the memory management algorithms in use. Systems that implement swapping may use swap space to hold an entire process image, including the code and data segments.
Swap space can reside in one of two places: it can be carved out of the normal file system or it can be in a separate disk partition.
Swap space is used to save process data. Operating systems that offer virtual memory commonly use the mapping interface for kernel services. For instance, to execute a program, the operating system maps the executable into memory and then transfers control to the entry address of the executable. The mapping interface is also commonly used for kernel access to swap space on disk.
NOTE:
Option (B) is the best possible answer. 

A process refers to 5 pages, A, B, C, D, E in the order :
A, B, C, D, A, B, E, A, B, C, D, E
If the page replacement algorithm is FIFO, the number of page transfers with an empty internal store of 3 frames is: 
  • a)
    8
  • b)
    10
  • c)
    9
  • d)
    7
Correct answer is option 'C'. Can you explain this answer?

Aaditya Ghosh answered


Explanation:

Given:
- Pages: A, B, C, D, E in the order: A, B, C, D, A, B, E, A, B, C, D, E
- Page replacement algorithm: FIFO
- Internal store: 3 frames

Page Transfers Calculation:
- Initially the three frames are empty, so A, B and C each cause a page fault when loaded.
- When D arrives, it replaces A (the first page loaded, per FIFO).
- When A arrives again, it replaces B; when B arrives again, it replaces C.
- When E arrives, it replaces D; the following A and B are then hits.
- When C arrives, it replaces A; when D arrives, it replaces B; the final E is a hit.

Page Transfers (frames after each fault):
1. A, B, C (initial pages loaded — 3 transfers)
2. B, C, D (D replaces A — 4)
3. C, D, A (A replaces B — 5)
4. D, A, B (B replaces C — 6)
5. A, B, E (E replaces D — 7)
6. B, E, C (C replaces A — 8; the intervening A and B were hits)
7. E, C, D (D replaces B — 9; the final E is a hit)

Total Page Transfers: 9

Therefore, the correct answer is option (C): 9 page transfers with an empty internal store of 3 frames.
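The count can be checked with a short simulation. This is an illustrative sketch (the helper `fifo_faults` is written for this answer, not part of any standard library):

```python
from collections import deque

def fifo_faults(refs, num_frames):
    """Count page faults for a reference string under FIFO replacement."""
    frames = deque()              # oldest resident page sits at the left
    faults = 0
    for page in refs:
        if page not in frames:    # miss: a page transfer is needed
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()  # evict the page that was loaded first
            frames.append(page)
    return faults

print(fifo_faults(list("ABCDABEABCDE"), 3))   # 9
```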

A system uses FIFO policy for page replacement. It has 4 page frames with no pages loaded to begin with. The system first accesses 100 distinct pages in some order and then accesses the same 100 pages but now in the reverse order. How many page faults will occur?
  • a)
    196
  • b)
    192
  • c)
    197
  • d)
    195
Correct answer is option 'A'. Can you explain this answer?

Sudhir Patel answered
Given that page frame size = 4
As there are 100 distinct pages which are first accessed → 100 page faults
When it accesses the same 100 pages in reverse order, the last four pages loaded (97, 98, 99, 100) are still resident in the four frames, so only (100 − 4) = 96 of those accesses cause page faults.
Therefore, total number of page faults = 100 + (100 − 4) = 196.
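A quick simulation confirms the count (an illustrative sketch; `fifo_faults` is a helper written for this answer):

```python
from collections import deque

def fifo_faults(refs, num_frames):
    """Count page faults for a reference string under FIFO replacement."""
    frames, faults = deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()      # evict the page that was loaded first
            frames.append(page)
    return faults

n = 100
forward_then_reverse = list(range(1, n + 1)) + list(range(n, 0, -1))
print(fifo_faults(forward_then_reverse, 4))   # 196
```

The same simulation with n = 50 gives 96, matching the 50-page variant of this question that appears later in the set.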

The crew performed experiments such as pollinatary planets and faster computer chips price tag is
  • a)
    51 million dollars
  • b)
    52 million dollars
  • c)
    54 million dollars
  • d)
    56 million dollars
Correct answer is option 'D'. Can you explain this answer?

Mahesh Chavan answered
Answer:

This is a reading-comprehension question that has been misfiled under Virtual Memory: the price tag is a factual detail stated in the source passage, not something that can be derived by reasoning about the options. According to the answer key, the experiments performed by the crew carried a price tag of 56 million dollars, so option (d) is correct.

Recall that Belady’s anomaly is that the page-fault rate may increase as the number of allocated frames increases. Now, consider the following statements:
S1: Random page replacement algorithm (where a page chosen at random is replaced)
Suffers from Belady’s anomaly
S2: LRU page replacement algorithm suffers from Belady’s anomaly
Which of the following is CORRECT?
  • a)
    S1 is true, S2 is true
  • b)
    S1 is true, S2 is false
  • c)
    S1 is false, S2 is true
  • d)
    S1 is false, S2 is false
Correct answer is option 'B'. Can you explain this answer?

Dhruba Goyal answered
László Bélády was the computer scientist who described Belady's anomaly in page replacement algorithms: for certain algorithms and reference strings, increasing the number of page frames can actually increase the number of page faults.

S1: The random page replacement algorithm can suffer from Belady's anomaly. Because the victim page is chosen at random, the set of resident pages with k + 1 frames need not contain the set of resident pages with k frames, so reference strings exist for which more frames produce more faults. S1 is true.

S2: LRU does not suffer from Belady's anomaly. LRU is a stack algorithm: for any reference string, the pages held in k frames are always a subset of the pages held in k + 1 frames, so adding frames can never increase the fault count. S2 is false.

Belady's anomaly matters because it challenges the intuition that more memory always means fewer page faults; it is one reason stack algorithms such as LRU and the optimal algorithm are preferred in analysis. Hence option (B) is correct: S1 is true and S2 is false.

Removing of suspended process from memory to disk and their subsequent return is called
  • a)
    Swapping
  • b)
    Segmentation
  • c)
    I/O operation 
  • d)
    Replacement
Correct answer is option 'A'. Can you explain this answer?

Hridoy Datta answered
Swapping:
Swapping is the process of moving a suspended process from memory to disk to free up space in the main memory for other processes. When a process is suspended, it is temporarily removed from the main memory and stored on the disk until it is ready to resume execution. This allows the operating system to manage the limited memory resources efficiently.

Reason for Swapping:
- One of the main reasons for swapping is to prevent the main memory (RAM) from becoming full and causing the system to slow down or crash.
- Swapping helps in optimizing the overall performance of the system by allowing more processes to be executed concurrently without running out of memory.

Steps in Swapping:
- When a process is suspended, the operating system identifies a suitable location on the disk to store the process.
- The contents of the process, including its code, data, and stack, are transferred from the main memory to the disk.
- The operating system keeps track of the location of the process on the disk so that it can be easily retrieved when needed.

Benefits of Swapping:
- Swapping helps in increasing the overall system performance by efficiently managing memory resources.
- It allows the system to run more processes simultaneously without running out of memory.
- Swapping helps in preventing memory-related issues such as system crashes and slowdowns.
In conclusion, swapping plays a crucial role in the efficient management of memory resources in a computer system by temporarily moving suspended processes from memory to disk and then retrieving them when needed.

Consider a fully associative cache with 8 cache blocks (numbered 0-7) and the following sequence of memory block requests: 4, 3, 25, 8, 19, 6, 25, 8, 16, 35, 45, 22, 8, 3, 16, 25, 7 If LRU replacement policy is used, which cache block will have memory block 7? 
  • a)
    4
  • b)
    5
  • c)
    6
  • d)
    7
Correct answer is option 'B'. Can you explain this answer?

Number of cache blocks = 8 (numbered 0 to 7). Given the request sequence 4, 3, 25, 8, 19, 6, 25, 8, 16, 35, 45, 22, 8, 3, 16, 25, 7, the first eight distinct blocks fill the cache in order:
  • Blocks 0-7 hold: 4, 3, 25, 8, 19, 6, 16, 35 (25 and 8 repeat before the cache is full, so 16 and 35 go into blocks 6 and 7)
  • 45 replaces 4, the least recently used block: 45, 3, 25, 8, 19, 6, 16, 35
  • 22 replaces 3: 45, 22, 25, 8, 19, 6, 16, 35
  • 8 is a hit: 45, 22, 25, 8, 19, 6, 16, 35
  • 3 replaces 19: 45, 22, 25, 8, 3, 6, 16, 35
  • 16 and 25 are hits
  • 7 replaces 6: 45, 22, 25, 8, 3, 7, 16, 35 — memory block 7 lands in cache block 5
Therefore, the answer is (B).
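The placement can be verified with a small simulation that tracks which memory block sits in each cache block under LRU (an illustrative sketch; the helper is written for this answer):

```python
def lru_cache_blocks(requests, num_blocks):
    """Return the final contents of each cache block under LRU replacement."""
    blocks = [None] * num_blocks      # cache block index -> memory block
    last_used = {}                    # memory block -> last access time
    for t, req in enumerate(requests):
        if req in blocks:
            last_used[req] = t        # hit: just refresh recency
            continue
        if None in blocks:
            idx = blocks.index(None)  # fill the first empty cache block
        else:
            victim = min((b for b in blocks), key=lambda b: last_used[b])
            idx = blocks.index(victim)   # replace the LRU block in place
            del last_used[victim]
        blocks[idx] = req
        last_used[req] = t
    return blocks

reqs = [4, 3, 25, 8, 19, 6, 25, 8, 16, 35, 45, 22, 8, 3, 16, 25, 7]
final = lru_cache_blocks(reqs, 8)
print(final.index(7))   # 5
```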

A disk has 200 tracks (numbered 0 through 199). At a given time, it was servicing the request of reading data from track 120, and at the previous request, service was for track 90. The pending requests (in order of their arrival) are for track numbers 30, 70, 115, 130, 110, 80, 20, 25. How many times will the head change its direction for the disk scheduling policies SSTF (Shortest Seek Time First) and FCFS (First Come First Serve)?
  • a)
    2 and 3
  • b)
    3 and 3
  • c)
    3 and 4
  • d)
    4 and 4
Correct answer is option 'C'. Can you explain this answer?

Saptarshi Saha answered
Explanation:
To determine the number of times the head changes its direction for the SSTF and FCFS disk scheduling policies, we need to analyze the order in which the pending requests are serviced.

SSTF (Shortest Seek Time First):
The head moved from track 90 to track 120, so it is currently travelling in the increasing direction. SSTF always services the nearest pending request, giving the service order:
120 → 115 → 110 → 130 → 80 → 70 → 30 → 25 → 20
Direction changes:
- 120 → 115: the head reverses from increasing to decreasing (change 1)
- 110 → 130: decreasing to increasing (change 2)
- 130 → 80: increasing to decreasing (change 3)
Total for SSTF: 3 direction changes.

FCFS (First Come First Serve):
Requests are serviced strictly in arrival order:
120 → 30 → 70 → 115 → 130 → 110 → 80 → 20 → 25
Direction changes:
- 120 → 30: increasing to decreasing (change 1)
- 30 → 70: decreasing to increasing (change 2)
- 130 → 110: increasing to decreasing (change 3)
- 20 → 25: decreasing to increasing (change 4)
Total for FCFS: 4 direction changes.

Therefore, the correct answer is option (C): 3 direction changes for SSTF and 4 for FCFS.
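Both counts can be computed mechanically. This sketch builds the SSTF order greedily and counts sign flips in the head's movement (the helpers are illustrative, written for this answer):

```python
def direction_changes(start_prev, start, order):
    """Count head-direction reversals over a service order."""
    changes = 0
    prev, cur = start_prev, start
    for nxt in order:
        if (nxt - cur) * (cur - prev) < 0:   # sign flip = reversal
            changes += 1
        prev, cur = cur, nxt
    return changes

pending = [30, 70, 115, 130, 110, 80, 20, 25]

# SSTF: repeatedly service the pending request nearest the current position
sstf, pos, left = [], 120, pending[:]
while left:
    nxt = min(left, key=lambda t: abs(t - pos))
    sstf.append(nxt)
    left.remove(nxt)
    pos = nxt

print(sstf)                                  # [115, 110, 130, 80, 70, 30, 25, 20]
print(direction_changes(90, 120, sstf))      # 3
print(direction_changes(90, 120, pending))   # 4  (FCFS services in arrival order)
```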

Moving process from main memory to disk is called
  • a)
    scheduling
  • b)
    caching 
  • c)
    swapping 
  • d)
    spooling
Correct answer is option 'C'. Can you explain this answer?

  • Memory management is the functionality of an operating system which handles or manages primary memory and moves processes back and forth between main memory and disk during execution.
  • Swapping is a mechanism in which a process can be swapped temporarily out of main memory (or move) to secondary storage (disk) and make that memory available to other processes.

Page fault occurs when
  • a)
    The page is corrupted by application software
  • b)
    The page is in main memory
  • c)
    The page is not in main memory
  • d)
The program tries to divide a number by 0
Correct answer is option 'C'. Can you explain this answer?

A page fault occurs when the required page is not found in main memory. When the page is found in main memory it is called a page hit; otherwise it is a miss, and a page fault results.

In a paged memory, the page hit ratio is 0.35. The time required to access a page in secondary memory is equal to 100 ns. The time required to access a page in primary memory is 10 ns. The average time required to access a page is
  • a)
    3.0 ns
  • b)
    68.0 ns
  • c)
    68.5 ns
  • d)
    78.5 ns
Correct answer is option 'C'. Can you explain this answer?

Ishaan Saini answered
Hit ratio = 0.35
Time (secondary memory) = 100 ns
T(main memory) = 10 ns
Average access time = h(Tm) + (1 - h) (Ts)
= 0.35 x 10 +(0.65) x 100
= 3.5 + 65 
= 68.5 ns

Hence option (C) is correct


In a particular Unix OS, each data block is of size 1024 bytes, each node has 10 direct data block addresses and three additional addresses: one for single indirect block, one for double indirect block and one for triple indirect block. Also, each block can contain addresses for 128 blocks. Which one of the following is approximately the maximum size of a file in the file system? 
  • a)
    512 MB
  • b)
    2GB
  • c)
    8GB
  • d)
    16GB
Correct answer is option 'B'. Can you explain this answer?

Nishanth Mehta answered
The diagram is taken from Operating System Concept book.
Maximum file size = sum of the sizes of all the data blocks whose addresses belong to the file.
Given: size of 1 data block = 1024 bytes; number of addresses which 1 data block can contain = 128.
 
Now, Maximum File Size can be calculated as:
10 direct addresses of data blocks = 10*1024
1 single indirect data block = 128*1024
1 doubly indirect data block = 128*128*1024
1 triple indirect data block = 128*128*128*1024
Hence,
Max File Size = 10*1024 + 128*1024 + 128*128*1024 + 128*128*128*1024 Bytes
= 2113674*1024 Bytes
= 2.0157 GB ~ 2GB
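The arithmetic can be checked directly (a minimal sketch of the calculation above):

```python
block = 1024   # data block size in bytes
ptrs = 128     # block addresses that fit in one indirect block

# 10 direct blocks + single + double + triple indirect coverage
max_size = (10 + ptrs + ptrs**2 + ptrs**3) * block
print(max_size)                    # 2164402176 bytes
print(f"{max_size / 2**30:.2f}")   # 2.02 (GB), i.e. approximately 2 GB
```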

In process swapping in operating system, what is the residing location of swap space?
  • a)
    RAM
  • b)
    Disk
  • c)
    ROM
  • d)
    On-chip cache
Correct answer is option 'B'. Can you explain this answer?

Anmol Basu answered
Swap Space in Operating System

In an operating system, swap space is a designated area on a disk that is used to temporarily store data that cannot fit in the computer's physical memory (RAM). When the RAM becomes full, the operating system moves inactive pages of memory to the swap space, freeing up valuable RAM for other processes.

Residing Location of Swap Space

The residing location of swap space is on a disk. This means that the swap space is allocated on the hard disk or solid-state drive (SSD) connected to the computer.

Reasons for Storing Swap Space on Disk

There are several reasons why swap space is stored on a disk:

1. Capacity: Disks have much larger storage capacity compared to RAM. This allows the operating system to allocate a significant amount of swap space to handle memory requirements of various processes.

2. Persistence: Data stored in swap space is persistent, meaning it remains on the disk even if the computer is powered off. This ensures that the swapped-out pages can be retrieved when needed, even after a system restart.

3. Flexibility: Disk storage allows for flexibility in managing swap space. The operating system can dynamically allocate or deallocate disk space for swap as per the memory demands of running processes.

4. Efficiency: Disk storage is much slower than RAM, so accessing swapped-out pages incurs a performance penalty. However, paging out inactive processes is still far better than running out of physical memory, which can cause system instability or crashes.

5. Virtual Memory Management: Storing swap space on disk is an essential component of virtual memory management in modern operating systems. It enables the operating system to transparently manage memory allocation and paging, ensuring that processes can utilize more memory than physically available.

Conclusion

In summary, the residing location of swap space in the process swapping mechanism of an operating system is on a disk. This disk-based storage provides the necessary capacity, persistence, flexibility, and efficiency to handle memory demands and ensure smooth operation of the system.

A system uses FIFO policy for page replacement. It has 4 page frames with no pages loaded to begin with. The system first accesses 50 distinct pages in some order and then accesses the same 50 pages in reverse order. How many page faults will occur?
  • a)
    96
  • b)
    100
  • c)
    97
  • d)
    92
Correct answer is option 'A'. Can you explain this answer?

Sudhir Patel answered
Page frames = 4
Pages: 1, 2, 3, 4 .... 45, 46, 47, 48, 49, 50, 50(H), 49(H), 48(H), 47(H), 46, 45, .....4, 3, 2, 1
The first 50 page accesses each cause a page fault. When the same pages are accessed in reverse order, pages 50, 49, 48 and 47 are still resident in the four frames, so only the remaining 46 accesses fault.
Hence total page faults = 50 + 46 = 96.

The main function of shared memory is to
  • a)
    Use primary memory efficiently
  • b)
    Do intra process communication
  • c)
    Do inter process communication
  • d)
    None of these
Correct answer is option 'C'. Can you explain this answer?

Atharva Das answered
Shared memory is that memory that can be simultaneously accessed by multiple programs with an intent to provide communication among them or to avoid redundant copies. Shared memory is a way of interchanging data between programs.

The optimal page replacement algorithm will select the page that
  • a)
    Has not been used for the longest time in the past.
  • b)
    Will not be used for the longest time in the future.
  • c)
    Has been used least number of times.
  • d)
    Has been used most number of times.
Correct answer is option 'B'. Can you explain this answer?

The optimal page replacement algorithm selects the page whose next occurrence will be after the longest time in the future. For example, if we need to swap out a page and there are two candidates, one to be used 10 s from now and the other 5 s from now, then the algorithm will swap out the page that would be required 10 s later. Thus, B is the correct choice.

If the property of locality of reference is well pronounced in a program:
1. The number of page faults will be more.
2. The number of page faults will be less.
3. The number of page faults will remain the same.
4. Execution will be faster.
  • a)
    1 and 2
  • b)
    2 and 4
  • c)
    3 and 4
  • d)
    None of the above
Correct answer is option 'B'. Can you explain this answer?

Nisha Das answered
If the property of locality of reference is well pronounced, then the page to be accessed is more likely to be found in memory, and hence page faults will be fewer. Also, since accesses are mostly served by the upper level of the memory hierarchy, execution will be faster.

A processor uses 2-level page tables for virtual to physical address translation. Page tables for both levels are stored in the main memory. Virtual and physical addresses are both 32 bits wide. The memory is byte addressable. For virtual to physical address translation, the 10 most significant bits of the virtual address are used as index into the first level page table while the next 10 bits are used as index into the second level page table. The 12 least significant bits of the virtual address are used as offset within the page. Assume that the page table entries in both levels of page tables are 4 bytes wide. Further, the processor has a translation look-aside buffer (TLB), with a hit rate of 96%. The TLB caches recently used virtual page numbers and the corresponding physical page numbers. The processor also has a physically addressed cache with a hit rate of 90%. Main memory access time is 10 ns, cache access time is 1 ns, and TLB access time is also 1 ns. Assuming that no page faults occur, the average time taken to access a virtual address is approximately (to the nearest 0.5 ns)
  • a)
    1.5 ns
  • b)
    2 ns
  • c)
    3 ns
  • d)
    4 ns
Correct answer is option 'D'. Can you explain this answer?

The four possibilities and their access times (in ns) are:
- TLB hit, cache hit: 1 + 1 = 2
- TLB hit, cache miss: 1 + 1 + 10 = 12
- TLB miss, cache hit: 1 + 2 × 10 + 1 = 22
- TLB miss, cache miss: 1 + 2 × 10 + 1 + 10 = 32
Why 22 and 32? On a TLB miss, the lookup itself takes 1 ns, and then the two-level page table walk requires two main memory accesses of 10 ns each before the data access can proceed.
Average access time
= 0.96 × 0.9 × 2 + 0.96 × 0.1 × 12 + 0.04 × 0.9 × 22 + 0.04 × 0.1 × 32
= 1.728 + 1.152 + 0.792 + 0.128
= 3.8 ns ≈ 4 ns
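The same figure falls out of a short script that separates translation cost from data-access cost (a sketch of the arithmetic above; variable names are illustrative):

```python
tlb_hit, cache_hit = 0.96, 0.90
t_tlb, t_cache, t_mem = 1, 1, 10   # times in ns
levels = 2                         # page-table lookups on a TLB miss

# expected address-translation time
trans = tlb_hit * t_tlb + (1 - tlb_hit) * (t_tlb + levels * t_mem)
# expected data-access time
data = cache_hit * t_cache + (1 - cache_hit) * (t_cache + t_mem)

eat = trans + data
print(round(eat, 2))   # 3.8
```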

Consider a machine with 64 MB physical memory and a 32-bit virtual address space. If the page size is 4KB, what is the approximate size of the page table?
  • a)
    16 MB
  • b)
    8 MB
  • c)
    2 MB
  • d)
    24 MB
Correct answer is option 'C'. Can you explain this answer?

Vandana Desai answered
Solution:

Given: physical memory = 64 MB = 2^26 bytes, virtual address = 32 bits, page size = 4 KB = 2^12 bytes.

Number of page table entries = virtual address space / page size
= 2^32 / 2^12
= 2^20

Each entry must hold a frame number. Physical memory has 2^26 / 2^12 = 2^14 frames, so a frame number needs 14 bits, which rounds up to 2 bytes per entry (ignoring flag and protection bits).

Size of the page table = number of entries × size of each entry
= 2^20 × 2 bytes
= 2^21 bytes = 2 MB

Therefore, the approximate size of the page table is 2 MB (option C).
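The calculation can be sketched in a few lines (illustrative; it assumes entries are rounded up to whole bytes and carry no flag bits):

```python
vaddr_bits, page_bits = 32, 12
phys_bytes = 64 * 2**20                   # 64 MB physical memory

entries = 2 ** (vaddr_bits - page_bits)   # 2^20 pages in the virtual space
frames = phys_bytes // 2**page_bits       # 2^14 physical frames
frame_bits = frames.bit_length() - 1      # 14 bits per frame number
entry_bytes = (frame_bits + 7) // 8       # rounds up to 2 bytes

print(entries * entry_bytes // 2**20)     # 2  (MB)
```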

Suppose the time to service a page fault is on the average 10 milliseconds, while a memory access takes 1 microsecond. Then a 99.99% hit ratio results in average memory access time of 
  • a)
    1.9999 milliseconds
  • b)
    1 millisecond
  • c)
    9.999 microseconds
  • d)
    1.9999 microseconds
Correct answer is option 'D'. Can you explain this answer?

Arka Bajaj answered
When a page request arrives, the page table is searched first; if the page is present, it is fetched directly from memory, so the cost is just the memory access time. If the required page is not found, it must first be brought in from disk before the memory access can complete; this extra time is the page-fault service time. Let the hit ratio be p, the memory access time be t1, and the page-fault service time be t2.
Hence, average memory access time = p × t1 + (1 − p) × t2
= 0.9999 × 1 μs + 0.0001 × 10,000 μs = 0.9999 + 1 = 1.9999 μs

Consider a system with 1 K pages and 512 frames, where each page is of size 2 KB. How many bits are required to represent the virtual address space?
  • a)
    20 bits
  • b)
    21 bits
  • c)
    11 bits
  • d)
    None of these
Correct answer is option 'B'. Can you explain this answer?

Arpita Mehta answered
Virtual address space consists of pages.
Given that,
Number of pages = 1 K = 2^10
Page size = 2 KB = 2^11 B
Hence virtual address space
= 2^11 × 2^10
= 2^21 B
∴ 21 bits are required to represent the virtual address space.

Consider a fully-associative data cache with 32 blocks of 64 bytes each. The cache uses LRU (Least Recently Used) replacement. Consider the following C code to sum together all of the elements of a 64 by 64 two-dimensional array of 64-bit double-precision floating point numbers.
double sum(double A[64][64]) {
    int i, j;
    double sum = 0;
    for (i = 0; i < 64; i++)
        for (j = 0; j < 64; j++)
            sum += A[i][j];
    return sum;
}
Assume all blocks in the cache are initially invalid. How many cache misses will result from the code?
  • a)
    256
  • b)
    512
  • c)
    128
  • d)
    1024
Correct answer is option 'B'. Can you explain this answer?

Sudhir Patel answered
Least Recently Used (LRU) page replacement algorithm:
LRU policy follows the concept of locality of reference as the base for its page replacement decisions. LRU policy says that pages that have not been used for the longest period of time will probably not be used for a long time.
The given data:
Array size = 64 × 64 × size of each element = 64 × 64 × 8 bytes (each double is 64 bits = 8 bytes).
The data cache has 32 blocks of 64 bytes each, so one block holds 64 / 8 = 8 consecutive doubles.
The array is traversed row by row, which exactly matches its row-major memory layout. Accessing A[0][0] misses and loads one block; the next seven accesses, up through A[0][7], hit in that block. This pattern of one miss per 8 elements repeats across the whole array.
Number of misses = total bytes / block size = (64 × 64 × 8) / 64 = 512.
Hence the correct answer is 512.
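The miss count can be reproduced by simulating an LRU cache over the byte addresses the loop touches (an illustrative sketch; block and array parameters follow the question):

```python
from collections import OrderedDict

BLOCK, NBLOCKS, ELEM = 64, 32, 8       # block bytes, cache blocks, sizeof(double)

cache = OrderedDict()                   # block number -> None, in LRU order
misses = 0
for i in range(64):
    for j in range(64):
        addr = (i * 64 + j) * ELEM      # row-major byte address of A[i][j]
        blk = addr // BLOCK
        if blk in cache:
            cache.move_to_end(blk)      # hit: refresh LRU position
        else:
            misses += 1
            if len(cache) == NBLOCKS:
                cache.popitem(last=False)   # evict least recently used
            cache[blk] = None
print(misses)   # 512
```

Because the access pattern is purely sequential, every one of the 512 distinct blocks is missed exactly once and LRU eviction never hurts.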

Consider a demand paging system with four-page frames (initially empty) and LRU page replacement policy. For the following page reference string
7, 2, 7, 3, 2, 5, 3, 4, 6, 7, 7, 1, 5, 6, 1
the page fault rate, defined as the ratio of number of page faults to the number of memory accesses (rounded off to one decimal place) is ______.
Correct answer is '0.6'. Can you explain this answer?

Ashwini Ghosh answered
Page Fault Rate Calculation for a Demand Paging System with LRU Page Replacement Policy

To calculate the page fault rate for the given page reference string, we need to keep track of the number of page faults and the number of memory accesses. The page fault rate is then the number of page faults divided by the number of memory accesses.

Given:
Page reference string: 7, 2, 7, 3, 2, 5, 3, 4, 6, 7, 7, 1, 5, 6, 1
Number of page frames: 4 (initially empty)
Page replacement policy: LRU (Least Recently Used)

1. Initialize the page frames:
The four page frames are initially empty: [_, _, _, _]

2. Process the page reference string (F = page fault, H = hit; on a fault with full frames, the least recently used page is evicted):
- 7: F  [7]
- 2: F  [7, 2]
- 7: H  (7 becomes most recently used)
- 3: F  [7, 2, 3]
- 2: H
- 5: F  [7, 2, 3, 5]
- 3: H
- 4: F, evict 7 (least recently used)  → frames {2, 3, 5, 4}
- 6: F, evict 2  → {3, 5, 4, 6}
- 7: F, evict 5  → {3, 4, 6, 7}
- 7: H
- 1: F, evict 3  → {4, 6, 7, 1}
- 5: F, evict 4  → {6, 7, 1, 5}
- 6: H
- 1: H

3. Result:
Number of page faults = 9; number of memory accesses = 15.
Page fault rate = 9 / 15 = 0.6
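The trace can be double-checked with a small LRU simulator (an illustrative sketch; `lru_faults` is a helper written for this answer):

```python
def lru_faults(refs, num_frames):
    """Count page faults under LRU; least recently used page sits at index 0."""
    frames, faults = [], 0
    for page in refs:
        if page in frames:
            frames.remove(page)    # hit: move to most-recent position
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.pop(0)      # evict the least recently used page
        frames.append(page)
    return faults

refs = [7, 2, 7, 3, 2, 5, 3, 4, 6, 7, 7, 1, 5, 6, 1]
faults = lru_faults(refs, 4)
print(faults, faults / len(refs))   # 9 0.6
```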

For 64-bit virtual addresses, a 4 KB page size and 256 MB of RAM, an inverted page table requires
  • a)
    2^40 entries
  • b)
    2^52 entries
  • c)
    2^16 entries
  • d)
    2^78 entries
Correct answer is option 'C'. Can you explain this answer?

Mansi Shah answered
Virtual address space = 2^64 bytes
Page size = 4 KB = 2^12 bytes
Physical memory = 256 MB = 2^28 bytes
An inverted page table has one entry per physical frame, so:
Number of entries = number of frames in physical memory = 2^28 / 2^12 = 2^16
Hence option (C) is correct.

Which of the following is NOT an advantage of using shared, dynamically linked libraries as opposed to using statically linked libraries?
  • a)
    Smaller sizes of executable files
  • b)
    Lesser overall page fault rate in the system
  • c)
    Faster program startup
  • d)
    Existing programs need not be re-linked to take advantage of newer versions of libraries
Correct answer is option 'C'. Can you explain this answer?

Sagar Saha answered
With statically linked libraries, the library code is copied into the executable at link time: executables are larger, library pages cannot be shared across processes (raising the overall page fault rate), and existing programs must be re-linked to pick up newer library versions. Options (a), (b) and (d) are therefore genuine advantages of shared, dynamically linked libraries. Program startup, however, is slower with dynamic linking, because the loader must locate and bind the shared libraries at run time, whereas a statically linked executable has no run-time loading cost. Hence "faster program startup" is NOT an advantage, making (C) the correct answer.

Which of the following is not a form of memory?
  • a)
    instruction cache
  • b)
    instruction register
  • c)
    instruction opcode
  • d)
    translation lookaside buffer
Correct answer is option 'C'. Can you explain this answer?

Sonal Nair answered
- Instruction cache: used for storing instructions that are frequently used.
- Instruction register: the part of the CPU's control unit that stores the instruction currently being executed.
- Instruction opcode: the portion of a machine language instruction that specifies the operation to be performed — part of an instruction's encoding, not a storage element.
- Translation lookaside buffer: a memory cache that stores recent translations of virtual memory to physical addresses for faster access.
All of the above except the instruction opcode are forms of memory. Thus, C is the correct choice.

How many page faults will occur if the FIFO page replacement algorithm is used for the following reference string with three page frames?
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
  • a)
    17
  • b)
    16
  • c)
    14
  • d)
    15
Correct answer is option 'D'. Can you explain this answer?

    Arnab Kapoor answered
    Page Faults with FIFO Page Replacement Algorithm

    To determine the number of page faults that would occur using the FIFO (First-In, First-Out) page replacement algorithm for the given reference string and three-page frames, we need to simulate the process step by step.

    The reference string is as follows: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1

    Step 1: Initialize an empty set for the page frames and set the count of page faults to 0.

    Step 2: Iterate through the reference string and for each page:

    - Check whether the page is already present in the page frames.
    - If it is present, no fault occurs; move to the next page.
    - If it is not present, a page fault occurs: increment the fault count.
    - If an empty frame is available, load the page into it.
    - Otherwise, evict the page that has been in memory the longest (the one loaded first) and load the current page in its place.

    Step 3: Repeat Step 2 until all pages in the reference string are processed.

    Step 4: Count the total number of page faults that occurred during the simulation.

    Let's go through the simulation with the given reference string and three page frames:

    - Initially, the page frames are empty.
    - 7: miss, loaded into an empty frame. Frames: 7. (Faults: 1)
    - 0: miss, loaded into an empty frame. Frames: 7, 0. (Faults: 2)
    - 1: miss, loaded into an empty frame. Frames: 7, 0, 1. (Faults: 3)
    - 2: miss; all frames are full, so the oldest page (7) is replaced. Frames: 0, 1, 2. (Faults: 4)
    - 0: hit.
    - 3: miss; oldest page (0) is replaced. Frames: 1, 2, 3. (Faults: 5)
    - 0: miss; oldest page (1) is replaced. Frames: 2, 3, 0. (Faults: 6)
    - 4: miss; oldest page (2) is replaced. Frames: 3, 0, 4. (Faults: 7)
    - 2: miss; oldest page (3) is replaced. Frames: 0, 4, 2. (Faults: 8)
    - 3: miss; oldest page (0) is replaced. Frames: 4, 2, 3. (Faults: 9)
    - 0: miss; oldest page (4) is replaced. Frames: 2, 3, 0. (Faults: 10)
    - 3: hit.
    - 2: hit.
    - 1: miss; oldest page (2) is replaced. Frames: 3, 0, 1. (Faults: 11)
    - 2: miss; oldest page (3) is replaced. Frames: 0, 1, 2. (Faults: 12)
    - 0: hit.
    - 1: hit.
    - 7: miss; oldest page (0) is replaced. Frames: 1, 2, 7. (Faults: 13)
    - 0: miss; oldest page (1) is replaced. Frames: 2, 7, 0. (Faults: 14)
    - 1: miss; oldest page (2) is replaced. Frames: 7, 0, 1. (Faults: 15)

    The total is 15 page faults, so option D is correct.
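The trace above can be cross-checked with a short simulation. The sketch below (plain Python, queue-based FIFO, with a hypothetical helper name `fifo_page_faults`) is illustrative, not part of the original answer:

```python
from collections import deque

def fifo_page_faults(refs, frames):
    """Count page faults under FIFO replacement with a fixed frame count."""
    memory = deque()          # oldest page sits at the left end
    faults = 0
    for page in refs:
        if page in memory:
            continue          # hit: FIFO does not reorder pages on access
        faults += 1
        if len(memory) == frames:
            memory.popleft()  # evict the page that was loaded earliest
        memory.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_page_faults(refs, 3))  # → 15
```

Note that a fault is counted on every miss, whether the page goes into an empty frame or replaces an old one.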

    The size of the virtual memory depends on the size of the
    • a)
      Data bus
    • b)
      Main memory
    • c)
      Address bus
    • d)
      None of these
    Correct answer is option 'C'. Can you explain this answer?

    Prerna Joshi answered
The size of the virtual memory depends on the width of the address bus: the processor can generate only as many distinct virtual addresses as the address bus can carry, so an n-bit address bus gives a virtual address space of 2^n locations.
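A quick sketch of that relationship, assuming a byte-addressable machine where each address line carries one bit of the address (`virtual_space_bytes` is a name chosen here for illustration):

```python
def virtual_space_bytes(address_bits):
    """Size of the virtual address space for a byte-addressable machine."""
    return 2 ** address_bits

print(virtual_space_bytes(16))  # → 65536 (64 KiB)
print(virtual_space_bytes(32))  # → 4294967296 (4 GiB)
```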

    Overlay is
    • a)
      A part of an operating system
    • b)
      A specific memory location
    • c)
      A single contiguous memory that was used in the olden days for running large programs by swapping
    • d)
      Overloading the system with many user files 
    Correct answer is option 'C'. Can you explain this answer?

    Tanishq Malik answered
Overlay means "the process of transferring a block of program code or other data into internal memory, replacing what is already stored". Overlaying is a programming technique that allows a program to be larger than the computer's main memory: only the portion needed at the moment is kept in memory, and other portions are brought in from disk when required, overwriting what was there before.

    The storage area of a disk has innermost diameter of 10 cm and outermost diameter of 20 cm. The maximum storage density of the disk is 1400bits/cm. The disk rotates at a speed of 4200 RPM. The main memory of a computer has 64-bit word length and 1µs cycle time. If cycle stealing is used for data transfer from the disk, the percentage of memory cycles stolen for transferring one word is 
    • a)
      0.5%
    • b)
      1%
    • c)
      5%
    • d)
      10%
    Correct answer is option 'C'. Can you explain this answer?

    Subhankar Shah answered
Innermost diameter = 10 cm; maximum storage density = 1400 bits/cm (density is highest on the innermost track).
Capacity of a track = π × diameter × density ≈ 3.14 × 10 × 1400 = 43960 bits.
The disk rotates at 4200 RPM = 4200/60 = 70 rotations per second, so the disk delivers 43960 × 70 ≈ 3.08 × 10⁶ bits per second.
Main memory transfers one 64-bit word per 1 µs cycle, i.e. 64 × 10⁶ bits per second.
Fraction of memory cycles stolen = (3.08 × 10⁶) / (64 × 10⁶) ≈ 0.048 ≈ 5%.

Thus, option (C) is correct.
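The arithmetic can be reproduced in a few lines of Python (variable names are my own; `math.pi` gives a slightly larger track capacity than the π ≈ 3.14 used above, but the final percentage is the same):

```python
import math

density = 1400                  # bits/cm, maximum (innermost track)
inner_diameter = 10             # cm
track_bits = math.pi * inner_diameter * density   # ≈ 43982; with π ≈ 3.14, 43960

rotations_per_sec = 4200 / 60   # 4200 RPM → 70 rotations per second
disk_rate = track_bits * rotations_per_sec        # bits read per second

word_bits = 64                  # one memory word per cycle
cycle_time = 1e-6               # 1 µs memory cycle time
memory_rate = word_bits / cycle_time              # 64e6 bits per second

print(f"cycles stolen: {disk_rate / memory_rate:.1%}")  # → cycles stolen: 4.8%
```

Roughly 4.8% of the memory cycles are stolen, matching the 5% answer.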
     

    In which one of the following page replacement algorithms it is possible for the page fault rate to increase even when the number of allocated frames increases?
    • a)
      LRU (Least Recently Used)
    • b)
      OPT (Optimal Page Replacement)
    • c)
      MRU (Most Recently Used)
    • d)
      FIFO (First In First Out)
    Correct answer is option 'D'. Can you explain this answer?

In some situations FIFO page replacement gives more page faults when the number of page frames is increased. This situation is called Belady's anomaly: it is possible to have more page faults with more page frames while using the First In First Out (FIFO) page replacement algorithm. For example, for the reference string 3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4 with 3 frames we get 9 page faults, but if we increase the frames to 4, we get 10 page faults.
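The anomaly in the example above can be reproduced with a small FIFO simulator (a sketch; `fifo_faults` is a name chosen here, not from the original answer):

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement."""
    mem, faults = deque(), 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.popleft()   # evict the page loaded earliest
            mem.append(p)
    return faults

refs = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(fifo_faults(refs, 3))  # → 9
print(fifo_faults(refs, 4))  # → 10  (more frames, yet more faults)
```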

Consider a computer system with ten physical page frames. The system is provided with an access sequence (a1, a2, ..., a20, a1, a2, ..., a20), where each ai is a distinct virtual page number. The difference in the number of page faults between the last-in-first-out page replacement policy and the optimal page replacement policy is __________
    [Note that this question was originally Fill-in-the-Blanks question]
    • a)
      0
    • b)
      1
    • c)
      2
    • d)
      3
    Correct answer is option 'B'. Can you explain this answer?

    Avantika Yadav answered
    Introduction:
    In this question, we are given an access sequence of 20 page references, and we need to compare the performance of the Last-In-First-Out (LIFO) page replacement policy with the Optimal page replacement policy. We are asked to find the difference in the number of page faults between these two policies.

    Explanation:
    To solve this question, let's first understand the working of the LIFO and Optimal page replacement policies.

    Last-In-First-Out (LIFO) Page Replacement Policy:
    In the LIFO policy, the page that was brought into memory most recently is the one replaced. This policy does not consider the future access pattern; it looks only at the order in which pages were loaded.

    Optimal Page Replacement Policy:
    The Optimal policy replaces the page that will not be used for the longest period of time in the future. This policy requires knowledge of the future access pattern, which is not usually available in real-time scenarios. However, for the purpose of this question, we are assuming that the future access pattern is known.

    Now, let's analyze the given access sequence and compare the two policies.

    Access Sequence: a1, a2, ..., a20, a1, a2, ..., a20

    Page Faults with LIFO Policy:
    During the first pass (a1, ..., a20), the first 10 references fill the 10 frames with a1–a10 (10 faults). Each of a11, ..., a20 then misses and, under LIFO, evicts the page that was loaded most recently: a11 replaces a10, a12 replaces a11, and so on (10 more faults). After the first pass, memory holds a1–a9 and a20.
    During the second pass, a1–a9 are hits, while a10, a11, ..., a20 each miss and again evict the most recently loaded page (11 more faults).
    Number of page faults with LIFO = 10 + 10 + 11 = 31

    Page Faults with Optimal Policy:
    During the first pass, the first 10 references are compulsory faults. For each of a11, ..., a20, the page in memory whose next use lies farthest in the future is exactly the page loaded most recently, so the optimal policy evicts the same pages LIFO does, leaving a1–a9 and a20 in memory after 20 faults.
    During the second pass, a1–a9 are hits. References a10, ..., a19 each miss (10 faults), but since a1–a9 will never be used again, the optimal policy evicts them rather than a20; the final reference a20 is therefore a hit.
    Number of page faults with Optimal = 10 + 10 + 10 = 30

    Difference in Page Faults:
    Difference = Number of page faults with LIFO - Number of page faults with Optimal
    Difference = 31 - 30 = 1

    Conclusion:
    The correct answer is option 'B': the difference in the number of page faults between the LIFO and optimal page replacement policies is 1.
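Both policies can be simulated to confirm the counts. The sketch below uses hypothetical helper names (`lifo_faults`, `optimal_faults`) and represents the pages a1–a20 as the integers 1–20:

```python
def lifo_faults(refs, frames):
    """LIFO: on a miss with memory full, evict the most recently loaded page."""
    mem, faults = [], 0          # list ordered by load time
    for p in refs:
        if p in mem:
            continue
        faults += 1
        if len(mem) == frames:
            mem.pop()            # evict the last-loaded page
        mem.append(p)
    return faults

def optimal_faults(refs, frames):
    """Optimal: evict the page whose next use lies farthest in the future."""
    mem, faults = set(), 0
    for i, p in enumerate(refs):
        if p in mem:
            continue
        faults += 1
        if len(mem) == frames:
            def next_use(q):
                # index of q's next reference, or infinity if never used again
                for j in range(i + 1, len(refs)):
                    if refs[j] == q:
                        return j
                return float("inf")
            mem.remove(max(mem, key=next_use))
        mem.add(p)
    return faults

refs = list(range(1, 21)) * 2    # a1..a20, a1..a20
print(lifo_faults(refs, 10))     # → 31
print(optimal_faults(refs, 10))  # → 30
print(lifo_faults(refs, 10) - optimal_faults(refs, 10))  # → 1
```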

    Chapter doubts & questions for Virtual Memory - Operating System 2025 is part of Computer Science Engineering (CSE) exam preparation. The chapters have been prepared according to the Computer Science Engineering (CSE) exam syllabus. The Chapter doubts & questions, notes, tests & MCQs are made for Computer Science Engineering (CSE) 2025 Exam. Find important definitions, questions, notes, meanings, examples, exercises, MCQs and online tests here.
