Memory Management & Virtual Memory | Operating System - Computer Science Engineering (CSE) PDF Download

Memory Management | Partition Allocation Method

In an operating system, the following are four common memory management techniques.

Single contiguous allocation: The simplest allocation method, used by MS-DOS. All memory (except some reserved for the OS) is available to a single process.

Partitioned allocation: Memory is divided into different blocks (partitions).

Paged memory management: Memory is divided into fixed-size units called page frames; this scheme is used in virtual memory environments.

Segmented memory management: Memory is divided into different segments (a segment is a logical grouping of a process's data or code). With this scheme, allocated memory doesn't have to be contiguous.

Most operating systems (for example, Windows and Linux) use segmentation with paging: a process is divided into segments, and individual segments are made up of pages.

In partition allocation, when more than one free partition can accommodate a process's request, one partition must be selected; choosing it requires a partition allocation method. A partition allocation method is considered better if it avoids internal fragmentation.

Below are the various partition allocation schemes:

1. First Fit: Allocate the process to the first partition, searching from the top of main memory, that is large enough.

2. Best Fit: Allocate the process to the smallest partition among the free partitions that is large enough.

3. Worst Fit: Allocate the process to the largest partition among the free partitions that is large enough.

4. Next Fit: Similar to first fit, but the search for a sufficiently large partition starts from the point of the last allocation.

Is Best-Fit really best?

Although best fit minimizes wasted space, it consumes a lot of processor time searching for the block closest to the required size. Best fit may also perform worse than other algorithms in some cases; see the exercise below.

Exercise: Consider requests from processes in the given order: 300K, 25K, 125K and 50K. Let there be two blocks of memory available: one of size 150K followed by one of size 350K.
Which of the following partition allocation schemes can satisfy the above requests?
A) Best fit but not first fit.
B) First fit but not best fit.
C) Both First fit & Best fit.
D) Neither first fit nor best fit.

Solution: Let us try all options.
Best Fit:
300K is allocated from the block of size 350K; 50K is left in the block.
25K is allocated from the remaining 50K block; 25K is left.
125K is allocated from the 150K block; 25K is left in this block as well.
50K can't be allocated even though 25K + 25K of space is available.

First Fit:
The 300K request is allocated from the 350K block; 50K is left over.
25K is allocated from the 150K block; 125K is left over.
Then 125K and 50K are allocated to the remaining left-over partitions.
So first fit can handle all the requests.

So option B is the correct choice.
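The exercise can also be checked mechanically. Below is a minimal sketch of first fit and best fit (not production allocator code), assuming each satisfied request simply shrinks the chosen free block by the requested amount:

```python
def allocate(blocks, requests, strategy):
    """Try to satisfy every request; return True if all succeed."""
    holes = list(blocks)
    for req in requests:
        candidates = [i for i, h in enumerate(holes) if h >= req]
        if not candidates:
            return False
        if strategy == "first":
            i = candidates[0]                            # first sufficient hole
        else:  # "best"
            i = min(candidates, key=lambda j: holes[j])  # smallest sufficient hole
        holes[i] -= req                                  # shrink the hole in place
    return True

blocks = [150, 350]            # 150K block followed by 350K block
requests = [300, 25, 125, 50]

print(allocate(blocks, requests, "first"))  # True  -> first fit succeeds
print(allocate(blocks, requests, "best"))   # False -> best fit fails on 50K
```

Best fit fails exactly as in the walkthrough: the 25K request eats into the 50K remainder, leaving two 25K holes that cannot hold the final 50K request.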

Page Replacement Algorithms

In an operating system that uses paging for memory management, a page replacement algorithm is needed to decide which page to replace when a new page comes in. Whenever a new page is referenced and is not present in memory, a page fault occurs.

Page Fault

A page fault is a type of interrupt, raised by the hardware when a running program accesses a memory page that is mapped into the virtual address space, but not loaded in physical memory.

Page Replacement Algorithms

First In First Out

This is the simplest page replacement algorithm. The operating system keeps track of all pages in memory in a queue, with the oldest page at the front. When a page needs to be replaced, the page at the front of the queue is selected for removal.

For example, consider page reference string 1, 3, 0, 3, 5, 6 and 3 page slots.

Initially all slots are empty, so when 1, 3, 0 come they are allocated to the empty slots —> 3 Page Faults.
When 3 comes, it is already in memory —> 0 Page Faults.
Then 5 comes; it is not in memory, so it replaces the oldest page, i.e. 1 —> 1 Page Fault.
Finally 6 comes; it is also not in memory, so it replaces the oldest page, i.e. 3 —> 1 Page Fault.
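The walkthrough above can be reproduced with a small FIFO simulator (a minimal sketch, not an OS implementation):

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement."""
    memory = deque()              # front of the queue = oldest page
    faults = 0
    for page in refs:
        if page not in memory:    # page fault
            faults += 1
            if len(memory) == frames:
                memory.popleft()  # evict the oldest page
            memory.append(page)
    return faults

print(fifo_faults([1, 3, 0, 3, 5, 6], 3))  # 5 page faults in total
```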

Belady’s anomaly

Belady's anomaly shows that it is possible to have more page faults when increasing the number of page frames while using the First In First Out (FIFO) page replacement algorithm. For example, for the reference string 3 2 1 0 3 2 4 3 2 1 0 4 with 3 slots we get 9 total page faults, but if we increase the number of slots to 4, we get 10 page faults.
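A quick simulation confirms both counts (the simulator below is a minimal sketch of FIFO replacement):

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement."""
    memory = deque()
    faults = 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.popleft()  # evict the oldest page
            memory.append(page)
    return faults

refs = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(fifo_faults(refs, 3))  # 9 faults with 3 frames
print(fifo_faults(refs, 4))  # 10 faults with 4 frames: Belady's anomaly
```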

Optimal Page replacement

In this algorithm, the page replaced is the one that will not be used for the longest duration of time in the future.

Let us consider page reference string 7 0 1 2 0 3 0 4 2 3 0 3 2 and 4 page slots.

Initially all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots —> 4 Page faults
0 is already there so —> 0 Page fault.
When 3 comes, it takes the place of 7 because 7 is not used for the longest duration of time in the future —> 1 Page fault.
0 is already there so —> 0 Page fault.
4 takes the place of 1 —> 1 Page Fault.

For the rest of the reference string —> 0 Page faults, because the referenced pages are already in memory.

Optimal page replacement is perfect but not possible in practice, as the operating system cannot know future requests. Its use is to set up a benchmark against which other replacement algorithms can be analyzed.
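A sketch of the optimal policy: on a fault with all slots full, evict the resident page whose next use lies farthest in the future (pages never used again are evicted first). This works only because the whole reference string is known in advance:

```python
def optimal_faults(refs, frames):
    """Count page faults under the optimal (farthest-future-use) policy."""
    memory = []
    faults = 0
    for i, page in enumerate(refs):
        if page in memory:
            continue                      # hit: nothing to do
        faults += 1
        if len(memory) == frames:
            def next_use(p):
                # Index of p's next reference; infinity if never used again.
                try:
                    return refs.index(p, i + 1)
                except ValueError:
                    return float("inf")
            memory.remove(max(memory, key=next_use))
        memory.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(optimal_faults(refs, 4))  # 6 page faults, matching the walkthrough
```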

Least Recently Used

In this algorithm, the page replaced is the one that was least recently used.

Let's say the page reference string is 7 0 1 2 0 3 0 4 2 3 0 3 2 and initially we have 4 empty page slots.
When 7, 0, 1, 2 come they are allocated to the empty slots —> 4 Page faults
0 is already there so —> 0 Page fault.
When 3 comes, it takes the place of 7 because 7 is the least recently used —> 1 Page fault
0 is already in memory so —> 0 Page fault.
4 takes the place of 1 —> 1 Page Fault
For the rest of the reference string —> 0 Page faults, because the referenced pages are already in memory.
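A minimal LRU simulator: keep resident pages ordered by recency and evict the least recently used one on a fault with all frames full.

```python
def lru_faults(refs, frames):
    """Count page faults under LRU replacement."""
    memory = []                   # memory[0] = least recently used
    faults = 0
    for page in refs:
        if page in memory:
            memory.remove(page)   # refresh recency on a hit
        else:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)     # evict the LRU page
        memory.append(page)       # most recently used goes to the back
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(lru_faults(refs, 4))  # 6 page faults, same as optimal on this string
```

On this particular reference string LRU happens to match the optimal count; in general it does not.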

Virtual Memory

Virtual memory is a storage allocation scheme in which secondary memory can be addressed as though it were part of main memory. The addresses a program may use to reference memory are distinguished from the addresses the memory system uses to identify physical storage sites, and program-generated addresses are translated automatically into the corresponding machine addresses.
The size of virtual storage is limited by the addressing scheme of the computer system and by the amount of secondary memory available, not by the actual number of main storage locations.

It is a technique that is implemented using both hardware and software. It maps memory addresses used by a program, called virtual addresses, into physical addresses in computer memory.

  1. All memory references within a process are logical addresses that are dynamically translated into physical addresses at run time. This means that a process can be swapped in and out of main memory such that it occupies different places in main memory at different times during the course of execution.
  2. A process may be broken into a number of pieces, and these pieces need not be contiguously located in main memory during execution. The combination of dynamic run-time address translation and the use of a page or segment table permits this.

If these characteristics are present, it is not necessary that all the pages or segments be present in main memory during execution: a required page is loaded into memory only when it is needed. Virtual memory is implemented using demand paging or demand segmentation.

Demand Paging :

The process of loading a page into memory on demand (whenever a page fault occurs) is known as demand paging.
The process includes the following steps:


  1. If the CPU tries to refer to a page that is currently not in main memory, it generates an interrupt indicating a memory access fault.
  2. The OS puts the interrupted process in a blocked state. For execution to proceed, the OS must bring the required page into memory.
  3. The OS searches for the required page in the logical address space (i.e., on secondary storage).
  4. The required page is brought from the logical address space into the physical address space. Page replacement algorithms are used to decide which page in the physical address space to replace.
  5. The page table is updated accordingly.
  6. A signal is sent to the CPU to continue program execution, and the process is placed back in the ready state.

Hence whenever a page fault occurs these steps are followed by the operating system and the required page is brought into memory.
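The six steps can be modeled with a toy pager. All names below are illustrative (this is not a kernel API), and a FIFO victim policy stands in for whatever replacement algorithm the OS actually uses:

```python
from collections import deque

class DemandPager:
    """Toy model of demand paging: pages get frames only on first access."""

    def __init__(self, num_frames):
        self.page_table = {}                 # page -> frame mapping
        self.free_frames = list(range(num_frames))
        self.loaded_order = deque()          # FIFO queue of resident pages
        self.faults = 0

    def access(self, page):
        if page in self.page_table:
            return self.page_table[page]     # resident: no fault
        self.faults += 1                     # page fault: OS must intervene
        if self.free_frames:
            frame = self.free_frames.pop()
        else:
            victim = self.loaded_order.popleft()   # pick a page to replace
            frame = self.page_table.pop(victim)
        self.page_table[page] = frame        # update the page table
        self.loaded_order.append(page)
        return frame                         # execution continues

pager = DemandPager(num_frames=2)
for p in [0, 1, 0, 2, 1]:
    pager.access(p)
print(pager.faults)  # 3 faults: pages 0 and 1 loaded, then 2 evicts 0
```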

Advantages :

  • More processes may be maintained in the main memory: Because we are going to load only some of the pages of any particular process, there is room for more processes. This leads to more efficient utilization of the processor because it is more likely that at least one of the more numerous processes will be in the ready state at any particular time.
  • A process may be larger than all of main memory: One of the most fundamental restrictions in programming is lifted. A process larger than the main memory can be executed because of demand paging. The OS itself loads pages of a process in main memory as required.
  • It allows greater multiprogramming levels by using less of the available (primary) memory for each process.

Page Fault Service Time :

The time taken to service a page fault is called the page fault service time; it includes the time taken to perform all six steps above.

Let main memory access time = m,
page fault service time = s,
and page fault rate = p.
Then, Effective memory access time = (p*s) + (1-p)*m
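For example, with assumed values m = 100 ns, s = 8 ms and p = 0.001 (these numbers are illustrative, not from the text):

```python
def effective_access_time(p, s, m):
    # Effective memory access time = p*s + (1-p)*m
    return p * s + (1 - p) * m

m = 100          # main memory access time, in ns
s = 8_000_000    # page fault service time: 8 ms expressed in ns
p = 0.001        # page fault rate (1 fault per 1000 accesses)

print(effective_access_time(p, s, m))  # ~8099.9 ns
```

Even a tiny fault rate dominates the effective access time, which is why keeping the fault rate low matters so much.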

Swapping:

Swapping a process out means removing all of its pages from memory, or marking them so that they will be removed by the normal page replacement process. Suspending a process ensures that it is not runnable while it is swapped out. At some later time, the system swaps the process back from secondary storage to main memory. When a process spends most of its time swapping pages in and out, the situation is called thrashing.


Thrashing :


At any given time, only a few pages of any process are in main memory, and therefore more processes can be maintained in memory. Furthermore, time is saved because unused pages are not swapped in and out of memory. However, the OS must be clever about how it manages this scheme. In the steady state, practically all of main memory will be occupied with process pages, so that the processor and OS have direct access to as many processes as possible. Thus, when the OS brings one page in, it must throw another out. If it throws out a page just before it is used, it will have to fetch that page again almost immediately. Too much of this leads to a condition called thrashing: the system spends most of its time swapping pages rather than executing instructions. So a good page replacement algorithm is required.

Initially, as the degree of multiprogramming increases up to a certain point (λ), CPU utilization is very high and system resources are utilized close to 100%. But if the degree of multiprogramming is increased further, CPU utilization falls drastically: the system spends more of its time on page replacement, and the time taken to complete the execution of each process increases. This situation is called thrashing.

Causes of Thrashing :

  1. High degree of multiprogramming: If the number of processes in memory keeps increasing, the number of frames allocated to each process decreases. With fewer frames per process, page faults occur more frequently, more CPU time is wasted just swapping pages in and out, and utilization keeps decreasing.
    For example:
    Let free frames = 400
    Case 1: Number of processes = 100
    Then, each process will get 4 frames.
    Case 2: Number of processes = 400
    Each process will get 1 frame.
    Case 2 is a condition of thrashing: as the number of processes increases, frames per process decrease, and CPU time is consumed just swapping pages.
  2. Lack of frames: If a process has too few frames, fewer of its pages can reside in memory, so more frequent swapping in and out is required. This may lead to thrashing. Hence, a sufficient number of frames must be allocated to each process to prevent thrashing.

Recovery of Thrashing :

  • Do not allow the system to go into thrashing by instructing the long-term scheduler not to bring more processes into memory once a threshold is reached.
  • If the system is already thrashing, instruct the mid-term scheduler to suspend some of the processes so that the system can recover from thrashing.

Paging

Paging is a memory management scheme that eliminates the need for contiguous allocation of physical memory. This scheme permits the physical address space of a process to be non-contiguous.

  • Logical Address or Virtual Address (represented in bits): An address generated by the CPU
  • Logical Address Space or Virtual Address Space( represented in words or bytes): The set of all logical addresses generated by a program
  • Physical Address (represented in bits): An address actually available on memory unit
  • Physical Address Space (represented in words or bytes): The set of all physical addresses corresponding to the logical addresses

Example:

  • If Logical Address = 31 bits, then Logical Address Space = 2^31 words = 2 G words (1 G = 2^30)
  • If Logical Address Space = 128 M words = 2^7 * 2^20 = 2^27 words, then Logical Address = log2(2^27) = 27 bits
  • If Physical Address = 22 bits, then Physical Address Space = 2^22 words = 4 M words (1 M = 2^20)
  • If Physical Address Space = 16 M words = 2^4 * 2^20 = 2^24 words, then Physical Address = log2(2^24) = 24 bits
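The four bullet points reduce to two relations: an n-bit address can name 2^n words, and a space of 2^n words needs n address bits. In code:

```python
import math

def space_words(address_bits):
    """Number of addressable words for a given address width."""
    return 2 ** address_bits

def address_bits(space_in_words):
    """Address width needed for a power-of-two space of words."""
    return int(math.log2(space_in_words))

print(space_words(31) == 2 * 2**30)   # True: 31-bit address -> 2 G words
print(address_bits(128 * 2**20))      # 27 bits for 128 M words
print(space_words(22) == 4 * 2**20)   # True: 22-bit address -> 4 M words
print(address_bits(16 * 2**20))       # 24 bits for 16 M words
```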

The mapping from virtual to physical address is done by the memory management unit (MMU) which is a hardware device and this mapping is known as paging technique.

  • The Physical Address Space is conceptually divided into a number of fixed-size blocks, called frames.
  • The Logical Address Space is also split into fixed-size blocks, called pages.
  • Page Size = Frame Size

Let us consider an example:

  • Physical Address = 12 bits, then Physical Address Space = 2^12 words = 4 K words
  • Logical Address = 13 bits, then Logical Address Space = 2^13 words = 8 K words
  • Page size = frame size = 1 K words (assumption)


The address generated by the CPU is divided into:

  • Page number (p): the number of bits required to represent a page number, i.e. to index the pages of the Logical Address Space.
  • Page offset (d): the number of bits required to represent a particular word within a page, i.e. the page size of the Logical Address Space.

The physical address is divided into:

  • Frame number (f): the number of bits required to represent a frame number, i.e. to index the frames of the Physical Address Space.
  • Frame offset (d): the number of bits required to represent a particular word within a frame, i.e. the frame size of the Physical Address Space.
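Using the running example (13-bit logical address, 1 K-word pages, so d takes 10 bits and p the remaining 3), the split is two bit operations:

```python
PAGE_OFFSET_BITS = 10                 # 1 K words = 2**10 words per page

def split_address(addr):
    """Split a logical address into (page number, page offset)."""
    p = addr >> PAGE_OFFSET_BITS               # high-order bits
    d = addr & ((1 << PAGE_OFFSET_BITS) - 1)   # low-order 10 bits
    return p, d

addr = 5 * 1024 + 100        # word 100 of page 5
print(split_address(addr))   # (5, 100)
```

The MMU then replaces p with the frame number f from the page table and keeps d unchanged.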

The hardware implementation of the page table can be done with dedicated registers, but registers are satisfactory only if the page table is small. If the page table contains a large number of entries, we can use a TLB (Translation Look-aside Buffer), a special, small, fast-lookup hardware cache.

  • The TLB is an associative, high-speed memory.
  • Each entry in the TLB consists of two parts: a tag and a value.
  • When this memory is used, an item is compared with all tags simultaneously; if the item is found, the corresponding value is returned.


Main memory access time = m
If the page table is kept in main memory,
Effective access time = m (to access the page table) + m (to access the word in the page) = 2m


Segmentation

A memory management technique in which memory is divided into variable-sized chunks, which can be allocated to processes. Each chunk is called a segment.

A table that stores the information about all such segments is called a segment table.

Segment Table: It maps a two-dimensional logical address (segment number, segment offset) into a one-dimensional physical address.

Each segment table entry has:

  • Base Address: It contains the starting physical address where the segment resides in memory.
  • Limit: It specifies the length of the segment.


Translation of Two dimensional Logical Address to one dimensional Physical Address.


Address generated by the CPU is divided into:

  • Segment number (s): the number of bits required to represent the segment number.
  • Segment offset (d): the number of bits required to represent the offset within the segment (i.e. the segment's size).
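The translation itself is an addition plus a bounds check: physical address = base + offset, trapping when the offset reaches the segment's limit. The base/limit values below are made-up example numbers:

```python
# Example segment table; base and limit values are illustrative only.
segment_table = [
    {"base": 1400, "limit": 1000},   # segment 0
    {"base": 6300, "limit": 400},    # segment 1
]

def translate(s, d):
    """Map a (segment, offset) logical address to a physical address."""
    entry = segment_table[s]
    if d >= entry["limit"]:
        # Offset past the end of the segment: the hardware would trap here.
        raise ValueError("segmentation fault: offset out of range")
    return entry["base"] + d

print(translate(1, 53))   # 6353
```

The limit check is what gives segmentation its memory protection: a process cannot address past the end of its own segment.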

Advantages of Segmentation:

  • No Internal fragmentation.
  • Segment Table consumes less space in comparison to Page table in paging.

Disadvantage of Segmentation:

  • As processes are loaded and removed from the memory, the free memory space is broken into little pieces, causing External fragmentation.
FAQs on Memory Management & Virtual Memory - Operating System - Computer Science Engineering (CSE)

1. What is memory management in computer science engineering?
Ans. Memory management in computer science engineering refers to the process of managing and organizing computer memory effectively. It includes tasks such as allocation and deallocation of memory space to different programs and processes, ensuring efficient utilization of memory resources, and preventing memory-related errors or conflicts.
2. What is virtual memory and how does it work?
Ans. Virtual memory is a memory management technique where the operating system uses a combination of physical memory (RAM) and secondary storage (usually hard disk) to provide the illusion of a larger, contiguous memory space to programs. It works by dividing the virtual address space of a process into fixed-size blocks called pages, which are then mapped to physical memory or stored in the disk when not in use. This allows programs to utilize more memory than physically available and enables efficient memory allocation.
3. What are the advantages of virtual memory?
Ans. Virtual memory offers several advantages, including:
  1. Increased memory capacity: Virtual memory allows programs to access more memory than is physically available in the system. This enables running larger programs, or multiple programs simultaneously, without running out of memory.
  2. Memory protection: Virtual memory provides memory isolation between different processes, preventing one program from accessing or modifying the memory of another program. This enhances system security and stability.
  3. Simplified memory management: With virtual memory, programs do not need to worry about physical memory constraints or fragmentation. The operating system handles the allocation and management of memory pages, making it easier for programmers to develop software.
  4. Swapping and multitasking: Virtual memory enables swapping, which is the process of moving inactive pages from physical memory to disk. This allows the system to efficiently utilize limited physical memory resources and facilitates multitasking by quickly switching between different programs.
4. How does the operating system allocate memory to programs?
Ans. The operating system allocates memory to programs using various techniques, including:
  1. Fixed partitioning: The physical memory is divided into fixed-size partitions, and each partition is assigned to a specific program. This method suffers from fragmentation and inefficient memory utilization.
  2. Dynamic partitioning: The physical memory is divided into variable-sized partitions based on the size of the programs. The operating system dynamically allocates and deallocates memory as processes enter or exit the system. This method reduces fragmentation but requires more complex memory management algorithms.
  3. Paging: The virtual memory address space of a process is divided into fixed-size blocks called pages. The operating system maps these pages to physical memory in a page table. When a program needs more memory, the operating system assigns additional pages to it. Paging allows for efficient memory utilization and easy memory allocation.
5. What are the common memory-related errors in computer systems?
Ans. Some common memory-related errors in computer systems include:
  1. Segmentation faults: These occur when a program tries to access memory outside its allocated address space. They are typically caused by programming errors or memory corruption.
  2. Memory leaks: Memory leaks happen when a program fails to release memory it no longer needs, leading to a gradual loss of available memory over time. This can cause performance degradation and eventual system crashes.
  3. Stack overflow: A stack overflow occurs when a program's call stack exceeds the available memory space allocated to it. It often happens due to infinite recursion or excessive nesting of function calls.
  4. Page faults: Page faults occur when a program tries to access a page of memory that is not currently mapped to physical memory. The operating system must retrieve the required page from secondary storage, causing a delay in program execution.
  5. Out of memory errors: These errors happen when a program requests more memory than the system can provide. They can occur due to improper memory management or running too many resource-intensive programs simultaneously.