Short Notes: Memory Management and Virtual Memory | Short Notes for Computer Science Engineering - Computer Science Engineering (CSE) PDF Download

Memory management techniques allow several processes to share memory. When 
several processes are in memory they can share the CPU, thus increasing CPU 
utilization.
Address Binding: The binding of instructions and data to memory addresses can happen at three stages.
1. Compile time: If the process's memory location is known in advance, absolute code can be generated.
2. Load time: The compiler generates relocatable code, which is bound to addresses at load time.
3. Execution time: If a process can be moved from one memory segment to another during execution, binding must be delayed until run time.
Dynamic Loading:
• A routine is not loaded until it is called.
• Better memory-space utilization: an unused routine is never loaded.
• Useful when large amounts of code are needed to handle infrequently occurring cases.
• No special support from the operating system is required; it is implemented through program design.
Dynamic Linking:
• Linking is postponed until execution time.
• A small piece of code, the stub, is used to locate the appropriate memory-resident library routine.
• The stub replaces itself with the address of the routine and executes the routine.
• Operating-system support is needed to check whether the routine is in the process's memory address space.
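The "routine is not loaded until it is called" idea can be illustrated with a minimal Python sketch. This is only an analogy, not how an OS loader works: a stub object defers importing the module until the first call, then replaces its internal reference with the real routine (the `LazyRoutine` class and its names are illustrative assumptions).

```python
import importlib

# A lazy "stub": the target routine is located and loaded only on the
# first call, mirroring how dynamic loading defers bringing code into
# memory until it is actually needed.
class LazyRoutine:
    def __init__(self, module_name, func_name):
        self.module_name = module_name
        self.func_name = func_name
        self._func = None                 # not loaded yet

    def __call__(self, *args, **kwargs):
        if self._func is None:            # first call: locate and bind
            module = importlib.import_module(self.module_name)
            self._func = getattr(module, self.func_name)
        return self._func(*args, **kwargs)

lazy_sqrt = LazyRoutine("math", "sqrt")   # nothing imported yet
print(lazy_sqrt(16.0))                    # math loaded on first use
```

If the routine is never called, the module is never imported, which is exactly the memory-utilization benefit the notes describe.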
Overlays: This technique keeps in memory only those instructions and data that are required at a given time. Other instructions and data are loaded into the memory space occupied by the previous ones when they are needed.
Swapping: Consider an environment that supports multiprogramming using, say, the Round Robin (RR) CPU scheduling algorithm. When one process has finished executing for one time quantum, it is swapped out of memory to a backing store.
(Figure: swapping of processes between main memory and the backing store.)
The memory manager then picks up another process from the backing store and loads it into the memory vacated by the previous process. The scheduler then picks a process and allocates the CPU to it.
Memory Management Techniques
The main memory must accommodate both the operating system and the various user processes, so its parts must be allocated as efficiently as possible.
There are two ways of memory allocation, as given below.
Single Partition Allocation: The memory is divided into two parts, one used by the OS and the other for user programs. The OS code and data are protected from being modified by user programs using a base register.
(Figure: single partition allocation — the OS occupies memory from 0 to 100 K, and the user area runs from 100 K to 1024 K.)
Multiple Partition Allocation: Multiple partition allocation may be further classified as follows.
Fixed Partition Scheme: Memory is divided into a number of fixed-size partitions, each holding one process. This scheme supports multiprogramming, since a number of processes may be brought into memory and the CPU can be switched from one process to another.
When a process arrives for execution, it is put into the input queue of the smallest partition that is large enough to hold it.
Variable Partition Scheme: A block of available memory is designated as a hole. At any time, a set of holes of various sizes is scattered throughout the memory.
When a process arrives and needs memory, this set of holes is searched for a hole large enough to hold the process. If the hole is too large, it is split into two parts: the unused part is returned to the set of holes. All holes that are adjacent to each other are merged.
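The split-and-merge bookkeeping just described can be sketched in a few lines. This is a minimal illustration, not from the original notes: holes are kept as (start, size) pairs sorted by start address, and the helper names are assumptions.

```python
# Variable-partition hole management sketch (hypothetical structures):
# holes is a list of (start, size) pairs, sorted by start address.

def allocate(holes, request):
    """Take the first hole large enough; split it if it is too large."""
    for i, (start, size) in enumerate(holes):
        if size >= request:
            if size > request:
                holes[i] = (start + request, size - request)  # keep remainder
            else:
                holes.pop(i)                                  # exact fit
            return start
    return None  # no hole large enough

def free(holes, start, size):
    """Return a block to the set of holes, merging adjacent ones."""
    holes.append((start, size))
    holes.sort()
    merged = [holes[0]]
    for s, sz in holes[1:]:
        prev_start, prev_size = merged[-1]
        if prev_start + prev_size == s:        # adjacent: coalesce
            merged[-1] = (prev_start, prev_size + sz)
        else:
            merged.append((s, sz))
    holes[:] = merged

holes = [(0, 100), (300, 200)]
addr = allocate(holes, 150)   # splits the (300, 200) hole
free(holes, addr, 150)        # merging restores (300, 200)
```

Freeing the block immediately re-coalesces it with the leftover part of the split hole, which is the adjacency-merging rule described above.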
There are different ways of implementing allocation of partitions from a list of free 
holes, such as:
• first-fit: allocate the first hole that is big enough.
• best-fit: allocate the smallest hole that is big enough; the entire list of holes must be searched, unless it is ordered by size.
• next-fit: scan holes from the location of the last allocation and choose the next available hole that is large enough (can be implemented using a circular linked list).
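The three strategies can be sketched as searches over a list of hole sizes (a minimal Python sketch; the list-of-sizes representation and function names are assumptions, with indices standing in for hole locations):

```python
def first_fit(holes, request):
    """Return the index of the first hole big enough, or None."""
    for i, size in enumerate(holes):
        if size >= request:
            return i
    return None

def best_fit(holes, request):
    """Return the index of the smallest hole big enough (full scan)."""
    best = None
    for i, size in enumerate(holes):
        if size >= request and (best is None or size < holes[best]):
            best = i
    return best

def next_fit(holes, request, last):
    """Circular scan starting from the last allocation point."""
    n = len(holes)
    for k in range(n):
        i = (last + k) % n
        if holes[i] >= request:
            return i
    return None

holes = [100, 500, 200, 300, 600]
print(first_fit(holes, 212))   # index 1 (500): first big enough
print(best_fit(holes, 212))    # index 3 (300): smallest big enough
```

Note that best-fit pays for its tighter packing with a full scan of the list, exactly as the bullet above says.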
Disadvantages of Memory Management Techniques
The above schemes cause external and internal fragmentation of the memory, as given below.
• External Fragmentation: There is enough total memory in the system to satisfy the requirements of a process, but the memory is not contiguous.
• Internal Fragmentation: Memory wasted inside the allocated blocks of memory is called internal fragmentation. e.g., if a process requires 150 K and a hole of size 170 K is allocated to it, the remaining 20 K is wasted.
Compaction: This is a strategy to solve the problem of external fragmentation. All free memory is placed together by moving processes to new locations.
Paging:
• It is a memory management technique that allows memory to be allocated to a process wherever it is available.
• Physical memory is divided into fixed-size blocks called frames.
• Logical memory is broken into blocks of the same size called pages.
• The backing store is also divided into blocks of the same size.
• When a process is to be executed, its pages are loaded into any available frames.
• Because pages and frames are the same size, any page can be placed in any free frame.
• Every logical address generated by the CPU is divided into two parts: the page number (p) and the page offset (d).
• The page number is used as an index into a page table.
(Figure: a paging block diagram — the CPU-generated address is split into page number and offset, and the page number is translated through the page table.)
• Each entry in the page table contains the base address (f) of the corresponding frame in physical memory.
• The frame base address found at the pth entry is combined with the offset (d) to give the actual physical address.
Paging Example:
(Figure: paging example — logical memory contains pages 0–3; the page table maps page 0 → frame 1, page 1 → frame 4, page 2 → frame 3, and page 3 → frame 7, so each page is stored in the corresponding frame of physical memory.)
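The translation in this example can be worked through in code. A minimal Python sketch follows, using the mapping from the figure; the 1 KB page size is an assumption for illustration, not from the original notes:

```python
PAGE_SIZE = 1024                          # assumed page size (1 KB)
page_table = {0: 1, 1: 4, 2: 3, 3: 7}    # page -> frame, as in the figure

def translate(logical_addr):
    p = logical_addr // PAGE_SIZE         # page number (index into table)
    d = logical_addr % PAGE_SIZE          # offset within the page
    f = page_table[p]                     # frame base from the page table
    return f * PAGE_SIZE + d              # physical address

# Byte 100 of page 1 lands at byte 100 of frame 4:
print(translate(1 * PAGE_SIZE + 100))     # 4*1024 + 100 = 4196
```

The offset d passes through unchanged; only the page number is rewritten into a frame number, which is the whole translation mechanism in miniature.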
Implementation of Page Table:
• The page table is kept in main memory.
• The page-table base register (PTBR) points to the page table.
• The page-table length register (PTLR) indicates the size of the page table.
• In this scheme, every data/instruction access requires two memory accesses: one for the page table and one for the data/instruction itself.
• The two-memory-access problem can be solved with a special fast-lookup hardware cache called associative memory, or translation look-aside buffer (TLB).
1. Associative Memory:
• The frame number is available directly within the associative memory.
• Each entry in the associative memory has two portions: page number and frame number.
2. Paging Hardware With TLB:
(Figure: paging hardware with a TLB — the TLB is searched first; on a miss, the page table in memory is consulted.)
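The lookup order can be sketched as a small simulation, with a dict standing in for the associative memory (the page-table contents and reference string are illustrative assumptions):

```python
page_table = {0: 1, 1: 4, 2: 3, 3: 7}   # page -> frame, in main memory
tlb = {}                                # associative memory, initially empty
hits = misses = 0

def lookup(page):
    """Consult the TLB first; fall back to the page table on a miss."""
    global hits, misses
    if page in tlb:                     # TLB hit: one memory access saved
        hits += 1
        return tlb[page]
    misses += 1                         # TLB miss: extra access to the table
    frame = page_table[page]
    tlb[page] = frame                   # cache the translation for next time
    return frame

for p in [0, 1, 0, 1, 2]:
    lookup(p)
print(hits, misses)                     # repeated pages hit in the TLB
```

The repeated references to pages 0 and 1 hit in the TLB, which is why the effective access time below depends on the hit ratio h.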
• Effective Access Time (EAT)
Let the associative (TLB) lookup take t time units, and assume a memory cycle time of 1 microsecond.
Hit ratio (h): the percentage of times that a page number is found in the associative registers; the ratio is related to the number of associative registers.
EAT = (1 + t)h + (2 + t)(1 − h) = 2 + t − h
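Plugging sample numbers into the formula makes the cost of a miss concrete (the values of t and h below are assumptions chosen for illustration):

```python
def eat(t, h):
    """Effective access time: (1 + t)h on a hit, (2 + t)(1 - h) on a miss."""
    return (1 + t) * h + (2 + t) * (1 - h)

t, h = 0.2, 0.8          # 0.2 us TLB lookup, 80% hit ratio
print(eat(t, h))         # 2 + 0.2 - 0.8 = 1.4 microseconds
```

At h = 0.8 the average access already falls well below the 2-microsecond cost of always going through the in-memory page table, and it approaches 1 + t as h approaches 1.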
Memory Protection: Memory protection is implemented by associating a protection bit with each frame.
A valid–invalid bit is attached to each entry in the page table:
1. "valid" indicates that the associated page is in the process's logical address space, and is thus a legal page.
2. "invalid" indicates that the page is not in the process's logical address space.
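The valid–invalid check can be sketched as a guard on every translation. This is only a model: the table contents are assumptions, and the exception stands in for the hardware trap that the OS would actually handle:

```python
# Each page-table entry carries (frame, valid) — the valid bit marks
# pages inside the process's logical address space.
page_table = {0: (1, True), 1: (4, True), 2: (None, False)}

def access(page):
    """Translate a page, trapping on an invalid (illegal) reference."""
    frame, valid = page_table.get(page, (None, False))
    if not valid:
        # Models the hardware trap to the OS on an illegal reference.
        raise MemoryError(f"page {page}: invalid bit set, protection trap")
    return frame

print(access(1))        # legal page: translation succeeds
# access(2) would raise MemoryError: the page is outside the address space
```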