Cache Memory Principles

Principles

  • Cache memory is intended to provide memory speed approaching that of the fastest memories available, combined with a large memory size, at close to the price of slower memories.
  • Cache is checked first for all memory references.
  • If the reference is not found there, the entire main-memory block in which it resides is copied into a cache slot, called a line.
  • Each line includes a tag (usually a portion of the main memory address) which identifies which particular block is being stored
  • Locality of reference implies that future references will likely come from this block of memory, so that cache line will probably be utilized repeatedly.
  • The proportion of memory references that are found already stored in the cache is called the hit ratio (a worked example of its effect on average access time follows this list).
  • There is a relatively large and slow main memory together with a smaller, faster cache memory that contains a copy of portions of main memory.
  • When the processor attempts to read a word of memory, a check is made to determine whether the word is in the cache. If so, the word is delivered to the processor. If not, a block of main memory, consisting of a fixed number of words, is read into the cache and then the word is delivered to the processor.
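The payoff can be made concrete with the average access time. Below is a minimal worked sketch in Python; the timing figures and hit ratio are illustrative assumptions, not values from the text:

```python
# Illustrative (assumed) timings: cache 2 ns, main memory 50 ns per access.
cache_time = 2       # ns to read a word from the cache
memory_time = 50     # ns to read a word from main memory
hit_ratio = 0.95     # fraction of references found in the cache

# A hit costs only the cache access; a miss costs the cache check plus the fetch.
average_time = hit_ratio * cache_time + (1 - hit_ratio) * (cache_time + memory_time)
print(f"Average access time: {average_time:.1f} ns")   # 4.5 ns, versus 50 ns uncached
```

Even a modest hit ratio thus pulls the average access time close to the cache speed, which is the whole point of the design.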


  • The locality of reference property states that over a short interval of time, the addresses generated by a typical program refer to a few localized areas of memory repeatedly. So if the programs and data that are accessed frequently are placed in a fast memory, the average access time can be reduced. This type of small, fast memory is called cache memory, and it is placed between the CPU and the main memory.


  • When the CPU needs to access memory, the cache is examined first. If the word is found in the cache, it is read from the cache; if not, main memory is accessed to read the word. A block of words containing the one just accessed is then transferred from main memory to cache memory.


  • The cache connects to the processor via data, control, and address lines. The data and address lines also attach to data and address buffers, which attach to the system bus through which main memory is reached.
  • When a cache hit occurs, the data and address buffers are disabled and communication takes place only between processor and cache, with no system bus traffic. When a cache miss occurs, the desired word is first read into the cache and then transferred from the cache to the processor. In the latter case, the cache is physically interposed between the processor and main memory for all data, address, and control lines.


  • The CPU generates the read address (RA) of the word to be read.
  • Check whether the block containing RA is in the cache.
  • If present, fetch the word from the cache (fast) and deliver it to the CPU.
  • If not present, access main memory and read the required block.
  • Allocate a cache line for this newly fetched block.
  • Load the block into the cache line and deliver the word to the CPU.
  • The cache includes tags to identify which block of main memory occupies each cache slot (a code sketch of this read sequence follows the list).
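The read sequence above can be expressed as code. The following is a minimal sketch assuming a direct-mapped cache; the class name, fields, and memory model are hypothetical simplifications for illustration:

```python
class DirectMappedCache:
    """Sketch of a direct-mapped cache (hypothetical, for illustration only)."""

    def __init__(self, num_lines, block_size, main_memory):
        self.num_lines = num_lines
        self.block_size = block_size
        self.main_memory = main_memory        # list of words, index = address
        self.lines = [None] * num_lines       # each entry: (tag, block of words)

    def read(self, ra):
        """Return the word at read address RA, loading its block on a miss."""
        block_number = ra // self.block_size
        index = block_number % self.num_lines   # cache line this block maps to
        tag = block_number // self.num_lines    # identifies which block is resident
        offset = ra % self.block_size

        line = self.lines[index]
        if line is not None and line[0] == tag:   # hit: deliver from the cache
            return line[1][offset]

        # Miss: read the whole block from main memory, allocate the line,
        # load the block into it, then deliver the requested word to the CPU.
        start = block_number * self.block_size
        block = self.main_memory[start:start + self.block_size]
        self.lines[index] = (tag, block)
        return block[offset]
```

With a block size of 4, for example, reading address 13 loads words 12 through 15 into one line, so a subsequent read of address 14 is a hit.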


Locality of Reference

  • References to memory at any given interval of time tend to be confined to a few localized areas of memory. This property is called locality of reference. It arises because program loops and subroutine calls are encountered frequently. When a program loop is executed, the CPU executes the same portion of the program repeatedly. Similarly, when a subroutine is called, the CPU fetches the starting address of the subroutine and executes the subroutine's code. Thus loops and subroutines localize references to memory.
  • This principle states that memory references tend to cluster. Over a long period of time the clusters in use change, but over a short period of time the processor works primarily with a fixed set of clusters of memory references.


Spatial Locality 

  • It refers to the tendency of execution to involve a number of memory locations that are clustered.
  • It reflects the tendency of a program to access data locations sequentially, such as when processing a table of data.

Temporal Locality

  • It refers to the tendency of a processor to access memory locations that have been used recently. For example, iteration loops execute the same set of instructions repeatedly (see the sketch below).
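Both kinds of locality appear in even the simplest loop. The following assumed example (not from the text) makes the distinction concrete:

```python
# Summing an array exercises both kinds of locality at once.
data = list(range(1000))

total = 0
for value in data:    # spatial locality: elements are read sequentially, so each
    total += value    # block fetched into the cache serves several later accesses;
                      # temporal locality: the loop's instructions and `total`
                      # are reused on every iteration.
print(total)
```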

FAQs on Cache Memory Principles

1. What is cache memory and how does it work?
Ans. Cache memory is a small, high-speed memory that is used to store frequently accessed data and instructions. It acts as a buffer between the main memory and the CPU, providing faster access to frequently used data. When the CPU needs data, it checks the cache first. If the data is found in the cache (cache hit), it is retrieved quickly. If the data is not in the cache (cache miss), it is fetched from the main memory and also stored in the cache for future use.
2. What are the advantages of using cache memory in computer systems?
Ans. Cache memory offers several advantages in computer systems. Firstly, it reduces the average access time for data and instructions, improving overall system performance. It also helps to bridge the speed gap between the fast CPU and slower main memory. Additionally, cache memory reduces the number of memory accesses to the main memory, thereby reducing memory bus traffic and improving system efficiency. Finally, cache memory helps to alleviate the bottleneck caused by the limited bandwidth of the system bus.
3. What is the principle behind cache memory organization?
Ans. Cache memory organization follows the principle of locality. Locality refers to the tendency of a program to access data and instructions that are close to each other in both time and space. Cache memory exploits two types of locality: temporal locality (reusing recently accessed data) and spatial locality (accessing data located nearby in memory). By storing recently accessed data in the cache, cache memory takes advantage of these localities to provide faster access to frequently used data.
4. How is cache memory organized in a computer system?
Ans. Cache memory is typically organized in a hierarchy, consisting of multiple levels. The most commonly used organization is a two-level cache hierarchy: L1 cache (level 1) and L2 cache (level 2). The L1 cache is the smallest and fastest cache, located closest to the CPU. It stores the most recently accessed data. The L2 cache is larger but slower, located between the L1 cache and the main memory. It acts as a backup to the L1 cache, storing additional data that may not fit in the L1 cache.
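The average-access-time calculation extends naturally to such a hierarchy. A minimal sketch with illustrative (assumed) timings and hit ratios:

```python
# Illustrative (assumed) figures for a two-level cache hierarchy.
l1_time, l2_time, mem_time = 1, 5, 60    # ns per access at each level
l1_hit, l2_hit = 0.90, 0.95              # hit ratios for L1 and (after an L1 miss) L2

avg = (l1_hit * l1_time
       + (1 - l1_hit) * l2_hit * (l1_time + l2_time)
       + (1 - l1_hit) * (1 - l2_hit) * (l1_time + l2_time + mem_time))
print(f"Average access time: {avg:.2f} ns")   # 1.80 ns with these numbers
```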
5. What are the different cache replacement policies used in cache memory?
Ans. Cache replacement policies determine which cache line should be replaced when a cache miss occurs. Some commonly used replacement policies include:
  • Least Recently Used (LRU): replaces the least recently used cache line.
  • First-In-First-Out (FIFO): replaces the cache line that was first inserted into the cache.
  • Random Replacement: selects a cache line randomly for replacement.
  • Least Frequently Used (LFU): replaces the cache line that has been accessed the least number of times.
Each replacement policy has its own advantages and disadvantages, and the choice of policy depends on the specific requirements and constraints of the computer system.
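Of these, LRU is the most commonly implemented. Below is a minimal sketch of the policy using an ordered dictionary (an illustration of the idea, not of any particular hardware design):

```python
from collections import OrderedDict

class LRUCache:
    """Sketch of LRU replacement: evict the least recently used line."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()    # block_number -> block data, oldest first

    def access(self, block_number, fetch_block):
        if block_number in self.lines:
            self.lines.move_to_end(block_number)   # hit: mark as most recent
            return self.lines[block_number]
        if len(self.lines) >= self.capacity:       # miss with a full cache:
            self.lines.popitem(last=False)         # evict the oldest entry
        block = fetch_block(block_number)          # fetch the block from memory
        self.lines[block_number] = block
        return block
```

With a capacity of 2, accessing blocks 0, 1, 0, 2 in that order evicts block 1, since block 0 was touched more recently.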