Table of contents
Introduction
Characteristics of Cache Memory
Levels of Memory
Cache Performance
Cache Mapping
Application of Cache Memory
Advantages of Cache Memory
Disadvantages of Cache Memory
Hit Ratio (H) = hits / (hits + misses) = number of hits / total accesses
Miss Ratio = misses / (hits + misses) = number of misses / total accesses = 1 - H
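The two ratios above can be checked with a short calculation. The counts used here (45 hits, 5 misses) are illustrative assumptions, not figures from the text:

```python
# Hypothetical access counts, for illustration only.
hits, misses = 45, 5
total = hits + misses            # total accesses

hit_ratio = hits / total         # H = hits / (hits + misses)
miss_ratio = misses / total      # = 1 - H

print(hit_ratio)   # 0.9
print(miss_ratio)  # 0.1
```

Note that the two ratios always sum to 1, which is why the miss ratio can be written as 1 - H.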
Enhanced Cache Performance Strategies:
Cache Memory Mapping Types:
1. Direct Mapping
Direct mapping, the simplest technique, maps each block of main memory to exactly one cache line. If that line already holds a block when a new block must be loaded, the existing block is replaced. The memory address is divided into two components: an index field, which selects the cache line, and a tag field, which is stored in the cache alongside the data and compared on each access to determine a hit. The performance of direct mapping is directly tied to the hit ratio.
i = j modulo m
where
i = cache line number
j = main memory block number
m = number of lines in the cache
In cache access operations, each main memory address can be viewed as comprising three fields. The least significant w bits identify a specific word or byte within a main memory block; in modern machines, addresses typically operate at the byte level. The remaining s bits designate one of the 2^s blocks of main memory. The cache logic interprets these s bits as a tag of s - r bits (the uppermost portion) and a line field of r bits, which selects one of the m = 2^r cache lines. In direct mapping, this line field serves as the index bits.
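The three-field split can be sketched in code. The field widths below (a 4-bit word offset and an 8-bit line field, so m = 2^8 = 256 lines) are assumed for illustration; any real cache would substitute its own w and r:

```python
W = 4   # w: word/byte offset bits within a block (16-byte blocks assumed)
R = 8   # r: line (index) bits, so m = 2**R = 256 cache lines

def split_address(addr: int):
    """Split a byte address into (tag, line, word) fields for direct mapping."""
    word = addr & ((1 << W) - 1)          # lowest w bits: byte within the block
    line = (addr >> W) & ((1 << R) - 1)   # next r bits: cache line (index)
    tag  = addr >> (W + R)                # remaining s - r bits: tag
    return tag, line, word

# The line field equals j mod m, where j = addr >> W is the block number.
tag, line, word = split_address(0x12345)
print(tag, line, word)   # 18 52 5
```

Note that the line field computed by bit masking agrees with the formula i = j modulo m, since taking the low r bits of the block number is the same as reducing it modulo 2^r.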
2. Associative Mapping
In this mapping scheme, associative memory stores both the content and the address of each memory word, and any block can reside in any cache line. The word bits specify which word within the block is required, while the tag comprises all the remaining address bits. Because a block can be placed anywhere in the cache, this is the fastest and most flexible mapping method, though it requires comparing the tag against every line. Since there is no index field in associative mapping, the number of index bits is zero.
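A minimal sketch of a fully associative cache follows. The class name, the dictionary-based tag store, and the FIFO replacement policy are illustrative assumptions (real hardware compares all tags in parallel and may use other replacement policies):

```python
from collections import OrderedDict

class AssociativeCache:
    """Toy fully associative cache: any block may occupy any line."""

    def __init__(self, num_lines: int):
        self.num_lines = num_lines
        self.lines = OrderedDict()   # block number (tag) -> block data

    def access(self, block: int, data=None) -> bool:
        """Return True on a hit; on a miss, load the block (FIFO eviction)."""
        if block in self.lines:              # tag matched some line: hit
            return True
        if len(self.lines) >= self.num_lines:
            self.lines.popitem(last=False)   # evict the oldest entry (FIFO)
        self.lines[block] = data
        return False                         # miss

cache = AssociativeCache(num_lines=2)
print(cache.access(7))   # False (miss: block 7 is loaded)
print(cache.access(7))   # True  (hit: block 7 may sit in any line)
```

The key property shown is that a lookup must test the tag against every stored entry, since no index field narrows the search to a single line.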
3. Set-Associative Mapping
This mapping approach is an improvement over direct mapping that addresses its main limitation: the thrashing that occurs when two frequently used blocks map to the same line. Instead of assigning each block to a single line, set-associative mapping groups cache lines into sets, so a memory block can occupy any line within its designated set. Each set can therefore hold two or more blocks that share the same index. By blending aspects of direct and associative mapping, set-associative mapping offers a practical compromise: the index bits (the set offset) select a set, and the tag is compared against every line in that set. In this setup, the cache comprises several sets, each containing multiple lines.
Relationships in the Set-Associative Mapping can be defined as:
m = v * k
i = j mod v
where
i = cache set number
j = main memory block number
v = number of sets
m = number of lines in the cache
k = number of lines in each set
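The relationships above can be worked through numerically. The configuration chosen here (a 2-way cache with v = 4 sets, so k = 2) is an assumption for illustration:

```python
# Assumed 2-way set-associative configuration.
v, k = 4, 2
m = v * k                 # total cache lines: m = v * k
print(m)                  # 8

def cache_set(j: int) -> int:
    """Set number for main memory block j: i = j mod v."""
    return j % v

# Blocks 5 and 13 share a set (5 mod 4 == 13 mod 4 == 1), but a
# 2-way set can hold both at once, avoiding the conflict that
# direct mapping would suffer.
print(cache_set(5), cache_set(13))   # 1 1
```

This is the thrashing fix in miniature: under direct mapping the two blocks would repeatedly evict each other, while here they coexist in the same set.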
Cache memory finds applications in various scenarios: