3. Cache, Memory Hierarchy, Computer Organization and Architecture, GATE Computer Science Engineering (CSE) Notes | EduRev


The document 3. Cache, Memory Hierarchy, Computer Organization and Architecture, GATE Computer Science Engineering (CSE) Notes | EduRev is a part of the Computer Science Engineering (CSE) Course Mock Test Series - Computer Science Engg. (CSE) GATE 2020.

Cache Memory

Cache memory is a very high-speed memory placed between the CPU and main memory so that memory accesses can keep up with the speed of the CPU.

It is used to reduce the average time to access data from main memory. The cache is a smaller, faster memory that stores copies of the data from frequently used main-memory locations. Most CPUs have several independent caches, including separate instruction and data caches.

  • Cache memory is much faster than main memory: its access time is much lower than that of main memory.


Cache Performance

When the processor needs to read or write a location in main memory, it first checks for a corresponding entry in the cache.

The cache checks for the contents of the requested memory location in any cache lines that might contain that address.

  • If the processor finds the memory location in the cache, a cache hit has occurred and the data is read from the cache.
  • If the processor does not find the memory location in the cache, a cache miss has occurred. On a miss, the cache allocates a new entry and copies in the data from main memory; the request is then fulfilled from the contents of the cache.

The performance of cache memory is frequently measured in terms of a quantity called the hit ratio.

Hit ratio = hits / (hits + misses) = number of hits / total accesses
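The formula above can be sketched as a one-line helper; the counts passed in are illustrative, not from a real trace:

```python
def hit_ratio(hits, misses):
    """Fraction of memory accesses served by the cache."""
    return hits / (hits + misses)

print(hit_ratio(1024, 1024))  # 0.5, i.e. a 50% hit ratio
```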

Cache Mapping

The three different types of mapping used for cache memory are as follows:

  • Direct mapping
  • Associative mapping
  • Set-associative mapping

Direct mapping: In direct mapping, each memory block is assigned to one specific line in the cache. If that line is already occupied by another memory block when a new block needs to be loaded, the old block is evicted. The memory address is split into two parts, an index field and a tag field: the index selects the cache line, and the tag is stored in the cache alongside the data so the stored block can be identified. Direct mapping's performance is directly proportional to the hit ratio.
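The index/tag split can be sketched as follows; the cache size of 8 lines is an assumed parameter for illustration only:

```python
NUM_LINES = 8   # assumed cache size: 8 lines -> 3 index bits

def split_address(addr):
    """Split a block address into (tag, index) for a direct-mapped cache."""
    index = addr % NUM_LINES    # low-order bits select the cache line
    tag = addr // NUM_LINES     # remaining high-order bits form the tag
    return tag, index

# Addresses 2 and 10 share index 2 but carry different tags,
# so the stored tag tells them apart.
print(split_address(2))   # (0, 2)
print(split_address(10))  # (1, 2)
```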


Associative mapping: In this type of mapping, an associative memory is used to store both the content and the address of the memory word. Any block can go into any line of the cache. The word-id bits identify which word within the block is needed, and all of the remaining address bits form the tag. This enables the placement of any block at any place in the cache memory. It is considered the fastest and most flexible mapping form.
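A minimal sketch of the fully associative idea: a lookup must compare the tag against every line, and any line can hold any block. The 2-line capacity and FIFO replacement are simplifying assumptions, not part of the original text:

```python
class FullyAssociativeCache:
    def __init__(self, num_lines):
        self.num_lines = num_lines
        self.lines = []            # tags of the blocks currently stored

    def access(self, block):
        if block in self.lines:    # tag compared against every line
            return "hit"
        if len(self.lines) == self.num_lines:
            self.lines.pop(0)      # cache full: evict oldest (FIFO)
        self.lines.append(block)
        return "miss"

cache = FullyAssociativeCache(2)
print([cache.access(b) for b in [2, 6, 2, 6]])  # ['miss', 'miss', 'hit', 'hit']
```

Because placement is unrestricted, blocks 2 and 6 coexist in the cache and repeat accesses hit.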


Set-associative mapping: This form of mapping is an enhanced form of direct mapping that removes its drawbacks. Set-associative mapping addresses the possibility of thrashing in the direct-mapped method: instead of having exactly one line that a block can map to, a few lines are grouped together into a set, and a block in memory can map to any one of the lines of a specific set. Thus each index address in the cache can correspond to two or more blocks in main memory that share that index. Set-associative mapping combines the best of the direct and associative cache-mapping techniques.
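Set-associative placement can be sketched like this: the index selects a *set*, and the block may occupy any line within that set. The 4-set, 2-way geometry and FIFO eviction are illustrative assumptions:

```python
NUM_SETS = 4   # assumed: 2 index bits
WAYS = 2       # assumed: 2 lines per set
cache = {s: [] for s in range(NUM_SETS)}   # each set holds up to WAYS tags

def access(block):
    s = block % NUM_SETS      # index bits select the set
    tag = block // NUM_SETS   # remaining bits form the tag
    if tag in cache[s]:
        return "hit"
    if len(cache[s]) == WAYS:
        cache[s].pop(0)       # evict within this set only (FIFO)
    cache[s].append(tag)
    return "miss"

# Blocks 2 and 6 both map to set 2, but with 2 ways they coexist.
print([access(b) for b in [2, 6, 2, 6]])  # ['miss', 'miss', 'hit', 'hit']
```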


Example: An initialization loop makes 2048 memory accesses, of which 1024 are cache hits. The cache hit ratio for this initialization loop is
(A) 0%
(B) 25%
(C) 50%
(D) 75%
Answer: (C)

Explanation:

Hit ratio = number of hits / total accesses = 1024 / (1024 + 1024) = 1/2 = 0.5 = 50%

So (C) is the correct option.

Cache Organization | (Introduction)

Cache is close to the CPU and faster than main memory, but at the same time it is smaller than main memory. Cache organization is about mapping data in memory to a location in the cache.

A Simple Solution: One way to do this mapping is to use the last few bits of the long memory address as the small cache address, and place the data at that cache location.

Problems With Simple Solution: The problem with this approach is that we lose the information carried by the high-order bits: many memory addresses share the same low-order bits, so we have no way to tell which of those addresses the cached data actually belongs to.


Solution is Tag: To handle the above problem, extra information is stored in the cache to record which block of memory is held in each line. This additional information is called the tag.


What is a Cache Block?

Programs exhibit spatial locality: once a location is accessed, it is highly probable that nearby locations will be accessed in the near future. A cache is therefore organized in the form of blocks. Typical cache block sizes are 32 bytes or 64 bytes.
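With blocks, a byte address decomposes into a block number and an offset within the block. A small sketch, assuming the 64-byte block size mentioned above:

```python
BLOCK_SIZE = 64   # bytes per block (one of the typical sizes above)

def block_parts(addr):
    """Split a byte address into (block_number, offset_within_block)."""
    offset = addr % BLOCK_SIZE         # low 6 bits: byte within the block
    block_number = addr // BLOCK_SIZE  # remaining bits: which block
    return block_number, offset

print(block_parts(200))  # (3, 8): byte 8 of block 3
```

Fetching a whole block on a miss means the neighbours of byte 200 arrive with it, which is exactly what spatial locality rewards.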


The above arrangement is a direct-mapped cache, and it has the following problem.

As discussed above, the last few bits of a memory address select the line in the cache and the remaining bits are stored as the tag. Now imagine the cache is very small, with a 2-bit index, so the last two bits of the main-memory address decide the cache line. If a program accesses blocks 2, 6, 2, 6, 2, …, every access causes a miss, because 2 and 6 map to the same location in the cache and keep evicting each other.
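The thrashing pattern above can be simulated directly; the 4-line cache matches the 2-bit index described:

```python
NUM_LINES = 4                 # 2-bit index -> 4 lines
lines = [None] * NUM_LINES    # tag stored in each line

def access(block):
    idx = block % NUM_LINES   # last two bits pick the line
    tag = block // NUM_LINES
    if lines[idx] == tag:
        return "hit"
    lines[idx] = tag          # conflict: overwrite the old tag
    return "miss"

print([access(b) for b in [2, 6, 2, 6, 2]])
# ['miss', 'miss', 'miss', 'miss', 'miss'] -- 2 and 6 evict each other
```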


Solution to above problem – Associativity

What if we could store data at any place in the cache? The above problem would disappear, but a fully associative cache would be slower, so we do something in between: set-associative mapping.
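Re-running the same 2, 6, 2, 6 pattern through a 2-way set-associative sketch shows the fix: both blocks index the same set, but the set has two lines, so after the two compulsory misses every access hits. The 2-set, 2-way geometry is an illustrative assumption:

```python
NUM_SETS = 2   # assumed: 1 index bit
WAYS = 2       # assumed: 2 lines per set
sets = {s: [] for s in range(NUM_SETS)}

def access(block):
    s = block % NUM_SETS      # index bit selects the set
    tag = block // NUM_SETS
    if tag in sets[s]:
        return "hit"
    if len(sets[s]) == WAYS:
        sets[s].pop(0)        # evict within the set (FIFO)
    sets[s].append(tag)
    return "miss"

print([access(b) for b in [2, 6, 2, 6, 2, 6]])
# ['miss', 'miss', 'hit', 'hit', 'hit', 'hit']
```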

