
The Memory Hierarchy

  • Capacity, cost and speed of the different types of memory play a vital role in designing a memory system for a computer.
  • If the memory has a larger capacity, more applications get the space they need to run smoothly.
  • The memory should be as fast as possible to achieve the best performance; moreover, for a practical system, the cost should stay reasonable.
  • There is a trade-off among these three characteristics (cost, capacity and access time). One cannot optimise all of them in the same memory module, because
    • if capacity increases, access time increases (the memory becomes slower) and the cost per bit decreases, and
    • if access time decreases (the memory becomes faster), capacity decreases and the cost per bit increases.
  • The designer is tempted to increase capacity, because the cost per bit decreases and more application programs can be accommodated; but at the same time the access time increases, which lowers performance.

So the best idea is to use a memory hierarchy.
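
Why the hierarchy works can be seen from a small, purely illustrative calculation of the average (effective) access time of a two-level system. The latencies and hit ratios below are assumed values chosen only to show the arithmetic, not measurements of any real machine.

```python
# Effective access time of a two-level system (illustrative numbers only).
# If most references hit the small fast level, the average access time stays
# close to the fast level's time even though most of the capacity is slow.

def effective_access_time(hit_ratio, t_fast_ns, t_slow_ns):
    """Average time per reference: hits served by the fast level,
    misses served by the slow level."""
    return hit_ratio * t_fast_ns + (1.0 - hit_ratio) * t_slow_ns

t_cache_ns = 2     # assumed fast-level (cache) access time
t_main_ns = 100    # assumed slow-level (main memory) access time

for h in (0.80, 0.95, 0.99):
    t = effective_access_time(h, t_cache_ns, t_main_ns)
    print(f"hit ratio {h:.2f}: {t:.1f} ns on average")
```

With a high hit ratio the system behaves almost as if all of memory were as fast as the cache, while most of the capacity is paid for at the cheaper per-bit price of the slower level.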

  • The goal of a memory hierarchy is to obtain the highest possible access speed while minimizing the total cost of the memory system.
  • Not all accumulated information is needed by the CPU at the same time.
  • Therefore, it is more economical to use low-cost storage devices as a backup for the information that is not currently used by the CPU.
  • The memory unit that communicates directly with the CPU is called the main memory.
  • Devices that provide backup storage are called auxiliary memory.
  • The memory hierarchy system consists of all storage devices employed in a computer system, from the slow but high-capacity auxiliary memory, to a relatively faster main memory, to an even smaller and faster cache memory accessible to the high-speed processing logic.
  • The main memory occupies a central position: it can communicate directly with the CPU, and with auxiliary memory devices through an I/O processor.
  • A special very-high-speed memory called the cache is used to increase the speed of processing by making current programs and data available to the CPU at a rapid rate.
  • CPU logic is usually faster than main memory access time, with the result that processing speed is limited primarily by the speed of main memory.
  • The cache stores segments of the programs currently being executed in the CPU and temporary data frequently needed in the present calculations.
  • The figure below illustrates the memory hierarchy.

[Figure: the memory hierarchy]

  • As we go down the hierarchy:
    • Cost per bit decreases
    • Capacity of memory increases
    • Access time increases
    • Frequency of access by the processor decreases
  • Hierarchy list, from top to bottom (a fall-through lookup over these levels is sketched after the list):
    • Registers
    • L1 Cache
    • L2 Cache
    • Main memory
    • Disk cache
    • Disk
    • Optical
    • Tape
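
The list can be read as a search order: a reference is satisfied by the highest (fastest) level that currently holds the data, and falls through to the levels below only on a miss. A minimal sketch of that lookup follows; the contents of each level are made up for illustration.

```python
# Top-down lookup through the hierarchy (contents invented for illustration).
# Each level is a name plus the set of addresses it currently holds.

hierarchy = [
    ("registers",   {0x10}),
    ("L1 cache",    {0x10, 0x20}),
    ("L2 cache",    {0x10, 0x20, 0x30}),
    ("main memory", {0x10, 0x20, 0x30, 0x40}),
    ("disk",        {0x10, 0x20, 0x30, 0x40, 0x50}),
]

def find_level(address):
    """Return the first (fastest) level that holds the address."""
    for name, contents in hierarchy:
        if address in contents:
            return name
    return "not resident at any level"

print(find_level(0x20))  # L1 cache
print(find_level(0x50))  # disk
```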
FAQs on The Memory Hierarchy

1. What is the memory hierarchy in computer science engineering?
Ans. The memory hierarchy in computer science engineering refers to the organization and structure of different levels of memory within a computer system. It consists of multiple levels, including registers, cache, main memory (RAM), and secondary storage (hard drives or solid-state drives). These levels are arranged in a hierarchy based on their proximity to the processor and their speed and cost characteristics.
2. How does the memory hierarchy improve computer performance?
Ans. The memory hierarchy improves computer performance by exploiting the principle of locality. This principle states that programs tend to access a small portion of the available memory at any given time. The memory hierarchy places frequently accessed data and instructions in faster and more expensive levels of memory, such as cache, closer to the processor. This reduces the average access time and enhances the overall system performance.
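
One quick way to see locality in action is to compare row-wise and column-wise traversal of the same row-major array: the row-wise walk reuses each cached block before moving on. This is only a sketch (it assumes NumPy is available, and absolute timings depend on the machine).

```python
# Spatial locality: row-wise vs column-wise traversal of a C-order array.
import time

import numpy as np

n = 4000
a = np.random.rand(n, n)          # rows are contiguous in memory (C order)

def sum_by_rows(m):
    total = 0.0
    for i in range(m.shape[0]):
        total += m[i, :].sum()    # contiguous slice: cached bytes are reused
    return total

def sum_by_cols(m):
    total = 0.0
    for j in range(m.shape[1]):
        total += m[:, j].sum()    # strided slice: poor spatial locality
    return total

for fn in (sum_by_rows, sum_by_cols):
    start = time.perf_counter()
    fn(a)
    print(f"{fn.__name__}: {time.perf_counter() - start:.3f} s")
```

Both functions perform exactly the same arithmetic; on a typical machine the column-wise version is noticeably slower purely because of how the cache is used.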
3. What is the role of cache memory in the memory hierarchy?
Ans. Cache memory plays a crucial role in the memory hierarchy. It is a small, high-speed memory located between the processor and main memory. Cache memory stores frequently accessed data and instructions from the main memory to provide faster access to the processor. By keeping a copy of frequently used data closer to the processor, cache memory reduces the average memory access time and improves system performance.
4. How is data transferred between different levels of the memory hierarchy?
Ans. Data is transferred between levels of the memory hierarchy in fixed-size units: blocks (cache lines) between the caches and main memory, and pages between main memory and secondary storage. When the processor requests data, the cache checks whether the data is present. If not, the block containing it is fetched from the next level down and placed in the cache. Similarly, when data is modified in the cache, the block is written back to the next level so that the different levels remain consistent.
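
A rough illustration of this miss/fetch/write-back cycle is the toy write-back cache below. The block size, the number of blocks and the LRU replacement policy are arbitrary choices for the sketch, not a description of any real processor.

```python
# Toy write-back cache: tracks only which blocks are resident and dirty.
from collections import OrderedDict

BLOCK_SIZE = 64        # bytes per block (assumed)
NUM_BLOCKS = 4         # tiny capacity so evictions happen quickly

class WriteBackCache:
    def __init__(self):
        self.blocks = OrderedDict()      # block number -> dirty flag, in LRU order
        self.written_back = []           # blocks flushed to the level below

    def access(self, address, write=False):
        block = address // BLOCK_SIZE
        if block in self.blocks:                 # hit: block already resident
            self.blocks.move_to_end(block)
        else:                                    # miss: make room, then fetch the block
            if len(self.blocks) >= NUM_BLOCKS:
                victim, dirty = self.blocks.popitem(last=False)
                if dirty:                        # modified data must go back down
                    self.written_back.append(victim)
            self.blocks[block] = False           # fetched block starts out clean
        if write:
            self.blocks[block] = True            # mark the block as modified

cache = WriteBackCache()
for addr in (0, 64, 0, 128, 192, 256, 320):      # the last two accesses force evictions
    cache.access(addr, write=(addr == 0))
print(cache.written_back)                        # only dirty block 0 is written back: [0]
```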
5. What are the trade-offs involved in designing the memory hierarchy?
Ans. Designing the memory hierarchy involves several trade-offs. One trade-off is between speed and cost. Faster and smaller memory levels, such as registers and cache, are more expensive than larger and slower memory levels, such as main memory and secondary storage. Another trade-off is between capacity and latency. Larger memory levels provide more storage capacity but may have higher access latency. Designers need to balance these trade-offs to optimize the overall system performance and cost efficiency.
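
The speed/cost trade-off can be made concrete with a back-of-the-envelope total-cost calculation. The capacities and relative per-bit prices below are invented purely for illustration; the point is only that most of the capacity sits in the cheap levels while the expensive levels stay small.

```python
# Hypothetical per-level cost comparison (all figures invented for illustration).
levels = [
    # name,          capacity in bytes, relative cost per bit
    ("cache",        4 * 2**20,         1.0),
    ("main memory",  8 * 2**30,         0.01),
    ("disk",         1 * 2**40,         0.0001),
]

hierarchy_cost = sum(capacity * 8 * cost for _, capacity, cost in levels)

# For comparison: building the full disk-sized capacity out of cache-speed memory.
all_fast_cost = (1 * 2**40) * 8 * 1.0

print(f"hierarchy cost      : {hierarchy_cost:.3e} (arbitrary units)")
print(f"all-fast-memory cost: {all_fast_cost:.3e} (arbitrary units)")
print(f"ratio               : {all_fast_cost / hierarchy_cost:.0f}x")
```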