All questions of Memory Hierarchy: Cache, Main Memory and Secondary Storage for Computer Science Engineering (CSE) Exam

The small, extremely fast RAMs are called ________
  • a)
    Cache
  • b)
    Heaps
  • c)
    Accumulators
  • d)
    Stacks
Correct answer is option 'A'. Can you explain this answer?

Maitri Yadav answered
Answer: a
Explanation: Caches are extremely essential in the single-bus organisation to achieve fast operation.

The logical addresses generated by the CPU are mapped onto physical memory by ____
  • a)
    Relocation register
  • b)
    TLB
  • c)
    MMU
  • d)
    None of the mentioned
Correct answer is option 'C'. Can you explain this answer?

Alok Desai answered
Logical addresses and Physical memory mapping

Logical addresses are generated by the CPU during the execution of a program. These addresses are virtual addresses that are independent of the underlying physical memory. The CPU uses logical addresses to access data and instructions in memory.

Physical memory refers to the actual physical locations in the memory hardware where data and instructions are stored. Each physical memory location has a unique physical address.

Mapping logical addresses to physical memory

The mapping of logical addresses to physical memory is performed by the Memory Management Unit (MMU) in the CPU. The MMU is responsible for translating logical addresses into physical addresses, allowing the CPU to access the correct physical location in memory.

The role of the MMU

The MMU uses various mechanisms to perform the mapping of logical addresses to physical addresses. One of the key mechanisms used by the MMU is the Translation Lookaside Buffer (TLB). The TLB is a cache that stores recently accessed translations between logical addresses and physical addresses. This cache helps to improve the performance of address translation by reducing the number of memory accesses required.

Relocation register

The relocation register is a hardware register that is used to perform dynamic address translation. It is used to adjust the logical addresses generated by the CPU to the correct physical addresses in memory. The relocation register contains the base address of the current memory partition or segment, and it is added to the logical address to obtain the corresponding physical address.
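
As a minimal sketch of this base-plus-offset translation (the base value, partition size, and the limit check are illustrative assumptions, not taken from any particular system):

```python
# Relocation-register translation: physical = base + logical.
# The base address and partition size below are illustrative values.

RELOCATION_BASE = 0x4000   # base of the current partition (assumed)
PARTITION_LIMIT = 0x1000   # size of the partition (assumed)

def translate(logical_address):
    if logical_address >= PARTITION_LIMIT:       # protection check
        raise MemoryError("address outside current partition")
    return RELOCATION_BASE + logical_address     # add the base register

print(hex(translate(0x02A4)))   # 0x42a4
```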

The correct answer

The correct answer to the given question is option 'C', MMU. The MMU is responsible for mapping logical addresses to physical memory. It performs this mapping using mechanisms such as the TLB and the relocation register. The TLB caches recently accessed translations, while the relocation register adjusts the logical addresses to the correct physical addresses. Therefore, the MMU plays a crucial role in the address translation process.

One of the most widely used encoding schemes is _________
  • a)
    NRZ-polar
  • b)
    RZ-polar
  • c)
    Manchester
  • d)
    Block encoding
Correct answer is option 'C'. Can you explain this answer?

Manchester Encoding
Manchester encoding is one of the most widely used schemes of encoding in digital communication. It is a type of line coding where the signal is divided into discrete voltage levels to represent binary data. In Manchester encoding, each bit is represented by a transition in the middle of the bit period.

How Manchester Encoding Works
Manchester encoding works by representing each bit as a pair of voltage levels with a guaranteed transition between them. In the IEEE 802.3 (Ethernet) convention, a 0 is sent as a high-to-low transition and a 1 as a low-to-high transition in the middle of the bit period. Because every bit period contains a transition, the receiver can synchronize its clock with the transmitted data.
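
A minimal sketch of this scheme, assuming the IEEE 802.3 convention and representing the line signal as a list of half-bit levels:

```python
# Manchester encoding sketch (IEEE 802.3 convention: 0 -> high-to-low,
# 1 -> low-to-high). Each bit becomes two half-bit signal levels, which
# is why the signalling rate is twice the data rate.

def manchester_encode(bits):
    """Map each bit to a pair of half-bit levels with a mid-bit transition."""
    signal = []
    for bit in bits:
        if bit == 1:
            signal += [0, 1]   # low-to-high mid-bit transition
        else:
            signal += [1, 0]   # high-to-low mid-bit transition
    return signal

def manchester_decode(signal):
    """Recover bits by inspecting each half-bit pair."""
    bits = []
    for first, second in zip(signal[0::2], signal[1::2]):
        if (first, second) == (0, 1):
            bits.append(1)
        elif (first, second) == (1, 0):
            bits.append(0)
        else:
            raise ValueError("missing mid-bit transition: line error")
    return bits

assert manchester_decode(manchester_encode([1, 0, 1, 1])) == [1, 0, 1, 1]
```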

Advantages of Manchester Encoding
1. Clock Synchronization: The use of transitions in the middle of each bit period enables the receiver to synchronize its clock with the transmitted data. This ensures accurate data recovery without the need for a separate clock signal.

2. Error Detection: Manchester encoding can detect certain types of errors, such as bit inversions or transmission line noise. If a transition is missing or occurs at the wrong time, the receiver can detect the error and request retransmission of the data.

3. DC Balance: Manchester encoding maintains a balanced distribution of positive and negative voltage levels, eliminating the problem of DC bias. This is important in long-distance communication where DC bias can cause signal degradation.

4. Simple Implementation: Manchester encoding is relatively simple to implement using digital logic circuits. The encoding and decoding procedures can be implemented using basic gates and flip-flops.

Disadvantages of Manchester Encoding
1. Bandwidth Requirement: Manchester encoding requires twice the bandwidth of schemes like NRZ (Non-Return-to-Zero). This is because the signalling rate is twice the data rate: each bit period carries two signal elements and up to two transitions.

2. Lower Data Rate: Because each bit period must contain a mid-bit transition, Manchester encoding achieves a lower data rate for a given channel bandwidth than schemes such as NRZ. This can be a limitation in applications where high-speed data transmission is required.

Applications of Manchester Encoding
Manchester encoding is commonly used in classic 10 Mbps Ethernet (e.g. 10BASE-T), where it encodes data for transmission over twisted-pair or coaxial cables. It is also used in RFID and other low-rate wireless protocols. Additionally, Manchester-style (biphase) encoding appears in some magnetic recording systems such as tape drives.

In conclusion, Manchester encoding is a widely used encoding scheme in digital communication due to its clock synchronization, error detection, and DC balance properties. While it has some limitations in terms of bandwidth and data rate, it finds applications in various communication systems.

The duration between the Read and the MFC signal is ______
  • a)
    Access time
  • b)
    Latency
  • c)
    Delay
  • d)
    Cycle time
Correct answer is option 'A'. Can you explain this answer?

Jyoti Sengupta answered
Answer: a
Explanation: The time between the issue of a Read request and the MFC (Memory Function Completed) signal that marks its completion is called the memory access time.

The DMA doesn’t make use of the MMU for bulk data transfers. 
  • a)
    True
  • b)
    False
Correct answer is option 'B'. Can you explain this answer?

Here, DMA stands for Direct Memory Access. It is a technique that allows an I/O device to transfer blocks of data directly to or from main memory without involving the CPU in every word transferred. The CPU merely initiates the operation by supplying the starting address and the word count; the DMA controller then carries out the bulk transfer on its own and interrupts the CPU when it is complete.

The statement in the question is false because, in a system with virtual memory, the buffers involved in a bulk transfer are described by logical (virtual) addresses. These addresses must be translated into physical addresses before the transfer can take place, so DMA does make use of the MMU's address translation when moving bulk data. Hence the correct answer is option 'B'.

The need to occupy less board space led to the development of ________ (for large memories).
  • a)
SIMMs
  • b)
DIMMs
  • c)
    SSRAM’s
  • d)
Both SIMMs and DIMMs
Correct answer is option 'D'. Can you explain this answer?

Vaishnavi Kaur answered
Answer: d
Explanation: A SIMM (single in-line memory module) or DIMM (dual in-line memory module) occupies less board space while providing greater memory capacity.

The disadvantage of DRAM over SRAM is/are _______
  • a)
    Lower data storage capacities
  • b)
Higher heat dissipation
  • c)
    The cells are not static
  • d)
    All of the mentioned
Correct answer is option 'C'. Can you explain this answer?

Saanvi Gupta answered
The correct answer is option 'C': The cells in DRAM are not static.

Explanation:

DRAM (Dynamic Random Access Memory) and SRAM (Static Random Access Memory) are two different types of memory technologies used in computer systems. While they both serve the purpose of storing data, they have distinct characteristics and advantages/disadvantages.

1. DRAM stores data in individual memory cells, which are composed of a capacitor and a transistor. The data is stored in the form of electrical charge in the capacitors. However, the charge in the capacitors leaks over time, which means that the data stored in DRAM cells needs to be continuously refreshed. This process is called "dynamic" because the cells require periodic refreshing to maintain their data integrity.

2. On the other hand, SRAM stores data in bistable flip-flops, which are composed of multiple transistors. Unlike DRAM, SRAM does not require refreshing, as the data remains stored as long as power is supplied to the memory cells. This makes SRAM "static" in nature, meaning it does not need periodic refreshing.

Now, let's discuss why the correct answer is option 'C': The cells in DRAM are not static.

- DRAM cells are not static: As mentioned earlier, the charge stored in the capacitors of DRAM cells leaks over time. This means that the data stored in DRAM needs to be continuously refreshed to maintain its integrity. The refreshing process introduces additional complexity and overhead in terms of both hardware and software. On the other hand, SRAM cells are static and do not require periodic refreshing, which simplifies the memory architecture.

- Lower data storage capacities (option 'a'): This option is incorrect because low capacity is not a DRAM weakness. A DRAM cell needs only one transistor and one capacitor, whereas an SRAM cell typically needs six transistors, so DRAM actually achieves higher density and is the cost-effective choice for high-capacity main memory modules.

- Higher heat dissipation (option 'b'): This option is incorrect because heat dissipation is not a specific disadvantage of DRAM over SRAM. Both types of memory technologies can generate heat when they are in use, but the heat dissipation mechanisms depend on the overall system design and cooling solutions rather than the memory technology itself.

In conclusion, the correct disadvantage of DRAM over SRAM is that the cells in DRAM are not static and require periodic refreshing to maintain data integrity.
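
To make the refresh requirement concrete, here is a toy simulation; the leakage rate, read threshold, and refresh interval are invented for illustration and are not real DRAM parameters:

```python
# Toy model of DRAM charge leakage and refresh (illustrative numbers,
# not real device physics or timings).

LEAK_PER_TICK = 0.2      # fraction of charge lost each time step (assumed)
READ_THRESHOLD = 0.5     # minimum charge still readable as a '1' (assumed)

class DramCell:
    def __init__(self, bit):
        self.charge = 1.0 if bit else 0.0

    def tick(self):
        """One time step: the stored charge leaks away."""
        self.charge *= (1.0 - LEAK_PER_TICK)

    def refresh(self):
        """Read the (weakened) value and rewrite it at full strength."""
        self.charge = 1.0 if self.charge >= READ_THRESHOLD else 0.0

    def read(self):
        return 1 if self.charge >= READ_THRESHOLD else 0

cell = DramCell(1)
for t in range(10):
    cell.tick()
    if t % 2 == 1:       # refresh every 2 ticks: the stored 1 survives
        cell.refresh()
print(cell.read())       # 1 -- without the refresh it would decay to 0
```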

The associatively mapped virtual memory makes use of _______
  • a)
    TLB
  • b)
    Page table
  • c)
    Frame table
  • d)
    None of the mentioned
Correct answer is option 'A'. Can you explain this answer?

Swati Patel answered
Associatively Mapped Virtual Memory and TLB

Associatively mapped virtual memory is a memory management technique used in computer systems to efficiently map virtual addresses to physical addresses. It is a combination of the benefits of direct-mapped and fully associative mapping techniques.

What is a TLB?

A Translation Lookaside Buffer (TLB) is a cache-like structure that stores recently accessed virtual-to-physical address translations. It is a hardware component that is used to speed up the memory translation process. The TLB contains a subset of the page table entries (PTEs) that are frequently accessed.

How Associatively Mapped Virtual Memory makes use of TLB?

In an associatively mapped virtual memory system, the TLB plays a crucial role in the address translation process. When a virtual address is generated by the CPU, it first checks the TLB for a matching translation entry. If a match is found, the TLB provides the corresponding physical address directly, avoiding the need to access the page table. This is known as a TLB hit.

If a TLB hit does not occur, the CPU must consult the page table to find the translation. The associatively mapped virtual memory system typically uses a page table, which is a data structure that maps virtual pages to physical frames. The page table contains entries that store the virtual-to-physical address mappings.
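
A minimal sketch of this hit/miss logic, modelling the TLB as a small recently-used map in front of a page table (the table contents, TLB size, and eviction policy are illustrative assumptions):

```python
# Toy TLB in front of a page table: translate a virtual page number (VPN)
# to a physical frame number (PFN). Sizes and contents are illustrative.

from collections import OrderedDict

PAGE_TABLE = {0: 7, 1: 3, 2: 9, 3: 1}   # VPN -> PFN (assumed mappings)
TLB_CAPACITY = 2

tlb = OrderedDict()                       # ordered so the oldest entry is first

def translate(vpn):
    if vpn in tlb:                        # TLB hit: no page-table access needed
        tlb.move_to_end(vpn)              # keep recently used entries fresh
        return tlb[vpn], "hit"
    pfn = PAGE_TABLE[vpn]                 # TLB miss: consult the page table
    if len(tlb) >= TLB_CAPACITY:
        tlb.popitem(last=False)           # evict the least recently used entry
    tlb[vpn] = pfn                        # cache the translation for next time
    return pfn, "miss"

for vpn in [0, 1, 0, 2, 0]:
    print(vpn, translate(vpn))            # repeated accesses to VPN 0 hit
```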

Advantages of Associatively Mapped Virtual Memory with TLB

1. Faster Address Translation: The TLB allows for faster address translation by storing frequently accessed translations. This speeds up memory access and improves overall system performance.

2. Reduced Memory Access: With the use of the TLB, the CPU can avoid accessing the page table for every memory access. This reduces the number of memory accesses required and improves efficiency.

3. Flexibility: Associatively mapped virtual memory provides the flexibility to dynamically manage the TLB entries based on the most frequently accessed pages. This allows for efficient memory management and optimization.

4. Improved Performance: By reducing the number of memory accesses and providing faster address translation, associatively mapped virtual memory with TLB significantly improves system performance and response time.

In conclusion, associatively mapped virtual memory makes use of the TLB to speed up address translation and improve system performance. The TLB stores recently accessed translations, allowing the CPU to directly retrieve the physical address without accessing the page table.

Which register is connected to the MUX?
  • a)
    Y
  • b)
    Z
  • c)
    R0
  • d)
    Temp
Correct answer is option 'A'. Can you explain this answer?

Srishti Yadav answered
Answer: a
Explanation: In the single-bus processor organisation, the MUX selects either the contents of register Y or the constant 4 used to increment the PC as the input to the ALU.

In set-associative technique, the blocks are grouped into ______ sets.
  • a)
    4
  • b)
    8
  • c)
    12
  • d)
    6
Correct answer is option 'D'. Can you explain this answer?

In the set-associative technique, the blocks are grouped into sets. The correct answer here is option 'D': 6 sets. Note that the number of sets is not a property of the technique itself; it follows from the cache configuration, as the worked example below shows.

Set-Associative Technique:
---------------------------
The set-associative technique is a compromise between the direct-mapped and fully-associative cache mapping techniques. In this technique, the cache is divided into a number of sets, and each set contains a certain number of cache blocks. Each memory block is mapped to a specific set using a hash function.

Key Points:
1. The cache is divided into sets.
2. Each set contains a certain number of cache blocks.
3. Each memory block is mapped to a specific set using a hash function.

Explanation:
--------------
To understand the answer, let's consider an example. Suppose we have a cache memory with 24 blocks and we want to use a set-associative technique with 4 blocks per set.

1. We divide the total number of blocks (24) by the number of blocks per set (4) to get the total number of sets.
- Total number of sets = Total number of blocks / Number of blocks per set
- Total number of sets = 24 / 4
- Total number of sets = 6

2. Therefore, in this case, the cache memory will be divided into 6 sets, with each set containing 4 blocks.
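
The same arithmetic as a short sketch; the 24-block, 4-way configuration comes from the worked example above, not from the technique itself:

```python
# Set-associative mapping arithmetic for the worked example above.

total_blocks = 24          # cache blocks (from the example)
blocks_per_set = 4         # associativity, i.e. 4-way (from the example)

num_sets = total_blocks // blocks_per_set
print(num_sets)            # 6

# A memory block is mapped to a set by a simple modulo "hash":
def set_index(block_address):
    return block_address % num_sets

print(set_index(13))       # block 13 -> set 1; it may occupy any of the 4 ways
```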

Conclusion:
------------
In the set-associative technique, the blocks are grouped into sets. The number of sets depends on the total number of blocks and the number of blocks per set. For the configuration assumed in the given question (24 blocks, 4 blocks per set), the correct answer is option 'D': 6 sets.

The algorithm to remove and place new contents into the cache is called _______
  • a)
    Replacement algorithm
  • b)
    Renewal algorithm
  • c)
    Updation
  • d)
    None of the mentioned
Correct answer is option 'A'. Can you explain this answer?

Sarthak Desai answered
Answer: a
Explanation: As the cache gets full, older contents of the cache are swapped out for newer contents; the replacement algorithm decides which blocks to evict.
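
As an illustration, here is a minimal sketch of one common replacement algorithm, LRU (least recently used); it is one possible policy among several (FIFO, random, etc.):

```python
# Minimal LRU replacement sketch: when the cache is full, evict the
# block that was used least recently.

from collections import OrderedDict

class LruCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()          # block id -> data, oldest first

    def access(self, block_id, load_block):
        if block_id in self.blocks:          # hit: mark as most recently used
            self.blocks.move_to_end(block_id)
            return self.blocks[block_id]
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)  # miss + full: evict the LRU block
        data = load_block(block_id)          # fetch from the next memory level
        self.blocks[block_id] = data
        return data

cache = LruCache(capacity=2)
fetch = lambda b: f"data-{b}"
for b in [1, 2, 1, 3]:                       # access pattern
    cache.access(b, fetch)
print(list(cache.blocks))                    # [1, 3] -- block 2 was evicted
```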

A common strategy for improving performance is making various functional units operate in parallel.
  • a)
    True
  • b)
    False
Correct answer is option 'A'. Can you explain this answer?

Parallel Operation of Functional Units

One common strategy for improving performance in computer systems is to make various functional units operate in parallel. This involves breaking down tasks into smaller subtasks and assigning them to different functional units that can work simultaneously.
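
As a software analogy (the question concerns hardware functional units such as the ALU and FPU, but the same decomposition idea applies), here is a sketch using Python's process pool; the chunking scheme and worker count are chosen arbitrarily:

```python
# Software analogy of parallel functional units: independent subtasks are
# handed to separate workers that run simultaneously.

from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]           # 4 independent subtasks
    with ProcessPoolExecutor(max_workers=4) as pool:  # 4 parallel "units"
        total = sum(pool.map(partial_sum, chunks))    # combine partial results
    print(total == sum(data))                         # True
```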

Benefits of Parallel Operation

1. Increased Throughput: By having multiple functional units working in parallel, the overall throughput of the system can be increased. This is particularly beneficial for tasks that can be divided into independent subtasks.

2. Faster Execution: Parallel operation allows tasks to be completed more quickly since multiple functional units are working on different parts of the task simultaneously. This can lead to faster overall execution times for programs.

3. Resource Utilization: By utilizing multiple functional units simultaneously, system resources are used more efficiently. This can help in maximizing the performance of the system without requiring additional resources.

4. Scalability: Parallel operation allows for scalability as more functional units can be added to further increase performance. This makes it easier to adapt to changing workload requirements.

Challenges of Parallel Operation

1. Dependency Management: Ensuring that tasks are divided in a way that minimizes dependencies between them can be a challenge. Dependencies can lead to bottlenecks and reduce the benefits of parallel operation.

2. Overhead: Managing parallel operation incurs some overhead in terms of coordination between functional units. This overhead needs to be carefully managed to avoid diminishing returns.

In conclusion, making various functional units operate in parallel is a common strategy for improving performance in computer systems. It offers benefits such as increased throughput, faster execution, resource utilization, and scalability, but also comes with challenges such as dependency management and overhead.

The only difference between the EEPROM and flash memory is that the latter doesn’t allow bulk data to be written. 
  • a)
    True
  • b)
    False
Correct answer is option 'A'. Can you explain this answer?

Anmol Basu answered
Understanding EEPROM and Flash Memory
Both EEPROM (Electrically Erasable Programmable Read-Only Memory) and flash memory are types of non-volatile storage, meaning they retain data even when power is removed. However, they have fundamental differences in terms of data writing and erasing processes.
Key Differences Between EEPROM and Flash Memory
- Writing Process:
  - EEPROM allows data to be written and erased at the byte level. This means you can modify a single byte without affecting the rest of the memory.
  - Flash memory, on the other hand, is typically written and erased in blocks. To write new data, entire blocks must be erased first, which is less flexible than EEPROM.
- Bulk Writing Capability:
  - The statement that "the only difference between EEPROM and flash memory is that the latter doesn't allow bulk data to be written" simplifies the situation. Flash memory does allow bulk writing; in fact it requires block-level operations, which makes it less efficient than EEPROM for small, random writes but well suited to large transfers.
- Speed and Endurance:
  - Flash memory generally offers faster block write and erase throughput, while EEPROM typically tolerates more erase/write cycles per byte; which is preferable depends on how often and how finely the application updates its data.
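
A toy model of the write-granularity difference described above; the block size and method names are invented for illustration:

```python
# Toy contrast of write granularity (sizes and behaviour are illustrative).

class Eeprom:
    """Byte-level writes: any single byte can be rewritten in place."""
    def __init__(self, size):
        self.mem = bytearray(size)

    def write_byte(self, addr, value):
        self.mem[addr] = value            # no erase of neighbours required

class Flash:
    """Block-level writes: a whole block must be rewritten together."""
    BLOCK = 4                             # bytes per erase block (assumed)

    def __init__(self, size):
        self.mem = bytearray(b"\xff" * size)   # erased flash reads as 0xFF

    def write_byte(self, addr, value):
        start = (addr // self.BLOCK) * self.BLOCK
        block = bytearray(self.mem[start:start + self.BLOCK])
        block[addr - start] = value                 # modify one byte...
        self.mem[start:start + self.BLOCK] = block  # ...rewrite whole block
        # (a real device would erase the block to 0xFF first, then program it)

eeprom, flash = Eeprom(8), Flash(8)
eeprom.write_byte(3, 0xAB)    # touches exactly one byte
flash.write_byte(3, 0xAB)     # forces a whole-block read-modify-write
```
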
Conclusion
In conclusion, while it is true that flash memory does not allow byte-level writing and requires block operations, the assertion that this is the only difference oversimplifies both technologies. Therefore, despite the marked answer, the statement as worded is arguably False.

