
All questions of Embedded Processors & Memory for Computer Science Engineering (CSE) Exam

Which of the following addresses is seen by the memory unit?
  • a)
    logical address
  • b)
    physical address
  • c)
    virtual address
  • d)
    memory address
Correct answer is option 'B'. Can you explain this answer?

Cs Toppers answered
In computing, a physical address (also called a real address, or binary address) is a memory address represented as a binary number on the address bus circuitry, enabling the data bus to access a particular storage cell of main memory or a register of a memory-mapped I/O device. Hence, option (B) is correct.

 Which of the following is more volatile?
  • a)
    SRAM
  • b)
    DRAM
  • c)
    ROM
  • d)
    RAM
Correct answer is option 'B'. Can you explain this answer?

Explanation: DRAM is said to be more volatile because its storage element is a capacitor: the data disappears when the capacitor loses its charge, so the data can be lost even while the device is powered unless the cell is refreshed.

 Which of the following memories has more speed in accessing data?
  • a)
    SRAM
  • b)
    DRAM
  • c)
    EPROM
  • d)
    EEPROM
Correct answer is option 'A'. Can you explain this answer?

Kajal Sharma answered
Explanation: SRAM is faster than DRAM because each cell uses 4 to 6 transistors arranged as a flip-flop, which can be flipped from one binary state to the other, whereas DRAM uses a small capacitor as its storage element.

 Which of the following is serial access memory?
  • a)
    RAM
  • b)
    Flash memory
  • c)
    Shifters
  • d)
    ROM
Correct answer is option 'C'. Can you explain this answer?

Explanation: Memory arrays are basically divided into three types: random access memory, serial access memory, and content-addressable memory. Serial access memory is divided into two types: shifters and queues.

Where is memory address stored in a C program?
  • a)
    stack
  • b)
    pointer
  • c)
    register
  • d)
    accumulator
Correct answer is option 'B'. Can you explain this answer?

Explanation: A memory model is defined by the range of memory addresses accessible to the program. In a C program, for example, a memory address is stored in a pointer.

Which memory organisation is widely used for the parity bit?
  • a)
    by 1 organisation
  • b)
    by 4 organisation
  • c)
    by 8 organisation
  • d)
    by 9 organisation
Correct answer is option 'A'. Can you explain this answer?

Explanation: The use of the by-1 organisation has declined because of wider data path devices, but it is still used for parity bits and was used in SIMM memory.

How many memory locations can be accessed by 8086?
  • a)
    1 M
  • b)
    2 M
  • c)
    3 M
  • d)
    4 M
Correct answer is option 'A'. Can you explain this answer?

Rohan Shah answered
Explanation: The 8086 processor has a 20-bit address bus, hence it can access 2^20 = 1 M memory locations.

What does MESI stand for?
  • a)
    modified exclusive stale invalid
  • b)
    modified exclusive shared invalid
  • c)
    modified exclusive system input
  • d)
    modifies embedded shared invalid
Correct answer is option 'B'. Can you explain this answer?

Megha Dasgupta answered
Explanation: The MESI protocol supports a shared state and is a formal mechanism for controlling cache coherency using bus snooping techniques. MESI refers to the states that cached data can be in. Under the MESI protocol, multiple processors can cache shared data.

Why is SRAM preferred in non-volatile memory applications?
  • a)
    low-cost
  • b)
    high-cost
  • c)
    low power consumption
  • d)
    transistor as a storage element
Correct answer is option 'C'. Can you explain this answer?

Explanation: SRAM retains data as long as it is powered and does not need to be refreshed like DRAM. It is designed for low power consumption, which is why it is used in preference. DRAM is cheaper than SRAM, but it relies on refresh circuitry because its capacitor storage element loses charge.

Which is a vector processor?
  • a)
    Subword parallelism
  • b)
    CISC
  • c)
    Superscalar
  • d)
    VLIW
Correct answer is option 'A'. Can you explain this answer?

Avantika Shah answered
Explanation: Subword parallelism is a form of vector processing. A vector processor is one whose instruction set includes operations on multiple data elements simultaneously.

How many bits are used for storing signed integers?
  • a)
    2
  • b)
    4
  • c)
    8
  • d)
    16
Correct answer is option 'D'. Can you explain this answer?

Atharva Das answered
Explanation: Signed integers in a coprocessor are stored as a 16-bit word, a 32-bit doubleword, or a 64-bit quadword.

Which of the following has a Harvard architecture?
  • a)
    EDSAC
  • b)
    SSEM
  • c)
    PIC
  • d)
    CSIRAC
Correct answer is option 'C'. Can you explain this answer?

Arpita Gupta answered
Explanation: PIC follows the Harvard architecture, in which the external bus architecture consists of separate buses for instructions and data, whereas SSEM, EDSAC and CSIRAC are stored-program architectures.

Which are the 4 general purpose 16-bit registers in Intel 80286?
  • a)
    CS,DS,SS,ES
  • b)
    AX,BX,CX,DX
  • c)
    IP,FL,DI,SI
  • d)
    DI,SI,BP,SP
Correct answer is option 'B'. Can you explain this answer?

Shubham Das answered
Explanation: The Intel 80286 possesses 4 general purpose registers, each 16 bits in size. In addition to the general purpose registers, there are four segment registers, two index registers and a base pointer register.

How many MOSFETs are required for SRAM?
  • a)
    2
  • b)
    4
  • c)
    6
  • d)
    8
Correct answer is option 'C'. Can you explain this answer?

Explanation: Six MOSFETs are required for a typical SRAM cell: four transistors form two cross-coupled inverters that store the bit, and two access transistors control reading and writing.

Which of the following has programmable hardware?
  • a)
    microcontroller
  • b)
    microprocessor
  • c)
    coprocessor
  • d)
    FPGA
Correct answer is option 'D'. Can you explain this answer?

Simran Chavan answered
Programmable Hardware: FPGA

Field Programmable Gate Array (FPGA) is a type of programmable hardware that allows designers to program the hardware using a hardware description language (HDL). It is a chip that can be programmed and reprogrammed to perform any digital function.

FPGA vs Microcontroller/Microprocessor/Coprocessor

Microcontrollers, microprocessors, and coprocessors are all processors that execute instructions sequentially. Their hardware logic is fixed at manufacture: their software can be changed, but they cannot be reconfigured to implement different hardware functions.

On the other hand, an FPGA can be reprogrammed to perform a different function without changing the hardware. This makes it a more flexible and versatile solution than microcontrollers, microprocessors or coprocessors.

Advantages of FPGA

FPGA has several advantages over traditional processors, including:

1. Flexibility: FPGAs can be reprogrammed to perform different functions, making them more flexible than traditional processors.

2. High Performance: FPGAs can be optimized for specific tasks, making them faster and more efficient than traditional processors.

3. Low Power Consumption: FPGAs can be programmed to perform only the required tasks, resulting in lower power consumption.

4. Parallel Processing: FPGAs can perform multiple tasks in parallel, making them ideal for high-performance computing applications.

Conclusion

In conclusion, FPGAs are a type of programmable hardware that offers flexibility, high performance, low power consumption, and parallel processing capabilities. They are a versatile solution that can be used in a wide range of applications, including aerospace, defense, automotive, and consumer electronics.

 Which statement is true for a cache memory?
  • a)
    memory unit which communicates directly with the CPU
  • b)
    provides backup storage
  • c)
    a very high-speed memory to increase the speed of the processor
  • d)
    secondary storage
Correct answer is option 'C'. Can you explain this answer?

Explanation: RAM is the primary storage, which communicates directly with the CPU. Disk drives provide backup and secondary storage, and the cache memory is a small, very high-speed memory that increases the effective speed of the processor.

What is approximate data access time of SRAM?
  • a)
    4ns
  • b)
    10ns
  • c)
    2ns
  • d)
    60ns
Correct answer is option 'A'. Can you explain this answer?

Nandini Khanna answered
Explanation: SRAM accesses data in approximately 4 ns because of its flip-flop arrangement of transistors, whereas the data access time in DRAM is approximately 60 ns since it uses a single capacitor for one-bit storage.

Which of the following architectures is less complex?
  • a)
    SPARC
  • b)
MC68020
  • c)
    MC68030
  • d)
    8086
Correct answer is option 'A'. Can you explain this answer?

Arpita Gupta answered
Explanation: SPARC has a RISC architecture with a simple instruction set, whereas the MC68020, MC68030 and 8086 have CISC architectures, which are more complex than RISC.

 Which memory storage is widely used in PCs and Embedded Systems?
  • a)
    SRAM
  • b)
    DRAM
  • c)
    Flash memory
  • d)
    EEPROM
Correct answer is option 'B'. Can you explain this answer?

Naina Shah answered
Explanation: DRAM is used in PCs and Embedded systems because of its low cost. SRAM, flash memory and EEPROM are more costly than DRAM.

Which of the following is the biggest challenge in the cache memory design?
  • a)
    delay
  • b)
    size
  • c)
    coherency
  • d)
    memory access
Correct answer is option 'C'. Can you explain this answer?

Rohan Patel answered
The biggest challenge in cache memory design is coherency.

Coherency refers to the consistency of data stored in different levels of cache memory and main memory. In a multi-level cache hierarchy system, different levels of cache store copies of the same data. When one level of cache modifies a data item, it needs to ensure that all other levels of cache and the main memory are updated with the latest value of the data.

Coherency is critical because if different levels of cache have inconsistent copies of the same data, it can lead to incorrect program execution and produce incorrect results. Maintaining coherency in a cache hierarchy is a challenging task due to various factors, such as:

1. Cache Snooping: Cache snooping is a technique used to detect changes made to data in one level of cache by monitoring the bus transactions. When a cache line is modified in one cache, the snooping mechanism notifies all other caches to invalidate or update their copies of the same data. Implementing efficient cache snooping mechanisms to maintain coherency requires careful design considerations.

2. Cache Invalidation and Update: When a cache line is modified in one cache, it needs to be invalidated or updated in all other caches and the main memory. This requires efficient protocols for invalidating and updating cache lines across multiple levels of cache. The design of these protocols must ensure minimal overhead in terms of latency and bandwidth.

3. Cache Coherency Protocols: Cache coherency protocols define the rules and mechanisms for maintaining coherency in a cache hierarchy. There are various protocols such as MESI (Modified, Exclusive, Shared, Invalid), MOESI (Modified, Owned, Exclusive, Shared, Invalid), and MOESIF (Modified, Owned, Exclusive, Shared, Invalid, Forward) that define the behavior of caches in terms of invalidating, updating, and sharing data. Choosing and implementing the appropriate protocol for a particular cache hierarchy is a complex task.

4. Performance Impact: Maintaining coherency introduces additional overhead in terms of cache invalidations, updates, and bus transactions. This can impact the overall performance of the system. Designing cache coherence mechanisms that minimize this overhead and optimize performance is a major challenge in cache memory design.

In conclusion, coherency is the biggest challenge in cache memory design as it requires careful consideration of cache snooping, cache invalidation and update, cache coherency protocols, and performance impact. Ensuring coherency across multiple levels of cache and main memory is crucial for the correct and efficient operation of a cache hierarchy system.

Name a processor which is used in digital audio appliances.
  • a)
    8086
  • b)
    Motorola DSP56000
  • c)
    80486
  • d)
    8087
Correct answer is option 'B'. Can you explain this answer?

Puja Bajaj answered
Explanation: The Motorola DSP56000 is a powerful digital signal processor used in digital audio applications, with capabilities such as noise reduction and multi-band graphic equalisation, whereas the 8087 is a coprocessor and the 80486 and 8086 are microprocessors.

 Which of the following is the main factor which determines the memory capacity?
  • a)
    number of transistors
  • b)
    number of capacitors
  • c)
    size of the transistor
  • d)
    size of the capacitor
Correct answer is option 'A'. Can you explain this answer?

Arindam Goyal answered
The main factor that determines the memory capacity is the number of transistors.

Explanation:
- Memory capacity refers to the amount of data that can be stored in a memory device, such as a computer's RAM or a solid-state drive.
- Transistors are the fundamental building blocks of electronic devices, including memory chips. They are responsible for storing and manipulating data in a digital format.
- The number of transistors in a memory chip directly affects its capacity to store data. Each transistor represents a binary digit, or a bit, which can be either a 0 or a 1.
- A memory chip with more transistors can represent a larger number of bits, allowing for greater storage capacity. This is because each additional transistor adds an additional bit of storage.
- For example, a memory chip with 1 million transistors can store 1 million bits or approximately 125 kilobytes of data.
- Similarly, a memory chip with 1 billion transistors can store 1 billion bits or approximately 125 megabytes of data.
- As technology advances, the number of transistors that can be packed onto a single memory chip increases, leading to larger memory capacities.
- This trend is commonly referred to as Moore's Law, which states that the number of transistors on a chip doubles approximately every two years.
- Hence, the number of transistors is the main factor that determines the memory capacity of a memory chip, as it directly influences the amount of data that can be stored.

 Which of the following provides stability to the multitasking system?
  • a)
    memory
  • b)
    DRAM
  • c)
    SRAM
  • d)
    Memory partitioning
Correct answer is option 'D'. Can you explain this answer?

Memory partitioning provides stability to the multitasking system.



Introduction
In a multitasking system, multiple tasks or processes run concurrently, sharing the system resources such as CPU, memory, and I/O devices. To ensure stability and efficient execution, the system needs to allocate and manage these resources effectively. One important aspect of resource management is memory partitioning.



Memory Partitioning
Memory partitioning is a technique used in operating systems to divide the available physical memory into multiple partitions or segments. Each partition is allocated to a specific task or process, providing a dedicated memory space for its execution. The primary goal of memory partitioning is to prevent processes from interfering with each other's memory space, ensuring stability and protection.



Stability in Multitasking
When multiple tasks are running simultaneously in a multitasking system, there is a risk of one task accessing or modifying the memory space of another task. This can lead to data corruption, crashes, or other undesirable consequences. Memory partitioning helps maintain stability by providing isolated memory spaces for each task. This isolation ensures that a task can only access its allocated memory partition, preventing interference with other tasks.



Advantages of Memory Partitioning
1. Memory Protection: Memory partitioning enables the operating system to enforce strict memory protection mechanisms. Each task can only access its own memory partition, preventing unauthorized access or modifications to other tasks' memory.
2. Resource Allocation: By dividing the available memory into partitions, the system can allocate memory resources more efficiently. Each task is assigned a specific amount of memory based on its requirements, optimizing resource utilization.
3. Isolation: Memory partitioning provides isolation between tasks, ensuring that a failure or crash in one task does not affect the stability or execution of other tasks. This enhances system reliability.
4. Efficient Context Switching: Context switching, the process of switching between different tasks, becomes more efficient with memory partitioning. Since each task has its own dedicated memory space, the system can quickly save and restore the state of a task without affecting other tasks.



Conclusion
Memory partitioning plays a crucial role in providing stability to multitasking systems. It ensures that each task has its own isolated memory space, preventing interference and maintaining system stability. Memory partitioning also offers advantages such as memory protection, efficient resource allocation, isolation, and efficient context switching.

Which interfacing method lowers the speed of the processor?
  • a)
    basic DRAM interface
  • b)
    page mode interface
  • c)
    page interleaving
  • d)
    burst mode interface 
Correct answer is option 'A'. Can you explain this answer?

Dipika Chavan answered
Basic DRAM Interface
The basic DRAM interface is an older method of interfacing with dynamic random access memory (DRAM) chips. It is characterized by its simplicity and low cost, but it also has limitations that can impact the speed of the processor.

Page Mode Interface
The page mode interface is a more advanced method of interfacing with DRAM chips. It allows for faster access to data by allowing multiple accesses to the same row of memory without having to specify the row address each time. This reduces the overhead and improves the overall speed of the processor.

Page Interleaving
Page interleaving is a technique used to improve memory access performance by interleaving the pages of memory across multiple memory modules. It allows for parallel access to memory, which can increase the overall bandwidth and speed of the system.

Burst Mode Interface
The burst mode interface is another advanced method of interfacing with DRAM chips. It allows for consecutive memory accesses to be performed without having to specify the address for each access. This reduces the overhead and improves the overall speed of the processor.

Explanation
The basic DRAM interface is the only option among the given choices that can lower the speed of the processor. This is because it lacks the advanced features and optimizations present in the other options.

The basic DRAM interface requires the processor to specify the row and column addresses for each memory access, which can introduce additional latency and overhead. This can result in slower overall performance compared to the more advanced methods.

The other options, such as page mode interface, page interleaving, and burst mode interface, are designed to improve the speed and efficiency of memory access. They reduce the number of address specifications required and allow for faster consecutive accesses or parallel access to memory.

In conclusion, the basic DRAM interface is the least efficient method among the given options and can potentially lower the speed of the processor due to its higher latency and overhead.

 How are negative numbers stored in a coprocessor?
  • a)
    1’s complement
  • b)
    2’s complement
  • c)
    decimal
  • d)
    gray
Correct answer is option 'B'. Can you explain this answer?

Ujwal Roy answered
Explanation: In a coprocessor, negative numbers are stored in 2’s complement with its leftmost sign bit of 1 whereas positive numbers are stored in the form of true value with its leftmost sign bit of 0.

 Which are the two main types of processor connection to the motherboard?
  • a)
    sockets and slots
  • b)
    sockets and pins
  • c)
    slots and pins
  • d)
    pins and ports
Correct answer is option 'A'. Can you explain this answer?

Kajal Sharma answered
Explanation: A socket-type processor has contacts on the bottom surface of the chip and connects to the motherboard through a Zero Insertion Force (ZIF) socket; the Intel 486 is an example of this type of connection. A slot-type processor is soldered onto a card that connects to the motherboard through a slot; the Pentium III is an example of a slot connection.

Which of the processors has an internal coprocessor?
  • a)
    8087
  • b)
    80287
  • c)
    80387
  • d)
    80486DX
Correct answer is option 'D'. Can you explain this answer?

Rajesh Malik answered
Explanation: The 8087 is an external IC designed to operate with the 8088/8086 processors, but the 80486DX has an on-chip coprocessor, that is, it does not require an extra integrated chip for floating-point arithmetic.

Which are the 4 segment registers in Intel 80286?
  • a)
    AX,BX,CX,DX
  • b)
CS,DS,SS,ES
  • c)
    SP,DI,SI,BP
  • d)
    IP,FL,SI,DI
Correct answer is option 'B'. Can you explain this answer?

Explanation: The Intel 80286 possesses 4 general purpose registers, 4 segment registers, 2 index registers and a base pointer register.

 Which of the following can transfer up to 1.6 billion bytes per second?
  • a)
    DRAM
  • b)
    RDRAM
  • c)
    EDO RAM
  • d)
    SDRAM
Correct answer is option 'B'. Can you explain this answer?

Nishanth Roy answered
Explanation: The Rambus RAM can transfer up to 1.6 billion bytes per second. It comprises a RAM controller, a bus that connects the microprocessor and the device, and a random access memory.

Which of the following is a combination of several processors on a single chip?
  • a)
    Multicore architecture
  • b)
    RISC architecture
  • c)
    CISC architecture
  • d)
    Subword parallelism
Correct answer is option 'A'. Can you explain this answer?

Megha Yadav answered
Explanation: A multicore machine combines many processors on a single chip. A heterogeneous multicore machine additionally combines a variety of processor types on a single chip.

Which of the following instructions supports parallel execution?
  • a)
    VLIW
  • b)
    TTA
  • c)
    ALU operation
  • d)
    Test-and-set instructions
Correct answer is option 'A'. Can you explain this answer?

Swati Kaur answered
Parallel Execution in Computer Systems

Parallel execution refers to the ability of a computer system to perform multiple tasks or instructions simultaneously. This can significantly improve the performance and efficiency of the system by utilizing the available resources effectively.

VLIW (Very Long Instruction Word)
- VLIW is an instruction set architecture that supports parallel execution.
- It allows multiple instructions to be executed simultaneously by packing them into a single long instruction word.
- The instructions in the long word are independent and can be executed in parallel by different functional units of the processor.
- VLIW processors typically have multiple functional units (ALUs, FPUs, etc.) that can execute instructions in parallel.
- This parallelism is achieved by statically scheduling the instructions at compile time, rather than dynamically at runtime.

TTA (Transport Triggered Architecture)
- TTA is a type of microprocessor architecture that aims to exploit instruction-level parallelism.
- It uses a transport-triggered approach, where instructions are transported to functional units based on their data dependencies.
- While TTA can exploit parallelism within individual instructions, it does not inherently support parallel execution of multiple instructions.

ALU (Arithmetic Logic Unit) Operation
- ALU operations refer to arithmetic and logical operations performed by the ALU.
- While these operations can be executed in parallel within a single instruction, they do not inherently support parallel execution of multiple instructions.

Test-and-Set Instructions
- Test-and-set instructions are used for synchronization and mutual exclusion in concurrent programming.
- They are typically used to implement locks and ensure that only one thread or process can access a shared resource at a time.
- While these instructions are important for concurrency control, they do not inherently support parallel execution of multiple instructions.

Conclusion
Among the given options, only VLIW instructions support parallel execution. VLIW allows multiple instructions to be packed into a single long instruction word and executed simultaneously by different functional units of the processor. This parallelism is achieved by statically scheduling the instructions at compile time. The other options, such as TTA, ALU operations, and test-and-set instructions, do not inherently support parallel execution of multiple instructions.

Which cache mapping has a sequential execution?
  • a)
    direct mapping
  • b)
    fully associative
  • c)
    n way set associative
  • d)
    burst fill
Correct answer is option 'D'. Can you explain this answer?

Subham Menon answered
Explanation: The burst fill mode of cache mapping has a sequential nature of instruction execution and data access: instruction fetch and execution access sequential memory locations until a jump or branch instruction occurs. This kind of cache mapping is seen in the MC68030 processor.

 What does VRAM stand for?
  • a)
    video RAM
  • b)
    verilog RAM
  • c)
    virtual RAM
  • d)
    volatile RAM
Correct answer is option 'A'. Can you explain this answer?

Tushar Unni answered
Explanation: Video RAM is a derivative of DRAM. It functions like DRAM but has additional functions that let the video hardware access data for creating the display.

 How does the memory management unit provide the protection?
  • a)
    disables the address translation
  • b)
    enables the address translation
  • c)
    wait for the address translation
  • d)
    remains unchanged
Correct answer is option 'A'. Can you explain this answer?

Suyash Chauhan answered
Explanation: The memory management unit can be used as a protection unit by disabling the address translation that is, the physical address and the logical address are the same.

How many regions are created by the memory range in the ARM architecture?
  • a)
    4
  • b)
    8
  • c)
    16
  • d)
    32
Correct answer is option 'B'. Can you explain this answer?

Shounak Yadav answered
Explanation: The memory protection unit in the ARM architecture divides memory into eight separate regions. Each region can range in size from 4 Kbytes up to 4 Gbytes.

 How many types of tables are used by the processor in the protected mode?
  • a)
    1
  • b)
    2
  • c)
    3
  • d)
    4
Correct answer is option 'B'. Can you explain this answer?

Nishanth Mehta answered
Explanation: There are two types of descriptor table used by the processor in the protected mode which are GDT and LDT, that is global descriptor table and local descriptor table respectively.

Which of the following has an 8 KB page?
  • a)
    DEC Alpha
  • b)
    ARM
  • c)
    VAX
  • d)
    PowerPC
Correct answer is option 'A'. Can you explain this answer?

Nishanth Mehta answered
Explanation: DEC Alpha divides its memory into 8 KB pages, whereas VAX uses small pages of only 512 bytes. PowerPC pages are normally 4 KB, and ARM supports 4 KB and 64 KB pages.

Which of the following processors uses Harvard architecture?
  • a)
    TEXAS TMS320
  • b)
    80386
  • c)
    80286
  • d)
    8086
Correct answer is option 'A'. Can you explain this answer?

Explanation: The TMS320 is a digital signal processor used for small, highly optimised audio and video signal-processing tasks. It possesses multiple parallel data buses.

How many comparators are present in the direct-mapped cache?
  • a)
    3
  • b)
    2
  • c)
    1
  • d)
    4
Correct answer is option 'C'. Can you explain this answer?

Sarthak Desai answered
Explanation: A direct-mapped cache has only one comparator, so for any address only one cache location can possibly hold the data, irrespective of the cache size.

What are the basic elements required for cache operation?
  • a)
    memory array, multivibrator, counter
  • b)
    memory array, comparator, counter
  • c)
    memory array, trigger circuit, a comparator
  • d)
    memory array, comparator, CPU
Correct answer is option 'B'. Can you explain this answer?

Sarthak Desai answered
Explanation: Cache operation is based on the address tag: the processor generates an address which is presented to the cache, and the cache stores its data together with an address tag. The tag is compared with the address; if they match, a cache hit occurs and the data is passed to the processor, otherwise the access is a miss. So the basic elements required are a memory array, a comparator, and a counter.

How can gate delays be reduced?
  • a)
    synchronous memory
  • b)
    asynchronous memory
  • c)
    pseudo asynchronous memory
  • d)
    symmetrical memory
Correct answer is option 'A'. Can you explain this answer?

Gaurav Verma answered
Explanation: The burst interface is associated with SRAM; for efficiency, synchronous memory uses on-chip latches to reduce gate delays.

 Which mechanism splits the external memory storage into memory pages?
  • a)
    index mechanism
  • b)
    burst mode
  • c)
    distributive mode
  • d)
    a software mechanism
Correct answer is option 'A'. Can you explain this answer?

Bijoy Sharma answered
Explanation: The index mechanism splits the external memory storage into a series of memory pages, each the same size as the cache. Each page is mapped onto the cache so that every location in a page has its own location in the cache.

Which of the following is more quickly accessed?
  • a)
    RAM
  • b)
    Cache memory
  • c)
    DRAM
  • d)
    SRAM
Correct answer is option 'B'. Can you explain this answer?

Sagar Saha answered
Explanation: The cache memory is a small random access memory which is faster than normal RAM and has a direct connection with the CPU; otherwise, a separate bus would be needed for accessing data. The processor checks whether a copy of the required data is present in the cache memory and, if so, accesses the data from the cache.

Chapter doubts & questions for Embedded Processors & Memory - Embedded Systems (Web) 2023 is part of Computer Science Engineering (CSE) exam preparation. The chapters have been prepared according to the Computer Science Engineering (CSE) exam syllabus. The Chapter doubts & questions, notes, tests & MCQs are made for Computer Science Engineering (CSE) 2023 Exam. Find important definitions, questions, notes, meanings, examples, exercises, MCQs and online tests here.
