
All questions of Embedded Processors & Memory for Computer Science Engineering (CSE) Exam

Which of the following addresses is seen by the memory unit?
  • a)
    logical address
  • b)
    physical address
  • c)
    virtual address
  • d)
    memory address
Correct answer is option 'B'. Can you explain this answer?

Yash Patel answered
In computing, a physical address (also real address or binary address) is a memory address represented as a binary number on the address bus circuitry, enabling the data bus to access a particular storage cell of main memory or a register of a memory-mapped I/O device. Hence, option (B) is correct.

 Which of the following is serial access memory?
  • a)
    RAM
  • b)
    Flash memory
  • c)
    Shifters
  • d)
    ROM
Correct answer is option 'C'. Can you explain this answer?

Ravi Singh answered
Explanation: Memory arrays are basically divided into three types: random access memory, serial access memory, and content-addressable memory. Serial access memory is divided into two types: shifters and queues.

 Which of the following memories has more speed in accessing data?
  • a)
    SRAM
  • b)
    DRAM
  • c)
    EPROM
  • d)
    EEPROM
Correct answer is option 'A'. Can you explain this answer?

Kajal Sharma answered
Explanation: SRAM is faster than DRAM because it uses 4 to 6 transistors arranged as a flip-flop, which can be flipped from one binary state to another, whereas DRAM uses a small capacitor as its storage element.

Why is SRAM preferred in non-volatile (battery-backed) memory applications?
  • a)
    low-cost
  • b)
    high-cost
  • c)
    low power consumption
  • d)
    transistor as a storage element
Correct answer is option 'C'. Can you explain this answer?

Ravi Singh answered
Explanation: SRAM retains data as long as it is powered and does not need to be refreshed like DRAM, so it is designed for low power consumption and is used in preference for battery-backed applications. DRAM is cheaper than SRAM, but it requires refresh circuitry because its storage element, a capacitor, continually loses charge.

 Which statement is true for a cache memory?
  • a)
    memory unit which communicates directly with the CPU
  • b)
    provides backup storage
  • c)
    a very high-speed memory to increase the speed of the processor
  • d)
    secondary storage
Correct answer is option 'C'. Can you explain this answer?

Ravi Singh answered
Explanation: RAM is the primary storage, which communicates directly with the CPU. Disk drives provide backup and secondary storage, while the cache is a small, very high-speed memory that increases the effective speed of the processor.

Where is a memory address stored in a C program?
  • a)
    stack
  • b)
    pointer
  • c)
    register
  • d)
    accumulator
Correct answer is option 'B'. Can you explain this answer?

Ravi Singh answered
Explanation: A memory model is defined by the range of memory addresses accessible to a program. In a C program, for example, a memory address is stored in a pointer.

 Which of the following is more volatile?
  • a)
    SRAM
  • b)
    DRAM
  • c)
    ROM
  • d)
    RAM
Correct answer is option 'B'. Can you explain this answer?

Yash Patel answered
Explanation: DRAM is said to be more volatile because its storage element is a capacitor: the data disappears when the capacitor loses its charge, so even while the device is powered the data is lost unless it is periodically refreshed.

How many MOSFETs are required for SRAM?
  • a)
    2
  • b)
    4
  • c)
    6
  • d)
    8
Correct answer is option 'C'. Can you explain this answer?

Yash Patel answered
Explanation: Six MOSFETs are required for a typical SRAM cell. Four transistors form two cross-coupled inverters that store the bit, and two access transistors connect the cell to the bit lines.

Which of the following is more quickly accessed?
  • a)
    RAM
  • b)
    Cache memory
  • c)
    DRAM
  • d)
    SRAM
Correct answer is option 'B'. Can you explain this answer?

Sagar Saha answered
Explanation: Cache memory is a small random access memory that is faster than normal RAM. It has a direct connection with the CPU or a dedicated bus for accessing data. The processor first checks whether a copy of the required data is present in the cache; if so, it accesses the data from the cache.

How many ways are possible for allocating memory to the modular blocks?
  • a)
    1
  • b)
    2
  • c)
    3
  • d)
    4
Correct answer is option 'C'. Can you explain this answer?

Rishika Pillai answered
Ways of Allocating Memory to Modular Blocks
There are 3 possible ways for allocating memory to the modular blocks:

1. Equal Allocation:
- In this method, memory is divided equally among all the modular blocks. For example, if there are 4 modular blocks and 16 units of memory, each block will be allocated 4 units of memory.

2. Proportional Allocation:
- Memory is allocated to the modular blocks based on their size or requirements. Larger blocks may be allocated more memory compared to smaller blocks. This method ensures efficient utilization of memory resources.

3. Custom Allocation:
- This method involves allocating memory based on specific requirements or constraints. It may involve a combination of equal and proportional allocation, depending on the needs of the system and the modular blocks.
Each of these allocation methods has its own advantages and disadvantages, and the choice of allocation method will depend on the specific requirements of the system and the modular blocks being used.

 Which of the following is the main factor which determines the memory capacity?
  • a)
    number of transistors
  • b)
    number of capacitors
  • c)
    size of the transistor
  • d)
    size of the capacitor
Correct answer is option 'A'. Can you explain this answer?

Arindam Goyal answered
The main factor that determines the memory capacity is the number of transistors.

Explanation:
- Memory capacity refers to the amount of data that can be stored in a memory device, such as a computer's RAM or a solid-state drive.
- Transistors are the fundamental building blocks of electronic devices, including memory chips. They are responsible for storing and manipulating data in a digital format.
- The number of transistors in a memory chip directly affects its capacity to store data. In the simplest model, roughly one transistor stores one binary digit, or bit, which can be either a 0 or a 1 (an SRAM cell needs several transistors per bit, so real capacities vary with the cell design).
- A memory chip with more transistors can therefore represent a larger number of bits, allowing for greater storage capacity.
- For example, a memory chip with 1 million transistors can store 1 million bits or approximately 125 kilobytes of data.
- Similarly, a memory chip with 1 billion transistors can store 1 billion bits or approximately 125 megabytes of data.
- As technology advances, the number of transistors that can be packed onto a single memory chip increases, leading to larger memory capacities.
- This trend is commonly referred to as Moore's Law, which states that the number of transistors on a chip doubles approximately every two years.
- Hence, the number of transistors is the main factor that determines the memory capacity of a memory chip, as it directly influences the amount of data that can be stored.

 Which of the following can transfer up to 1.6 billion bytes per second?
  • a)
    DRAM
  • b)
    RDRAM
  • c)
    EDO RAM
  • d)
    SDRAM
Correct answer is option 'B'. Can you explain this answer?

Nishanth Roy answered
Explanation: Rambus RAM (RDRAM) can transfer up to 1.6 billion bytes per second. It comprises a RAM controller, a bus connecting the microprocessor and the device, and the random access memory itself.

Which of the following modes offers segmentation in the memory?
  • a)
    virtual mode
  • b)
    real mode
  • c)
    protected mode
  • d)
    memory mode 
Correct answer is option 'C'. Can you explain this answer?

Shail Kulkarni answered
Protected Mode:
Protected mode in a computer system provides segmentation in memory. Here's an explanation of how protected mode offers segmentation in memory:
- Segmentation:
In protected mode, memory segmentation allows the operating system to isolate different processes and users from each other. Segmentation divides memory into segments, each with its own access rights and permissions. This helps in preventing one process from accessing or modifying the memory of another process.
- Memory Protection:
Protected mode also provides memory protection by using a memory management unit (MMU) to enforce access control and prevent unauthorized access to memory locations. This ensures that each process can only access the memory segments assigned to it, enhancing system security and stability.
- Virtual Memory:
Protected mode also supports virtual memory, which allows the operating system to use disk space as an extension of physical memory. This feature enables efficient memory management by swapping data between physical memory and disk storage, optimizing overall system performance.
In conclusion, protected mode is a mode of operation that offers segmentation in memory by dividing memory into segments with different access rights and permissions. It also provides memory protection mechanisms and supports virtual memory, enhancing system security, stability, and performance.

Which of the architectures is made to speed up the processor?
  • a)
    CISC
  • b)
    RISC
  • c)
    program stored
  • d)
    von Neumann
Correct answer is option 'B'. Can you explain this answer?

Ananya Shah answered
Introduction:
The architecture that is specifically designed to speed up the processor is the Reduced Instruction Set Computing (RISC) architecture. RISC architecture focuses on simplicity and efficiency by using a smaller set of instructions that are executed in a single clock cycle. This allows for faster processing and improved performance compared to other architectures.

Explanation:
RISC architecture is designed to optimize the execution of instructions, resulting in faster processing. Here's a detailed explanation of why RISC architecture is made to speed up the processor:

1. Simplicity and Reduced Instruction Set:
RISC architecture follows the principle of using a reduced instruction set. It uses a small set of simple and highly optimized instructions, typically around 100 to 200 instructions. By having a smaller instruction set, the processor can decode and execute instructions more quickly.

2. Single Clock Cycle Execution:
RISC architecture emphasizes executing instructions in a single clock cycle. Each instruction is designed to be completed in a fixed number of clock cycles, typically one clock cycle per instruction. This reduces the number of cycles required to execute complex instructions, resulting in faster processing speed.

3. Register-Based Architecture:
RISC architecture uses a large number of general-purpose registers. These registers are directly accessible by the instructions, allowing for faster data access and manipulation. With more registers available, the processor can avoid accessing memory frequently, which is slower compared to register operations.

4. Pipelining:
RISC architecture supports pipelining, which is a technique that allows multiple instructions to be executed simultaneously in different stages of the pipeline. Pipelining divides the instruction execution into several stages, such as fetch, decode, execute, and write back. This overlapping of instructions enables better utilization of the processor's resources and improves overall performance.

5. Reduced Complexity:
RISC architecture simplifies the processor design by reducing the complexity of the instructions and focusing on a streamlined execution process. This reduced complexity leads to a smaller chip size, lower power consumption, and improved efficiency.

Conclusion:
In summary, the RISC architecture is specifically designed to speed up the processor by using a reduced instruction set, executing instructions in a single clock cycle, utilizing register-based operations, supporting pipelining, and reducing the overall complexity. These design choices result in faster processing, improved performance, and better utilization of the processor's resources.

 How many possibilities of mapping does a direct mapped cache have?
  • a)
    1
  • b)
    2
  • c)
    3
  • d)
    4
Correct answer is option 'A'. Can you explain this answer?

Sarthak Desai answered
Explanation: A direct-mapped cache has only one possible location for each block of data, whereas a two-way set-associative cache has two possibilities, a three-way cache has three, and so on. It is also known as a one-way set-associative cache.

How many data lines does a 256×4 memory have?
  • a)
    256
  • b)
    8
  • c)
    4
  • d)
    32
Correct answer is option 'C'. Can you explain this answer?

Rohan Shah answered
Explanation: A 256×4 memory has four data lines. These different organisations of memory become apparent when upgrading memory, and they determine how many chips are needed.

 In which scheme does the data write via a buffer to the main memory?
  • a)
    write buffer
  • b)
    write-back
  • c)
    write-through
  • d)
    no caching of the write cycle
Correct answer is option 'A'. Can you explain this answer?

Understanding Write Buffer
The process of writing data to main memory can be optimized through various caching techniques. One such method is the use of a write buffer, which facilitates efficient data transfer.
What is a Write Buffer?
- A write buffer is a temporary storage area that holds data before it is written to the main memory.
- This mechanism allows the CPU to continue processing tasks without waiting for the main memory write operation to complete.
How Does It Work?
- When the CPU needs to write data, it first sends the data to the write buffer.
- The CPU can then proceed with other operations, thus improving overall system performance.
- The data in the buffer is eventually written to main memory at an appropriate time, which may be when the buffer is full or when the CPU is idle.
Advantages of Using Write Buffer
- Increased Efficiency: By offloading the write operation, the CPU can perform additional tasks, enhancing throughput.
- Reduced Latency: The time taken for write operations to complete is masked, leading to a more responsive system.
Comparison with Other Schemes
- Write-Back: In this scheme, data is written to the cache and only updated in memory later, not using a buffer.
- Write-Through: Every write operation is immediately reflected in the main memory, which can slow down processing.
- No Caching: In this scenario, write operations are directly handled with no intermediate storage, leading to inefficiencies.
In conclusion, the write buffer scheme significantly optimizes the data writing process, making it the correct answer to the question presented.

What is the approximate data access time of SRAM?
  • a)
    4ns
  • b)
    10ns
  • c)
    2ns
  • d)
    60ns
Correct answer is option 'A'. Can you explain this answer?

Nandini Khanna answered
Explanation: SRAM accesses data in approximately 4 ns because of its flip-flop arrangement of transistors, whereas the data access time of DRAM is approximately 60 ns, since it uses a single capacitor for one bit of storage.

Which is the term used to refer to the order of bytes?
  • a)
    endianness
  • b)
    memory organisation
  • c)
    bit
  • d)
    register
Correct answer is option 'A'. Can you explain this answer?

Explanation: Endianness defines the order of bytes within a multi-byte value: big-endian stores the most significant byte at the lowest address, while little-endian stores the least significant byte at the lowest address.

Which company developed SPARC?
  • a)
    intel
  • b)
    IBM
  • c)
    Motorola
  • d)
    sun microsystem
Correct answer is option 'D'. Can you explain this answer?

Arka Dasgupta answered
Explanation: SPARC was developed by Sun Microsystems, but implementations were also produced by other manufacturers, such as Texas Instruments and Fujitsu.

Which of the following is necessary in the address translation in the protected mode?
  • a)
    descriptor
  • b)
    paging
  • c)
    segmentation
  • d)
    memory
Correct answer is option 'A'. Can you explain this answer?

Gaurav Verma answered
Explanation: Address translation from a logical address to a physical address partitions main memory into blocks, which is called segmentation. Each block has a descriptor, held in a descriptor table, so the descriptor records the size (and base) of every block.

Which factor of a filter is determined by the speed of operation in a digital signal processor?
  • a)
    attenuation constant
  • b)
    frequency
  • c)
    bandwidth
  • d)
    phase
Correct answer is option 'C'. Can you explain this answer?

Factors of filters determined by the speed of the operation in a digital signal processor:
Factors such as the attenuation constant, frequency, bandwidth, and phase all characterise a filter. Among these, the bandwidth is the one determined by the speed of operation of the digital signal processor.

Bandwidth:
- Bandwidth refers to the range of frequencies within which a signal can be processed by a filter.
- In digital signal processing, the bandwidth of a filter determines the speed at which it can process signals.
- A wider bandwidth allows for faster processing of signals, while a narrower bandwidth may result in slower operation.
- The bandwidth of a filter is directly related to the speed of operation in a digital signal processor, as it dictates the range of frequencies that can be handled efficiently.
By understanding the role of bandwidth in determining the speed of operation in a digital signal processor, engineers can optimize filter designs to meet the required processing speeds for various applications.

Which of the following is a common cache?
  • a)
    DIMM
  • b)
    SIMM
  • c)
    TLB
  • d)
    Cache
Correct answer is option 'C'. Can you explain this answer?

Bijoy Sharma answered
Explanation: The translation lookaside buffer (TLB) is a common cache seen in almost all CPUs and desktops, as part of the memory management unit. It improves the virtual address translation speed.

Which factor determines the cache performance?
  • a)
    software
  • b)
    peripheral
  • c)
    input
  • d)
    output
Correct answer is option 'A'. Can you explain this answer?


Factors determining cache performance:

Cache performance is determined by various factors, but one of the most significant factors is the software.

Software:
- The software running on a system plays a crucial role in determining cache performance.
- The way software is designed, the algorithms it uses to access data, and how efficiently it utilizes the cache memory can have a significant impact on cache performance.
- Optimizing software to make efficient use of the cache can lead to faster data access and improved overall system performance.
- Cache-friendly algorithms and data structures can help reduce cache misses and increase the hit rate, resulting in better cache performance.
- Additionally, software that minimizes unnecessary data transfers between the cache and main memory can also contribute to improved cache performance.

In conclusion, while hardware factors such as cache size and organization are important, software optimization is key in determining cache performance. By writing efficient code and utilizing cache-friendly algorithms, software can significantly impact the overall performance of the cache system.

 How is expanded memory accessed in 80286?
  • a)
    Paging
  • b)
    Interleaving
  • c)
    RAM
  • d)
    External storage
Correct answer is option 'A'. Can you explain this answer?

Advait Shah answered
Accessing Expanded Memory in 80286
Accessing expanded memory in 80286 is typically done through Paging.

Paging
- Paging here refers to the expanded-memory (EMS) scheme: memory on an expansion board is divided into pages (typically 16 KB) that are bank-switched into a page frame in the upper memory area.
- The 80286 itself has no hardware paging unit (that arrived with the 80386), so expanded memory is accessed by mapping these pages into and out of the processor's address space one at a time.
- This page-frame mechanism lets programs reach memory beyond the 1 MB conventional-memory limit in a controlled and efficient manner.
In conclusion, accessing expanded memory on the 80286 involves paging: a bank-switching scheme that maps pages of expansion-board memory into the processor's address space.

How many bits are used for storing signed integers?
  • a)
    2
  • b)
    4
  • c)
    8
  • d)
    16
Correct answer is option 'D'. Can you explain this answer?

Atharva Das answered
Explanation: Signed integers in a coprocessor are stored as a 16-bit word, a 32-bit doubleword, or a 64-bit quadword.

Which are the two registers available in protected mode of 80286?
  • a)
    General and segmented
  • b)
    General and pointer
  • c)
    Index and base pointer
  • d)
    Index and segmented
Correct answer is option 'C'. Can you explain this answer?

Rashi Singh answered
In protected mode of the 80286 microprocessor, two register groups are used for addressing: the index registers (SI, DI) and the pointer registers (BP, SP), used together with the segment registers. Let's explore these register types in detail:

Index Registers:
- Index registers are used for addressing memory operands in protected mode.
- The 80286 provides SI (source index) and DI (destination index) as index registers, and BP (base pointer) and SP (stack pointer) as pointer registers.
- These registers are 16 bits wide, allowing them to hold values ranging from 0 to 65535.
- The BP register is typically used as a base pointer for accessing parameters and local variables within a function.
- The DI and SI registers are commonly used for string operations, such as copying or comparing strings.
- The SP register is used to keep track of the top of the stack.

Segmented Registers:
- Segmented registers are used to store segment selectors, which are used for memory segmentation in protected mode.
- The 80286 provides four segmented registers: CS (code segment), DS (data segment), SS (stack segment), and ES (extra segment).
- Each segmented register is 16 bits wide and can hold a segment selector value.
- The CS register stores the segment selector for the currently executing code segment.
- The DS register is used for accessing data in memory.
- The SS register points to the stack segment, which contains the runtime stack.
- The ES register is an extra segment register that can be used for additional data storage.

In protected mode, memory addressing is done using a combination of segment selectors and offset values. The segmented registers hold the segment selector values, while the index registers are used to hold the offset values. Together, they allow for efficient memory addressing and manipulation.

Overall, the combination of index registers and segmented registers in protected mode provides greater flexibility and efficiency in memory management and addressing compared to real mode.

Which is the coprocessor of 8086?
  • a)
    8087
  • b)
    8088
  • c)
    8086
  • d)
    8080
Correct answer is option 'A'. Can you explain this answer?

Shalini Rane answered
The coprocessor of the 8086 processor is the 8087 coprocessor. The 8087 is also known as the math coprocessor or the numeric data processor. It is specifically designed to perform mathematical operations and enhance the computational capabilities of the 8086 processor.

Below are the reasons why the 8087 coprocessor is the correct answer:

1. Enhanced mathematical capabilities:
- The 8087 coprocessor is specifically designed for handling mathematical operations efficiently.
- It provides hardware support for floating-point arithmetic, which is essential for complex mathematical calculations.
- With the 8087 coprocessor, the 8086 processor can offload complex mathematical computations and achieve faster and more accurate results.

2. Data processing efficiency:
- The 8087 coprocessor operates in parallel with the 8086 processor, allowing simultaneous execution of instructions.
- It can execute floating-point instructions independently, which reduces the burden on the main processor and improves overall system performance.
- The coprocessor's efficient data processing capabilities make it ideal for applications that involve scientific calculations, engineering simulations, financial analysis, and more.

3. Specific compatibility:
- The 8087 coprocessor is designed to work specifically with the 8086 processor.
- It utilizes a dedicated instruction set and communicates with the 8086 processor through a specialized bus interface.
- The coprocessor is physically connected to the 8086 processor via a socket, allowing for easy integration and compatibility.

4. Upgradability:
- The 8087 coprocessor can be added to a system that already has an 8086 processor, providing an upgrade path for enhanced mathematical capabilities.
- This upgradability allows users to improve the performance of their system without needing to replace the entire processor.

In conclusion, the 8087 coprocessor is the correct answer as it provides enhanced mathematical capabilities, efficient data processing, specific compatibility with the 8086 processor, and the ability to upgrade existing systems for improved performance.

Which of the following processors has an instruction set similar to the 80486?
  • a)
    8086
  • b)
    80286
  • c)
    80386
  • d)
    8080
Correct answer is option 'C'. Can you explain this answer?

Atharva Das answered
Explanation: The 80486 instruction set is the same as that of the 80386, but some additional instructions are available when the processor is in protected mode.

Which of the following is a portable device of Intel?
  • a)
    80386DX
  • b)
    8087
  • c)
    80386SL
  • d)
    80386SX
Correct answer is option 'C'. Can you explain this answer?

Explanation: Intel's 80386SL is aimed at portable PCs; it helps control power and increases the power efficiency of the processor.

Which factor determines the effectiveness of cache?
  • a)
    hit rate
  • b)
    refresh cycle
  • c)
    refresh rate
  • d)
    refresh time
Correct answer is option 'A'. Can you explain this answer?

Sarthak Desai answered
Explanation: The hit rate, the proportion of data accesses satisfied by the cache, measures the effectiveness of the cache memory.

Which of the following processor can handle infinity values?
  • a)
    8080
  • b)
    8086
  • c)
    8087
  • d)
    8088
Correct answer is option 'C'. Can you explain this answer?

Ujwal Roy answered
Explanation: The 8087 is a coprocessor that can handle infinity values with two types of closure, known as affine closure and projective closure.

 Which are the two main types of processor connection to the motherboard?
  • a)
    sockets and slots
  • b)
    sockets and pins
  • c)
    slots and pins
  • d)
    pins and ports
Correct answer is option 'A'. Can you explain this answer?

Kajal Sharma answered
Explanation: A socketed processor connects to the motherboard through contacts on the bottom surface of the chip, typically via a Zero Insertion Force (ZIF) socket; the Intel 486 is an example of this type of connection. A slot processor is soldered onto a card that plugs into a slot on the motherboard; the Pentium III is an example of a slot connection.

How many bits does SPARC have?
  • a)
    8
  • b)
    16
  • c)
    32
  • d)
    64
Correct answer is option 'C'. Can you explain this answer?

Explanation: SPARC is a 32-bit RISC architecture with a 32-bit wide register bank.

Which of the refresh circuits is similar to CBR?
  • a)
    software refresh
  • b)
    hidden refresh
  • c)
    burst refresh
  • d)
    distribute refresh
Correct answer is option 'B'. Can you explain this answer?

Explanation: In hidden refresh, a refresh cycle is appended to the end of a normal read cycle: the RAS signal goes high and is then asserted low while CAS remains asserted from the read. This is similar to the CBR (CAS-before-RAS) mechanism, since toggling RAS at the end of the read cycle starts a CBR refresh cycle.

How many regions are created by the memory range in the ARM architecture?
  • a)
    4
  • b)
    8
  • c)
    16
  • d)
    32
Correct answer is option 'B'. Can you explain this answer?

Shounak Yadav answered
Explanation: The memory protection unit in the ARM architecture divides memory into eight separate regions. Each region can be as small as 4 Kbytes or as large as 4 Gbytes.

What is 80/20 rule?
  • a)
    80% instruction is generated and 20% instruction is executed
  • b)
    80% instruction is executed and 20% instruction is generated
  • c)
    80%instruction is executed and 20% instruction is not executed
  • d)
    80% instruction is generated and 20% instructions are not generated
Correct answer is option 'A'. Can you explain this answer?

Explanation: 80% of the instructions generated come from only about 20% of the instruction set that is actually executed; by simplifying the instruction set around that frequently used subset, processor performance can be increased, which led to the formation of RISC, reduced instruction set computing.

How many stack registers does an 8087 have?
  • a)
    4
  • b)
    8
  • c)
    16
  • d)
    32
Correct answer is option 'B'. Can you explain this answer?

Atharva Das answered
Explanation: The 8087 coprocessor does not have a main register set; instead it has an 8-level deep register stack, st0 to st7.

Which are the 4 segmented registers in Intel 80286?
  • a)
    AX,BX,CX,DX
  • b)
    AS,BS,CS,DS
  • c)
    SP,DI,SI,BP
  • d)
    IP,FL,SI,DI
Correct answer is option 'B'. Can you explain this answer?

The 4 Segmented Registers in Intel 80286

The Intel 80286 is a microprocessor chip that was introduced in 1982. It was the successor to the Intel 8086 and 8088 processors and was used in IBM PC/AT and other compatible computers. The 80286 had several new features and improvements over its predecessors, including the introduction of segmented memory architecture.

One of the key features of the segmented memory architecture is the use of segment registers. These registers hold the selectors (segment addresses) of the various memory segments in the system. The 80286 has four segment registers:

1. CS (Code Segment) - This register stores the segment address of the currently executing code segment.

2. DS (Data Segment) - This register stores the segment address of the current data segment.

3. SS (Stack Segment) - This register stores the segment address of the stack segment, which contains the runtime stack.

4. ES (Extra Segment) - This is an extra segment register that can be used for additional data storage.

(The lettering "AS, BS" in the marked option appears to be a typo for the conventional names; the 80286 segment registers are CS, DS, SS, and ES.)

The use of segmented registers allows the 80286 to address up to 16MB of memory, which was a significant increase over the 1MB addressable by the 8086 and 8088 processors. It also allowed for more efficient memory management and protection.

In conclusion, the four segment registers in the Intel 80286 are CS, DS, SS, and ES. These registers store the segment addresses of the various memory segments in the system and allow for efficient memory management and protection.

Which package has high memory speed and can handle changes in the supply?
  • a)
    DIP
  • b)
    SIMM
  • c)
    DIMM
  • d)
    zig-zag
Correct answer is option 'C'. Can you explain this answer?

Understanding Memory Packages
When it comes to memory packages, different types serve various purposes and have distinct characteristics. Among them, DIMM (Dual Inline Memory Module) stands out for its high-speed performance and ability to adapt to changes in supply voltage.
Key Features of DIMM:
- High Memory Speed:
DIMMs are designed for high-speed data transfer, making them ideal for modern computing environments. They support higher bandwidth due to their 64-bit architecture, allowing simultaneous access to multiple memory chips.
- Voltage Flexibility:
DIMMs can operate at varying supply voltages, which is crucial for optimizing power consumption and enhancing performance. This adaptability helps in achieving better efficiency in systems requiring dynamic power management.
- Form Factor:
DIMMs are typically used in desktops and servers. Their design allows for easy installation and removal, making upgrades straightforward.
Comparison with Other Packages:
- DIP (Dual Inline Package):
Older technology primarily used in low-speed applications. Lacks the high-speed capabilities of DIMMs.
- SIMM (Single Inline Memory Module):
An earlier memory module type that is now largely obsolete. It does not support the same speeds or flexibility as DIMMs.
- Zig-zag Configurations:
This refers to a layout style, not a specific memory type. It does not address memory speed or supply changes effectively.
Conclusion:
DIMMs represent the evolution of memory technology, providing the necessary speed and adaptability for today's computing demands. This makes option 'C' the correct choice when discussing high memory speed and supply changes.

Which type of storage element of SRAM is very fast in accessing data but consumes lots of power?
  • a)
    TTL
  • b)
    CMOS
  • c)
    NAND
  • d)
    NOR
Correct answer is option 'A'. Can you explain this answer?

Sagnik Singh answered
Explanation: TTL, or transistor-transistor logic, which is built from bipolar junction transistors, accesses data very fast but consumes a lot of power, whereas CMOS is used for low power consumption.

Which is an example of superscalar architecture?
  • a)
    Pentium 4
  • b)
    8086
  • c)
    80386
  • d)
    Pentium pro
Correct answer is option 'A'. Can you explain this answer?

Krithika Kaur answered
Explanation: Pentium 4 is a single-core CPU introduced by Intel for desktops and laptops. Its NetBurst architecture is a superscalar design, able to issue multiple instructions per clock cycle.

 Which is the most basic non-volatile memory?
  • a)
    Flash memory
  • b)
    PROM
  • c)
    EPROM
  • d)
    ROM
Correct answer is option 'D'. Can you explain this answer?

Kajal Sharma answered
Explanation: The most basic non-volatile memory is ROM, or mask ROM; its content is fixed in the chip, which is useful for firmware programs that boot up the system.

Chapter doubts & questions for Embedded Processors & Memory - Embedded Systems (Web) 2025 is part of Computer Science Engineering (CSE) exam preparation. The chapters have been prepared according to the Computer Science Engineering (CSE) exam syllabus. The Chapter doubts & questions, notes, tests & MCQs are made for Computer Science Engineering (CSE) 2025 Exam. Find important definitions, questions, notes, meanings, examples, exercises, MCQs and online tests here.

