
All questions of Design of Embedded Processors for Computer Science Engineering (CSE) Exam

Which of the following can reduce the loop overhead and thus increase the speed?
  • a)
    loop unrolling
  • b)
    loop tiling
  • c)
    loop permutation
  • d)
    loop fusion
Correct answer is option 'A'. Can you explain this answer?

Kiran Reddy answered
Explanation: Loop unrolling reduces the loop overhead: fewer branches are executed per execution of the loop body, which in turn increases the speed. It is restricted to loops with a constant number of iterations, and unrolling can increase the code size.

Which of the following can only be approximated during design activities such as hardware/software partitioning and task-level concurrency management?
  • a)
    scheduling
  • b)
    compilation
  • c)
    task-level concurrency management
  • d)
    high-level transformation
Correct answer is option 'A'. Can you explain this answer?

Amrutha Sharma answered
Explanation: Scheduling is performed in several contexts. During design activities such as compilation, hardware/software partitioning, and task-level concurrency management, it can only be approximated; the scheduling must be precise for the final code.

Which activity is concerned with identifying the tasks of the final embedded system?
  • a)
    high-level transformation
  • b)
    compilation
  • c)
    scheduling
  • d)
    task-level concurrency management
Correct answer is option 'D'. Can you explain this answer?

Mohit Unni answered
Task-level concurrency management is the activity concerned with identifying the tasks of the final embedded system. It plays a crucial role in optimizing the performance and efficiency of embedded systems by effectively managing the execution of multiple tasks.

Embedded systems are computer systems designed to perform specific functions within larger systems. They are typically integrated into other devices and operate with limited resources, such as memory, processing power, and energy. As a result, efficient task management is essential to ensure the smooth operation of these systems.

Task-level concurrency management involves the following steps:

1. Task identification: This step involves identifying the individual tasks or processes that need to be executed within the embedded system. Each task represents a specific function or operation that contributes to the overall system functionality.

2. Task prioritization: Once the tasks are identified, they need to be prioritized based on their importance and urgency. Some tasks may have higher priority than others, and their execution should be given precedence to ensure critical operations are performed in a timely manner.

3. Task scheduling: Task scheduling involves determining the order in which tasks will be executed. This step takes into account the task priorities and the availability of system resources. The scheduler allocates system resources, such as CPU time and memory, to each task based on their priority and requirements.

4. Concurrency management: Concurrency management ensures that multiple tasks can be executed simultaneously or in parallel, taking advantage of the system's processing capabilities. This may involve techniques such as multitasking, where multiple tasks are executed concurrently by rapidly switching between them, or multiprocessing, where tasks are executed on separate processing units.

By effectively managing task-level concurrency, the embedded system can optimize resource utilization, minimize response times, and ensure efficient operation. It allows for the execution of multiple tasks in a coordinated manner, enabling the system to handle complex operations and meet real-time requirements.

In conclusion, task-level concurrency management is the activity concerned with identifying the tasks of the final embedded system and managing their execution to ensure efficient operation and resource utilization.

Which of the following is the design in which both the hardware and software are considered during the design?
  • a)
    platform based design
  • b)
    memory based design
  • c)
    software/hardware codesign
  • d)
    peripheral design
Correct answer is option 'C'. Can you explain this answer?

Neha Mishra answered
Explanation: Software/hardware codesign is the design approach in which both hardware and software concerns are considered together. This helps in finding the right combination of hardware and software for an efficient product.

What does SPM stand for?
  • a)
    scratch pad memories
  • b)
    sensor parity machine
  • c)
    scratch pad machine
  • d)
    sensor parity memories
Correct answer is option 'A'. Can you explain this answer?

Gargi Sarkar answered
Explanation: Smaller memories provide faster access and consume less energy per access. An SPM, or scratch pad memory, is a kind of small memory with fast, low-energy access, and it can be exploited by the compiler.

 In which design activity, the loops are interchangeable?
  • a)
    compilation
  • b)
    scheduling
  • c)
    high-level transformation
  • d)
    hardware/software partitioning
Correct answer is option 'C'. Can you explain this answer?

High-level transformation is the design activity in which loops are interchangeable. In this activity, loops are manipulated and transformed to optimize the code or improve its performance. Let's understand this in detail:

High-level transformation:
High-level transformation is a design activity where the algorithmic code is transformed to improve its performance, reduce resource usage, or optimize it for a specific target platform. It involves various transformations like loop interchange, loop unrolling, loop fusion, loop distribution, etc.

Interchangeability of loops:
Loop interchange is a transformation where the order of nested loops is changed. This transformation is possible when the loops are independent of each other and do not have any data dependencies. By interchanging loops, the computation can be distributed differently, which may lead to improved performance or better resource utilization.

Benefits of loop interchange:
Loop interchange can provide several benefits, including:

1. Improved cache utilization: By changing the order of nested loops, the memory access patterns can be optimized, leading to better cache utilization. This can result in reduced memory latency and improved overall performance.

2. Parallelization opportunities: Interchangeable loops can enable parallel execution of the code. By interchanging the loops, the computation can be distributed across multiple processing units, allowing for better utilization of parallel resources.

3. Reduction in loop overhead: Loop interchange can eliminate redundant computations and reduce loop overhead. This can result in faster execution and improved efficiency of the code.

Example:
Consider the following code snippet:

```
for (i = 0; i < n; i++) {
    for (j = 0; j < m; j++) {
        // computation
    }
}
```

In this code, the outer loop iterates over variable `i` and the inner loop iterates over variable `j`. If these loops are interchangeable, we can interchange them as follows:

```
for (j = 0; j < m; j++) {
    for (i = 0; i < n; i++) {
        // computation
    }
}
```

By interchanging the loops, we may achieve better cache utilization or parallelization opportunities, depending on the specific context and requirements of the code.

Conclusion:
In the design activity of high-level transformation, loops are interchangeable. Loop interchange is a powerful transformation that can optimize code performance, improve cache utilization, enable parallelization, and reduce loop overhead.

Which of the following is an important ingredient of all power optimization?
  • a)
    energy model
  • b)
    power model
  • c)
    watt model
  • d)
    power compiler
Correct answer is option 'B'. Can you explain this answer?

Anoushka Dey answered
Explanation: Saving energy can be done at any stage of embedded system development. High-level optimization techniques can reduce the power consumption, and compiler optimizations can as well; the most important ingredient of all power optimizations is the power model.

Which of the following allows the reuse of the software and the hardware components?
  • a)
    platform based design
  • b)
    memory design
  • c)
    peripheral design
  • d)
    input design
Correct answer is option 'A'. Can you explain this answer?

Rounak Chauhan answered
Platform based design allows the reuse of both software and hardware components. It refers to the design approach where a common platform or framework is used to develop various products or applications. This platform provides a set of pre-built components and interfaces that can be easily integrated into different products.

Benefits of platform-based design:
1. Software reuse: With platform-based design, software components can be developed once and reused across multiple products. This reduces development time and effort, as well as improves the quality and reliability of the software.

2. Hardware reuse: Similarly, platform-based design enables the reuse of hardware components. Common hardware platforms can be designed and manufactured, which can be used in different products with minimal modifications. This reduces the cost and time required for the development of new hardware.

3. Standardization: Platform-based design promotes the use of standard interfaces and protocols, ensuring compatibility and interoperability between different components and systems. This simplifies integration and maintenance tasks.

4. Flexibility: By using a common platform, designers have the flexibility to add or remove software and hardware components as per the requirements of different products. This allows for customization and adaptation without significant redesign efforts.

5. Cost and time savings: Reusing software and hardware components leads to significant cost savings as there is no need to develop everything from scratch. It also reduces time to market, as the development effort is focused on integrating and configuring the existing components.

6. Improved quality: Since the platform components are already tested and validated, the overall quality and reliability of the products developed using platform-based design are improved.

7. Easier maintenance: With platform-based design, maintenance becomes easier as updates and fixes can be made at the platform level, which automatically propagate to all the products that are using that platform.

In conclusion, platform-based design allows for the reuse of both software and hardware components, leading to benefits such as cost savings, time savings, improved quality, and easier maintenance. It promotes standardization and flexibility, making it an effective approach in the development of various products and applications.

What does FRIDGE stand for?
  • a)
    fixed-point programming design environment
  • b)
    floating-point programming design environment
  • c)
    fixed-point programming decoding
  • d)
    floating-point programming decoding
Correct answer is option 'A'. Can you explain this answer?

The correct answer is option 'A': fixed-point programming design environment.

Explanation:

FRIDGE stands for Fixed-Point Programming Design Environment.

FRIDGE is a software tool used for designing and implementing fixed-point algorithms. It provides a comprehensive set of features and tools to facilitate the design and development of fixed-point programs.

Fixed-point programming:

Fixed-point programming refers to the implementation of algorithms using fixed-point arithmetic instead of floating-point arithmetic. Fixed-point arithmetic represents numbers with a fixed number of integer and fractional bits, allowing for precise control over the precision and range of the numbers.

Design Environment:

A design environment is a software tool or set of tools that assist in the design and development of software or hardware systems. It provides a user-friendly interface, debugging capabilities, simulation tools, and other features to streamline the design process.

Features of FRIDGE:

1. Fixed-Point Arithmetic: FRIDGE supports fixed-point arithmetic, allowing programmers to define and manipulate fixed-point numbers with a specified number of integer and fractional bits.

2. Algorithm Design: FRIDGE provides tools and libraries for designing and implementing fixed-point algorithms. It offers a wide range of mathematical functions and operators specifically tailored for fixed-point computations.

3. Code Generation: FRIDGE can generate optimized code for various target platforms, including microcontrollers and digital signal processors. The generated code is highly efficient and takes advantage of hardware-specific features to improve performance.

4. Simulation and Debugging: FRIDGE includes simulation and debugging capabilities, allowing programmers to test and debug their fixed-point programs before deployment. It provides a visual interface to monitor variable values, step through code execution, and analyze program behavior.

5. Performance Analysis: FRIDGE offers tools for analyzing the performance of fixed-point programs. It can generate performance reports, identify performance bottlenecks, and suggest optimizations to improve program efficiency.

6. Integration: FRIDGE can be integrated with other development tools and workflows, making it compatible with existing software development processes. It supports various programming languages and interfaces, enabling seamless integration with different software environments.

In conclusion, FRIDGE is a fixed-point programming design environment that provides a comprehensive set of tools and features for designing and implementing fixed-point algorithms. It supports fixed-point arithmetic, offers algorithm design capabilities, code generation, simulation and debugging, performance analysis, and integration with other development tools.

Which of the following helps in reducing the energy consumption of the embedded system?
  • a)
    compilers
  • b)
    simulator
  • c)
    debugger
  • d)
    emulator
Correct answer is option 'A'. Can you explain this answer?

Naina Sharma answered
Explanation: Compilers can reduce the energy consumption of the embedded system; compilers that perform energy optimizations are available.

Which edge provides the timing constraints?
  • a)
    timing edge
  • b)
    communication edge
  • c)
    timing edge and communication edge
  • d)
    special edge
Correct answer is option 'A'. Can you explain this answer?

Vaishnavi Kaur answered
Explanation: The codesign tool COOL has two kinds of edges: timing edges and communication edges. The timing edges provide the timing constraints.

Which memories are faster in nature?
  • a)
    RAM
  • b)
    ROM
  • c)
    Scratch pad memories
  • d)
    EEPROM
Correct answer is option 'C'. Can you explain this answer?

Arpita Gupta answered
Explanation: As the memory size decreases, the memory becomes faster in operation; that is, smaller memories are faster than larger ones. Examples of small, fast memories are caches and scratch pad memories.

 How can one compute the power consumption of the cache?
  • a)
    Lee power model
  • b)
    First power model
  • c)
    Third power model
  • d)
    CACTI
Correct answer is option 'D'. Can you explain this answer?

Navya Iyer answered
Explanation: CACTI, proposed by Wilton and Jouppi in 1996, can compute the power consumption of the cache.

What does the index set KH denote?
  • a)
    processor
  • b)
    hardware components
  • c)
    task graph nodes
  • d)
    task graph node type
Correct answer is option 'B'. Can you explain this answer?

Prisha Sharma answered
Explanation: Certain index sets are used in the IP, or integer programming, model. KH denotes the hardware component types.

Which design activity helps in the transformation of the floating point arithmetic to a fixed point arithmetic?
  • a)
    high-level transformation
  • b)
    scheduling
  • c)
    compilation
  • d)
    task-level concurrency management
Correct answer is option 'A'. Can you explain this answer?

Ankita Bose answered
Explanation: High-level transformations are responsible for high-level optimizing transformations such as loop interchange; the transformation of floating point arithmetic to fixed point arithmetic can also be done by a high-level transformation.

Which programming language is used in the starting process of FRIDGE?
  • a)
    C++
  • b)
    JAVA
  • c)
    C
  • d)
    BASIC
Correct answer is option 'C'. Can you explain this answer?

Explanation: The FRIDGE tool starts from a C program, which is converted to a fixed-C algorithm; fixed-C extends C by two fixed-point data types.

What does the index set L denote?
  • a)
    processor
  • b)
    task graph node
  • c)
    task graph node type
  • d)
    hardware components
Correct answer is option 'C'. Can you explain this answer?

Puja Bajaj answered
Explanation: Index sets are used in the IP, or integer programming, model. The index set KP denotes the processors, I denotes the task graph nodes, and L denotes the task graph node types.

 Which of the following is a meet-in-the-middle approach?
  • a)
    peripheral based design
  • b)
    platform based design
  • c)
    memory based design
  • d)
    processor design
Correct answer is option 'B'. Can you explain this answer?

Nandini Khanna answered
Explanation: A platform is an abstraction layer that covers many possible refinements to a lower level, and platform-based design mainly follows a meet-in-the-middle approach.

 Which model is based on precise measurements using real hardware?
  • a)
    encc energy-aware compiler
  • b)
    first power model
  • c)
    third power model
  • d)
    second power model
Correct answer is option 'A'. Can you explain this answer?

Krithika Gupta answered
Explanation: The encc energy-aware compiler uses the energy model by Steinke et al., which is based on precise measurements of real hardware. The power consumption of the memory as well as the processor is included in this model.

 Which compiler is based on the precise measurements of two fixed configurations?
  • a)
    first power model
  • b)
    second power model
  • c)
    third power model
  • d)
    fourth power model
Correct answer is option 'C'. Can you explain this answer?

Madhurima Iyer answered
Explanation: The third power model was proposed by Russell and Jacome in 1998 and is based on precise measurements of two fixed configurations.

In which loop transformation, a single loop is split into two?
  • a)
    loop tiling
  • b)
    loop fusion
  • c)
    loop permutation
  • d)
    loop unrolling 
Correct answer is option 'B'. Can you explain this answer?

Explanation: Many loop transformations are performed to optimize a program. Loop fission (also called loop distribution) splits a single loop into two, while loop fusion merges two separate loops into one.

 Which of the following tool can replace the floating point arithmetic to fixed point arithmetic?
  • a)
    SDS
  • b)
    FAT
  • c)
    VFAT
  • d)
    FRIDGE
Correct answer is option 'D'. Can you explain this answer?

Kiran Reddy answered
Explanation: Certain tools have been developed for optimizing programs; one such tool is FRIDGE, the fixed-point programming design environment, made commercially available in Synopsys System Studio. It is used to convert floating point arithmetic to fixed point arithmetic and is widely used in signal processing.

What is the solution to the knapsack problem?
  • a)
    many-to-many mapping
  • b)
    one-to-many mapping
  • c)
    many-to-one mapping
  • d)
    one-to-one mapping
Correct answer is option 'D'. Can you explain this answer?

Soumya Pillai answered
Explanation: The knapsack problem is associated with the size constraint, that is, the size of the scratch pad memory. It can be solved by a one-to-one mapping, which was presented in an integer programming model by Steinke et al.

How many edges does the COOL use?
  • a)
    1
  • b)
    2
  • c)
    3
  • d)
    4
Correct answer is option 'B'. Can you explain this answer?

Vaishnavi Dey answered
Explanation: The codesign tool has 2 edges. These are timing edges and the communication edges. The timing edge provides the timing constraints whereas the communication edge contains the information about the amount of information to be exchanged.

Which part of the COOL input comprises information about the available hardware platform components?
  • a)
    target technology
  • b)
    design constraints
  • c)
    both behaviour and design constraints
  • d)
    behaviour
Correct answer is option 'A'. Can you explain this answer?

Navya Menon answered
Explanation: The codesign tool has three input parts: target technology, design constraints, and behaviour. Each part serves a different function; the target technology part comprises information about the different hardware platform components available within the system.

 Which loop transformations have several instances of the loop body?
  • a)
    loop fusion
  • b)
    loop unrolling
  • c)
    loop fission
  • d)
    loop tiling
Correct answer is option 'B'. Can you explain this answer?

Arpita Gupta answered
Explanation: Loop unrolling is a standard transformation that creates several instances of the loop body; the number of copies of the loop body is known as the unrolling factor.

 How many inputs part does COOL have?
  • a)
    2
  • b)
    4
  • c)
    5
  • d)
    3
Correct answer is option 'D'. Can you explain this answer?

Vaishnavi Dey answered
Explanation: The codesign tool consists of three input parts: target technology, design constraints, and behaviour. The target technology part comprises information about the different hardware platform components available within the system; the design constraints part contains the design constraints; and the behaviour part describes the required overall behaviour.

The number of copies of loop is called as
  • a)
    rolling factor
  • b)
    loop factor
  • c)
    unrolling factor
  • d)
    loop size
Correct answer is option 'C'. Can you explain this answer?

Arka Dasgupta answered
Explanation: The number of copies of the loop body is known as the unrolling factor. Loop unrolling is a standard transformation that produces several instances of the loop body.

 What does API stand for?
  • a)
    address programming interfaces
  • b)
    application programming interface
  • c)
    accessing peripheral through interface
  • d)
    address programming interface
Correct answer is option 'B'. Can you explain this answer?

Maitri Yadav answered
Explanation: The platform-based design helps in the reuse of both the hardware and the software components. The application programming interface helps in extending the platform towards the software applications.

What does the second part of the COOL input comprise?
  • a)
    behaviour and target technology
  • b)
    design constraints
  • c)
    behaviour
  • d)
    target technology
Correct answer is option 'B'. Can you explain this answer?

Megha Yadav answered
Explanation: The second part of the COOL input comprises the design constraints, such as latency, maximum memory size, required throughput, or maximum area for application-specific hardware.

 Which of the following help to meet and prove real-time constraints?
  • a)
    simulator
  • b)
    debugger
  • c)
    emulator
  • d)
    compiler
Correct answer is option 'D'. Can you explain this answer?

Madhurima Iyer answered
Explanation: There are several reasons for designing optimizing compilers, and one is that they can help to meet and prove real-time constraints.

 Which loop transformation can increase the code size?
  • a)
    loop permutation
  • b)
    loop fusion
  • c)
    loop fission
  • d)
    loop unrolling
Correct answer is option 'D'. Can you explain this answer?

Explanation: Loop unrolling reduces the loop overhead (fewer branches per execution of the loop body), which can increase the speed, but it is restricted to loops with a constant number of iterations; as a consequence, loop unrolling can increase the code size.

What does COOL stand for?
  • a)
    coprocessor tool
  • b)
    codesign tool
  • c)
    code tool
  • d)
    code control
Correct answer is option 'B'. Can you explain this answer?

Sankar Sarkar answered
Explanation: COOL is the codesign tool, one of the optimization techniques for partitioning the software and the hardware.

 Which edge of the COOL contains information about the amount of information to be exchanged?
  • a)
    regular edge
  • b)
    timing edge
  • c)
    communication edge
  • d)
    special edge
Correct answer is option 'C'. Can you explain this answer?

Sankar Sarkar answered
Explanation: The codesign tool has 2 edges and these are timing edges and the communication edges. The communication edge contains the information about the amount of information to be exchanged.

What does the third part of the COOL input comprise?
  • a)
    design constraints and target technology
  • b)
    design constraints
  • c)
    behaviour
  • d)
    target technology 
Correct answer is option 'C'. Can you explain this answer?

Explanation: The codesign tool consists of three input parts; the third part of the COOL input describes the overall behaviour of the system, using hierarchical task graphs.

Who proposed the first power model?
  • a)
    Jacome
  • b)
    Russell
  • c)
    Tiwari
  • d)
    Russell and Jacome
Correct answer is option 'C'. Can you explain this answer?

Swara Basak answered
Explanation: Tiwari proposed the first power model in 1994. The model includes so-called base costs and inter-instruction costs. Base costs of an instruction correspond to the energy consumed per instruction execution when an infinite sequence of that instruction is executed. Inter-instruction costs model the additional energy consumed by the processor if instructions change.

Chapter doubts & questions for Design of Embedded Processors - Embedded Systems (Web) 2023 is part of Computer Science Engineering (CSE) exam preparation. The chapters have been prepared according to the Computer Science Engineering (CSE) exam syllabus. The Chapter doubts & questions, notes, tests & MCQs are made for Computer Science Engineering (CSE) 2023 Exam. Find important definitions, questions, notes, meanings, examples, exercises, MCQs and online tests here.
