
All questions of Introduction to optimal control for Electrical Engineering (EE) Exam

The main steps for solving the optimal control problem:
  • a)
    Transfer function of system which is optimal with respect to the given performance criterion
  • b)
    Compensators for the system
  • c)
    Minimizing the quadratic function
  • d)
    All of the mentioned
Correct answer is option 'D'. Can you explain this answer?

Sanjana Chopra answered
Understanding Optimal Control Problem Steps
The optimal control problem involves several critical steps that collectively lead to the development of an effective control strategy. Let's break down each component:
Transfer Function of System
- It defines the relationship between the input and output of a system in the frequency domain.
- The transfer function is essential for analyzing system behavior and stability, serving as the foundation for designing optimal controllers.
Compensators for the System
- Compensators are designed to adjust the system dynamics to meet specific performance criteria.
- They play a vital role in enhancing stability, speed of response, and minimizing the impact of disturbances.
Minimizing the Quadratic Function
- In optimal control, performance criteria often involve minimizing a quadratic cost function, which typically measures the trade-off between control effort and system performance.
- Solutions to this function lead to optimal control inputs that drive the system towards desired states while minimizing costs.
Conclusion: All Steps are Interconnected
- Each of the aforementioned steps is crucial and interconnected in solving an optimal control problem.
- The transfer function provides the system framework, compensators modify the system's response, and minimizing the quadratic function yields the optimal control strategy.
- Therefore, the correct answer is option 'D': all of the mentioned steps are essential for addressing the optimal control problem effectively.
By understanding these steps, engineers can design systems that operate efficiently within specified performance criteria while ensuring stability and robustness.
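As an illustration of step (c), minimizing a quadratic criterion for a linear plant reduces to solving an algebraic Riccati equation. The sketch below is a minimal, assumed example (a double-integrator plant with identity weights, not taken from the question) that solves the Riccati equation via the Hamiltonian eigenvector method and recovers the optimal state-feedback gain; production code would normally use a library solver such as scipy.linalg.solve_continuous_are.

```python
import numpy as np

def solve_care(A, B, Q, R):
    # Hamiltonian-matrix method: the Riccati solution P is built from
    # the stable eigen-subspace of the 2n x 2n Hamiltonian matrix.
    n = A.shape[0]
    Rinv = np.linalg.inv(R)
    H = np.block([[A, -B @ Rinv @ B.T],
                  [-Q, -A.T]])
    w, V = np.linalg.eig(H)
    stable = V[:, w.real < 0]          # eigenvectors with Re(lambda) < 0
    X1, X2 = stable[:n, :], stable[n:, :]
    P = (X2 @ np.linalg.inv(X1)).real
    return (P + P.T) / 2               # symmetrize against round-off

# Double integrator: x1' = x2, x2' = u, with Q = I, R = 1 (illustrative numbers)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
P = solve_care(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P        # optimal state feedback u = -Kx
```

For this particular plant the well-known closed-form answer is K = [1, √3], which the sketch reproduces.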

As a result of introduction of negative feedback which of the following will not decrease?
  • a)
    Band width
  • b)
    Overall gain
  • c)
    Distortion
  • d)
    Instability
Correct answer is option 'B'. Can you explain this answer?

Hiral Kulkarni answered
Explanation: Negative feedback reduces the overall gain of the system; an excessive increase in gain, on the other hand, causes oscillations to grow.

Any externally introduced signal affecting the controlled output is called a
  • a)
    Feedback
  • b)
    Stimulus
  • c)
    Signal
  • d)
    Gain control
Correct answer is option 'B'. Can you explain this answer?

Tarun Chawla answered
Explanation: Any externally introduced signal that affects the controlled output is called a stimulus; it is also one of the factors affecting the steady-state error.

Matrix R is :
  • a)
    Positive semi definite symmetric matrix
  • b)
    Positive definite non-symmetric matrix
  • c)
    Negative definite symmetric matrix
  • d)
    Negative definite non-symmetric matrix
Correct answer is option 'A'. Can you explain this answer?

Ishan Saini answered
Explanation: Matrix R is a positive semi-definite symmetric weighting matrix used in the performance index, so that an appropriate weight is given to each control element.

Dynamic programming is based on :
  • a)
    Principle of calculus
  • b)
    Principle of invariant imbedding
  • c)
    Principle of optimality
  • d)
    All of the mentioned
Correct answer is option 'D'. Can you explain this answer?

Alok Khanna answered
Dynamic programming is a powerful algorithmic technique that is used to solve optimization problems by breaking them down into smaller overlapping subproblems. It is based on the principle of optimality, which states that an optimal solution to a problem can be constructed from optimal solutions to its subproblems.

Principle of Optimality:
The principle of optimality is the fundamental concept behind dynamic programming. It states that an optimal solution to a problem contains within it optimal solutions to its subproblems. In other words, if we have a problem that can be divided into smaller subproblems, and we know the optimal solutions to these subproblems, we can use this information to construct the optimal solution to the original problem. This allows us to avoid redundant computations and solve the problem efficiently.

Dynamic programming uses a bottom-up approach to solve problems by iteratively solving smaller subproblems and storing their solutions in a table (often referred to as a memoization table or a dynamic programming table). This table is then used to look up solutions to larger subproblems, until the optimal solution to the original problem is obtained.

Dynamic Programming Steps:
1. Identify the subproblems: Break down the problem into smaller subproblems that can be solved independently.
2. Define the value of the solution: Determine the objective function or the criteria for evaluating the solutions to the subproblems.
3. Formulate the recurrence relation: Express the solution to a subproblem in terms of solutions to smaller subproblems.
4. Use memoization or tabulation: Store the solutions to the subproblems in a table to avoid redundant computations.
5. Solve the original problem: Use the solutions from the subproblems to construct the optimal solution to the original problem.

Conclusion:
Dynamic programming is based on the principle of optimality, which states that an optimal solution to a problem can be constructed from optimal solutions to its subproblems. It provides an efficient way to solve optimization problems by breaking them down into smaller overlapping subproblems and using memoization or tabulation to avoid redundant computations. By following the steps of dynamic programming, we can solve complex problems efficiently and effectively.
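The memoization and tabulation described above can be sketched with the classic Fibonacci recurrence (a generic illustration, not part of the original question):

```python
from functools import lru_cache

# Top-down: memoize overlapping subproblems so each is solved once.
@lru_cache(maxsize=None)
def fib_memo(n: int) -> int:
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-up: tabulate from the smallest subproblems upward.
def fib_table(n: int) -> int:
    table = [0, 1]
    for i in range(2, n + 1):
        table.append(table[i - 1] + table[i - 2])
    return table[n]
```

Both approaches do O(n) work where the naive recursion is exponential, because each subproblem is solved exactly once and reused.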

Consider the following statements:
A proportional plus derivative controller
1. Has high sensitivity
2. Increases the stability of the system
3. Improves the steady state accuracy
Which of these statements are correct?
  • a)
    1,2 and 3
  • b)
    1 and 2
  • c)
    2 and 3
  • d)
    1 and 3
Correct answer is option 'B'. Can you explain this answer?

Nayanika Kaur answered
Answer: b
Explanation: A proportional plus derivative controller adds an open-loop zero on the negative real axis; as a result, peak overshoot decreases, bandwidth increases, and rise time decreases.

The principle of optimality :
  • a)
    The optimal control sequence is function of initial state
  • b)
    The optimal control sequence is function of number of stages N
  • c)
    The principle maintains the N-stage decision process
  • d)
    Find one control value at a time until optimal policy is determined
Correct answer is option 'A'. Can you explain this answer?

Rajat Kumar answered
The Principle of Optimality Explained
The principle of optimality is a fundamental concept in dynamic programming and control theory. It essentially states that an optimal policy has the property that whatever the initial state and decision are, the remaining decisions must constitute an optimal policy for the resulting state. Here’s a deeper dive into why option 'A' is the correct answer.
Optimal Control Sequence and Initial State
- The optimal control sequence is indeed a function of the initial state. This means that the first decision (or control action) taken depends on where the system starts. Each control action has implications for future states, and hence, optimal decisions are contingent upon the initial conditions.
Influence of Stages on Control Sequence
- While the number of stages (option B) does play a role in dynamic programming, the essence of the principle of optimality emphasizes the relationship between the current state and future states rather than the number of stages involved.
Maintaining the N-Stage Decision Process
- Option C suggests that the principle maintains the N-stage decision process, which is true in a broader sense, but it does not capture the essence of how decisions are made based on the initial state.
Sequential Control Value Selection
- Option D implies a method of finding one control value at a time. This might describe a sequential approach, but it overlooks the crucial dependency on the initial state, which is central to the principle of optimality.
Conclusion
In summary, the principle of optimality asserts that the optimal control sequence is fundamentally linked to the initial state, making option 'A' the most accurate representation of this concept. Understanding this relationship is crucial for effective decision-making in control systems and dynamic programming.
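A minimal backward-induction sketch (a two-state, three-stage process with illustrative costs, assumed purely for the example) shows how the optimal first decision depends on the initial state:

```python
# costs[k][i][j] is the cost of moving from state i at stage k
# to state j at stage k + 1 (illustrative numbers only).
costs = [
    [[1, 4], [2, 1]],   # stage 0 -> 1
    [[3, 2], [1, 5]],   # stage 1 -> 2
]
terminal = [0, 2]       # terminal cost of ending in state 0 or 1

def backward_induction(costs, terminal):
    V = list(terminal)              # value function at the final stage
    policy = []
    for stage in reversed(costs):   # Bellman backup, last stage first
        newV, decisions = [], []
        for row in stage:
            q = [c + v for c, v in zip(row, V)]
            best = min(range(len(q)), key=q.__getitem__)
            decisions.append(best)
            newV.append(q[best])
        policy.append(decisions)
        V = newV
    return V, policy[::-1]

V0, policy = backward_induction(costs, terminal)
```

Here the optimal first move from initial state 0 differs from the one from initial state 1, even though both share the same stage-1 policy: exactly the dependence on the initial state that option (a) highlights.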

Hydraulic controller:
  • a)
    Flexible operation
  • b)
    High torque high speed operation
  • c)
    Fire and explosion proof operation
  • d)
    No leakage
Correct answer is option 'B'. Can you explain this answer?

Prateek Mehra answered
Explanation: A hydraulic controller provides high-torque, high-speed operation because of the high density (and hence high power density) of the hydraulic fluid; leakage, by contrast, is one of its drawbacks.

__________ increases the steady state accuracy.
  • a)
    Integrator
  • b)
    Differentiator
  • c)
    Phase lead
  • d)
    Phase lag
Correct answer is option 'A'. Can you explain this answer?

Tarun Chawla answered
Explanation: An integrator, which acts as a low-pass filter, reduces or eliminates the steady-state error; the response becomes slower and more sluggish, but the error is driven toward zero.

By which of the following the control action is determined when a man walks along a path?
  • a)
    Brain
  • b)
    Hands
  • c)
    Legs
  • d)
    Eyes
Correct answer is option 'D'. Can you explain this answer?

Tarun Chawla answered
Explanation: When a man walks along a path, the control action is determined by the eyes, which observe the path and provide the feedback used to correct the motion.

The limitations of the transfer function approach are:
  • a)
    The spectral factorization becomes quite complex
  • b)
It is restricted to the systems with quadratic performance index
  • c)
    Multi input and multi output systems are not obvious
  • d)
    It is useful for time varying and linear systems
Correct answer is option 'A'. Can you explain this answer?

Saumya Basak answered
Explanation: The transfer function approach is useful only for a quadratic performance index, its extension to multi-input multi-output systems is not obvious, and it is ineffective for time-varying and non-linear systems; in addition, the spectral factorization involved becomes quite complex.

A conditionally stable system exhibits poor stability at :
  • a)
    Low frequencies
  • b)
    Reduced values of open loop gain
  • c)
    Increased values of open loop gain
  • d)
    High frequencies
Correct answer is option 'B'. Can you explain this answer?

Dhruv Datta answered
Explanation: A conditionally stable system is stable only for certain ranges of the gain K, and it exhibits poor stability at reduced values of the open-loop gain.

The position and velocity errors of a type-2 system are :
  • a)
    Constant, constant
  • b)
    Constant, infinity
  • c)
    Zero, constant
  • d)
    Zero, zero
Correct answer is option 'C'. Can you explain this answer?

Dhruv Datta answered
Explanation: For a type-2 system the position error is zero and the velocity error is a finite constant, while the acceleration error is infinite.

The type 0 system has ______ at the origin.
  • a)
    No pole
  • b)
    Net pole
  • c)
    Simple pole
  • d)
    Two poles
Correct answer is option 'A'. Can you explain this answer?

Sarthak Yadav answered
Explanation: The type of a system is defined by the number of open-loop poles at the origin; a type-0 system has no pole at the origin.

Matrix Q is :
  • a)
    Positive semi definite symmetric matrix
  • b)
    Positive definite non-symmetric matrix
  • c)
    Negative definite symmetric matrix
  • d)
    Negative definite non-symmetric matrix
Correct answer is option 'A'. Can you explain this answer?

Ameya Nambiar answered
Explanation: Matrix Q is a positive semi-definite symmetric weighting matrix used in the performance index, so that an appropriate weight is given to each state element.

The optimization method based on dynamic programming views :
  • a)
    Control problem as the multistage decision problem
  • b)
    Control input as a time sequence of decisions
  • c)
    A sampled data system gives rise to sequence of transformations of the original state vector
  • d)
    All of the mentioned
Correct answer is option 'D'. Can you explain this answer?

Introduction:
The optimization method based on dynamic programming is a powerful technique used to solve control problems. It involves viewing the control problem as a multistage decision problem, where the control input is considered as a time sequence of decisions. Additionally, a sampled data system gives rise to a sequence of transformations of the original state vector. All of these aspects are taken into account in the optimization method based on dynamic programming.

Control Problem as a Multistage Decision Problem:
In dynamic programming, the control problem is viewed as a multistage decision problem. This means that the control input is not determined in a single step but rather over multiple stages or time periods. Each stage involves making a decision based on the current state and the desired objective. The decisions made at each stage affect the future states and the overall performance. By considering the problem in this way, dynamic programming allows for the optimization of the control input over a sequence of decisions.

Control Input as a Time Sequence of Decisions:
Dynamic programming treats the control input as a time sequence of decisions. The decisions made at each time period are based on the current state and the desired objective. The control input is not determined all at once but rather evolves over time. By considering the control input as a time sequence of decisions, dynamic programming allows for the optimization of the control input over the entire time horizon.

Sampled Data System and Transformations:
A sampled data system refers to a system where the continuous-time signals are converted into discrete-time signals through sampling. In the context of dynamic programming, a sampled data system gives rise to a sequence of transformations of the original state vector. These transformations occur at each time period and are based on the current state and the control input. The objective of dynamic programming is to find the optimal sequence of transformations that maximize or minimize a certain performance criterion.

Conclusion:
The optimization method based on dynamic programming takes into account various aspects of control problems. It views the control problem as a multistage decision problem, considers the control input as a time sequence of decisions, and recognizes that a sampled data system gives rise to a sequence of transformations of the original state vector. By considering these aspects, dynamic programming provides a powerful method for optimizing control systems.
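The "time sequence of decisions" view can be made concrete with the finite-horizon discrete-time LQR problem, which dynamic programming solves by a backward Riccati recursion. The scalar plant and weights below are illustrative assumptions, not from the question:

```python
# Finite-horizon discrete-time LQR by backward dynamic programming.
# Scalar plant x[k+1] = a*x[k] + b*u[k]; illustrative numbers only.
a, b = 1.0, 1.0
q, r = 1.0, 1.0
N = 50                      # number of stages

P = q                       # terminal cost weight P_N = q
gains = []
for _ in range(N):          # Riccati difference equation, run backwards
    K = (b * P * a) / (r + b * P * b)
    P = q + a * P * a - a * P * b * K
    gains.append(K)
gains.reverse()             # gains[k] is the feedback at stage k
```

For these numbers the recursion converges to P = (1 + √5)/2 (the golden ratio), and the early-stage gains settle to K ≈ 0.618, illustrating how each stage's decision rule is computed from the stages that follow it.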

For the stable system in discrete optimal control systems:
  • a)
    Poles must lie outside the unit circle
  • b)
    Poles must lie within the unit circle
  • c)
    Poles must be on the unit circle
  • d)
    Pole must be in infinity
Correct answer is option 'B'. Can you explain this answer?

Alok Verma answered
Stability in Discrete Optimal Control Systems

Introduction
In discrete optimal control systems, stability is a crucial property that ensures the system's response remains bounded and converges to a desired state. The stability of a system is determined by the location of its poles in the z-plane. The poles represent the roots of the system's characteristic equation and have a significant impact on the system's behavior.

Explanation
The correct answer is option 'B', which states that the poles must lie within the unit circle. Let's understand why this is true and why the other options are incorrect.

a) Poles must lie outside the unit circle:
If the poles of a discrete system lie outside the unit circle, the system becomes unstable. The system's response will exhibit exponential growth, leading to instability and unpredictable behavior. Therefore, this option is incorrect.

b) Poles must lie within the unit circle:
For a discrete system to be stable, all poles must lie within the unit circle in the z-plane. This condition ensures that the system's response remains bounded and converges to a steady state: with every pole strictly inside the unit circle, the system is asymptotically stable. Hence, this option is correct.

c) Poles must be on the unit circle:
If the poles of a discrete system lie exactly on the unit circle, the system is only marginally stable: its response oscillates without growing or decaying. Marginal stability is not sufficient for the bounded, convergent behaviour required here, so this option is incorrect.

d) Pole must be in infinity:
A pole at infinity has no meaning for a proper, finite-dimensional discrete-time system, and in any case it does not satisfy the condition of lying inside the unit circle. Therefore, this option is incorrect.

Conclusion
In summary, for a stable system in discrete optimal control systems, the poles must lie within the unit circle in the z-plane (option 'B'). This condition ensures that the system's response remains bounded and converges to a desired state. The other options are incorrect as they either lead to instability or do not guarantee stability.
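Checking the condition numerically is straightforward; the polynomial coefficients below are an assumed example, not taken from the question:

```python
import numpy as np

# Characteristic polynomial of a discrete-time system:
# z^2 - 0.9 z + 0.2  (illustrative coefficients)
poles = np.roots([1.0, -0.9, 0.2])
stable = np.all(np.abs(poles) < 1.0)   # stable iff every pole is inside the unit circle
```

Here the poles come out at z = 0.5 and z = 0.4, both inside the unit circle, so the test reports the system stable.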

The type 1 system has ______ at the origin.
  • a)
    No pole
  • b)
    Net pole
  • c)
    Simple pole
  • d)
    Two poles
Correct answer is option 'C'. Can you explain this answer?

Sarthak Yadav answered
Explanation: The type of a system is defined by the number of open-loop poles at the origin; a type-1 system has one simple pole at the origin.

In case of type-1 system steady state acceleration is :
  • a)
    Unity
  • b)
    Infinity
  • c)
    Zero
  • d)
    10
Correct answer is option 'B'. Can you explain this answer?

Steady state acceleration in a type-1 system

In control systems, type-1 systems are those that have exactly one integrator (one pole at the origin) in their open-loop transfer function. These systems exhibit zero steady-state error for a step input and a finite constant error for a ramp input.

Understanding steady state acceleration

Steady-state acceleration here refers to the system's behaviour when the reference input is an acceleration (parabolic) signal, evaluated after all transient responses have decayed and the system has settled.

Analysis of a type-1 system

Type-1 systems have the transfer function of the form:

G(s) = K / (s * (s + a))

Where K is the gain of the system and 'a' is a constant related to the pole location of the system.

Determining steady state acceleration

The steady-state behaviour for an acceleration (parabolic) input is characterized by the acceleration error constant, Ka = lim s→0 s²G(s). For the type-1 transfer function above, a factor of s survives in the numerator after multiplying by s², so Ka = 0.

In the given options, the correct answer is option 'B' - Infinity: the steady-state error of a type-1 system for an acceleration input is infinite.

Explanation of the answer

For a parabolic input A·t²/2, the steady-state error is e_ss = A/Ka. Since a type-1 system has only one pole at the origin, Ka = 0, and the error grows without bound. One integrator provides enough low-frequency gain to track steps and ramps, but not an accelerating reference.

Conclusion

In conclusion, the steady-state error of a type-1 system for an acceleration input is infinite, because its acceleration error constant Ka = lim s→0 s²G(s) is zero when the open-loop transfer function contains only a single pole at the origin.
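Using the type-1 transfer function given above, the limit can be worked out explicitly (a standard result, with A the magnitude of the parabolic input At²/2):

```latex
K_a = \lim_{s \to 0} s^2 G(s)
    = \lim_{s \to 0} \frac{s^2 K}{s(s+a)}
    = \lim_{s \to 0} \frac{sK}{s+a} = 0,
\qquad
e_{ss} = \frac{A}{K_a} = \infty
```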

Which of the following devices are commonly used as error detectors in instruments?
  • a)
    Vern stats
  • b)
    Microsyn
  • c)
    Resolvers
  • d)
    All of the mentioned
Correct answer is option 'D'. Can you explain this answer?

Error Detectors in Instruments

Introduction
Error detectors are devices or components used in instruments to detect and measure any errors or discrepancies in the measurements taken by the instrument. These errors can occur due to various factors such as inaccuracies in the instrument's calibration, noise interference, environmental conditions, or other external factors. The error detectors help in identifying and quantifying these errors, allowing for the necessary corrections to be made.

Commonly Used Devices as Error Detectors
The following devices are commonly used as error detectors in instruments:

1. Vernistats (listed as "Vern stats")
- A vernistat is essentially a precision potentiometer-type voltage divider.
- It converts a mechanical position into a proportional electrical voltage.
- Two such devices connected in opposition therefore produce a voltage proportional to the difference between the reference position and the actual position.
- In this way a vernistat pair acts as an error detector: any deviation of the controlled position from the desired position appears directly as an error voltage.

2. Microsyns
- A microsyn is a variable-reluctance transducer used for measuring small angular displacements or positions.
- It consists of a rotor (the rotating part) and a stator (the stationary part).
- A microsyn can be used as an error detector by comparing the desired position or displacement with the position or displacement it actually measures; any deviation indicates an error.

3. Resolvers
- Resolvers are electrical devices used to measure angular position or displacement.
- They consist of a rotor and a stator, similar to microsynchros.
- Resolvers can be used as error detectors by comparing the desired angular position or displacement with the actual position or displacement measured by the resolver. Any deviations indicate errors.

Conclusion
Instruments often use error detectors to identify and quantify errors in measurements. Vernistats, microsyns, and resolvers are all commonly used as error detectors. By comparing the desired measurement or position with the actual measurement or position obtained from these devices, errors can be detected and appropriate corrective action taken. Therefore, option 'D' - "All of the mentioned" is the correct answer.

Which of the following should be done to make an unstable system stable?
  • a)
    The gain of the system should be decreased
  • b)
    The gain of the system should be increased
  • c)
    The number of poles to the loop transfer function should be increased
  • d)
    The number of zeros to the loop transfer function should be increased
Correct answer is option 'B'. Can you explain this answer?

Avik Iyer answered
Introduction:
In control systems, stability is a critical characteristic that ensures the system's output remains bounded and does not grow indefinitely over time. An unstable system can lead to unpredictable behavior, oscillations, and even system failure. To stabilize an unstable system, certain measures need to be taken.

Explanation:
There are several methods to stabilize an unstable system, but in this case, the correct answer is option 'B' - increasing the gain of the system. Let's understand why this is the correct approach:

1. Understanding System Stability:
To understand why increasing the gain can stabilize an unstable system, we need to understand the concept of system stability. The stability of a control system is determined by the locations of poles in the system's transfer function.

2. Poles and Stability:
Poles are the values of s at which the denominator of the transfer function becomes zero. In a stable continuous-time system, all the poles have negative real parts, i.e., they lie in the left half of the complex plane.

3. Unstable System:
An unstable system has at least one pole with a positive real part, i.e., in the right half of the complex plane. Such a pole causes the system's output to grow without bound over time.

4. Increasing Gain:
For systems of this kind, increasing the gain shifts the closed-loop poles toward the left half of the complex plane, giving them negative real parts. This shift in pole locations transforms an unstable system into a stable one. (For many other systems the opposite happens, which is why the root locus must be examined case by case.)

5. Root Locus:
Increasing the gain can be achieved by adjusting the proportional gain in a control system. This adjustment can be done using a root locus plot, which shows the movement of poles in the complex plane as the gain varies.

6. Effect on Stability:
As the gain is increased, the poles move towards the left half of the complex plane. If the gain is increased sufficiently, the poles can be shifted completely to the left half, resulting in a stable system.
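A toy sketch of this effect: the closed-loop characteristic polynomial s² + (K−1)s + K below is an assumed example, chosen so that raising the gain K moves the poles into the left half-plane.

```python
import numpy as np

# Assumed closed-loop characteristic polynomial: s^2 + (K-1)s + K.
def poles(K):
    return np.roots([1.0, K - 1.0, K])

low  = poles(0.5)   # K too small: a pole in the right half-plane
high = poles(2.0)   # larger K: both poles in the left half-plane
unstable_at_low = np.any(low.real > 0)
stable_at_high  = np.all(high.real < 0)
```

At K = 0.5 the poles have positive real parts (unstable); at K = 2 both real parts are negative (stable), matching the behaviour described above for this class of system.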

Conclusion:
In summary, to stabilize an unstable system, the gain of the system should be increased. This adjustment shifts the poles from the unstable region to the stable region of the complex plane. It is important to note that increasing the gain should be done carefully to avoid overshoot, oscillations, or other undesirable effects. System stability is a fundamental concept in control systems design and plays a crucial role in ensuring the desired performance and behavior of the system.

A performance index written in terms of :
  • a)
    1 variable
  • b)
    2 variable
  • c)
    3 variable
  • d)
    5 variable
Correct answer is option 'B'. Can you explain this answer?

Bijoy Mehta answered
Performance Index with Two Variables
Performance index in the context of engineering often involves multiple variables that need to be optimized simultaneously. A performance index written in terms of two variables allows for a more comprehensive evaluation of system performance.

Benefits of Two Variable Performance Index
- **Comprehensive Analysis**: By considering two variables in the performance index, a more holistic evaluation of system performance can be achieved.
- **Trade-off Analysis**: Two variables allow for the examination of trade-offs between conflicting objectives, helping in decision-making processes.
- **Enhanced Optimization**: Optimization algorithms can be more effective when considering multiple variables in the performance index.

Example of Two Variable Performance Index
An example of a performance index written in terms of two variables could be the efficiency and cost of a system. The performance index could be defined as a function of both efficiency and cost, with the goal of maximizing efficiency while minimizing cost.
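In optimal control specifically, the two variables are usually the state vector x and the control vector u, weighted by Q and R in the standard quadratic performance index (a standard form, not taken from the question):

```latex
J = \frac{1}{2} \int_{0}^{\infty}
    \left( \mathbf{x}^{T} Q \,\mathbf{x} + \mathbf{u}^{T} R \,\mathbf{u} \right) dt
```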

Challenges in Two Variable Performance Index
- **Complexity**: Managing multiple variables can increase the complexity of the performance evaluation process.
- **Interactions**: Interactions between the two variables may need to be carefully analyzed to avoid unintended consequences.
In conclusion, a performance index written in terms of two variables provides a more nuanced and balanced assessment of system performance, allowing for better decision-making and optimization strategies in engineering applications.

The industrial controller having the best steady-state accuracy is:
  • a)
    A derivative controller
  • b)
    An integral controller
  • c)
    A rate feedback controller
  • d)
    A proportional controller
Correct answer is option 'A'. Can you explain this answer?

Shivani Saha answered
Answer: a
Explanation: Of the listed controllers, the derivative controller gives the best steady-state accuracy, since it is affected only by the steady-state response and not by the transient response.

Which of the following is the best method for determining the stability and transient response?
  • a)
    Root locus
  • b)
    Bode plot
  • c)
    Nyquist plot
  • d)
    None of the mentioned
Correct answer is option 'A'. Can you explain this answer?

Lakshmi Desai answered
Explanation: The root locus is the best method for determining both stability and the transient response, since it gives the exact pole-zero locations and shows their effect on the response.

If a step function is applied to the input of a system and the output remains below a certain level for all the time, the system is :
  • a)
    Not necessarily stable
  • b)
    Stable
  • c)
    Unstable
  • d)
    Always unstable
Correct answer is option 'A'. Can you explain this answer?

Pankaj Mehta answered
Explanation: A bounded output for one particular bounded input (here, a step) does not prove that the output is bounded for every bounded input, so the system is not necessarily stable.

_______________ is a closed loop system.
  • a)
    Auto-pilot for an aircraft
  • b)
    Direct current generator
  • c)
    Car starter
  • d)
    Electric switch
Correct answer is option 'A'. Can you explain this answer?

Tarun Chawla answered
Explanation: An auto-pilot for an aircraft is an example of a closed-loop system: the aircraft's actual course is measured and fed back so that it can be compared with, and corrected toward, the desired course.

In the infinite-time regulator, the final time is:
  • a)
    t1
  • b)
    t0
  • c)
    Infinite
  • d)
    Zero
Correct answer is option 'C'. Can you explain this answer?

Madhurima Das answered
Explanation: The infinite-time regulator is an extension of the state regulator in which the final time is taken to be infinite.

The differential term in the Riccati equation for the infinite-time regulator is:
  • a)
    Finite
  • b)
    Positive
  • c)
    Infinite
  • d)
    Zero
Correct answer is option 'D'. Can you explain this answer?

Yashvi Shah answered
Explanation: As the final time tends to infinity, the solution of the Riccati equation approaches a constant matrix, so its differential term becomes zero and the differential Riccati equation reduces to the modified (algebraic) Riccati equation.
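Written out explicitly, with P the Riccati solution and A, B, Q, R the usual plant and weighting matrices (standard notation, not from the question):

```latex
% Differential (matrix) Riccati equation:
-\dot{P} = A^{T}P + PA - PBR^{-1}B^{T}P + Q
% Setting \dot{P} = 0 for the infinite-time case gives the algebraic Riccati equation:
A^{T}P + PA - PBR^{-1}B^{T}P + Q = 0
```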

The regulators that convert the non-linear stability into linear stability:
  • a)
    Linear
  • b)
    Sub-optimal
  • c)
    Finite
  • d)
    Infinite
Correct answer is option 'D'. Can you explain this answer?

Harsh Kulkarni answered
Explanation: With infinite-time state regulators, stability is not guaranteed in general, and hence the non-linear stability problem is converted into a linear one.

If Q is positive semi- definite:
  • a)
    The optimal closed loop system is asymptotically stable
  • b)
    The asymptotic stability is not guaranteed
  • c)
    The system is stable
  • d)
    The system is unstable
Correct answer is option 'B'. Can you explain this answer?

Madhurima Das answered
Explanation: Even if the plant is controllable and bounded and Q is positive semi-definite, asymptotic stability of the optimal closed loop is not guaranteed in infinite-time state regulators.

The performance index is reduced by:
  • a)
    State variable constraint
  • b)
    Input constraint
  • c)
    Control function minimization
  • d)
    Control function constraint
Correct answer is option 'C'. Can you explain this answer?

Gaurav Chauhan answered
Explanation: Once the performance index is formulated, the next task is to find the control function that minimizes it.

Pneumatic controller are :
  • a)
    Flexible operation
  • b)
    High torque high speed operation
  • c)
    Fire and explosion proof operation
  • d)
    No leakage
Correct answer is option 'C'. Can you explain this answer?

Samridhi Bose answered
Answer: c
Explanation: Pneumatic controllers give fire- and explosion-proof operation because they work on compressed air or gas rather than electricity, so there is no risk of sparking.

Chapter doubts & questions for Introduction to optimal control - Control Systems 2025 is part of Electrical Engineering (EE) exam preparation. The chapters have been prepared according to the Electrical Engineering (EE) exam syllabus. The Chapter doubts & questions, notes, tests & MCQs are made for Electrical Engineering (EE) 2025 Exam. Find important definitions, questions, notes, meanings, examples, exercises, MCQs and online tests here.
