Lecture 31 - Control Systems
In this lecture we will discuss the Lyapunov stability theorem and derive the Lyapunov matrix equation for discrete-time systems.
1 Revisiting the basics
Linearization of a Nonlinear System: Consider a system
x(k + 1) = f (x(k), u(k))
where the functions fi(·) are continuously differentiable. An equilibrium point (xe, ue) for this discrete-time system is defined by
f (xe, ue) = xe
that is, the state remains at xe when the input is held constant at ue.
Linearization is the process of replacing the nonlinear system model by its linear counterpart in a small region about its equilibrium point.
We have well-established tools to analyze and stabilize linear systems.
The method: Let us write the general form of the nonlinear system as x(k + 1) = f (x(k), u(k)). Let ue = [u1e u2e . . . ume]T be a constant input that forces the system to settle into a constant equilibrium state xe = [x1e x2e . . . xne]T such that f (xe, ue) = xe holds true.
We now perturb the equilibrium state by letting x(k) = xe + ∆x(k) and u(k) = ue + ∆u(k). Taylor’s expansion yields
xe + ∆x(k + 1) = f (xe + ∆x, ue + ∆u) ≈ f (xe, ue) + A∆x(k) + B∆u(k)
where
A = [∂f/∂x](xe, ue), B = [∂f/∂u](xe, ue) (3)
are the Jacobian matrices of f with respect to x and u, evaluated at the equilibrium point. Note that f (xe, ue) = xe. Neglecting the higher order terms, we arrive at the linear approximation
∆x(k + 1) = A∆x(k) + B∆u(k) (4)
Similarly, if the outputs of the nonlinear system model are of the form yi(k) = hi(x(k), u(k)), or in vector notation
y(k) = h(x(k), u(k)) (5)
then Taylor’s series expansion can again be used to obtain a linear approximation of the output equations. Indeed, if we let
y = ye + ∆y, where ye = h(xe, ue), (6)
then we obtain
∆y(k) = C ∆x(k) + D∆u(k) (7)
where C = [∂h/∂x](xe, ue) and D = [∂h/∂u](xe, ue).
Example: Consider a nonlinear system
(8a)
(8b)
Linearize the system about the origin, which is an equilibrium point.
Evaluating the coefficients of Eqn. (3), we get
Hence, the linearized system around the origin is given by
(9)
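Since the matrices of this particular example are not reproduced above, the procedure itself can be sketched numerically. The following is a minimal sketch that approximates the Jacobians A and B of Eqn. (3) by central finite differences; the map f used here is a hypothetical example, not the system from the lecture.

```python
import numpy as np

def jacobians(f, xe, ue, eps=1e-6):
    """Finite-difference Jacobians A = df/dx and B = df/du at (xe, ue)."""
    n, m = len(xe), len(ue)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(xe + dx, ue) - f(xe - dx, ue)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(xe, ue + du) - f(xe, ue - du)) / (2 * eps)
    return A, B

# Hypothetical nonlinear map (for illustration only):
# x1(k+1) = x2(k), x2(k+1) = -0.5*sin(x1(k)) + u(k)
f = lambda x, u: np.array([x[1], -0.5 * np.sin(x[0]) + u[0]])
A, B = jacobians(f, np.zeros(2), np.zeros(1))
# Near the origin sin(x1) ≈ x1, so A ≈ [[0, 1], [-0.5, 0]] and B ≈ [[0], [1]]
```

The same numerical check can be used to verify a Jacobian computed by hand, as in the example above.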
Sign definiteness of functions and matrices
Positive Definite Function: A continuously differentiable function f : Rn → R is said to be positive definite in a region S ⊂ Rn that contains the origin if
1. f (0) = 0
2. f (x) > 0 for x ∈ S and x ≠ 0
The function f (x) is said to be positive semi-definite if
1. f (0) = 0
2. f (x) ≥ 0 for all x ∈ S
If condition (2) becomes f (x) < 0 for x ≠ 0, the function is negative definite, and if it becomes f (x) ≤ 0 it is negative semi-definite.
Example: Is the function f (x1, x2) = positive definite?
Answer: f (0, 0) = 0 shows that the first condition is satisfied, and f (x1, x2) > 0 for (x1, x2) ≠ (0, 0), so the second condition is also satisfied. Hence the function is positive definite.
A square matrix P is symmetric if P = PT . A scalar function has a quadratic form if it can be written as xT P x where P = PT and x is any real vector of dimension n × 1.
Positive Definite Matrix: A real symmetric matrix P is positive definite, i.e. P > 0 if
1. xT P x > 0 for every non-zero x.
2. xT P x = 0 only if x = 0.
A real symmetric matrix P is positive semi-definite, i.e. P ≥ 0, if xT P x ≥ 0 for every x. This allows xT P x = 0 for some x ≠ 0.
Theorem: A symmetric square matrix P is positive definite if and only if any one of the following conditions holds.
1. Every eigenvalue of P is positive.
2. All the leading principal minors of P are positive.
3. There exists an n × n non-singular matrix Q such that P = QT Q.
Similarly, a matrix P is said to be negative definite if −P is positive definite. If P is neither positive semi-definite nor negative semi-definite, it is said to be sign indefinite.
Example: Consider the following third-order matrices and determine their sign definiteness.
The leading principal minors of the matrix A1 are 2, 1 and 2; hence the matrix is positive definite.
The eigenvalues of the matrix A2 can be straightaway computed as 2, 5 and −3, so not all of them are positive. Again, the eigenvalues of −A2 are −2, −5 and 3, so −A2 is not positive definite either. Hence the matrix A2 is sign indefinite.
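Both tests from the theorem above are easy to mechanize. The sketch below classifies a symmetric matrix by its eigenvalues and also computes the leading principal minors for Sylvester's criterion; the matrix P used here is a hypothetical example, not A1 or A2 from the lecture.

```python
import numpy as np

def definiteness(P, tol=1e-9):
    """Classify a real symmetric matrix by the signs of its eigenvalues."""
    w = np.linalg.eigvalsh(P)          # eigenvalues of a symmetric matrix
    if np.all(w > tol):
        return "positive definite"
    if np.all(w < -tol):
        return "negative definite"
    if np.all(w >= -tol):
        return "positive semi-definite"
    if np.all(w <= tol):
        return "negative semi-definite"
    return "indefinite"

def leading_minors(P):
    """Leading principal minors, for Sylvester's criterion."""
    return [np.linalg.det(P[:k, :k]) for k in range(1, P.shape[0] + 1)]

P = np.array([[2.0, -1.0], [-1.0, 2.0]])   # hypothetical example matrix
print(definiteness(P))     # positive definite (eigenvalues 1 and 3)
print(leading_minors(P))   # both minors positive: 2 and 3
```

Note that the positive-definite branch is checked first, since a positive definite matrix also satisfies the semi-definite test.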
2 Lyapunov Stability Theorems
In the last section we discussed various stability definitions. But the big question is: how do we determine or check the stability or instability of an equilibrium point?
Lyapunov introduced two methods.
The first is called Lyapunov’s first (or indirect) method; we have already seen it as the linearization technique. Start with a nonlinear system
x(k + 1) = f (x(k)) (10)
Expanding f in a Taylor series around xe and neglecting higher order terms gives
∆x(k + 1) = A∆x(k) (11)
where
A = [∂f/∂x](x=xe) (12)
Then the nonlinear system (10) is asymptotically stable around xe if the linear system (11) is, i.e., if all eigenvalues of A are strictly inside the unit circle; it is unstable if some eigenvalue of A lies outside the unit circle.
The above method is very popular because it is easy to apply and works well for most systems; all we need to do is evaluate partial derivatives.
One disadvantage of the method is that if some eigenvalues of A are on the unit circle and the rest are inside the unit circle, then we cannot draw any conclusions; the equilibrium can be either stable or unstable.
The major drawback, however, is that since it involves linearization, it applies only when the initial conditions are “close” to the equilibrium. The method provides no indication of how close is “close”, which may be extremely important in practical applications.
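The indirect method therefore reduces to one numerical check: is the spectral radius of A less than one? A minimal sketch, using a hypothetical Jacobian A:

```python
import numpy as np

def asymptotically_stable(A, tol=1e-9):
    """Discrete-time indirect method: all eigenvalues strictly inside the unit circle."""
    return bool(np.max(np.abs(np.linalg.eigvals(A))) < 1 - tol)

# Hypothetical Jacobian of a nonlinear system at its equilibrium:
A = np.array([[0.0, 1.0], [-0.5, 0.0]])
print(asymptotically_stable(A))   # True: eigenvalues are ±j·sqrt(0.5), |λ| ≈ 0.707
```

The tolerance guards against declaring stability when an eigenvalue sits numerically on the unit circle, which is exactly the inconclusive case noted above.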
The second method is Lyapunov’s second or direct method: this is a generalization of Lagrange’s concept of stability of minimum potential energy.
Consider the nonlinear system (10). Without loss of generality, we assume the origin is the equilibrium point of the system. Suppose that there exists a function, called a ‘Lyapunov function’, V (x) with the following properties:
1. V (0) = 0
2. V (x) > 0 for x ≠ 0
3. ∆V (x) = V (x(k + 1)) − V (x(k)) < 0 along trajectories of (10)
Then, the origin is asymptotically stable.
We can see that the method hinges on the existence of a Lyapunov function: an energy-like function that is zero at the equilibrium, positive definite everywhere else, and strictly decreasing along system trajectories as they approach the equilibrium. The method is very powerful and has several advantages.
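The three conditions can be checked numerically along simulated trajectories. The sketch below uses a hypothetical contraction map (not a system from the lecture) with the candidate V (x) = x1² + x2², and verifies that ∆V < 0 at every step:

```python
import numpy as np

# Hypothetical stable nonlinear map, chosen so that each step shrinks the state:
f = lambda x: np.array([0.5 * x[1], -0.5 * np.sin(x[0])])
V = lambda x: float(x @ x)          # candidate Lyapunov function V(x) = x1^2 + x2^2

x = np.array([0.8, -0.6])
decreasing = True
for _ in range(50):
    x_next = f(x)
    if V(x) > 0 and V(x_next) - V(x) >= 0:   # ∆V must be negative away from the origin
        decreasing = False
    x = x_next
print(decreasing)   # True: V strictly decreases along this trajectory
```

A single trajectory of course only supports, and cannot prove, that V is a Lyapunov function; the proof requires the conditions to hold for all x in a neighborhood of the origin.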
Lyapunov Matrix Equation It is also possible to find a Lyapunov function for a linear system. For a linear system of the form x(k + 1) = Ax(k) we choose as Lyapunov function the quadratic form
V (x(k)) = xT (k)P x(k) (13)
where P is a symmetric positive definite matrix. Thus
∆V (x(k)) = V (x(k + 1)) − V (x(k)) = xT (k + 1)P x(k + 1) − xT (k)P x(k) (14)
Simplifying the above equation and omitting k
∆V (x) = (Ax)T P Ax − xT P x
= xT AT P Ax − xT P x (15)
= xT (AT P A − P )x
= −xT Qx
where
AT P A − P = −Q (16)
If, for a positive definite Q, the solution P of (16) is positive definite, then the system is asymptotically stable. Therefore, we can pick Q = I, the identity matrix, and solve
AT P A − P = −I
for P and check whether P is positive definite.
The equation (16) is called Lyapunov’s matrix equation for discrete time systems and can be solved through MATLAB by using the command dlyap.
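The same computation is available in Python through `scipy.linalg.solve_discrete_lyapunov`, which solves X − aXaᴴ = q; passing a = Aᵀ turns this into P − AᵀPA = Q, i.e. Eqn. (16). A minimal sketch with a hypothetical stable system matrix:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[0.5, 0.2], [0.0, 0.4]])   # hypothetical stable system matrix
Q = np.eye(2)

# solve_discrete_lyapunov(a, q) solves  X - a X a^T = q ; with a = A.T this is
# P - A^T P A = Q, which is Eqn. (16) with Q = I.
P = solve_discrete_lyapunov(A.T, Q)

# Verify the equation and test positive definiteness of P
assert np.allclose(A.T @ P @ A - P, -Q)
print(np.all(np.linalg.eigvalsh(P) > 0))   # True -> asymptotically stable
```

Since the eigenvalues of this A are 0.5 and 0.4, both inside the unit circle, the resulting P is positive definite, in agreement with the theorem.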
Example: Determine the stability of the following system by solving the Lyapunov matrix equation.
Let us take the symmetric matrix P = [p1 p2; p2 p4]. Substituting into the Lyapunov matrix equation with Q = I,
Thus
2p2 + p4 = −1
−p1 + p4 − p2 = 0
p1 − 2p2 = −1
Solving these equations gives p1 = −1, p2 = 0, p4 = −1, which shows that P is a negative definite matrix. Hence the system is unstable. To verify the result, compute the eigenvalues of A; you will find that they lie outside the unit circle.
Review Questions:
1. What is linearization of a nonlinear system?
2. Why is linearization important in control systems?
3. How is linearization performed for a nonlinear system?
4. What are the limitations of linearization?
5. How can linearization be used in practical applications?