Lecture 27 - Characteristic Equation, eigenvalues and eigen vectors, Control Systems
1 Characteristic Equation, eigenvalues and eigen vectors
For a discrete state space model, the characteristic equation is defined as
|zI − A| = 0
The roots of the characteristic equation are the eigenvalues of matrix A
1. If det(A) ≠ 0, i.e., A is nonsingular, and λ1, λ2, · · · , λn are the eigenvalues of A, then 1/λ1, 1/λ2, · · · , 1/λn will be the eigenvalues of A−1.
2. The eigenvalues of A and AT are the same when A is a real matrix.
3. If A is a real symmetric matrix then all its eigenvalues are real.
The n × 1 vector vi which satisfies the matrix equation
Avi = λivi (1)
where λi, i = 1, 2, · · · , n denotes the ith eigenvalue, is called the eigen vector of A associated with the eigenvalue λi. If eigenvalues are distinct, they can be solved directly from equation (1).
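The relation Avi = λivi can be checked numerically. The sketch below uses a small hypothetical matrix chosen only for illustration (it is not from the lecture):

```python
import numpy as np

# Hypothetical 2x2 matrix, used only to illustrate A v_i = lambda_i v_i.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# The roots of |lambda*I - A| = 0 are the eigenvalues; np.linalg.eig also
# returns one eigenvector per eigenvalue (the columns of V).
eigvals, V = np.linalg.eig(A)

# Check A v_i = lambda_i v_i for every eigenpair.
for i in range(len(eigvals)):
    assert np.allclose(A @ V[:, i], eigvals[i] * V[:, i])

print(sorted(eigvals.real))  # characteristic roots of A are 2 and 5
```

Note that any scalar multiple of a returned eigenvector also satisfies the same relation, consistent with property 2 below.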
Properties of eigen vectors
1. An eigen vector cannot be a null vector.
2. If vi is an eigen vector of A then mvi is also an eigen vector of A where m is a scalar.
3. If A has n distinct eigenvalues, then the n eigen vectors are linearly independent.
Eigen vectors of multiple order eigenvalues
When the matrix A has an eigenvalue λ of multiplicity m, a full set of m linearly independent eigen vectors may not exist. The number of linearly independent eigen vectors is equal to the degeneracy d of λI − A.
The degeneracy is defined as
d = n − r
where n is the dimension of A and r is the rank of λI − A. Furthermore,
1 ≤ d ≤ m
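The degeneracy formula can be illustrated with a hypothetical defective matrix (an eigenvalue of multiplicity 2 but only one independent eigenvector):

```python
import numpy as np

# Hypothetical defective matrix: eigenvalue 2 has multiplicity m = 2,
# but only one linearly independent eigenvector exists.
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
lam = 2.0
n = A.shape[0]

# Degeneracy d = n - r, where r = rank(lambda*I - A).
r = np.linalg.matrix_rank(lam * np.eye(n) - A)
d = n - r
assert 1 <= d <= 2          # 1 <= d <= m always holds
print(d)                    # prints 1: a full set of eigenvectors does not exist
```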
2 Similarity Transformation and Diagonalization
Square matrices A and Ā are similar if Ā = P −1AP for some non-singular matrix P. The non-singular matrix P is called the similarity transformation matrix. It should be noted that the eigenvalues of a square matrix A are not altered by a similarity transformation.
Diagonalization:
If the system matrix A of a state variable model is diagonal, then the state dynamics are decoupled from each other and solving the state equations becomes much simpler.
In general, if A has distinct eigenvalues, it can be diagonalized using similarity transformation. Consider a square matrix A which has distinct eigenvalues λ1, λ2, . . . λn. It is required to find a transformation matrix P which will convert A into a diagonal form
through the similarity transformation AP = P Λ, where Λ = diag(λ1, λ2, . . . , λn). If v1, v2, . . . , vn are the eigenvectors of matrix A corresponding to eigenvalues λ1, λ2, . . . , λn, then we know Avi = λivi. This gives
A [v1 v2 . . . vn] = [v1 v2 . . . vn] Λ
Thus P = [v1 v2 . . . vn].
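As a quick numerical check of this construction of P, the sketch below diagonalizes a hypothetical matrix with distinct eigenvalues:

```python
import numpy as np

# Hypothetical matrix with distinct eigenvalues -1 and -2 (illustration only).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

# Columns of P are the eigenvectors of A.
eigvals, P = np.linalg.eig(A)

# The similarity transformation with P diagonalizes A.
Lam = np.linalg.inv(P) @ A @ P
assert np.allclose(Lam, np.diag(eigvals))
```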
Consider the following state model.
x(k + 1) = Ax(k) + Bu(k)
If P transforms the state vector x(k) to z(k) through the relation
x(k) = P z(k), or, z(k) = P −1x(k)
then the modified state space model becomes
z(k + 1) = P −1AP z(k) + P −1Bu(k)
where P −1AP = Λ.
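The decoupling effect of the transformation can be sketched as follows. The discrete model below is hypothetical, chosen only to have distinct real eigenvalues; the simulation confirms that the original and transformed models describe the same trajectory via x(k) = P z(k):

```python
import numpy as np

# Hypothetical discrete model x(k+1) = A x(k) + B u(k), with distinct
# real eigenvalues 0.2 and 0.4 (illustration only).
A = np.array([[0.0, 1.0],
              [-0.08, 0.6]])
B = np.array([[0.0], [1.0]])

eigvals, P = np.linalg.eig(A)
Pinv = np.linalg.inv(P)
Lam = Pinv @ A @ P          # diagonal: the transformed states are decoupled
Bz = Pinv @ B               # transformed input matrix P^-1 B

# Simulate both models with u(k) = 1 and confirm x(k) = P z(k) throughout.
x = np.zeros((2, 1))
z = Pinv @ x
for k in range(5):
    x = A @ x + B
    z = Lam @ z + Bz
assert np.allclose(x, P @ z)
```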
3 Computation of Φ(t)
We have seen that to derive the state space model of a sampled data system, we need to know the continuous time state transition matrix Φ(t) = eAt.
3.1 Using Inverse Laplace Transform
For the system ẋ(t) = Ax(t) + B u(t), the state transition matrix eAt can be computed as
eAt = L−1[(sI − A)−1]
3.2 Using Similarity Transformation
If Λ is the diagonal representation of the matrix A, then Λ = P −1AP . When a matrix is in diagonal form, computation of the state transition matrix is straightforward:
eΛt = diag(e^{λ1t}, e^{λ2t}, · · · , e^{λnt})
Given eΛt, we can show that
eAt = P eΛtP −1
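A minimal numerical sketch of eAt = P eΛtP −1, assuming a hypothetical matrix with distinct eigenvalues; the result is cross-checked against a truncated series expansion of eAt:

```python
import numpy as np

# Hypothetical matrix with distinct eigenvalues -1 and -2 (illustration only).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
t = 0.5

eigvals, P = np.linalg.eig(A)
# e^{Lam t} is diagonal with entries e^{lambda_i t}.
eLamt = np.diag(np.exp(eigvals * t))
eAt = P @ eLamt @ np.linalg.inv(P)

# Cross-check against the truncated Taylor series sum_k (A t)^k / k!.
series = np.zeros_like(A)
term = np.eye(2)
for k in range(1, 30):
    series = series + term
    term = term @ (A * t) / k
assert np.allclose(eAt, series)
```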
3.3 Using Cayley-Hamilton Theorem
Every square matrix A satisfies its own characteristic equation. If the characteristic equation is
|λI − A| = λ^n + a1λ^{n−1} + · · · + a_{n−1}λ + an = 0
then
A^n + a1A^{n−1} + · · · + a_{n−1}A + anI = 0
Application: Evaluation of any function f(λ) and f(A)
Choose g(λ) = β0 + β1λ + · · · + β_{n−1}λ^{n−1}. If A has distinct eigenvalues λ1, · · · , λn, then the coefficients are obtained by solving
f(λi) = g(λi), i = 1, · · · , n
The solution gives β0, β1, · · · , β_{n−1}; then
f(A) = β0I + β1A + · · · + β_{n−1}A^{n−1}
If there are multiple roots (say λi with multiplicity 2), then
f(λi) = g(λi) (2)
(d/dλ) f(λ) |λ=λi = (d/dλ) g(λ) |λ=λi (3)
Example 1: Given a system matrix A whose characteristic equation is
|λI − A| = (λ − 1)^2(λ − 2) = 0 ⇒ λ1 = 1 (with multiplicity 2), λ2 = 2
compute the state transition matrix using the Cayley-Hamilton Theorem.
Let f(λ) = eλt and g(λ) = β0 + β1λ + β2λ^2. Then using (2) and (3), we can write
f(1) = g(1), f′(1) = g′(1), f(2) = g(2)
This implies
e^t = β0 + β1 + β2
te^t = β1 + 2β2
e^{2t} = β0 + 2β1 + 4β2
Solving the above equations,
β0 = e^{2t} − 2te^t
β1 = 3te^t + 2e^t − 2e^{2t}
β2 = e^{2t} − e^t − te^t
Then
eAt = β0I + β1A + β2A^2
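The β coefficients of Example 1 can be verified numerically by solving the three conditions f(1) = g(1), f′(1) = g′(1), f(2) = g(2) as a linear system at a fixed time t:

```python
import numpy as np

t = 0.7  # any fixed time instant

# g(lambda) = b0 + b1*lambda + b2*lambda^2 must match f(lambda) = e^{lambda t}
# at lambda = 1 (value and derivative, multiplicity 2) and at lambda = 2.
M = np.array([[1.0, 1.0, 1.0],    # g(1)  = e^t
              [0.0, 1.0, 2.0],    # g'(1) = t e^t
              [1.0, 2.0, 4.0]])   # g(2)  = e^{2t}
rhs = np.array([np.exp(t), t * np.exp(t), np.exp(2 * t)])
b0, b1, b2 = np.linalg.solve(M, rhs)

# Compare with the closed-form expressions for the betas.
assert np.isclose(b0, np.exp(2 * t) - 2 * t * np.exp(t))
assert np.isclose(b1, 3 * t * np.exp(t) + 2 * np.exp(t) - 2 * np.exp(2 * t))
assert np.isclose(b2, np.exp(2 * t) - np.exp(t) - t * np.exp(t))
```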
Example 2: For the system ẋ(t) = Ax(t) + B u(t), where
A = [ 1   1
     −1   1 ]
compute eAt using 3 different techniques.
Solution: Eigenvalues of matrix A are 1 ± j.
Method 1: Using the inverse Laplace transform,
(sI − A)−1 = 1/((s − 1)^2 + 1) [ s − 1     1
                                   −1    s − 1 ]
Taking the inverse Laplace transform of each entry,
eAt = e^t [ cos t   sin t
           −sin t   cos t ]
Method 2: eAt = P eΛtP −1 where eΛt = diag(e^{(1+j)t}, e^{(1−j)t}). The eigenvalues are 1 ± j. The corresponding eigenvectors are found by using equation Avi = λivi as follows: for λ = 1 + j, the components of the eigenvector satisfy v2 = jv1. Taking v1 = 1, we get v2 = j. So, the eigenvector corresponding to 1 + j is [1 j]T and the one corresponding to 1 − j is [1 −j]T.
The transformation matrix is given by
P = [ 1   1
      j  −j ],   P −1 = (1/2) [ 1  −j
                                1   j ]
Now,
eAt = P eΛtP −1 = e^t [ cos t   sin t
                       −sin t   cos t ]
Method 3: Cayley-Hamilton Theorem. The eigenvalues are λ1,2 = 1 ± j. Let f(λ) = eλt and g(λ) = β0 + β1λ. Then f(λi) = g(λi) gives
e^{(1+j)t} = β0 + β1(1 + j)
e^{(1−j)t} = β0 + β1(1 − j)
Solving, β1 = e^t sin t and β0 = e^t (cos t − sin t).
Hence,
eAt = β0I + β1A = e^t [ cos t   sin t
                       −sin t   cos t ]
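Assuming Example 2's system matrix is A = [1 1; −1 1] (a real matrix whose eigenvalues are 1 ± j and whose eigenvector components satisfy v2 = jv1), the common answer of the three methods can be cross-checked numerically:

```python
import numpy as np

# Assumed Example 2 system matrix: eigenvalues 1 +/- j.
A = np.array([[1.0, 1.0],
              [-1.0, 1.0]])
t = 0.8

# Closed form: e^{At} = e^t [[cos t, sin t], [-sin t, cos t]].
closed = np.exp(t) * np.array([[np.cos(t), np.sin(t)],
                               [-np.sin(t), np.cos(t)]])

# Similarity-transformation route: complex arithmetic, real result.
eigvals, P = np.linalg.eig(A)
eAt = (P @ np.diag(np.exp(eigvals * t)) @ np.linalg.inv(P)).real
assert np.allclose(eAt, closed)
```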
We will now show through an example how to derive discrete state equation from a continuous one.
Example: Consider the following state model of a continuous time system.
If the system is under a sampling process with period T , derive the discrete state model of the system.
To derive the discrete state space model, let us first compute the state transition matrix of the continuous time system using the Cayley-Hamilton Theorem.
This implies
Solving the above equations
Then
Thus the discrete state matrix A is given as
The discrete input matrix B can be computed as
The discrete state equation is thus described by
When T = 1, the state equations become
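The discretization procedure can be sketched numerically. The continuous system below is hypothetical (not the lecture's example); it uses Ad = eAT and, since this A is nonsingular, the closed form Bd = A−1(eAT − I)B for the input matrix:

```python
import numpy as np

# Hypothetical continuous model dx/dt = A x + B u (NOT the lecture's example).
A = np.array([[-1.0, 0.0],
              [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
T = 1.0

# Zero-order-hold discretization: Ad = e^{AT}, Bd = (int_0^T e^{A tau} dtau) B.
eigvals, P = np.linalg.eig(A)
Ad = (P @ np.diag(np.exp(eigvals * T)) @ np.linalg.inv(P)).real
# For nonsingular A, the integral has the closed form A^{-1}(e^{AT} - I).
Bd = np.linalg.inv(A) @ (Ad - np.eye(2)) @ B

assert np.allclose(np.diag(Ad), [np.exp(-1.0), np.exp(-2.0)])
assert np.allclose(Bd.ravel(), [1 - np.exp(-1.0), (1 - np.exp(-2.0)) / 2])
```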
Review Questions
1. What is the characteristic equation in linear algebra?
2. How are eigenvalues and eigenvectors related?
3. How can we find eigenvalues and eigenvectors?
4. Why are eigenvalues and eigenvectors important in linear algebra?
5. Can a matrix have complex eigenvalues and eigenvectors?