A matrix (plural: matrices) is a rectangular array of numbers, symbols, or expressions, arranged in rows and columns. The individual items in a matrix are called its elements or entries. Thus an m x n matrix is of the form
A matrix A has m rows and n columns; a_{ij} is the element in the i^{th} row and j^{th} column.
If the number of rows equals the number of columns (m = n), the matrix is said to be a square matrix. The diagonal containing the entries a_{11}, a_{22}, ...., a_{nn} is called the main diagonal or principal diagonal of A.
A matrix that is not square is called a rectangular matrix.
Transposition
The transpose A^{T} of an m x n matrix A = [a_{ij}] is the n x m matrix that has the first row of A as its first column, the second row of A as its second column, and so on. Thus the transpose of A is
Symmetric matrices and skew-symmetric matrices
A matrix whose transpose equals the matrix itself is called a symmetric matrix. A matrix whose transpose equals the negative of the matrix is called a skew-symmetric (antisymmetric) matrix.
A^{T} = A (symmetric matrix) and A^{T} = -A (antisymmetric matrix)
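These two definitions are easy to check numerically. A minimal sketch in Python with NumPy, using made-up example matrices:

```python
import numpy as np

S = np.array([[1, 2], [2, 3]])    # equals its own transpose -> symmetric
K = np.array([[0, 5], [-5, 0]])   # equals minus its transpose -> skew-symmetric

is_symmetric = np.array_equal(S.T, S)
is_skew_symmetric = np.array_equal(K.T, -K)
print(is_symmetric, is_skew_symmetric)  # True True
```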
Equality of matrices
Two matrices A = [a_{ij}] and B = [b_{ij}] are equal, written A = B, if and only if they have the same size and the corresponding entries are equal, that is, a_{11} = b_{11}, a_{12} = b_{12}, and so on.
Addition is defined only for matrices A = [a_{ij}] and B = [b_{ij}] of the same size; their sum, written A + B, is then obtained by adding the corresponding entries. Matrices of different sizes cannot be added.
The product of any m x n matrix A = [a_{ij}] and any scalar λ, written λA, is the m x n matrix λA = [λa_{ij}] obtained by multiplying each entry in A by λ.
Here (-1)A is simply written -A and is called the negative of A. Similarly, (-k)A is simply written -kA. Also, A + (-B) is written A - B and is called the difference of A and B (which must have the same size).
For matrices of the same size m x n we obtain for addition
(i) A + B = B + A
(ii) (A + B) + C = A + (B + C)
(iii) A + 0 = A
(iv) A + (-A) = 0, where 0 denotes the zero matrix of size m x n.
and for scalar multiplication
(i) λ(A + B) = λA + λB
(ii) (λ + k )A = λA + kA
(iii) λ(kA) = (λk )A
(iv) 1A = A
Transposition of a sum can be done term by term: (A + B)^{T} = A^{T} + B^{T}, and for scalar multiplication we have (λA)^{T} = λA^{T}.
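The addition, scalar-multiplication, and transposition laws above can be verified numerically; a short NumPy sketch with arbitrarily chosen matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(2, 3))
B = rng.integers(-5, 5, size=(2, 3))
lam, k = 3, -2

assert np.array_equal(A + B, B + A)                      # A + B = B + A
assert np.array_equal(lam * (A + B), lam * A + lam * B)  # λ(A + B) = λA + λB
assert np.array_equal((lam + k) * A, lam * A + k * A)    # (λ + k)A = λA + kA
assert np.array_equal((A + B).T, A.T + B.T)              # (A + B)^T = A^T + B^T
assert np.array_equal((lam * A).T, lam * A.T)            # (λA)^T = λA^T
print("all identities hold")
```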
The product C = AB (in this order) of an m x n matrix A = [a_{ij}] and an r x p matrix B = [b_{ij}] is defined if and only if r = n, that is,
Number of rows of the 2^{nd} factor B = Number of columns of the 1^{st} factor A,
and is then defined as the m x p matrix C = [c_{ij}] with entries c_{ij} = a_{i1}b_{1j} + a_{i2}b_{2j} + .... + a_{in}b_{nj}.
(i) AB ≠ BA in general
(ii) AB = 0 does not necessarily imply A = 0 or B = 0 or BA = 0
(iii) AC = AD does not necessarily imply C = D .
(iv) (kA)B = k(AB) = A(kB)
(v) A(BC) = (AB)C
(vi) (A + B )C = AC + BC
(vii) (AB)^{T} = B^{T}A^{T}
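A short NumPy sketch with made-up 2 x 2 matrices, illustrating the entry formula and properties (i) and (vii):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])

C = A @ B   # c_ij = sum over k of a_ik * b_kj
print(C)                                      # [[2 1] [4 3]]
print(np.array_equal(A @ B, B @ A))           # False: AB != BA in general
print(np.array_equal((A @ B).T, B.T @ A.T))   # True: (AB)^T = B^T A^T
```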
Upper triangular matrices are square matrices that can have nonzero entries only on and above the main diagonal, whereas any entry below the diagonal must be zero.
Similarly, lower triangular matrices can have nonzero entries only on and below the main diagonal, whereas any entry above the diagonal must be zero. Any entry on the main diagonal of a triangular matrix may or may not be zero.
Diagonal matrices are square matrices that can have nonzero entries only on the main diagonal. Any entry above or below the main diagonal must be zero.
If all the diagonal entries of a diagonal matrix S are equal, say, c, we call S a scalar matrix, because multiplication of any square matrix A of the same size by S has the same effect as multiplication by the scalar c, that is,
AS = SA = cA
A scalar matrix whose entries on the main diagonal are all 1 is called a unit matrix (identity matrix) and is denoted by I . Thus
AI = IA = A
The inverse of an n x n matrix A = [a_{ij}] is denoted by A^{-1} and is an n x n matrix such that AA^{-1} = A^{-1}A = I,
where I is the n x n unit matrix.
Note:
(i) If A has an inverse, then A is called a nonsingular matrix.
(ii) If A has no inverse, then A is called a singular matrix.
(iii) If A has an inverse, the inverse is unique.
The inverse of a nonsingular n x n matrix A = [a_{ij}] is given by A^{-1} = (1/det A)[A_{ij}]^{T},
where A_{ij} is the cofactor of a_{ij} in det A. Note well that in A^{-1}, the cofactor A_{ij} occupies the same place as a_{ji} does in A.
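The cofactor formula can be sketched directly in code; the function name `inverse_by_cofactors` and the test matrix below are made up for illustration:

```python
import numpy as np

def inverse_by_cofactors(A):
    """A^{-1} = (1/det A) [A_ij]^T, where A_ij is the cofactor of a_ij."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            # minor: delete row i and column j, then take the determinant
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)  # cofactor A_ij
    # Transposing puts the cofactor A_ij where a_ji sits in A
    return C.T / np.linalg.det(A)

A = np.array([[2.0, 1.0], [5.0, 3.0]])  # det A = 1
print(inverse_by_cofactors(A))          # approximately [[3, -1], [-5, 2]]
```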
Cofactor of a_{ij}
A determinant of order n is a scalar associated with an n x n matrix A = [a_{ij}], which is written D = det A,
and is defined for n = 1 by D = a_{11},
and for n ≥ 2 by
D = a_{i1}C_{i1} + a_{i2}C_{i2} + .... + a_{in}C_{in} (i = 1, 2, ...., or n)
where C_{ij} = (-1)^{i+j} M_{ij}
and M_{ij} is a determinant of order (n - 1), namely, the determinant of the submatrix of A obtained from A by deleting the row and column of the entry a_{ij}.
M_{ij} is called the minor of a_{ij} in D and C_{ij }is the cofactor of a_{ij} in D .
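The cofactor expansion translates into a short recursive routine; a minimal sketch (the function name is made up):

```python
import numpy as np

def det_by_expansion(A, i=0):
    """Determinant by cofactor expansion along row i: D = sum_j a_ij C_ij."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    D = 0.0
    for j in range(n):
        M = np.delete(np.delete(A, i, axis=0), j, axis=1)     # minor M_ij
        D += A[i, j] * (-1) ** (i + j) * det_by_expansion(M)  # times cofactor C_ij
    return D

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])
print(det_by_expansion(A))  # -3.0
```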
Let A = [a_{ij}] be a given n x n square matrix and consider the equation
AX = λX ........(1)
Here X is an unknown vector and λ an unknown scalar and, we want to determine both.
A value of λ for which (1) has a solution X ≠ 0 is called an eigenvalue of the matrix A. The corresponding solutions X ≠ 0 of (1) are called eigenvectors of A corresponding to that eigenvalue λ.
In matrix notation, (A - λI)X = 0 ........(2)
This homogeneous linear system of equations has a nontrivial solution if and only if the corresponding determinant of the coefficients is zero: D(λ) = det(A - λI) = 0.
D(λ) is called the characteristic determinant. The equation D(λ) = 0 is called the characteristic equation of the matrix A. By developing D(λ) we obtain a polynomial of nth degree in λ. This is called the characteristic polynomial of A.
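A minimal NumPy sketch of this procedure for a made-up 2 x 2 matrix, extracting the characteristic polynomial and solving for its roots:

```python
import numpy as np

A = np.array([[4.0, 1.0], [2.0, 3.0]])

# Coefficients of det(A - lambda*I), highest power first:
# here lambda^2 - 7*lambda + 10
coeffs = np.poly(A)
eigenvalues = np.roots(coeffs)  # roots of the characteristic polynomial
print(np.sort(np.round(eigenvalues.real, 6)))  # eigenvalues 2 and 5

# Cross-check against the direct eigensolver
assert np.allclose(np.sort(np.linalg.eigvals(A).real), np.sort(eigenvalues.real))
```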
Note:
Orthonormality Condition: X_{i}^{T}X_{j} = δ_{ij}
If i = j then δ_{ij} = 1 (Normalisation Condition)
and if i ≠ j then δ_{ij} = 0 (Orthogonality Condition)
Linear independence and dimensionality of a vector space
A set of N vectors X_{1}, X_{2}, ..., X_{N} is said to be linearly independent if a_{1}X_{1} + a_{2}X_{2} + .... + a_{N}X_{N} = 0 is satisfied only when a_{1} = a_{2} = .... = a_{N} = 0; otherwise it is said to be linearly dependent.
The dimension of a vector space is given by the maximum number of linearly independent vectors the space can have.
If the maximum number of linearly independent vectors a space has is N (X_{1}, X_{2}, ..., X_{N}), the space is said to be N-dimensional. In this case any vector Y of the vector space can be expressed as a linear combination of X_{1}, X_{2}, ..., X_{N}.
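Linear independence can be tested numerically by comparing the rank of the matrix whose columns are the vectors with the number of vectors; a sketch with made-up vectors:

```python
import numpy as np

# Made-up vectors: stack them as columns and compare the rank with the count
X1 = np.array([1, 0, 1])
X2 = np.array([0, 1, 1])
X3 = np.array([1, 1, 2])           # X3 = X1 + X2, so the set is dependent
V = np.column_stack([X1, X2, X3])

rank = np.linalg.matrix_rank(V)
print(rank == V.shape[1])          # False -> linearly dependent
```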
Example 4: Find the eigenvalues and orthonormal eigenvectors of matrix
For eigenvalues
(2 - λ){λ^{2} - 1} = 0 ⇒ λ = 2 and λ = ±1
The eigenvalues are λ_{1} = -1, λ_{2} = 1, λ_{3} = 2. All eigenvalues are different, so they are nondegenerate. Eigenvectors can be determined from the equation AX = λX.
For λ_{1} = -1
The normalized eigenvector can be determined from the relation X_{1}^{T}X_{1} = 1.
The normalized eigenvector corresponding to λ_{1} = -1 is
For λ_{2} = 1
The normalized eigenvector can be determined from the relation X_{2}^{T}X_{2} = 1.
For λ_{3} = 2
The normalized eigenvector can be determined from the relation X_{3}^{T}X_{3} = 1.
Example 5: Find the eigenvalues and orthonormal eigenvectors of matrix A =
Also check that eigenvectors are linearly independent.
For eigenvalues, det(A - λI) = 0
(1 - λ){λ^{2} - 1} = 0 ⇒ λ = 1, 1, -1
where λ_{1} = -1 is the nondegenerate eigenvalue and λ_{2} = λ_{3} = 1 is the doubly degenerate eigenvalue.
Eigenvectors can be determined from the equation AX = λX. For λ_{1} = -1
The normalized eigenvector can be determined from the relation X_{1}^{T}X_{1} = 1.
For λ_{2} = λ_{3} = 1
So the eigenvector is given for any values of x_{1} and x_{2} (with x_{2} = x_{3}). One can find eigenvectors corresponding to λ_{2} = λ_{3} = 1, but we need to find orthogonal eigenvectors.
Let x_{1} = x_{1} and x_{2} = 0, so X_{2} is proportional to (1, 0, 0)^{T}.
The normalized eigenvector can be determined from the relation X_{2}^{T}X_{2} = 1 ⇒ X_{2} = (1, 0, 0)^{T}.
Now we must find a third eigenvector which satisfies x_{1} = x_{1} and x_{2} = x_{3} and is orthogonal to X_{2}, so x_{1} = 0, and the value of x_{2} can be found from the normalization relation X_{3}^{T}X_{3} = 1.
Setting a_{1}X_{1} + a_{2}X_{2} + a_{3}X_{3} = 0 ⇒ a_{1} = 0, a_{2} = 0, a_{3} = 0. So they are linearly independent.
Example 6: Find the eigenvalues and orthonormal eigenvectors of A =
For eigenvalues, det(A - λI) = 0
⇒ -(2 + λ){-λ(1 - λ) - 12} - 2{-2λ - 6} - 3{-4 + (1 - λ)} = 0
⇒ -(2 + λ){λ^{2} - λ - 12} + 2{2λ + 6} + 3{3 + λ} = 0
⇒ -{λ^{3} + λ^{2} - 14λ - 24} + 7λ + 21 = 0
⇒ λ^{3} + λ^{2} - 21λ - 45 = 0 ⇒ (λ + 3)^{2}(λ - 5) = 0 ⇒ λ = 5, -3, -3
where λ_{1} = 5 is the nondegenerate eigenvalue and λ_{2} = λ_{3} = -3 is the doubly degenerate eigenvalue.
Eigenvectors can be determined from the equation AX = λX.
For λ_{1} = 5
⇒ -2x_{1} + 2x_{2} - 3x_{3} = 5x_{1}, 2x_{1} + x_{2} - 6x_{3} = 5x_{2} and -x_{1} - 2x_{2} = 5x_{3}
⇒ -7x_{1} + 2x_{2} - 3x_{3} = 0, 2x_{1} - 4x_{2} - 6x_{3} = 0 and x_{1} + 2x_{2} + 5x_{3} = 0
From the above relations, x_{2} = 2x_{1} and x_{3} = -x_{1}, so X_{1} is proportional to (1, 2, -1)^{T}.
The normalized eigenvector can be determined from the relation X_{1}^{T}X_{1} = 1 ⇒ X_{1} = (1/√6)(1, 2, -1)^{T}.
For λ_{2} = λ_{3} = -3
⇒ -2x_{1} + 2x_{2} - 3x_{3} = -3x_{1}, 2x_{1} + x_{2} - 6x_{3} = -3x_{2} and -x_{1} - 2x_{2} = -3x_{3}
⇒ x_{1} + 2x_{2} - 3x_{3} = 0, 2x_{1} + 4x_{2} - 6x_{3} = 0 and x_{1} + 2x_{2} - 3x_{3} = 0
Let x_{3} = 0 ⇒ x_{1} = -2x_{2}. So the eigenvector X_{2} is proportional to (-2, 1, 0)^{T} for any value of x_{2}.
The normalized eigenvector can be determined from the relation X_{2}^{T}X_{2} = 1 ⇒ X_{2} = (1/√5)(-2, 1, 0)^{T}.
Now we must find a third eigenvector which satisfies x_{1} + 2x_{2} - 3x_{3} = 0 and is orthogonal to X_{2}: X_{2}^{T}X_{3} = 0 ⇒ -2x_{1} + x_{2} = 0.
So X_{3} is proportional to (3, 6, 5)^{T}, and its normalization is fixed by the relation X_{3}^{T}X_{3} = 1.
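Example 6 can be cross-checked numerically. The matrix below is reconstructed from the component equations worked above (an assumption, since the displayed matrix did not survive extraction):

```python
import numpy as np

# Matrix reconstructed from the component equations of Example 6 (assumption)
A = np.array([[-2.0,  2.0, -3.0],
              [ 2.0,  1.0, -6.0],
              [-1.0, -2.0,  0.0]])

eigenvalues, _ = np.linalg.eig(A)
print(np.sort(np.round(eigenvalues.real, 6)))  # -3 (doubly degenerate) and 5

# The eigenvector for lambda = 5 is proportional to (1, 2, -1)
x = np.array([1.0, 2.0, -1.0])
print(np.allclose(A @ x, 5 * x))               # True
```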
The Cayley-Hamilton theorem provides an alternative method for finding the inverse of a matrix A. Also, any positive integral power of A can be expressed, using this theorem, as a linear combination of lower powers.
Every square matrix satisfies its own characteristic equation. That means that, if
λ^{n} + a_{1}λ^{n-1} + .... + a_{n} = 0
is the characteristic equation of a square matrix A of order n, then
A^{n} + a_{1}A^{n-1} + .... + a_{n}I = 0
Note:
When λ is replaced by A in the characteristic equation, the constant term a_{n} should be replaced by a_{n}I to get the result of the Cayley-Hamilton theorem, where I is the unit matrix of order n. Also, 0 on the R.H.S. is a null matrix of order n.
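The theorem can be turned into a short inverse routine; a sketch in NumPy (the function name and test matrix are made up), using `np.poly` for the characteristic coefficients:

```python
import numpy as np

def inverse_by_cayley_hamilton(A):
    """If lambda^n + a1 lambda^{n-1} + ... + an = 0 is the characteristic
    equation, Cayley-Hamilton gives A^n + a1 A^{n-1} + ... + an I = 0, so
    A^{-1} = -(A^{n-1} + a1 A^{n-2} + ... + a_{n-1} I) / an."""
    n = A.shape[0]
    a = np.poly(A)                 # [1, a1, ..., an], monic characteristic poly
    S = np.zeros((n, n))
    Ak = np.eye(n)                 # A^0
    for k in range(n):             # S = a_{n-1} I + a_{n-2} A + ... + A^{n-1}
        S += a[n - 1 - k] * Ak
        Ak = Ak @ A
    return -S / a[n]

A = np.array([[2.0, 1.0], [1.0, 1.0]])  # det A = 1
print(inverse_by_cayley_hamilton(A))    # approximately [[1, -1], [-1, 2]]
```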
Example 7: Find A^{-1} by the Cayley-Hamilton theorem if A =
The characteristic equation of A is det(A - λI) = 0
By the Cayley-Hamilton theorem
Premultiplying by A^{-1}, we get A^{-1} =
Example 8: Find A^{-1} by the Cayley-Hamilton theorem if A =
The characteristic equation of A is det(A - λI) = 0
By the Cayley-Hamilton theorem
Premultiplying by A^{-1}, we get A^{-1} =
Example 9: Find A^{-1} by the Cayley-Hamilton theorem if A =
The characteristic equation of A is det(A - λI) = 0
By the Cayley-Hamilton theorem
A^{3} - A^{2} - A + I = 0 ⇒ I = -A^{3} + A^{2} + A
Premultiplying by A^{-1}, we get A^{-1} = -A^{2} + A + I