Orthonormal Bases
The canonical/standard basis has many useful properties.
Each of the standard basis vectors has unit length:
‖e_i‖ = √(e_i • e_i) = 1.
The standard basis vectors are orthogonal (in other words, at right angles or perpendicular):
e_i • e_j = 0 when i ≠ j.
Both properties are summarized by
e_i • e_j = δ_ij,
where δ_ij is the Kronecker delta: δ_ij = 1 if i = j and 0 otherwise. Notice that the Kronecker delta gives the entries of the identity matrix.
Given column vectors v and w, we have seen that the dot product v • w is the same as the matrix multiplication v^T w. This is the inner product on R^n. We can also form the outer product v w^T, which gives a square matrix.
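A quick numerical sketch of the inner/outer product distinction (using NumPy; the choice of library and the sample vectors are illustrative, not from the text):

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0, 6.0])

inner = v @ w           # scalar: v^T w = 1*4 + 2*5 + 3*6
outer = np.outer(v, w)  # 3x3 matrix: v w^T, with (i, j) entry v_i * w_j

print(inner)        # 32.0
print(outer.shape)  # (3, 3)
```

The inner product collapses two vectors to a scalar, while the outer product expands them into a rank-one square matrix.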
The outer product on the standard basis vectors is interesting. Set
Π_i := e_i e_i^T.
In short, Π_i is the diagonal square matrix with a 1 in the ith diagonal position and zeros everywhere else.¹
Notice that
Π_i Π_j = e_i (e_i^T e_j) e_j^T = δ_ij Π_i,
so Π_i Π_i = Π_i and Π_i Π_j = 0 when i ≠ j. Moreover, for a diagonal matrix D with diagonal entries λ_1, ..., λ_n, we can write
D = λ_1 Π_1 + λ_2 Π_2 + ... + λ_n Π_n.
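The decomposition of a diagonal matrix into the Π_i can be checked numerically. A minimal sketch (NumPy and the sample entries λ are assumptions for illustration):

```python
import numpy as np

n = 3
e = np.eye(n)  # row i is the standard basis vector e_i

# Pi_i = e_i e_i^T: a 1 in the (i, i) position, zeros everywhere else
Pi = [np.outer(e[i], e[i]) for i in range(n)]

# D = lambda_1 Pi_1 + ... + lambda_n Pi_n reproduces the diagonal matrix
lam = [2.0, -1.0, 5.0]
D = sum(l * P for l, P in zip(lam, Pi))
assert np.allclose(D, np.diag(lam))

# Pi_i Pi_j = delta_ij Pi_i
assert np.allclose(Pi[0] @ Pi[0], Pi[0])
assert np.allclose(Pi[0] @ Pi[1], np.zeros((n, n)))
```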
¹ This is reminiscent of an older notation, where vectors are written in juxtaposition. This is called a `dyadic tensor,' and is still used in some applications.
Other bases that share these properties should behave in many of the same ways as the standard basis. As such, we will study:
Orthogonal bases {v_1, ..., v_n}:
v_i • v_j = 0 if i ≠ j.
In other words, all vectors in the basis are perpendicular.
Orthonormal bases {u_1, ..., u_n}:
u_i • u_j = δ_ij.
In addition to being orthogonal, each vector has unit length.
Suppose T = {u_1, ..., u_n} is an orthonormal basis for R^n. Since T is a basis, we can write any vector v uniquely as a linear combination of the vectors in T:
v = c_1 u_1 + ... + c_n u_n.
Since T is orthonormal, there is a very easy way to find the coefficients of this linear combination. By taking the dot product of v with any one of the vectors in T, we get:
u_i • v = c_1 (u_i • u_1) + ... + c_n (u_i • u_n) = c_i.
This proves the theorem:
Theorem. For an orthonormal basis {u_1, ..., u_n}, any vector v can be expressed
v = (v • u_1) u_1 + ... + (v • u_n) u_n.
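The theorem above says coefficients are just dot products. A short sketch (the particular orthonormal basis of R^2 and the vector v are hypothetical choices for illustration):

```python
import numpy as np

# A hypothetical orthonormal basis of R^2 (the standard basis rotated 45 degrees)
u1 = np.array([1.0, 1.0]) / np.sqrt(2)
u2 = np.array([-1.0, 1.0]) / np.sqrt(2)

v = np.array([3.0, 7.0])

# Coefficients come directly from dot products: c_i = v . u_i
c1, c2 = v @ u1, v @ u2

# Reassembling the linear combination recovers v exactly
assert np.allclose(c1 * u1 + c2 * u2, v)
```

No linear system needs to be solved; orthonormality does all the work.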
Relating Orthonormal Bases
Suppose T = {u_1, ..., u_n} and R = {w_1, ..., w_n} are two orthonormal bases for R^n. Then each w_j can be expanded in the basis T using the theorem above:
w_j = (u_1 • w_j) u_1 + ... + (u_n • w_j) u_n.
As such, the matrix for the change of basis from T to R is given by
P = (P_ij), where P_ij = u_i • w_j.
Consider the product P P^T in this case:
(P P^T)_ij = Σ_k (u_i • w_k)(u_j • w_k)
           = u_i • ( Σ_k (w_k • u_j) w_k )
           =(*) u_i • u_j
           = δ_ij.
The equality (*) is explained below. So, assuming (*) holds, we have shown that P P^T = I_n, which implies that
P^T = P^-1.
The equality in the line (*) says that
Σ_k w_k (w_k • v) = v for any vector v.
To see this, we examine Σ_k w_k (w_k • v) for an arbitrary vector v. We can find constants c_j such that v = c_1 w_1 + ... + c_n w_n, so that:
Σ_k w_k (w_k • v) = Σ_k w_k ( Σ_j c_j (w_k • w_j) )
                  = Σ_k c_k w_k    (since all terms with k ≠ j vanish)
                  = v.
Then, as a linear transformation, the map v ↦ Σ_k w_k (w_k • v) fixes every vector, and thus must be the identity I_n.
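The identity Σ_k w_k w_k^T = I_n used in (*) can be verified numerically. A sketch (building a random orthonormal basis via a QR factorization is an assumed construction, not from the text):

```python
import numpy as np

# Columns of Q form an orthonormal basis of R^4 (assumed construction via QR)
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
ws = Q.T  # rows are the basis vectors w_1, ..., w_n

# sum_k w_k w_k^T fixes every vector, hence equals the identity
S = sum(np.outer(w, w) for w in ws)
assert np.allclose(S, np.eye(4))

v = rng.standard_normal(4)
assert np.allclose(S @ v, v)
```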
Definition. A matrix P is orthogonal if P^-1 = P^T.
Then, to summarize:
Theorem. A change of basis matrix P relating two orthonormal bases is an orthogonal matrix, i.e.,
P^-1 = P^T.
Example. Consider R^3 with an orthonormal basis S = {u_1, u_2, u_3}.
Let R be the standard basis {e_1, e_2, e_3}. Since we are changing from the standard basis to a new basis, the columns of the change of basis matrix are exactly the new basis vectors written in standard coordinates. Then the change of basis matrix from R to S is given by
P = (u_1 u_2 u_3), the matrix whose jth column is u_j.
From our theorem, we observe that P^-1 = P^T.
We can check that P^T P = I_n by a lengthy computation, or more simply, notice that
(P^T P)_ij = u_i • u_j = δ_ij.
We are using the orthonormality of the u_i for the matrix multiplication above.
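The check (P^T P)_ij = u_i • u_j = δ_ij is easy to run numerically. Since the example's specific basis is not reproduced here, the sketch below uses a stand-in orthonormal basis of R^3:

```python
import numpy as np

# A hypothetical orthonormal basis of R^3 (u3 = u1 x u2, normalized)
u1 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
u2 = np.array([1.0, -1.0, 1.0]) / np.sqrt(3)
u3 = np.array([1.0, -1.0, -2.0]) / np.sqrt(6)

# Change of basis matrix: columns are the new basis vectors
P = np.column_stack([u1, u2, u3])

# P^T P has (i, j) entry u_i . u_j = delta_ij, so P is orthogonal
assert np.allclose(P.T @ P, np.eye(3))
assert np.allclose(np.linalg.inv(P), P.T)
```

Computing `np.linalg.inv(P)` here is only to confirm the theorem; in practice the transpose replaces the (more expensive) inverse.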
Orthonormal Change of Basis and Diagonal Matrices. Suppose D is a diagonal matrix, and we use an orthogonal matrix P to change to a new basis. Then the matrix M of D in the new basis is:
M = P D P^-1 = P D P^T.
Now we calculate the transpose of M:
M^T = (P D P^T)^T = (P^T)^T D^T P^T = P D P^T = M,
using that a diagonal matrix is symmetric, so D^T = D. So we see the matrix P D P^T is symmetric!
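The symmetry of P D P^T can be confirmed directly. A minimal sketch (the orthogonal matrix is generated via QR and the diagonal entries are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(1)
P, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # an orthogonal matrix
assert np.allclose(P.T @ P, np.eye(3))

D = np.diag([1.0, 4.0, 9.0])  # a diagonal matrix

M = P @ D @ P.T
assert np.allclose(M, M.T)  # M equals its own transpose: symmetric
```

This is one half of the spectral theorem story: conjugating a diagonal matrix by an orthogonal matrix always produces a symmetric matrix.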