
Formula Sheet: Linear Algebra

Matrix Operations and Properties

Matrix Addition and Subtraction

  • Matrix Addition: \([A]_{m×n} + [B]_{m×n} = [C]_{m×n}\) where \(c_{ij} = a_{ij} + b_{ij}\)
  • Requirements: Matrices must have identical dimensions
  • Commutative Property: \(A + B = B + A\)
  • Associative Property: \((A + B) + C = A + (B + C)\)

Scalar Multiplication

  • Scalar Multiplication: \(k[A]_{m×n} = [B]_{m×n}\) where \(b_{ij} = k \cdot a_{ij}\)
  • k = scalar constant
  • Distributive Property: \(k(A + B) = kA + kB\)
  • Associative Property: \((kp)A = k(pA)\)

Matrix Multiplication

  • Matrix Product: \([A]_{m×n} × [B]_{n×p} = [C]_{m×p}\)
  • Element Formula: \(c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}\)
  • Requirement: Number of columns in first matrix must equal number of rows in second matrix
  • Non-commutative: \(AB ≠ BA\) in general
  • Associative Property: \((AB)C = A(BC)\)
  • Distributive Property: \(A(B + C) = AB + AC\)
  • Identity Matrix Multiplication: \(AI = IA = A\)
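The dimension rule and element formula above can be checked numerically; a minimal sketch, assuming NumPy is available:

```python
import numpy as np

# A is 2x3 and B is 3x2: inner dimensions match, so C = AB is 2x2.
A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[7,  8],
              [9, 10],
              [11, 12]])

C = A @ B  # c_ij = sum_k a_ik * b_kj
# e.g. c_11 = 1*7 + 2*9 + 3*11 = 58
```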

Matrix Transpose

  • Transpose Operation: \([A^T]_{n×m}\) from \([A]_{m×n}\) where \((A^T)_{ij} = a_{ji}\)
  • Double Transpose: \((A^T)^T = A\)
  • Sum Transpose: \((A + B)^T = A^T + B^T\)
  • Product Transpose: \((AB)^T = B^T A^T\)
  • Scalar Transpose: \((kA)^T = k A^T\)
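A quick numerical check of the transpose rules, in particular the reversed order in the product rule (a NumPy sketch; any linear-algebra library would do):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 2))

# (AB)^T = B^T A^T -- note the reversed order on the right-hand side
product_rule_ok = np.allclose((A @ B).T, B.T @ A.T)
# (A + A)^T = A^T + A^T
sum_rule_ok = np.allclose((A + A).T, A.T + A.T)
```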

Special Matrix Types

  • Square Matrix: \(m = n\) (number of rows equals number of columns)
  • Diagonal Matrix: \(a_{ij} = 0\) for all \(i ≠ j\)
  • Identity Matrix: \(I\) where \(a_{ii} = 1\) and \(a_{ij} = 0\) for \(i ≠ j\)
  • Zero Matrix: All elements equal zero
  • Symmetric Matrix: \(A = A^T\) (only for square matrices)
  • Skew-Symmetric Matrix: \(A = -A^T\) and diagonal elements are zero
  • Upper Triangular Matrix: \(a_{ij} = 0\) for all \(i > j\)
  • Lower Triangular Matrix: \(a_{ij} = 0\) for all \(i < j\)

Determinants

Determinant of 2×2 Matrix

\[ \det(A) = |A| = \begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc \]

Determinant of 3×3 Matrix

\[ \det(A) = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} \]
  • Expansion by First Row:
\[ \det(A) = a_{11}(a_{22}a_{33} - a_{23}a_{32}) - a_{12}(a_{21}a_{33} - a_{23}a_{31}) + a_{13}(a_{21}a_{32} - a_{22}a_{31}) \]
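The first-row expansion can be worked through on a concrete 3×3 matrix and compared against a library routine (a NumPy sketch; the matrix is an arbitrary example):

```python
import numpy as np

A = np.array([[2., 1., 3.],
              [0., 4., 1.],
              [5., 2., 0.]])
a = A

# First-row expansion, term by term as in the formula above:
# 2*(4*0 - 1*2) - 1*(0*0 - 1*5) + 3*(0*2 - 4*5) = -4 + 5 - 60 = -59
det_manual = (a[0, 0] * (a[1, 1] * a[2, 2] - a[1, 2] * a[2, 1])
            - a[0, 1] * (a[1, 0] * a[2, 2] - a[1, 2] * a[2, 0])
            + a[0, 2] * (a[1, 0] * a[2, 1] - a[1, 1] * a[2, 0]))

det_np = np.linalg.det(A)  # should agree with the manual expansion
```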

Cofactor Expansion (Laplace Expansion)

  • Minor: \(M_{ij}\) = determinant of submatrix obtained by deleting row \(i\) and column \(j\)
  • Cofactor: \(C_{ij} = (-1)^{i+j} M_{ij}\)
  • Determinant by Row \(i\): \(\det(A) = \sum_{j=1}^{n} a_{ij} C_{ij}\)
  • Determinant by Column \(j\): \(\det(A) = \sum_{i=1}^{n} a_{ij} C_{ij}\)

Determinant Properties

  • Product Rule: \(\det(AB) = \det(A) \cdot \det(B)\)
  • Transpose Rule: \(\det(A^T) = \det(A)\)
  • Scalar Multiplication: \(\det(kA) = k^n \det(A)\) for \(n×n\) matrix
  • Inverse Rule: \(\det(A^{-1}) = \frac{1}{\det(A)}\)
  • Triangular Matrix: Determinant equals product of diagonal elements
  • Row Interchange: Swapping two rows changes sign of determinant
  • Row Multiplication: Multiplying a row by scalar \(k\) multiplies determinant by \(k\)
  • Row Addition: Adding multiple of one row to another doesn't change determinant
  • Zero Row/Column: If any row or column is all zeros, \(\det(A) = 0\)
  • Proportional Rows: If two rows (or columns) are proportional, \(\det(A) = 0\)
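Several of these properties are easy to verify on random matrices; a hedged NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
k = 2.0

# det(AB) = det(A) det(B)
product_ok = np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))
# det(A^T) = det(A)
transpose_ok = np.isclose(np.linalg.det(A.T), np.linalg.det(A))
# det(kA) = k^n det(A)
scalar_ok = np.isclose(np.linalg.det(k * A), k**n * np.linalg.det(A))
# Swapping two rows flips the sign of the determinant
swap_ok = np.isclose(np.linalg.det(A[[1, 0, 2]]), -np.linalg.det(A))
```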

Matrix Inverse

Inverse Definition and Properties

  • Inverse Matrix: \(A^{-1}\) such that \(AA^{-1} = A^{-1}A = I\)
  • Existence Condition: \(A^{-1}\) exists if and only if \(\det(A) ≠ 0\)
  • Singular Matrix: Matrix with \(\det(A) = 0\) (non-invertible)
  • Non-singular Matrix: Matrix with \(\det(A) ≠ 0\) (invertible)
  • Uniqueness: If inverse exists, it is unique

Inverse of 2×2 Matrix

\[ A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}, \quad A^{-1} = \frac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix} \]
  • Requirement: \(ad - bc ≠ 0\)
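The 2×2 formula applied directly, with the result checked against \(AA^{-1} = I\) (a NumPy sketch on an arbitrary example matrix):

```python
import numpy as np

A = np.array([[4., 7.],
              [2., 6.]])
a, b, c, d = A.ravel()

det = a * d - b * c              # 4*6 - 7*2 = 10, nonzero so A is invertible
A_inv = (1.0 / det) * np.array([[ d, -b],
                                [-c,  a]])

identity_ok = np.allclose(A @ A_inv, np.eye(2))
```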

Adjugate (Adjoint) Method for General Matrices

\[ A^{-1} = \frac{1}{\det(A)} \text{adj}(A) \]
  • Adjugate Matrix: \(\text{adj}(A) = [C_{ij}]^T\) where \(C_{ij}\) are cofactors
  • Element Formula: \((\text{adj}(A))_{ij} = C_{ji}\)

Inverse Properties

  • Double Inverse: \((A^{-1})^{-1} = A\)
  • Product Inverse: \((AB)^{-1} = B^{-1}A^{-1}\)
  • Transpose Inverse: \((A^T)^{-1} = (A^{-1})^T\)
  • Scalar Inverse: \((kA)^{-1} = \frac{1}{k}A^{-1}\) for \(k ≠ 0\)
  • Identity Inverse: \(I^{-1} = I\)
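The inverse properties, like the transpose ones, can be spot-checked numerically; note again the reversed order in the product rule (a NumPy sketch on random matrices, which are invertible with probability one):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
A_inv = np.linalg.inv(A)

double_ok    = np.allclose(np.linalg.inv(A_inv), A)            # (A^-1)^-1 = A
product_ok   = np.allclose(np.linalg.inv(A @ B),
                           np.linalg.inv(B) @ A_inv)           # (AB)^-1 = B^-1 A^-1
transpose_ok = np.allclose(np.linalg.inv(A.T), A_inv.T)        # (A^T)^-1 = (A^-1)^T
```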

Systems of Linear Equations

Matrix Form

  • System Representation: \(Ax = b\)
  • A = coefficient matrix (\(m×n\))
  • x = variable vector (\(n×1\))
  • b = constant vector (\(m×1\))
  • Augmented Matrix: \([A|b]\)

Solution Methods

Matrix Inverse Method

  • Solution Formula: \(x = A^{-1}b\)
  • Requirement: \(A\) must be square and non-singular (\(\det(A) ≠ 0\))
  • Application: Best for systems where \(m = n\) and unique solution exists

Cramer's Rule

  • Solution for Variable \(x_i\): \(x_i = \frac{\det(A_i)}{\det(A)}\)
  • \(A_i\) = matrix formed by replacing column \(i\) of \(A\) with vector \(b\)
  • Requirements: \(A\) must be square and \(\det(A) ≠ 0\)
  • Application: Useful for small systems (2×2 or 3×3) or finding single variable
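Cramer's rule on a small worked system (a NumPy sketch; the system \(2x + y = 5\), \(x + 3y = 10\) is an arbitrary example with solution \(x = 1\), \(y = 3\)):

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 3.]])
b = np.array([5., 10.])

det_A = np.linalg.det(A)        # 2*3 - 1*1 = 5, nonzero so the rule applies

x = np.empty(2)
for i in range(2):
    Ai = A.copy()
    Ai[:, i] = b                # replace column i with b
    x[i] = np.linalg.det(Ai) / det_A
```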

Solution Classification

  • Unique Solution: System has exactly one solution when \(\det(A) ≠ 0\) for square systems
  • No Solution: System is inconsistent (parallel planes, no intersection)
  • Infinite Solutions: System is dependent (overlapping or identical equations)
  • Homogeneous System: \(Ax = 0\) always has at least the trivial solution \(x = 0\)
  • Non-trivial Solution: Homogeneous system has non-trivial solutions if \(\det(A) = 0\)

Gaussian Elimination

  • Elementary Row Operations:
    • Interchange two rows
    • Multiply a row by non-zero constant
    • Add multiple of one row to another row
  • Row Echelon Form (REF): Upper triangular form with leading 1's
  • Reduced Row Echelon Form (RREF): REF with zeros above and below each leading 1
  • Back Substitution: Solve from bottom row upward after achieving REF
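The elimination-then-back-substitution procedure above can be sketched directly; this is a teaching implementation assuming NumPy, with partial pivoting for stability, not production code:

```python
import numpy as np

def gauss_solve(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting,
    then back substitution (a teaching sketch)."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # Forward elimination toward row echelon form
    for col in range(n - 1):
        pivot = col + np.argmax(np.abs(A[col:, col]))   # partial pivoting
        A[[col, pivot]] = A[[pivot, col]]               # row interchange
        b[[col, pivot]] = b[[pivot, col]]
        for row in range(col + 1, n):
            m = A[row, col] / A[col, col]
            A[row, col:] -= m * A[col, col:]            # add multiple of pivot row
            b[row] -= m * b[col]
    # Back substitution from the bottom row upward
    x = np.zeros(n)
    for row in range(n - 1, -1, -1):
        x[row] = (b[row] - A[row, row + 1:] @ x[row + 1:]) / A[row, row]
    return x

A = np.array([[ 2.,  1., -1.],
              [-3., -1.,  2.],
              [-2.,  1.,  2.]])
b = np.array([8., -11., -3.])
x = gauss_solve(A, b)           # this classic example has solution (2, 3, -1)
```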

Vector Spaces and Subspaces

Vector Operations

  • Vector Addition: \(\vec{u} + \vec{v} = [u_1 + v_1, u_2 + v_2, ..., u_n + v_n]\)
  • Scalar Multiplication: \(k\vec{v} = [kv_1, kv_2, ..., kv_n]\)
  • Dot Product (Inner Product): \(\vec{u} \cdot \vec{v} = \sum_{i=1}^{n} u_i v_i = u_1v_1 + u_2v_2 + ... + u_nv_n\)
  • Dot Product Properties:
    • Commutative: \(\vec{u} \cdot \vec{v} = \vec{v} \cdot \vec{u}\)
    • Distributive: \(\vec{u} \cdot (\vec{v} + \vec{w}) = \vec{u} \cdot \vec{v} + \vec{u} \cdot \vec{w}\)
    • Scalar: \((k\vec{u}) \cdot \vec{v} = k(\vec{u} \cdot \vec{v})\)

Vector Magnitude and Distance

  • Magnitude (Norm): \(||\vec{v}|| = \sqrt{\vec{v} \cdot \vec{v}} = \sqrt{v_1^2 + v_2^2 + ... + v_n^2}\)
  • Unit Vector: \(\hat{v} = \frac{\vec{v}}{||\vec{v}||}\) where \(||\hat{v}|| = 1\)
  • Distance Between Vectors: \(d(\vec{u}, \vec{v}) = ||\vec{u} - \vec{v}||\)
  • Angle Between Vectors: \(\cos\theta = \frac{\vec{u} \cdot \vec{v}}{||\vec{u}|| \cdot ||\vec{v}||}\)
  • Orthogonal Vectors: \(\vec{u} \perp \vec{v}\) if and only if \(\vec{u} \cdot \vec{v} = 0\)
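The norm, unit-vector, and orthogonality formulas on a concrete pair (a NumPy sketch; \(\vec{u} = (3,4)\) and \(\vec{v} = (4,-3)\) are chosen so the arithmetic is clean):

```python
import numpy as np

u = np.array([3., 4.])
v = np.array([4., -3.])

norm_u = np.linalg.norm(u)       # sqrt(3^2 + 4^2) = 5
unit_u = u / norm_u              # unit vector in the direction of u

# u . v = 3*4 + 4*(-3) = 0, so cos(theta) = 0 and the vectors are orthogonal
cos_theta = (u @ v) / (norm_u * np.linalg.norm(v))
```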

Cross Product (3D Vectors Only)

\[ \vec{u} × \vec{v} = \begin{vmatrix} \vec{i} & \vec{j} & \vec{k} \\ u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \end{vmatrix} \]
  • Component Form: \(\vec{u} × \vec{v} = (u_2v_3 - u_3v_2)\vec{i} - (u_1v_3 - u_3v_1)\vec{j} + (u_1v_2 - u_2v_1)\vec{k}\)
  • Magnitude: \(||\vec{u} × \vec{v}|| = ||\vec{u}|| \cdot ||\vec{v}|| \sin\theta\)
  • Direction: Perpendicular to both \(\vec{u}\) and \(\vec{v}\) (right-hand rule)
  • Anti-commutative: \(\vec{u} × \vec{v} = -(\vec{v} × \vec{u})\)
  • Parallel Vectors: \(\vec{u} × \vec{v} = \vec{0}\) if vectors are parallel
  • Area of Parallelogram: \(A = ||\vec{u} × \vec{v}||\)
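The component formula, perpendicularity, and anti-commutativity checked on an example pair (a NumPy sketch; the vectors are arbitrary):

```python
import numpy as np

u = np.array([2., 3., 4.])
v = np.array([5., 6., 7.])

w = np.cross(u, v)
# component form: (3*7 - 4*6, 4*5 - 2*7, 2*6 - 3*5) = (-3, 6, -3)

perpendicular_ok = np.isclose(w @ u, 0.0) and np.isclose(w @ v, 0.0)
anticommute_ok = np.allclose(np.cross(v, u), -w)
area = np.linalg.norm(w)         # area of the parallelogram spanned by u, v
```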

Linear Independence and Dependence

Linear Combinations

  • Linear Combination: \(\vec{v} = c_1\vec{v}_1 + c_2\vec{v}_2 + ... + c_n\vec{v}_n\)
  • \(c_i\) = scalar coefficients
  • Span: Set of all possible linear combinations of given vectors

Linear Independence

  • Linearly Independent: Vectors \(\vec{v}_1, \vec{v}_2, ..., \vec{v}_n\) are linearly independent if:
    \(c_1\vec{v}_1 + c_2\vec{v}_2 + ... + c_n\vec{v}_n = \vec{0}\)
    implies \(c_1 = c_2 = ... = c_n = 0\) (trivial solution only)
  • Linearly Dependent: At least one non-trivial solution exists
  • Test for Independence: Form matrix with vectors as columns; vectors are independent if \(\det(A) ≠ 0\) (for square matrix)
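The determinant/rank test on a deliberately dependent set (a NumPy sketch; the third vector is constructed as \(\vec{v}_1 + \vec{v}_2\)):

```python
import numpy as np

# Candidate vectors as columns; v3 = v1 + v2, so the set is dependent
V = np.column_stack([[1., 0., 2.],
                     [0., 1., 1.],
                     [1., 1., 3.]])

dependent = np.isclose(np.linalg.det(V), 0.0)   # det = 0 signals dependence
rank = np.linalg.matrix_rank(V)                 # only 2 independent columns
```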

Rank and Nullity

Rank

  • Row Rank: Maximum number of linearly independent rows
  • Column Rank: Maximum number of linearly independent columns
  • Rank Property: Row rank = Column rank = rank(A)
  • Full Rank: rank(A) = min(m, n) for \(m×n\) matrix
  • Rank and Determinant: For square matrix, rank(A) = n if and only if \(\det(A) ≠ 0\)
  • Rank and Invertibility: Square matrix \(A_{n×n}\) is invertible if and only if rank(A) = n

Nullity and Rank-Nullity Theorem

  • Null Space: Set of all solutions to \(Ax = 0\)
  • Nullity: Dimension of null space (number of free variables)
  • Rank-Nullity Theorem: \(\text{rank}(A) + \text{nullity}(A) = n\)
  • n = number of columns in matrix A
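The rank–nullity theorem on a rank-deficient example (a NumPy sketch; the second row is twice the first, so one dimension is lost):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.],     # proportional to row 1
              [1., 1., 1.]])

rank = np.linalg.matrix_rank(A)   # 2 independent rows
n = A.shape[1]                    # number of columns
nullity = n - rank                # rank(A) + nullity(A) = n
```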

Eigenvalues and Eigenvectors

Fundamental Definitions

  • Eigenvalue Problem: \(A\vec{v} = \lambda\vec{v}\)
  • λ = eigenvalue (scalar)
  • v = eigenvector (non-zero vector)
  • A = square matrix

Characteristic Equation

  • Characteristic Equation: \(\det(A - \lambda I) = 0\)
  • Characteristic Polynomial: \(p(\lambda) = \det(A - \lambda I)\)
  • Degree: For \(n×n\) matrix, polynomial has degree \(n\)
  • Number of Eigenvalues: Exactly \(n\) eigenvalues counting multiplicities (some may be complex); at most \(n\) are distinct

Finding Eigenvalues and Eigenvectors

  • Step 1 - Find Eigenvalues: Solve \(\det(A - \lambda I) = 0\)
  • Step 2 - Find Eigenvectors: For each \(\lambda\), solve \((A - \lambda I)\vec{v} = \vec{0}\)
  • Eigenvector Condition: \(\vec{v} ≠ \vec{0}\)
  • Scaling Property: If \(\vec{v}\) is eigenvector, so is \(k\vec{v}\) for any \(k ≠ 0\)
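The two-step procedure on a small example (a NumPy sketch; for the matrix below the characteristic equation is \(\lambda^2 - 7\lambda + 10 = 0\), giving \(\lambda = 5, 2\)):

```python
import numpy as np

A = np.array([[4., 1.],
              [2., 3.]])

eigvals, eigvecs = np.linalg.eig(A)   # columns of eigvecs are eigenvectors

# each eigenpair satisfies A v = lambda v
pairs_ok = all(np.allclose(A @ eigvecs[:, i], eigvals[i] * eigvecs[:, i])
               for i in range(len(eigvals)))

# sanity checks against the trace and determinant identities
trace_ok = np.isclose(eigvals.sum(), np.trace(A))
det_ok = np.isclose(eigvals.prod(), np.linalg.det(A))
```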

Eigenvalue Properties

  • Trace: \(\text{tr}(A) = \sum_{i=1}^{n} a_{ii} = \sum_{i=1}^{n} \lambda_i\)
  • Determinant: \(\det(A) = \prod_{i=1}^{n} \lambda_i\)
  • Similar Matrices: Have same eigenvalues
  • Triangular Matrix: Eigenvalues are diagonal elements
  • Symmetric Matrix: All eigenvalues are real
  • Orthogonal Eigenvectors: Eigenvectors of symmetric matrix corresponding to distinct eigenvalues are orthogonal

Eigenvalue Applications

  • Matrix Powers: If \(A\vec{v} = \lambda\vec{v}\), then \(A^n\vec{v} = \lambda^n\vec{v}\)
  • Inverse Eigenvalues: If \(\lambda\) is an eigenvalue of invertible \(A\), then \(1/\lambda\) is an eigenvalue of \(A^{-1}\)
  • Diagonalization: \(A = PDP^{-1}\) where \(D\) is diagonal matrix of eigenvalues and \(P\) contains eigenvectors as columns
  • Diagonalizable Condition: Matrix is diagonalizable if it has \(n\) linearly independent eigenvectors
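Diagonalization and the matrix-power shortcut on the same kind of example (a NumPy sketch; the matrix has distinct eigenvalues, so \(P\) is invertible):

```python
import numpy as np

A = np.array([[4., 1.],
              [2., 3.]])

eigvals, P = np.linalg.eig(A)       # columns of P are eigenvectors
D = np.diag(eigvals)
P_inv = np.linalg.inv(P)

recon_ok = np.allclose(P @ D @ P_inv, A)                  # A = P D P^-1
# A^3 = P D^3 P^-1, so powers reduce to powers of the eigenvalues
power_ok = np.allclose(np.linalg.matrix_power(A, 3),
                       P @ np.diag(eigvals**3) @ P_inv)
```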

Orthogonality

Orthogonal Vectors and Matrices

  • Orthogonal Vectors: \(\vec{u} \perp \vec{v}\) if \(\vec{u} \cdot \vec{v} = 0\)
  • Orthonormal Vectors: Orthogonal and each has unit length (\(||\vec{v}_i|| = 1\))
  • Orthogonal Matrix: \(Q\) such that \(Q^TQ = QQ^T = I\)
  • Orthogonal Matrix Property: \(Q^{-1} = Q^T\)
  • Determinant: \(\det(Q) = ±1\) for orthogonal matrix
  • Preservation of Length: \(||Q\vec{x}|| = ||\vec{x}||\) for orthogonal matrix

Gram-Schmidt Orthogonalization

  • Purpose: Convert linearly independent set to orthogonal or orthonormal set
  • First Vector: \(\vec{u}_1 = \vec{v}_1\)
  • Projection Formula: \(\text{proj}_{\vec{u}}\vec{v} = \frac{\vec{v} \cdot \vec{u}}{\vec{u} \cdot \vec{u}}\vec{u}\)
  • Second Vector: \(\vec{u}_2 = \vec{v}_2 - \text{proj}_{\vec{u}_1}\vec{v}_2\)
  • General Formula: \(\vec{u}_k = \vec{v}_k - \sum_{j=1}^{k-1} \text{proj}_{\vec{u}_j}\vec{v}_k\)
  • Normalization: \(\hat{u}_i = \frac{\vec{u}_i}{||\vec{u}_i||}\) to obtain orthonormal set
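The procedure above as code (a teaching sketch of classical Gram-Schmidt assuming NumPy; it is not numerically robust for nearly dependent inputs):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a linearly independent list of vectors
    by subtracting projections onto earlier basis vectors."""
    basis = []
    for v in vectors:
        u = v.astype(float).copy()
        for q in basis:
            u -= (v @ q) * q            # proj onto q (q already unit length)
        basis.append(u / np.linalg.norm(u))   # normalize to get orthonormal set
    return np.array(basis)

Q = gram_schmidt([np.array([1., 1., 0.]),
                  np.array([1., 0., 1.])])

# rows of Q are orthonormal, so Q Q^T is the identity
orthonormal_ok = np.allclose(Q @ Q.T, np.eye(2))
```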

Matrix Decomposition

LU Decomposition

  • Decomposition: \(A = LU\)
  • L = lower triangular matrix
  • U = upper triangular matrix
  • Application: Efficiently solve \(Ax = b\) by solving \(Ly = b\) then \(Ux = y\)
  • Existence: Requires no row exchanges during Gaussian elimination
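A minimal Doolittle-style LU factorization without pivoting (a teaching sketch assuming NumPy and nonzero pivots, matching the existence condition above):

```python
import numpy as np

def lu_decompose(A):
    """Factor A = LU with L unit lower triangular and U upper triangular
    (no pivoting; assumes no zero pivots are encountered)."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for col in range(n - 1):
        for row in range(col + 1, n):
            L[row, col] = U[row, col] / U[col, col]   # elimination multiplier
            U[row, col:] -= L[row, col] * U[col, col:]
    return L, U

A = np.array([[4., 3.],
              [6., 3.]])
L, U = lu_decompose(A)
lu_ok = np.allclose(L @ U, A)
```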

QR Decomposition

  • Decomposition: \(A = QR\)
  • Q = orthogonal matrix (\(Q^TQ = I\))
  • R = upper triangular matrix
  • Method: Obtained via Gram-Schmidt orthogonalization
  • Application: Solving least squares problems, eigenvalue computation
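QR in practice (a NumPy sketch; `np.linalg.qr` returns the reduced factorization by default, so for a tall matrix \(Q\) has orthonormal columns rather than being square):

```python
import numpy as np

A = np.array([[1., 1.],
              [1., 0.],
              [0., 1.]])

Q, R = np.linalg.qr(A)      # reduced QR: Q is 3x2, R is 2x2

qr_ok = np.allclose(Q @ R, A)
orth_ok = np.allclose(Q.T @ Q, np.eye(2))   # Q^T Q = I
upper_ok = np.allclose(R, np.triu(R))       # R is upper triangular
```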

Singular Value Decomposition (SVD)

  • Decomposition: \(A = U\Sigma V^T\)
  • U = \(m×m\) orthogonal matrix (left singular vectors)
  • Σ = \(m×n\) diagonal matrix (singular values \(\sigma_i ≥ 0\))
  • V = \(n×n\) orthogonal matrix (right singular vectors)
  • Singular Values: \(\sigma_i = \sqrt{\lambda_i}\) where \(\lambda_i\) are eigenvalues of \(A^TA\)
  • Rank: Number of non-zero singular values equals rank(A)
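The SVD and the singular-value/eigenvalue relationship on a matrix chosen so the values are obvious (a NumPy sketch):

```python
import numpy as np

A = np.array([[3., 0.],
              [0., 2.],
              [0., 0.]])

U, s, Vt = np.linalg.svd(A)       # s holds singular values in descending order

# reconstruct A = U Sigma V^T (using the first 2 columns of U)
recon_ok = np.allclose(U[:, :2] @ np.diag(s) @ Vt, A)

# sigma_i = sqrt(lambda_i) where lambda_i are eigenvalues of A^T A
evals_desc = np.linalg.eigvalsh(A.T @ A)[::-1]
sv_ok = np.allclose(s, np.sqrt(evals_desc))
```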

Matrix Norms

Vector Norms

  • L1 Norm (Manhattan): \(||\vec{x}||_1 = \sum_{i=1}^{n} |x_i|\)
  • L2 Norm (Euclidean): \(||\vec{x}||_2 = \sqrt{\sum_{i=1}^{n} x_i^2}\)
  • L∞ Norm (Maximum): \(||\vec{x}||_\infty = \max_{i} |x_i|\)
  • p-Norm: \(||\vec{x}||_p = \left(\sum_{i=1}^{n} |x_i|^p\right)^{1/p}\)

Matrix Norms

  • Frobenius Norm: \(||A||_F = \sqrt{\sum_{i=1}^{m}\sum_{j=1}^{n} |a_{ij}|^2}\)
  • Induced Matrix Norm: \(||A||_p = \max_{x≠0} \frac{||Ax||_p}{||x||_p}\)
  • 1-Norm: \(||A||_1 = \max_{j} \sum_{i=1}^{m} |a_{ij}|\) (maximum absolute column sum)
  • ∞-Norm: \(||A||_\infty = \max_{i} \sum_{j=1}^{n} |a_{ij}|\) (maximum absolute row sum)
  • 2-Norm (Spectral): \(||A||_2 = \sqrt{\lambda_{\max}(A^TA)}\) (largest singular value)
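All of these norms are available through `np.linalg.norm` via its `ord` argument (a NumPy sketch; the vector and matrix are arbitrary examples with clean values):

```python
import numpy as np

x = np.array([3., -4.])
l1   = np.linalg.norm(x, 1)        # |3| + |-4| = 7
l2   = np.linalg.norm(x)           # sqrt(9 + 16) = 5 (default is L2)
linf = np.linalg.norm(x, np.inf)   # max(|3|, |-4|) = 4

A = np.array([[1., -2.],
              [3.,  4.]])
col_sum = np.linalg.norm(A, 1)       # max abs column sum: 2 + 4 = 6
row_sum = np.linalg.norm(A, np.inf)  # max abs row sum: 3 + 4 = 7
fro     = np.linalg.norm(A, 'fro')   # sqrt(1 + 4 + 9 + 16) = sqrt(30)
```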

Condition Number

Matrix Condition Number

  • Condition Number: \(\kappa(A) = ||A|| \cdot ||A^{-1}||\)
  • Well-Conditioned: \(\kappa(A)\) is small (close to 1)
  • Ill-Conditioned: \(\kappa(A)\) is large
  • Minimum Value: \(\kappa(A) ≥ 1\) in any induced norm
  • Orthogonal Matrix: \(\kappa(Q) = 1\) in 2-norm
  • Interpretation: Measures sensitivity of solution to perturbations in data
  • Relative Error Bound: \(\frac{||\Delta x||}{||x||} ≤ \kappa(A) \frac{||\Delta b||}{||b||}\)
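Contrasting a nearly singular matrix with an orthogonal one (a NumPy sketch; `np.linalg.cond` uses the 2-norm by default):

```python
import numpy as np

# Nearly singular: rows are almost proportional, so kappa is large
A = np.array([[1., 1.],
              [1., 1.0001]])
kappa_A = np.linalg.cond(A)

# A rotation is orthogonal, and orthogonal matrices have kappa = 1 in the 2-norm
Q = np.array([[0., 1.],
              [-1., 0.]])
kappa_Q = np.linalg.cond(Q)
```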