System of Linear Equations

I. SYSTEM OF LINEAR EQUATIONS

A system of linear equations in two variables consists of two linear equations in the same pair of variables. Geometrically, each linear equation in two variables represents a straight line in the plane. The solution of the system is the point(s) at which the lines meet. There are three possible types of systems (in two variables):

  • Consistent system (unique solution) - the system has exactly one solution. Geometrical interpretation: the lines intersect at exactly one point (intersecting lines).
  • Inconsistent system - the system has no solution. Geometrical interpretation: the lines are parallel and never meet (parallel lines).
  • Dependent system - the system has infinitely many solutions (such a system is also consistent). Geometrical interpretation: the equations represent the same line (coincident or identical lines).

Cramer's Rule (Simultaneous Equations Involving Three Unknowns)

Consider three linear equations in three unknowns:

a1x + b1y + c1z = d1

a2x + b2y + c2z = d2

a3x + b3y + c3z = d3

Form the determinant D of the coefficient matrix and the determinants Dx, Dy, Dz obtained by replacing the appropriate column by the constants column. If D ≠ 0, the system has the unique solution

x = Dx / D, y = Dy / D, z = Dz / D.
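The rule above can be sketched in plain Python. The helper names det3 and cramer3 and the sample system are illustrative, not from the notes:

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along row 0."""
    return (m[0][0] * (m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1] * (m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2] * (m[1][0]*m[2][1] - m[1][1]*m[2][0]))

def cramer3(A, d):
    """Solve a 3x3 system AX = d by Cramer's rule; requires D = det(A) != 0."""
    D = det3(A)
    if D == 0:
        raise ValueError("D = 0: Cramer's rule does not apply")
    sol = []
    for col in range(3):
        # Replace column `col` of A by the constants d to form Dx, Dy, Dz.
        M = [row[:] for row in A]
        for r in range(3):
            M[r][col] = d[r]
        sol.append(det3(M) / D)
    return sol  # [x, y, z]

# Example system: x + y + z = 6, 2y + 5z = -4, 2x + 5y - z = 27
A = [[1, 1, 1], [0, 2, 5], [2, 5, -1]]
d = [6, -4, 27]
print(cramer3(A, d))   # [5.0, 3.0, -2.0]
```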

Note :

Trivial solution - if a homogeneous system has only the zero solution (all variables zero), that solution is called the trivial solution.

Solving System of Linear Equations Using Matrices


A system of linear equations can be written in matrix form as AX = B, where A is the coefficient matrix, X is the column vector of unknowns and B is the column vector of constants. A system is consistent if it has at least one solution.

(i) System of Linear Equations and Matrix Inverse

If a system consists of n linear equations in n unknowns, we can write it as AX = B where A is square (n × n).

If A is non-singular (i.e., det(A) ≠ 0), then A has an inverse and the unique solution is

X = A⁻¹B.

If A is singular (det(A) = 0), there are two possibilities determined by the ranks:

  • If rank(A) = rank([A|B]) = r < n, the system has infinitely many solutions (dependent system).
  • If rank(A) < rank([A|B]), the system has no solution (inconsistent system).

The statements involving adjoint of A in some textbooks are used to study special cases, but the rank condition gives the complete criterion for existence and number of solutions.
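The rank criterion can be checked numerically. This sketch assumes NumPy; the function name classify is illustrative:

```python
import numpy as np

def classify(A, B):
    """Classify AX = B by comparing rank(A) with rank of the augmented [A|B]."""
    aug = np.hstack([A, B.reshape(-1, 1)])
    rA = np.linalg.matrix_rank(A)
    rAug = np.linalg.matrix_rank(aug)
    n = A.shape[1]                      # number of unknowns
    if rA < rAug:
        return "inconsistent"           # no solution
    if rA == n:
        return "unique solution"
    return "infinitely many"            # consistent with rank < n

# Parallel lines x + y = 1, x + y = 2: no solution.
print(classify(np.array([[1., 1.], [1., 1.]]), np.array([1., 2.])))  # inconsistent
```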


(ii) Homogeneous Systems and Matrix Inverse

A homogeneous system has the form AX = 0. For an n × n matrix A:

  • If A is non-singular (det A ≠ 0), the only solution is the trivial solution X = 0.
  • If A is singular (det A = 0), the system has infinitely many solutions including non-trivial solutions.
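Both cases can be illustrated quickly with NumPy (the matrices chosen here are examples, not from the notes):

```python
import numpy as np

# Non-singular coefficient matrix: only the trivial solution X = 0.
A1 = np.array([[1., 2.], [3., 4.]])        # det = -2, non-zero
print(np.linalg.solve(A1, np.zeros(2)))    # the zero vector

# Singular coefficient matrix: non-trivial solutions exist.
A2 = np.array([[1., 2.], [2., 4.]])        # det = 0 (rows are proportional)
X = np.array([2., -1.])                    # a non-trivial solution of A2 X = 0
print(A2 @ X)
```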

(iii) Elementary Row Transformations

The following operations on a matrix are called elementary row transformations. They do not change the solution set of the corresponding system when applied to the augmented matrix [A|B]:

  • Interchange (swap) two rows.
  • Multiply all entries of a row by a nonzero scalar.
  • Add to one row a scalar multiple of another row.

Note : Similar elementary column transformations also exist. Two matrices A and B are said to be equivalent (A ~ B) if one can be obtained from the other by a finite sequence of elementary row (or column) operations.
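The three row operations can be shown on a small augmented matrix (the example system x + y = 3, 2x + 4y = 10, with solution x = 1, y = 2, is chosen here for illustration):

```python
# Augmented matrix [A|B] of: x + y = 3, 2x + 4y = 10
aug = [[1., 1., 3.], [2., 4., 10.]]

# 1. Interchange two rows.
aug[0], aug[1] = aug[1], aug[0]
# 2. Multiply a row by a nonzero scalar (halve row 0).
aug[0] = [v * 0.5 for v in aug[0]]                  # -> [1, 2, 5]
# 3. Add a scalar multiple of one row to another (row 1 minus row 0).
aug[1] = [a - b for a, b in zip(aug[1], aug[0])]    # -> [0, -1, -2]

# Back-reading: -y = -2 gives y = 2, then x + 2y = 5 gives x = 1,
# the same solution as the original system.
print(aug)   # [[1.0, 2.0, 5.0], [0.0, -1.0, -2.0]]
```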

(iv) Echelon Forms of a Matrix

Two commonly used echelon forms are defined as follows:

  • Row Echelon Form (REF) - a matrix is in REF if:
    • All nonzero rows are above any rows of all zeros.
    • The first nonzero entry (leading entry) of each nonzero row is to the right of the leading entry of the row above it.
    • Entries below a leading entry are all zeros.
  • Reduced Row Echelon Form (RREF) - a matrix is in RREF if it is in REF and additionally:
    • The leading entry in each nonzero row is 1.
    • Each leading 1 is the only nonzero entry in its column (i.e., entries above and below each leading 1 are zeros).

The process of obtaining RREF from the augmented matrix to read off solutions directly is called Gauss-Jordan elimination. Reducing to REF by forward elimination and then back substitution is called Gaussian elimination.
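A compact Gauss-Jordan sketch in Python (the function gauss_jordan is illustrative; it reduces an augmented matrix, given as a list of rows, to RREF with partial pivoting):

```python
def gauss_jordan(aug):
    """Reduce an augmented matrix (list of rows of floats) to RREF in place."""
    rows, cols = len(aug), len(aug[0])
    pivot_row = 0
    for col in range(cols - 1):          # last column holds the constants
        # Choose the row with the largest entry in this column (partial pivoting).
        pr = max(range(pivot_row, rows), key=lambda r: abs(aug[r][col]))
        if abs(aug[pr][col]) < 1e-12:
            continue                     # no pivot in this column
        aug[pivot_row], aug[pr] = aug[pr], aug[pivot_row]
        # Scale the pivot row so its leading entry is 1.
        p = aug[pivot_row][col]
        aug[pivot_row] = [v / p for v in aug[pivot_row]]
        # Make every other entry in this column zero.
        for r in range(rows):
            if r != pivot_row and aug[r][col] != 0:
                f = aug[r][col]
                aug[r] = [a - f * b for a, b in zip(aug[r], aug[pivot_row])]
        pivot_row += 1
        if pivot_row == rows:
            break
    return aug

# x + y = 3, x - y = 1  ->  x = 2, y = 1
print(gauss_jordan([[1., 1., 3.], [1., -1., 1.]]))   # [[1.0, 0.0, 2.0], [0.0, 1.0, 1.0]]
```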

(v) System of Linear Equations and the Augmented Matrix

For AX = B where A is an m × n matrix, X is an n × 1 column vector and B is an m × 1 column vector, the augmented matrix [A|B] is the m × (n+1) matrix obtained by adjoining B as the (n+1)-th column to A. Solving the system by row reductions uses this augmented matrix.

Ex.25 Solve the equations


Sol.


Ex.26 Solve


Sol.


We see that the system has an infinite number of solutions. Specific solutions can be generated by choosing specific values for the free parameter k.

Ex.27 The number of triplets (a, b, c) for which the system of equations ax - by = 2a - b and (c + 1)x + cy = 10 - a + 3b has infinitely many solutions and x = 1, y = 3 is one of the solutions is

Sol.


Ex.28 Solve


Sol.


Ex.29 Solve


by reducing the augmented matrix of the system to reduced row echelon form.

Sol.


It is easy to see that x₁ = 1, x₂ = -3, x₃ = 6. The process of solving a system by reducing the augmented matrix to reduced row echelon form is called Gauss-Jordan elimination.

Ex.30 Determine conditions on a, b and c so that the given system will have no solutions or an infinite number of solutions.

Sol.


J. INVERSE OF A MATRIX

(i) Singular & Non-singular Matrix

A square matrix A is said to be non-singular if det(A) ≠ 0 and singular if det(A) = 0.

Ex.31 Show that every skew-symmetric matrix of odd order is singular.

Sol.
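The key step is that for a skew-symmetric matrix S of order n, Sᵀ = -S, so det S = det Sᵀ = det(-S) = (-1)ⁿ det S = -det S when n is odd, forcing det S = 0. As a quick numerical sanity check (not the proof; the 3 × 3 matrix below is an arbitrary example), using NumPy:

```python
import numpy as np

# An arbitrary 3x3 skew-symmetric matrix: zero diagonal, S^T = -S.
S = np.array([[ 0.,  2., -1.],
              [-2.,  0.,  4.],
              [ 1., -4.,  0.]])
print(np.allclose(S, -S.T))               # skew-symmetric
print(np.isclose(np.linalg.det(S), 0.0))  # singular, as the theorem predicts
```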


(ii) Cofactor Matrix & Adjoint Matrix

Let A = [aij]n be an n × n matrix.

The matrix obtained by replacing each element aij by its cofactor Cij is called the cofactor matrix (also called the matrix of cofactors).

The transpose of the cofactor matrix is called the adjoint (or adjugate) of A and is denoted by adj A.


(iii) Properties of Cofactor A and adj A

For any n × n matrix A (n ≥ 2):

  • A (adj A) = (adj A) A = (det A) In
  • |adj A| = |A|^(n-1)
  • adj (AB) = (adj B)(adj A)
  • adj (Aᵀ) = (adj A)ᵀ
  • If A is non-singular, adj (adj A) = |A|^(n-2) A

(iv) Inverse of a Matrix (Reciprocal Matrix)

If A is non-singular, its inverse is given by

A⁻¹ = (1 / det A) adj A.
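This formula can be sketched with NumPy. The function adjoint below (an illustrative name) builds the cofactor matrix from minors and transposes it; the 2 × 2 example is chosen for easy hand-checking:

```python
import numpy as np

def adjoint(A):
    """adj A: transpose of the matrix of cofactors C_ij = (-1)^(i+j) * M_ij."""
    n = A.shape[0]
    cof = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            # Minor M_ij: determinant after deleting row i and column j.
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return cof.T

A = np.array([[4., 7.], [2., 6.]])          # det = 10
A_inv = adjoint(A) / np.linalg.det(A)       # A^-1 = (1/det A) adj A
print(np.allclose(A @ A_inv, np.eye(2)))    # True
```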

Remarks :

  • A square matrix is invertible if and only if it is non-singular, and the inverse, when it exists, is unique.
  • (A⁻¹)⁻¹ = A.
  • (AB)⁻¹ = B⁻¹ A⁻¹ (reversal law), provided A and B are non-singular matrices of the same order.
  • (Aᵀ)⁻¹ = (A⁻¹)ᵀ.
  • det(A⁻¹) = 1 / det(A).

Characteristic Polynomial & Characteristic Equation

For a square matrix A, the polynomial |A - xI| in the variable x is called the characteristic polynomial of A. The equation |A - xI| = 0 is the characteristic equation.

Remark : By the Cayley-Hamilton theorem, every square matrix A satisfies its own characteristic equation, i.e., if p(x) = |A - xI|, then p(A) = 0.
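For a 2 × 2 matrix the characteristic polynomial is x² - (tr A)x + det A, so the theorem can be verified directly with NumPy (the matrix here is an arbitrary example):

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
tr, det = np.trace(A), np.linalg.det(A)

# Cayley-Hamilton for 2x2: A^2 - (tr A) A + (det A) I = 0.
p_of_A = A @ A - tr * A + det * np.eye(2)
print(np.allclose(p_of_A, 0))               # True

# Rearranging gives the inverse when det != 0: A^-1 = (tr A * I - A) / det A.
A_inv = (tr * np.eye(2) - A) / det
print(np.allclose(A @ A_inv, np.eye(2)))    # True
```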


Ex.32 Find the adjoint of the matrix A =


Sol.


Ex.33 If A and B are square matrices of the same order, show that adj (AB) = (adj B)(adj A).

Sol.


Ex.34 If A is an n × n matrix and B is its adjoint, show that det(AB + K In) = [det(A) + K]^n, where K is a scalar.

Sol.


Ex.35 If


be the direction cosines of three mutually perpendicular lines referred to an orthogonal Cartesian co-ordinate system, then prove that


is an orthogonal matrix.

Sol.


Ex.36 Obtain the characteristic equation of the matrix A =


and verify that it is satisfied by A and hence find its inverse.

Sol.


Ex.37 Find the inverse of the matrix A =


Sol.


Ex.38 If a non-singular matrix A is symmetric, show that A⁻¹ is also symmetric.

Sol.

The document System of Linear Equations is a part of the JEE Course Mathematics (Maths) for JEE Main & Advanced.

FAQs on System of Linear Equations

1. What is a system of linear equations?
Ans. A system of linear equations is a set of two or more equations with the same variables. The solution to the system is the set of values for the variables that satisfy all the equations simultaneously.
2. How can I solve a system of linear equations?
Ans. There are several methods to solve a system of linear equations, such as graphing, substitution, and elimination. Graphing involves plotting the equations on a coordinate plane and finding the point(s) of intersection. Substitution involves solving one equation for one variable and substituting it into the other equation. Elimination involves adding or subtracting the equations to eliminate one variable and solve for the remaining variable.
3. Can a system of linear equations have no solution?
Ans. Yes, a system of linear equations can have no solution. This occurs when the equations are inconsistent and do not intersect. For example, if the lines represented by the equations are parallel, they will never intersect and the system will have no solution.
4. Can a system of linear equations have infinitely many solutions?
Ans. Yes, a system of linear equations can have infinitely many solutions. This occurs when the equations are dependent and represent the same line. In this case, any point on the line will satisfy both equations, resulting in infinitely many solutions.
5. What is the importance of solving systems of linear equations?
Ans. Solving systems of linear equations is important in various fields, such as physics, engineering, economics, and computer science. It allows us to find the intersection points of multiple lines or planes, determine the values of variables in a system, and solve real-world problems involving multiple variables.