Definition: A system of m × n numbers arranged in m rows and n columns.
Conventionally, a single capital letter is used to denote a matrix.
Thus, if A = [aij], then aij denotes the element in the ith row and jth column.
1. Types of Matrices
3. Square Matrix
Number of rows = Number of columns
4. Rectangular Matrix
Number of rows ≠ Number of columns
5. Diagonal Matrix
A square matrix in which all the elements except those on the leading diagonal are zero.
6. Unit Matrix (or Identity Matrix)
A diagonal matrix in which all the leading diagonal elements are 1.
e.g. the 2 × 2 unit matrix [1 0; 0 1]
7. Null Matrix (or Zero Matrix)
A matrix is said to be a null matrix if all its elements are zero.
e.g. the 2 × 2 zero matrix [0 0; 0 0]
8. Symmetric and Skew-Symmetric Matrices:
Symmetric Matrix: AT = A
Skew-Symmetric Matrix: AT = −A
Note: All the diagonal elements of a skew-symmetric matrix must be zero.
9. Triangular Matrix
A square matrix in which all the elements below the leading diagonal are zero (upper triangular) or all the elements above the leading diagonal are zero (lower triangular).
10. Orthogonal Matrix:
If A·AT = I (identity matrix), then the matrix A is said to be an orthogonal matrix.
11. Singular Matrix:
If |A| = 0, then A is called a singular matrix.
12. Unitary Matrix:
If (Ā)T denotes the transpose of the conjugate of matrix A, then the matrix A is unitary if (Ā)T · A = I.
13. Hermitian Matrix:
It is a square matrix with complex entries which is equal to its own conjugate transpose, i.e. (Ā)T = A.
14. Note:
In Hermitian matrix, diagonal elements → always real
15. Skew Hermitian matrix:
It is a square matrix with complex entries which is equal to the negative of its conjugate transpose, i.e. (Ā)T = −A.
Note: In a Skew-Hermitian matrix, diagonal elements → either zero or purely imaginary
16. Idempotent Matrix
If A² = A, then the matrix A is called an idempotent matrix.
17. Multiplication of Matrix by a Scalar:
Every element of the matrix gets multiplied by that scalar.
Multiplication of Matrices:
Two matrices can be multiplied only when the number of columns of the first matrix equals the number of rows of the second matrix. Multiplication of (m × n) and (n × p) matrices results in a matrix of dimension (m × p).
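As a quick check of the dimension rule above, a minimal sketch in Python with NumPy (the matrices are arbitrary illustrative values):

import numpy as np

A = np.arange(6).reshape(2, 3)     # a 2 x 3 matrix
B = np.arange(12).reshape(3, 4)    # a 3 x 4 matrix
C = A @ B                          # columns of A (3) = rows of B (3), so the product exists
print(C.shape)                     # (2, 4): an (m x n) times (n x p) product is (m x p)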
18. Determinant:
An nth order determinant is an expression associated with an n × n square matrix.
If A = [aij], the element aij lies in the ith row and jth column.
For n = 2: |A| = a11 a22 − a12 a21
Determinant of order n: |A| = Σ aij Cij, i.e. the sum of the products of the elements of any one row (or column) with their corresponding co-factors.
19. Minors & Co-Factors:
Minor Mij: the determinant obtained by deleting the ith row and jth column of A.
Co-factor: Cij = (−1)^(i+j) Mij
20. Properties of Determinants:
21. Inverse of a Matrix
A⁻¹ = adj(A) / |A|; the inverse exists only when A is non-singular (|A| ≠ 0).
Important Points:
22. Elementary Transformation of a Matrix:
1. Interchange of any 2 lines (rows or columns)
2. Multiplication of a line by a non-zero constant (e.g. k Ri)
3. Addition of a constant multiple of any line to another line (e.g. Ri + p Rj)
Note:
23. Rank of Matrix
If we select any r rows and r columns from any matrix A, deleting all other rows and columns, then the determinant formed by these r x r elements is called minor of A of order r.
Definition: A matrix is said to be of rank r when,
i) it has at least one non-zero minor of order r.
ii) Every minor of order higher than r vanishes.
Other definition: The rank is also defined as maximum number of linearly independent row vectors.
Special case: Rank of a square matrix
Rank = number of non-zero rows in the upper triangular form obtained using elementary transformations.
Note:
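A minimal NumPy sketch of the rank definition, using an illustrative matrix whose second row is a multiple of the first:

import numpy as np

A = np.array([[1, 2, 3],
              [2, 4, 6],    # = 2 x (row 1), so not linearly independent
              [1, 0, 1]])

print(np.linalg.matrix_rank(A))   # 2: only two linearly independent rows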
24. Solution of Linear System of Equations:
For the following system of equations A X = B
Where,
A = Coefficient Matrix, C = (A, B) = Augmented Matrix
r = rank (A), r' = rank (C), n = number of unknown variables (x1, x2, ..., xn)
Consistency of a System of Equations:
For Non-Homogeneous Equations (AX = B)
(i) If r ≠ r', the equations are inconsistent i.e. there is no solution.
(ii) If r = r' = n, the equations are consistent and there is a unique solution.
(iii) If r = r' < n, the equations are consistent and there are infinite number of solutions.
For Homogeneous Equations (AX = 0)
(i) If r = n, the equations have only a trivial zero solution (i.e. x1 = x2 = ... = xn = 0).
(ii) If r < n, there are (n − r) linearly independent solutions (i.e. infinite non-trivial solutions).
Note:
Consistent means: → one or more solution (i.e. unique or infinite solution)
Inconsistent means: → No solution
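The rank test above can be applied mechanically; a small Python sketch with NumPy (the systems are illustrative):

import numpy as np

def classify(A, B):
    r  = np.linalg.matrix_rank(A)                        # rank of coefficient matrix
    r2 = np.linalg.matrix_rank(np.column_stack((A, B)))  # rank of augmented matrix
    n  = A.shape[1]                                      # number of unknowns
    if r != r2:
        return "inconsistent (no solution)"
    return "unique solution" if r == n else "infinite solutions"

A = np.array([[1.0, 1.0],
              [2.0, 2.0]])
print(classify(A, np.array([2.0, 5.0])))   # inconsistent (no solution)
print(classify(A, np.array([2.0, 4.0])))   # infinite solutions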
Cramer's Rule
Let the following two equations be there
a11 x1 + a12 x2 = b1 ...(i)
a21 x1 + a22 x2 = b2 ...(ii)
Solution using Cramer's rule: x1 = D1/D and x2 = D2/D, where D = a11 a22 − a12 a21, D1 = b1 a22 − a12 b2, D2 = a11 b2 − b1 a21.
In the above method, it is assumed that
1. No of equations = No of unknowns
2. D ≠ 0
In general, for Non-Homogeneous Equations:
D ≠ 0 → a unique (single) solution
D = 0 → either no solution or infinitely many solutions
For Homogeneous Equations:
D ≠ 0 → only the trivial solution (x1 = x2 = ... = xn = 0)
D = 0 → non-trivial solutions (infinitely many solutions)
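A short Python sketch of Cramer's rule for the 2 × 2 case (the example system is illustrative):

def cramer_2x2(a11, a12, a21, a22, b1, b2):
    D  = a11 * a22 - a12 * a21      # determinant of the coefficient matrix
    D1 = b1 * a22 - a12 * b2        # first column replaced by the constants
    D2 = a11 * b2 - b1 * a21        # second column replaced by the constants
    if D == 0:
        raise ValueError("D = 0: no unique solution")
    return D1 / D, D2 / D

# 2x1 + 3x2 = 8 and x1 - x2 = -1  ->  x1 = 1, x2 = 2
print(cramer_2x2(2, 3, 1, -1, 8, -1))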
Eigen Values & Eigen Vectors
25. Characteristic Equation and Eigen Values:
Characteristic equation: |A − λI| = 0. The roots of this equation are called the characteristic roots / latent roots / Eigen values of the matrix A.
Eigen vectors: [A - λI] X = 0
For each Eigen value λ, solving for X gives the corresponding Eigen vector.
Note: For a given Eigen value there can be different Eigen vectors, but for the same Eigen vector there cannot be different Eigen values.
Properties of Eigen values
Properties of Eigen Vectors
Cayley Hamilton Theorem: Every square matrix satisfies its own characteristic equation.
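A small NumPy sketch of Eigen values and the Cayley-Hamilton theorem for an illustrative 2 × 2 matrix:

import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

vals, vecs = np.linalg.eig(A)            # characteristic roots and Eigen vectors
print(vals)                              # eigenvalues 5 and 2 (order may vary)

# characteristic equation: lambda^2 - 7*lambda + 10 = 0, so A^2 - 7A + 10I should vanish
print(A @ A - 7 * A + 10 * np.eye(2))    # zero matrix, verifying Cayley-Hamilton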
26. Vector:
1. Probability
Event: The outcome of an experiment is called an event.
Mutually Exclusive Events (Disjoint Events): Two events are called mutually exclusive if the occurrence of one excludes the occurrence of the other, i.e. both cannot occur simultaneously.
A ∩ B = φ, P(A ∩ B) =0
Equally Likely Events: If one of the events cannot happen in preference to the other, then such events are said to be equally likely.
Odds in Favour of an Event = m/n
Where m → no. of ways favourable to A
n → no. of ways not favourable to A
Odds Against the Event = n/m
Probability: P(A) = m / (m + n) = (number of cases favourable to A) / (total number of equally likely cases)
P(A) + P(A') = 1
Important points:
Addition Law of Probability: P(A ∪ B) = P(A) + P(B) − P(A ∩ B); for mutually exclusive events, P(A ∪ B) = P(A) + P(B).
Independent Events: Two events are said to be independent, if the occurrence of one does not affect the occurrence of the other.
If P(A∩B) = P(A) P(B) ↔ Independent events A & B
Conditional Probability: If A and B are dependent events, then P(B/A) denotes the probability of occurrence of B when A has already occurred. This is known as conditional probability.
For independent events A & B → P(B/A) = P(B)
Theorem of Combined Probability: If the probability of an event A happening as a result of a trial is P(A), and the probability of an event B happening as a result of a trial after A has happened is P(B/A), then the probability of both the events A and B happening is
P(A∩B)= P(A). P(B/A), [P(A)≠0]
= P(B). P(A/B), [P(B)≠ 0]
This is also known as Multiplication Theorem.
For independent events A & B → P(B/A) = P(B), P(A/B )= P(A)
Hence P(A∩B) = P(A) P(B)
Important Points:
If P1 & P2 are probabilities of two independent events then
Bayes' theorem:
An event A corresponds to a number of exhaustive events B1, B2, ..., Bn.
If P(Bi) and P(A/Bi) are given, then
P(Bi/A) = P(Bi) P(A/Bi) / [P(B1) P(A/B1) + P(B2) P(A/B2) + ... + P(Bn) P(A/Bn)]
This is also known as the theorem of Inverse Probability.
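A small worked sketch of Bayes' theorem in Python; the two-machine defect scenario and its numbers are purely illustrative:

# Machines B1, B2 produce 60% and 40% of all items; their defect rates are 2% and 5%.
# Given a defective item (event A), find the probability it came from B1.
P_B   = [0.6, 0.4]      # P(Bi)
P_A_B = [0.02, 0.05]    # P(A/Bi)

P_A    = sum(pb * pab for pb, pab in zip(P_B, P_A_B))   # total probability of A
P_B1_A = P_B[0] * P_A_B[0] / P_A                        # P(B1/A) by Bayes' theorem
print(round(P_B1_A, 4))                                 # 0.375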
Random variable: A real variable associated with the outcome of a random experiment is called a random variable.
Probability Density Function (PDF) or Probability Mass Function: The set of values xi with their probabilities pi constitutes a probability distribution or probability density function of the variable X. If f(x) is the PDF, then f(xk) = P(X = xk).
PDF has the following properties:
(i) f(xi) ≥ 0 for every i
(ii) Σ f(xi) = 1
Discrete Cumulative Distribution Function (CDF) or Distribution Function
The cumulative distribution function F(x) of the discrete variable X is defined by F(x) = P(X ≤ x) = Σ over xi ≤ x of P(xi).
Continuous Cumulative Distribution Function (CDF) or Distribution Function: Fx(x) = P(X ≤ x) = ∫ from −∞ to x of f(t) dt is defined as the cumulative distribution function, or simply the distribution function, of the continuous variable X.
CDF has the following properties:
(i)
(ii) 1≥ Fx(x) ≥ 0
(iii) If x2 > x1, then Fx(x2) ≥ Fx(x1), i.e. the CDF is a monotone (non-decreasing) function
(iv) Fx (-∞) = 0
(v) Fx (∞) = 1
(vi)
Expectation [E(X)]:
(i) E(X) = Σ xi P(xi) (discrete case)
(ii) E(X) = ∫ x f(x) dx (continuous case)
Properties of Expectation
(i) E(constant) = constant
(ii) E(CX) = C.E(X) [C is constant]
(iii) E(AX+BY) = AE(X) + BE(Y) [A & B are constants]
(iv) E(XY) = E(X)E(Y/X) = E(Y) E(X/Y)
E(XY) ≠ E(X) E(Y) in general
But E(XY) = E(X) E(Y), if X & Y are independent
Variance (Var(X)): Var(X) = E[(X − E(X))²] = E(X²) − [E(X)]²
Properties of Variance
(i) Var(constant) = 0
(ii) Var(CX) = C² Var(X) → variance is non-linear [C is a constant]
(iii) Var(CX ± D) = C² Var(X) → variance is translation invariant [C & D are constants]
(iv) Var(X − k) = Var(X) [k is a constant]
(v) Var(aX ± bY) = a² Var(X) + b² Var(Y) ± 2ab Cov(X, Y) (if not independent) [a & b are constants]
= a² Var(X) + b² Var(Y) (if independent)
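A quick numerical check of E(X), Var(X) and the property Var(CX) = C² Var(X), using an illustrative discrete distribution:

xs = [0, 1, 2]          # values of X (illustrative)
ps = [0.2, 0.5, 0.3]    # their probabilities

E_X   = sum(x * p for x, p in zip(xs, ps))        # E(X) = 1.1
E_X2  = sum(x * x * p for x, p in zip(xs, ps))    # E(X^2) = 1.7
Var_X = E_X2 - E_X ** 2                           # Var(X) = E(X^2) - [E(X)]^2 = 0.49

c = 3
Var_cX = sum((c * x - c * E_X) ** 2 * p for x, p in zip(xs, ps))
print(Var_X, Var_cX)    # 0.49 and 4.41 = 3^2 * 0.49 (up to round-off)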
Covariance
Cov (x, y) = E(xy)-E(x) E(y)
If independent ⇒ covariance = 0, E (xy) = E(x) . E(y)
(if covariance = 0, then the events are not necessarily independent)
Properties of Covariance
Standard Distribution Functions (Discrete r.v. case):
(i) Binomial Distribution: P(r) = nCr p^r q^(n−r), where q = 1 − p
Mean = np, Variance = npq,
(ii) Poisson Distribution: Probability of k successes is P(k) = (e^(−λ) λ^k) / k!
k → number of successes, n → number of trials, p → probability of success in a single trial
λ → mean of the distribution
For Poisson distribution: Mean = λ, Variance = λ, and λ = np
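A short Python sketch comparing the binomial PMF with its Poisson approximation (λ = np); the values of n, p and r are illustrative:

from math import comb, exp, factorial

n, p, r = 10, 0.1, 2
binom   = comb(n, r) * p**r * (1 - p)**(n - r)   # nCr p^r q^(n-r)
lam     = n * p                                  # lambda = np
poisson = exp(-lam) * lam**r / factorial(r)      # e^(-lambda) lambda^k / k!

print(round(binom, 4), round(poisson, 4))        # 0.1937 vs 0.1839: Poisson approximates binomial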
Standard Distribution Function (Continuous r.v. case):
Mean: The arithmetic mean of n sample values x1, x2, ..., xn is x̄ = (x1 + x2 + ... + xn)/n.
Median: When the values in a data sample are arranged in ascending or descending order of magnitude, the median is the middle term if the number of samples is odd, and is the mean of the two middle terms if the number is even.
Mode: It is defined as the value in the sampled data that occurs most frequently.
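A minimal sketch using Python's statistics module on an illustrative sample:

from statistics import mean, median, mode

data = [2, 4, 4, 5, 7, 9]
print(mean(data))      # 5.1666... (arithmetic mean)
print(median(data))    # 4.5 (even count, so mean of the two middle terms)
print(mode(data))      # 4 (most frequent value)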
Important Points:
Co-efficient of variation = (standard deviation / mean), usually expressed as a percentage.
Correlation coefficient ρ(x, y) = Cov(x, y) / (σx σy)
Line of Regression:
The equation of the line of regression of y on x is: y − ȳ = r (σy/σx)(x − x̄)
The equation of the line of regression of x on y is: x − x̄ = r (σx/σy)(y − ȳ)
r σy/σx is called the regression coefficient of y on x and is denoted by byx.
r σx/σy is called the regression coefficient of x on y and is denoted by bxy.
Joint Probability Distribution: If X & Y are two random variables then Joint distribution is defined as, Fxy(x, y) = P(X ≤ x ; Y ≤ y)
Properties of Joint Distribution Function/ Cumulative Distribution Function:
(i) Fxy (-∞, -∞) = 0
(ii) Fxy(∞, ∞) = 1
(iii) Fxy(-∞, ∞) = 0 {Fxy(-∞, ∞) = P(X ≤ -∞; Y ≤ ∞) = 0 × 1 = 0}
(iv) Fxy(x, ∞) = P(X ≤ x ; Y ≤ ∞) = Fx(x).1 = Fx(x)
(v) Fxy(∞, y) = Fy(y)
Joint Probability Density Function:
Defined as f(x, y) = ∂²Fxy(x, y) / ∂x ∂y
Property: ∫∫ f(x, y) dx dy = 1 (integrated over the entire x-y plane)
Note: X and Y are said to be independent random variables if fxy(x, y) = fx(x) · fy(y)
1. Solution of Algebraic and Transcendental Equations / Root Finding:
Consider an equation f(x) = 0
(i) Bisection method
This method finds the root between points "a" and "b".
If f(x) is continuous between a and b and f (a) and f (b) are of opposite sign then there is a root between a & b (Intermediate Value Theorem).
First approximation to the root is x1 = (a + b)/2.
If f(x1) = 0, then x1 is the root of f(x) = 0, otherwise root lies between a and x1 or x1 and b.
Similarly x2, x3, ... are determined.
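A minimal Python sketch of the bisection method; the equation x³ − x − 2 = 0 on [1, 2] is only an illustration:

def bisection(f, a, b, tol=1e-6):
    # assumes f is continuous on [a, b] with f(a) and f(b) of opposite signs
    while (b - a) / 2 > tol:
        x = (a + b) / 2
        if f(x) == 0:
            return x
        if f(a) * f(x) < 0:   # root lies between a and x
            b = x
        else:                 # root lies between x and b
            a = x
    return (a + b) / 2

print(bisection(lambda x: x**3 - x - 2, 1, 2))   # about 1.5214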
(ii) Newton-Raphson Method (or Successive Substitution Method or Tangent Method)
xn+1 = xn − f(xn) / f'(xn)
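A matching Python sketch of the Newton-Raphson iteration, applied to the same illustrative equation:

def newton_raphson(f, df, x0, tol=1e-8, max_iter=50):
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)      # x(n+1) = x(n) - f(x(n)) / f'(x(n))
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

print(newton_raphson(lambda x: x**3 - x - 2,
                     lambda x: 3 * x**2 - 1,
                     x0=1.5))                    # about 1.5214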
(iii) Secant Method
xn+1 = xn − f(xn)(xn − xn-1) / (f(xn) − f(xn-1))
(iv) Regula Falsi Method (Method of False Position)
Given f(x) = 0,
Select x0 and x1 such that f(x0) f(x1) < 0 (i.e. opposite signs).
Then x2 = x0 − f(x0)(x1 − x0) / (f(x1) − f(x0)), which is an approximation to the root.
Check whether f(x0) f(x2) < 0 or f(x1) f(x2) < 0, and retain the sub-interval in which the sign change occurs.
Compute x3, x4, ... in the same way until the required accuracy is reached.
2. Solution of Linear System of Equations
(i) Gauss Elimination Method
Here equations are converted into "upper triangular matrix" form, then solved by "back substitution" method.
Consider
Step 1: Eliminate x from the second and third equations (by subtracting suitable multiples of the first equation from the second and third equations):
a1x + b1y + c1z = d1 (pivotal equation, a1 is the pivot)
b2'y + c2'z = d2'
b3'y + c3'z = d3'
Step 2: Eliminate y from the third equation:
a1x + b1y + c1z = d1
b2'y + c2'z = d2' (pivotal equation, b2' is the pivot)
c3''z = d3''
Step 3: The value of x , y and z can be found by back substitution.
Note: Number of operations:
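A compact Python sketch of forward elimination followed by back substitution (the 3 × 3 system is illustrative; no pivoting or zero-pivot handling is included):

import numpy as np

def gauss_eliminate(A, b):
    A, b = A.astype(float).copy(), b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):                 # k-th pivotal equation
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]          # multiplier (assumes a non-zero pivot)
            A[i, k:] -= m * A[k, k:]       # eliminate the k-th unknown from row i
            b[i]     -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):         # back substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0, 1.0], [4.0, -6.0, 0.0], [-2.0, 7.0, 2.0]])
b = np.array([5.0, -2.0, 9.0])
print(gauss_eliminate(A, b))               # [1. 1. 2.], same as np.linalg.solve(A, b)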
(ii) Gauss-Jordan Method
Step 1: Eliminate x from the 2nd and 3rd equations
Step 2: Eliminate y from the 1st and 3rd equations
Step 3: Eliminate z from the 1st and 2nd equations
(iii) L U Decomposition
(iv) Iterative Method
3. Numerical Integration
Trapezoidal Formula: Step size h = (b − a)/n, where n is the number of sub-intervals.
∫ f(x) dx ≈ (h/2) {(first term + last term) + 2 (remaining terms)}
Error = Exact value − Approximate value
The error in approximating an integral using the trapezoidal rule is bounded by |E| ≤ ((b − a) h² / 12) max |f''(x)|.
Simpson's One Third Rule (Simpson's Rule):
∫ f(x) dx ≈ (h/3) {(first term + last term) + 4 (all odd terms) + 2 (all even terms)}
The error in approximating an integral using Simpson's one-third rule is bounded by |E| ≤ ((b − a) h⁴ / 180) max |f''''(x)|.
Simpson's Three Eighth Rule:
∫ f(x) dx ≈ (3h/8) {(first term + last term) + 2 (all multiple-of-3 terms) + 3 (all remaining terms)}
The error in approximating an integral using Simpson's 3/8 rule is bounded by |E| ≤ ((b − a) h⁴ / 80) max |f''''(x)|.
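A Python sketch of the trapezoidal and Simpson's one-third formulas, checked on the illustrative integral of 1/(1 + x²) from 0 to 1 (exact value π/4 ≈ 0.785398):

def trapezoidal(f, a, b, n):
    h = (b - a) / n
    s = f(a) + f(b) + 2 * sum(f(a + i * h) for i in range(1, n))
    return (h / 2) * s

def simpson_13(f, a, b, n):      # n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))   # odd ordinates
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))   # even ordinates
    return (h / 3) * s

f = lambda x: 1 / (1 + x * x)
print(trapezoidal(f, 0, 1, 8))   # about 0.78475
print(simpson_13(f, 0, 1, 8))    # about 0.785398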
4. Solving Differential Equations
(i) Euler Method (for a first-order differential equation)
Given equation: y' = f(x, y); y(x0) = y0
The solution is given by y(n+1) = y(n) + h f(x(n), y(n))
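A minimal Python sketch of Euler's method on the illustrative problem y' = x + y, y(0) = 1:

def euler(f, x0, y0, h, steps):
    x, y = x0, y0
    for _ in range(steps):
        y += h * f(x, y)      # y(n+1) = y(n) + h f(x(n), y(n))
        x += h
    return y

# estimate y(0.2) with h = 0.1; the exact value is about 1.2428
print(euler(lambda x, y: x + y, 0, 1, 0.1, 2))   # 1.22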
(ii) Runge-Kutta Method (fourth order)
Used for finding y at a particular x without solving the first-order differential equation analytically.
K1 = h f(x0, y0)
K2 = h f(x0 + h/2, y0 + K1/2)
K3 = h f(x0 + h/2, y0 + K2/2)
K4 = h f(x0 + h, y0 + K3)
y1 = y0 + (1/6)(K1 + 2K2 + 2K3 + K4)
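A matching Python sketch of one fourth-order Runge-Kutta step on the same illustrative problem:

def rk4_step(f, x0, y0, h):
    k1 = h * f(x0, y0)
    k2 = h * f(x0 + h / 2, y0 + k1 / 2)
    k3 = h * f(x0 + h / 2, y0 + k2 / 2)
    k4 = h * f(x0 + h, y0 + k3)
    return y0 + (k1 + 2 * k2 + 2 * k3 + k4) / 6

# y' = x + y, y(0) = 1, one step of h = 0.2
print(rk4_step(lambda x, y: x + y, 0, 1, 0.2))   # about 1.2428, much closer to exact than Euler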