Numerical Methods | Engineering Mathematics

Truncation errors and the Taylor series

  1. Definition

    Truncation error is the error that arises when an infinite or exact mathematical procedure is replaced by an approximation that uses a finite number of terms. Truncation errors differ from round-off errors (which come from finite precision arithmetic); truncation errors originate from the mathematical approximation itself.

    Example: approximation of a derivative by a finite difference.

  2. Using the Taylor series to estimate truncation errors

    The Taylor series expands a sufficiently smooth function about a point and is used to derive finite-difference formulas and to estimate the size and order of truncation errors.

    For a function f(x) that is sufficiently differentiable, the Taylor expansion about x gives:

    f(x + h) = f(x) + h f'(x) + (h^2/2) f''(x) + (h^3/6) f'''(ξ) for some ξ in (x, x + h).

    Rearranging to form the forward difference approximation for the derivative:

    f'(x) = [f(x + h) - f(x)]/h - (h/2) f''(x) + O(h^2).

    Therefore the forward-difference approximation

    f'(x) ≈ [f(x + h) - f(x)]/h

    has a leading truncation error term proportional to h, so the truncation error is O(h) (first order).
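    The first-order behaviour can be checked numerically. The following Python sketch is illustrative; the test function sin(x) and the evaluation point x = 1 are arbitrary choices, not from the text. Since the leading error term is (h/2) f''(x), halving h should roughly halve the error.

```python
import math

def forward_diff(f, x, h):
    # Forward-difference approximation of f'(x); truncation error is O(h)
    return (f(x + h) - f(x)) / h

# Illustrative choice: f(x) = sin(x), exact derivative cos(x), at x = 1.0
x = 1.0
exact = math.cos(x)
for h in [0.1, 0.05, 0.025]:
    err = abs(forward_diff(math.sin, x, h) - exact)
    print(f"h = {h:5.3f}  error = {err:.6f}")
```

    Each halving of h roughly halves the error, consistent with a first-order (O(h)) method.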


Numerical solution of ordinary differential equations (ODE)

  1. Definition

    A differential equation is an equation that involves derivatives of a function. Many physical and engineering laws are modelled by differential equations. Solving them analytically is not always possible, so numerical methods are used to approximate solutions.

    The typical initial-value problem (IVP) considered here is:

    y' = F(x, y), y(x0) = y0

    where y' denotes dy/dx.

  2. Euler's method

    Euler's method is the simplest one-step numerical method for IVPs. With step size h and mesh points $x_n = x_{n-1} + h$, the numerical approximation $y_n$ to $y(x_n)$ is:

    $y_n = y_{n-1} + h · F(x_{n-1}, y_{n-1}).$

    Euler's method has a local truncation error of order $O(h^2)$ and a global error of order O(h).

    Use Euler's method when a low-cost, first-order approximation is acceptable; decrease h to improve accuracy or use higher-order methods for better efficiency.
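    A minimal Python sketch of the update rule follows. The IVP y' = y, y(0) = 1 is an illustrative choice (not from the text); its exact solution is e^x, which gives something to compare against.

```python
import math

def euler(F, x0, y0, h, n):
    # Euler's method: advance y' = F(x, y) from (x0, y0) for n steps of size h
    x, y = x0, y0
    for _ in range(n):
        y = y + h * F(x, y)
        x = x + h
    return y

# Illustrative IVP: y' = y, y(0) = 1, exact solution e^x
approx = euler(lambda x, y: y, 0.0, 1.0, 0.01, 100)   # estimate y(1)
print(approx, math.e)
```

    With h = 0.01 the estimate of y(1) is about 2.7048, slightly below e ≈ 2.71828; shrinking h reduces the gap in proportion to h, as expected for a first-order method.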

  3. Runge-Kutta second order (RK2)

    The Runge-Kutta 2nd order methods are a family of two-stage one-step methods that produce a second-order accurate result (global error $O(h^2)$). One common form (the improved Euler or Heun method) is:

    $k_1 = f(x_n, y_n)$

    $k_2 = f(x_n + h, y_n + h k_1)$

    $y_{n+1} = y_n + (h/2)(k_1 + k_2).$

    These methods are applicable to first-order equations y' = f(x, y). Higher-order ODEs can be converted into first-order systems and solved similarly.

    Worked conversion example

    Example: Rewrite dy/dx + 2y = 1.3 e^(-x), y(0) = 5 in the form dy/dx = f(x, y), y(0) = y0.

    Solution:

    Move 2y to the right-hand side.

    dy/dx = 1.3 e^(-x) - 2y

    y(0) = 5

    Therefore f(x, y) = 1.3 e^(-x) - 2y.
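    As a check, the Heun (RK2) scheme can be applied to this IVP in Python. The closed-form solution y(x) = 1.3 e^(-x) + 3.7 e^(-2x), used below only for comparison, is obtained by the integrating-factor method and is not stated in the original text.

```python
import math

def heun(f, x0, y0, h, n):
    # RK2 (improved Euler / Heun): average of slopes k1 and k2
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h, y + h * k1)
        y = y + (h / 2.0) * (k1 + k2)
        x = x + h
    return y

f = lambda x, y: 1.3 * math.exp(-x) - 2.0 * y            # from the worked example
exact = lambda x: 1.3 * math.exp(-x) + 3.7 * math.exp(-2.0 * x)  # analytic solution

y_num = heun(f, 0.0, 5.0, 0.1, 10)   # estimate y(1) with h = 0.1
print(y_num, exact(1.0))
```

    Even with a coarse step h = 0.1, the second-order method agrees with the analytic value y(1) ≈ 0.979 to a few decimal places.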

Numerical Integration

  1. Newton-Raphson method

    Although often presented with root-finding topics, the Newton-Raphson iteration is a fundamental method used to solve nonlinear equations that arise in many numerical tasks, including solving for parameters in numerical quadrature and other problems.

    If x0 is an initial guess for a root α of f(x) = 0, write α = x0 + h and expand f(x0 + h) in Taylor series. Neglecting higher terms and solving for h leads to the Newton iteration:

    $x_{n+1} = x_n - f(x_n)/f'(x_n).$

    Under standard regularity conditions and when f'(α) ≠ 0, convergence is quadratic near the root (the error is roughly squared at each iteration).
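    A minimal Python sketch of the iteration follows; computing √2 as the root of f(x) = x² − 2 is an illustrative choice.

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    # Newton-Raphson iteration: x_{n+1} = x_n - f(x_n)/f'(x_n)
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x = x - step
        if abs(step) < tol:
            break
    return x

# Illustrative example: root of x^2 - 2, i.e. sqrt(2) ~ 1.41421356...
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
print(root)
```

    The quadratic convergence is visible in practice: from x0 = 1 the iterates 1.5, 1.4167, 1.414216, ... roughly double their number of correct digits per step.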

  2. Trapezoidal rule

    The trapezoidal rule is a Newton-Cotes formula that approximates the integral by approximating the integrand by a first-degree polynomial (a straight line) on each interval.

    For a single interval [a, b]:

    ∫_a^b f(x) dx ≈ (b - a) [f(a) + f(b)] / 2.

    For better accuracy over [a, b], use the composite trapezoidal rule with n subintervals of width h = (b - a)/n:

    ∫_a^b f(x) dx ≈ h [ (1/2)f(x_0) + f(x_1) + f(x_2) + ... + f(x_{n-1}) + (1/2)f(x_n) ].
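    The composite formula translates directly into code. In this Python sketch, the test integral ∫₀¹ x² dx = 1/3 is an illustrative choice.

```python
def trapezoid(f, a, b, n):
    # Composite trapezoidal rule with n subintervals of width h = (b - a)/n
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))     # endpoints carry weight 1/2
    for i in range(1, n):
        total += f(a + i * h)       # interior points carry weight 1
    return h * total

# Illustrative test: integral of x^2 on [0, 1] is exactly 1/3
print(trapezoid(lambda x: x * x, 0.0, 1.0, 100))
```

    With n = 100 the result is correct to about five decimal places; the error of the composite rule shrinks as O(h²).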

  3. Simpson's one-third rule

    Simpson's 1/3 rule approximates the integrand by a quadratic polynomial on pairs of subintervals. It requires an even number of subintervals (n even).

    If the total interval [a, b] is split into two equal segments, the segment width is:

    h = (b - a)/2.

    The Simpson's 1/3 formula for two segments (three nodes x_0, x_1, x_2) is:

    ∫_{x_0}^{x_2} f(x) dx ≈ (h/3) [ f(x_0) + 4 f(x_1) + f(x_2) ].

    For composite Simpson over n subintervals (n even) with width h = (b - a)/n:

    ∫_a^b f(x) dx ≈ (h/3)[ f(x_0) + 4 f(x_1) + 2 f(x_2) + 4 f(x_3) + ... + 4 f(x_{n-1}) + f(x_n) ].

    Simpson's 1/3 rule is of order O(h^4) for the composite rule under sufficient smoothness of f.
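    The 1-4-2-4-...-4-1 weight pattern of the composite rule can be coded compactly. In this Python sketch, ∫₀^π sin(x) dx = 2 is an illustrative test integral.

```python
import math

def simpson(f, a, b, n):
    # Composite Simpson's 1/3 rule; n must be even
    if n % 2:
        raise ValueError("n must be even for Simpson's 1/3 rule")
    h = (b - a) / n
    total = f(a) + f(b)                      # endpoints: weight 1
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)  # odd nodes: 4, even: 2
    return h * total / 3.0

# Illustrative test: integral of sin(x) on [0, pi] is exactly 2
print(simpson(math.sin, 0.0, math.pi, 10))
```

    Even with only n = 10 subintervals the answer is accurate to roughly four decimal places, reflecting the O(h^4) behaviour of the composite rule.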


Roots of Equations

  1. False-position (Regula Falsi) method

    The bisection method brackets a root by repeatedly halving the interval [xl, xu]. A drawback is that it ignores the function values at the endpoints and halves the interval regardless of where the root is likely located.

    The false-position method improves on this by drawing a straight line between the points (xl, f(xl)) and (xu, f(xu)); the x-intercept of this line is taken as a better estimate of the root.

    The formula for the point of intersection (the false position) is:

    xr = xu - f(xu) · (xl - xu) / (f(xl) - f(xu)).

    Rearranged (equivalently):

    xr = (xl f(xu) - xu f(xl)) / (f(xu) - f(xl)).

    This estimate replaces the endpoint whose function value has the same sign as $f(x_r)$, keeping the root bracketed.
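    The bracket-maintaining loop can be sketched in Python as follows; the test equation cos(x) − x = 0 on [0, 1] (root ≈ 0.739085) is an illustrative choice.

```python
import math

def false_position(f, xl, xu, tol=1e-10, max_iter=200):
    # Regula falsi: keep the root bracketed between xl and xu
    fl, fu = f(xl), f(xu)
    assert fl * fu < 0, "initial interval must bracket a root"
    xr = xl
    for _ in range(max_iter):
        xr = xu - fu * (xl - xu) / (fl - fu)   # x-intercept of the chord
        fr = f(xr)
        if abs(fr) < tol:
            break
        if fl * fr < 0:         # root lies in [xl, xr]
            xu, fu = xr, fr
        else:                   # root lies in [xr, xu]
            xl, fl = xr, fr
    return xr

# Illustrative example: root of cos(x) - x in [0, 1]
print(false_position(lambda x: math.cos(x) - x, 0.0, 1.0))
```

    Because the sign test decides which endpoint is replaced, the root always stays inside the current interval, unlike the secant method below.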

  2. Secant method

    The secant method uses two previous approximations $x_{n-1}$ and xn and forms the line through $(x_{n-1}, f(x_{n-1}))$ and $(x_n, f(x_n))$. The x-intercept of this line gives the next approximation:

    $x_{n+1} = x_n - f(x_n) · (x_n - x_{n-1}) / (f(x_n) - f(x_{n-1})).$

    Unlike the false-position method, the secant method does not require bracketing and $x_{n+1}$ may fall outside $[x_{n-1}, x_n]$. The secant method has superlinear convergence with order approximately 1.618 (the golden ratio), assuming the root is simple and initial guesses are sufficiently close.
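    A Python sketch of the iteration follows; it keeps only the last two iterates and performs no sign test. The equation cos(x) − x = 0 (root ≈ 0.739085) is an illustrative choice.

```python
import math

def secant(f, x0, x1, tol=1e-12, max_iter=100):
    # Secant iteration through the last two points (x_{n-1}, x_n)
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:            # horizontal secant line: cannot continue
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        x0, f0 = x1, f1         # shift the two-point window forward
        x1, f1 = x2, f(x2)
        if abs(x1 - x0) < tol:
            break
    return x1

# Illustrative example: root of cos(x) - x, starting from 0 and 1
print(secant(lambda x: math.cos(x) - x, 0.0, 1.0))
```

    From the starting pair (0, 1) the method converges in a handful of iterations, consistent with its superlinear order of about 1.618.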


Additional remarks on error, convergence and practical use

  • Round-off vs truncation error: round-off error comes from finite precision arithmetic; truncation error from approximating infinite processes with finite ones. Both must be managed in practice.
  • Order of a method: a numerical method is said to be of order p if the global error behaves like $O(h^p)$ as h → 0. Higher order typically gives faster error reduction with decreasing h, but may require more function evaluations.
  • Stability and stiffness: for ODEs, some methods are more stable for stiff problems (implicit methods); explicit methods (Euler, explicit RK) require smaller h for stability in stiff cases.
  • Composite rules: for integration, composite trapezoidal and composite Simpson rules apply the single-interval formula repeatedly; composite Simpson requires an even number of subintervals.
  • Choosing methods: select the method considering desired accuracy, smoothness of function, computational cost per step, and stability requirements.

Summary: Taylor series provide a systematic way to derive finite-difference formulas and estimate truncation errors. For ODEs, Euler's method is first order while RK2 is second order. For integration, the trapezoidal and Simpson rules are Newton-Cotes formulas of increasing polynomial degree and accuracy. For root finding, the false-position method refines bisection by using linear interpolation, the secant method drops the bracketing requirement for faster superlinear convergence, and Newton-Raphson gives rapid quadratic convergence when applicable.


FAQs on Numerical Methods

1. What is Numerical Methods in the context of the GATE exam?
Numerical Methods is a subject that is included in the GATE (Graduate Aptitude Test in Engineering) exam. It is a branch of mathematics and computer science that focuses on the development and analysis of algorithms for solving mathematical problems numerically. In the GATE exam, questions related to Numerical Methods can be asked to test the candidates' understanding and application of these algorithms.
2. What are some common topics covered under Numerical Methods in the GATE exam?
Some common topics covered under Numerical Methods in the GATE exam include numerical solutions of linear and nonlinear equations, interpolation and approximation, numerical differentiation and integration, numerical methods for solving ordinary differential equations, numerical optimization, and numerical linear algebra.
3. How can I prepare effectively for the Numerical Methods section in the GATE exam?
To prepare effectively for the Numerical Methods section in the GATE exam, you can start by understanding the fundamental concepts and algorithms related to the different topics mentioned earlier. It is important to practice solving numerical problems and implementing algorithms using programming languages like MATLAB or Python. Referring to standard textbooks and solving previous years' GATE question papers can also be helpful in gaining a better understanding of the subject.
4. Are there any specific tips or strategies to improve performance in the Numerical Methods section of the GATE exam?
Yes, here are some tips to improve performance in the Numerical Methods section of the GATE exam: - Practice solving numerical problems regularly to improve your speed and accuracy. - Understand the underlying algorithms and their applications in different contexts. - Focus on understanding the concepts and their derivations rather than memorizing formulas. - Solve previous years' GATE question papers to get familiar with the exam pattern and identify important topics to prioritize. - Use online resources, video tutorials, and online forums to clarify doubts and gain additional insights into the subject.
5. Can you suggest some reference books for studying Numerical Methods for the GATE exam?
Yes, here are some popular reference books for studying Numerical Methods for the GATE exam: - "Numerical Methods: Principles, Analysis, and Algorithms" by G. Shanker Rao - "Numerical Methods for Engineers" by Steven C. Chapra and Raymond P. Canale - "Numerical Methods: A MATLAB Approach" by Abdelwahab Kharab and Ronald B. Guenther - "Numerical Methods: Using MATLAB" by George Lindfield and John Penny - "An Introduction to Numerical Methods: A MATLAB Approach" by Abdelwahab Kharab and Ronald B. Guenther These books provide a comprehensive understanding of the subject and include examples, practice problems, and MATLAB programming exercises to enhance your learning experience.