Truncation error is the error that arises when an infinite or exact mathematical procedure is replaced by an approximation that uses a finite number of terms. It differs from round-off error, which comes from finite-precision arithmetic: truncation error originates in the mathematical approximation itself.
Example: approximation of a derivative by a finite difference.

The Taylor series expands a sufficiently smooth function about a point and is used to derive finite-difference formulas and to estimate the size and order of truncation errors.
For a function f(x) that is sufficiently differentiable, the Taylor expansion about x gives:
f(x + h) = f(x) + h f'(x) + (h^2/2) f''(x) + (h^3/6) f'''(ξ) for some ξ in (x, x + h).
Rearranging to form the forward difference approximation for the derivative:
f'(x) = [f(x + h) - f(x)]/h - (h/2) f''(x) + O(h^2).
Therefore the forward-difference approximation
f'(x) ≈ [f(x + h) - f(x)]/h
has a leading truncation error term proportional to h, so the truncation error is O(h) (first order).
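A quick numerical check, sketched in Python, makes this order visible: halving h should roughly halve the error of the forward difference. The choice of f(x) = sin x and the evaluation point x = 1 are illustrative assumptions, not part of the derivation above.

```python
import math

def forward_diff(f, x, h):
    """Forward-difference estimate of f'(x)."""
    return (f(x + h) - f(x)) / h

# f(x) = sin x, exact derivative cos x, evaluated at x = 1 (illustrative choice).
x = 1.0
exact = math.cos(x)
err_h  = abs(forward_diff(math.sin, x, 0.10) - exact)
err_h2 = abs(forward_diff(math.sin, x, 0.05) - exact)
print(err_h / err_h2)  # ≈ 2: halving h halves the error, confirming O(h)
```

The ratio would approach exactly 2 as h shrinks, since the h² and higher terms become negligible.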




A differential equation is an equation that involves derivatives of a function. Many physical and engineering laws are modelled by differential equations. Solving them analytically is not always possible, so numerical methods are used to approximate solutions.
The typical initial-value problem (IVP) considered here is:
y' = F(x, y), y(x0) = y0
where y' denotes dy/dx.
Euler's method is the simplest one-step numerical method for IVPs. With step size h and mesh points $x_n = x_{n-1} + h,$ the numerical approximation yn to y(xn) is:
$y_n = y_{n-1} + h · F(x_{n-1}, y_{n-1}).$
Euler's method has a local truncation error of order $O(h^2)$ and a global error of order O(h).
Use Euler's method when a low-cost, first-order approximation is acceptable; decrease h to improve accuracy or use higher-order methods for better efficiency.
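As a sketch, the update rule above is only a few lines of Python. The test problem y' = y, y(0) = 1 (exact solution e^x) is an illustrative choice, not taken from the text.

```python
def euler(F, x0, y0, h, n):
    """Advance n Euler steps of size h from (x0, y0); returns the final y."""
    x, y = x0, y0
    for _ in range(n):
        y = y + h * F(x, y)
        x = x + h
    return y

# Test problem: y' = y, y(0) = 1, exact solution e^x (illustrative choice).
y1 = euler(lambda x, y: y, 0.0, 1.0, 0.1, 10)
print(y1)  # ≈ 2.5937, versus the exact e ≈ 2.7183: error is O(h)
```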

The Runge-Kutta 2nd order methods are a family of two-stage one-step methods that produce a second-order accurate result (global error $O(h^2)$). One common form (the improved Euler or Heun method) is:
$k1 = f(x_n, y_n)$
$k2 = f(x_n + h, y_n + h k1)$
$y_{n+1} = y_n + (h/2)(k1 + k2).$
These methods are applicable to first-order equations y' = f(x, y). Higher-order ODEs can be converted into first-order systems and solved similarly.
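A minimal Python sketch of the Heun update above; running it on the same illustrative problem as before (y' = y, y(0) = 1) shows the gain over Euler at the same step size.

```python
import math

def heun(f, x0, y0, h, n):
    """Advance n Heun (improved Euler) steps of size h; returns the final y."""
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h, y + h * k1)
        y = y + (h / 2) * (k1 + k2)
        x = x + h
    return y

# Test problem: y' = y, y(0) = 1, exact answer e at x = 1 (illustrative choice).
y1 = heun(lambda x, y: y, 0.0, 1.0, 0.1, 10)
print(abs(y1 - math.e))  # ≈ 4e-3, versus ≈ 0.125 for Euler at the same h
```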
Worked conversion example
Example: Rewrite dy/dx + 2y = 1.3 e^(-x), y(0) = 5 in the form dy/dx = f(x, y), y(0) = y0.
Solution:
Move 2y to the right-hand side.
dy/dx = 1.3 e^(-x) - 2y
y(0) = 5
Therefore f(x, y) = 1.3 e^(-x) - 2y.
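The conversion can be checked numerically. The sketch below marches f(x, y) = 1.3 e^(-x) − 2y forward with simple Euler steps and compares against the closed-form solution y = 1.3 e^(-x) + 3.7 e^(-2x); that closed form is derived separately here via an integrating factor and is an addition, not part of the worked example above.

```python
import math

def f(x, y):
    # Right-hand side from the conversion: f(x, y) = 1.3 e^(-x) - 2y
    return 1.3 * math.exp(-x) - 2.0 * y

# March from x = 0 to x = 1 with simple Euler steps.
x, y, h = 0.0, 5.0, 0.001
for _ in range(1000):
    y = y + h * f(x, y)
    x = x + h

# Closed-form solution via an integrating factor (derived here, not in the text):
# y(x) = 1.3 e^(-x) + 3.7 e^(-2x), which satisfies y(0) = 5.
exact = 1.3 * math.exp(-1.0) + 3.7 * math.exp(-2.0)
print(abs(y - exact))  # small; shrinks proportionally to h
```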
Although often presented with root-finding topics, the Newton-Raphson iteration is a fundamental method used to solve nonlinear equations that arise in many numerical tasks, including solving for parameters in numerical quadrature and other problems.
If x0 is an initial guess for a root α of f(x) = 0, write α = x0 + h and expand f(x0 + h) in Taylor series. Neglecting higher terms and solving for h leads to the Newton iteration:
$x_{n+1} = x_n - f(x_n)/f'(x_n).$
Under standard regularity conditions and when f'(α) ≠ 0, convergence is quadratic near the root (error roughly squares each iteration).
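A minimal Newton-Raphson sketch; the function f(x) = x² − 2 (root √2) and the starting guess x0 = 1 are illustrative assumptions.

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson iteration x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x = x - step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

# Root of f(x) = x^2 - 2 (i.e. sqrt(2)) from the guess x0 = 1 (illustrative).
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
print(root)  # ≈ 1.4142135623730951
```

Quadratic convergence is visible in practice: the number of correct digits roughly doubles each iteration, so machine precision is reached in about five steps here.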

The trapezoidal rule is a Newton-Cotes formula that approximates the integral by approximating the integrand by a first-degree polynomial (a straight line) on each interval.
For a single interval [a, b]:
∫_a^b f(x) dx ≈ (b - a) [f(a) + f(b)] / 2.
For better accuracy over [a, b], use the composite trapezoidal rule with n subintervals of width h = (b - a)/n:
∫_a^b f(x) dx ≈ h [ (1/2)f(x_0) + f(x_1) + f(x_2) + ... + f(x_{n-1}) + (1/2)f(x_n) ].
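The composite rule translates directly into code. The check integral ∫₀^π sin x dx = 2 is an illustrative choice.

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals of width h = (b - a)/n."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))       # the two half-weighted endpoints
    for i in range(1, n):
        s += f(a + i * h)         # interior nodes get full weight
    return h * s

# Check against a known integral: ∫ sin x dx over [0, π] equals 2.
approx = trapezoid(math.sin, 0.0, math.pi, 100)
print(approx)  # ≈ 1.9998; the O(h^2) error shrinks 4x when n doubles
```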

Simpson's 1/3 rule approximates the integrand by a quadratic polynomial on pairs of subintervals. It requires an even number of subintervals (n even).
If the total interval [a, b] is split into two equal segments, the segment width is:
h = (b - a)/2.
The Simpson's 1/3 formula for two segments (three nodes x0, x1, x2) is:
∫_{x_0}^{x_2} f(x) dx ≈ (h/3) [ f(x_0) + 4 f(x_1) + f(x_2) ].
For composite Simpson over n subintervals (n even) with width h = (b - a)/n:
∫_a^b f(x) dx ≈ (h/3)[ f(x_0) + 4 f(x_1) + 2 f(x_2) + 4 f(x_3) + ... + 4 f(x_{n-1}) + f(x_n) ].
Simpson's 1/3 rule is of order O(h^4) for the composite rule under sufficient smoothness of f.
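A composite Simpson sketch with the 1-4-2-4-…-4-1 weight pattern; the same illustrative check integral ∫₀^π sin x dx = 2 is used.

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's 1/3 rule; n must be even."""
    if n % 2 != 0:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)                               # endpoints, weight 1
    for i in range(1, n):
        s += (4 if i % 2 == 1 else 2) * f(a + i * h)  # weights 4, 2, 4, ...
    return h * s / 3.0

# Same check integral as for the trapezoidal rule: ∫ sin x dx over [0, π] = 2.
approx = simpson(math.sin, 0.0, math.pi, 10)
print(abs(approx - 2.0))  # ≈ 1e-4 with only 10 subintervals (O(h^4))
```

Compare with the composite trapezoidal rule, which needs roughly 100 subintervals to reach the same accuracy Simpson achieves with 10.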


The bisection method brackets a root by repeatedly halving the interval [xl, xu]. A drawback is that it ignores the magnitudes of the function values at the endpoints (only their signs are used) and halves the interval regardless of where the root is likely located.
The false-position method improves on this by drawing a straight line between the points (xl, f(xl)) and (xu, f(xu)); the x-intercept of this line is taken as a better estimate of the root.
The formula for the point of intersection (the false position) is:
xr = xu - f(xu) · (xl - xu) / (f(xl) - f(xu)).
Rearranged (equivalently):
xr = (xl f(xu) - xu f(xl)) / (f(xu) - f(xl)).
This estimate replaces the endpoint whose function value has the same sign as $f(x_r)$, keeping the root bracketed.
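The bracket-maintaining update can be sketched as follows; f(x) = x² − 2 with the bracket [1, 2] is an illustrative choice.

```python
def false_position(f, xl, xu, tol=1e-10, max_iter=100):
    """False-position root finding; [xl, xu] must bracket a sign change."""
    fl, fu = f(xl), f(xu)
    if fl * fu > 0:
        raise ValueError("f(xl) and f(xu) must have opposite signs")
    xr = xl
    for _ in range(max_iter):
        xr = xu - fu * (xl - xu) / (fl - fu)   # x-intercept of the chord
        fr = f(xr)
        if abs(fr) < tol:
            break
        if fl * fr < 0:          # root lies in [xl, xr]: replace the upper end
            xu, fu = xr, fr
        else:                    # root lies in [xr, xu]: replace the lower end
            xl, fl = xr, fr
    return xr

# Root of x^2 - 2 bracketed by [1, 2] (illustrative choice).
root = false_position(lambda x: x * x - 2.0, 1.0, 2.0)
print(root)  # ≈ 1.41421356
```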


The secant method uses two previous approximations $x_{n-1}$ and xn and forms the line through $(x_{n-1}, f(x_{n-1}))$ and $(x_n, f(x_n))$. The x-intercept of this line gives the next approximation:
$x_{n+1} = x_n - f(x_n) · (x_n - x_{n-1}) / (f(x_n) - f(x_{n-1})).$
Unlike the false-position method, the secant method does not require bracketing and $x_{n+1}$ may fall outside $[x_{n-1}, x_n]$. The secant method has superlinear convergence with order approximately 1.618 (the golden ratio), assuming the root is simple and initial guesses are sufficiently close.
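A minimal secant sketch; note that, unlike the false-position code, no sign check is needed. The starting guesses 1 and 2 for f(x) = x² − 2 are illustrative.

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Secant iteration; unlike false position, no bracketing is required."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # x-intercept of the secant line
        if abs(x2 - x1) < tol:
            return x2
        x0, f0 = x1, f1                        # slide the two-point window
        x1, f1 = x2, f(x2)
    return x1

# Root of x^2 - 2 from the two starting guesses 1 and 2 (illustrative).
root = secant(lambda x: x * x - 2.0, 1.0, 2.0)
print(root)  # ≈ 1.41421356
```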

Summary (optional): Taylor series provide a systematic way to derive finite-difference formulas and estimate truncation errors. For ODEs, Euler is first order while RK2 is second order. For integration, the trapezoidal and Simpson rules are Newton-Cotes formulas of increasing polynomial degree and accuracy. For root finding, the false-position and secant methods improve on bisection by interpolating the function linearly, and Newton-Raphson gives rapid quadratic convergence when applicable.