These notes will prove that there is a unique solution to the initial value problem for a wide range of first-order ordinary differential equations (ODEs). The initial value problem we consider is
u' = F(x, u),   u(a) = b,   (1)
where F is a given function, and a and b are given real numbers. A solution to this problem is a function u(x) satisfying the differential equation u' = F(x, u) together with the initial condition u(a) = b.
In the notation y' = f(x, y), y(x0) = y0, the initial value problem is equivalent to the integral equation

y(x) = y0 + ∫_{x0}^{x} f(t, y(t)) dt.

We shall solve this integral equation by using the method of successive approximations due to Picard. For this, let y0(x) be any continuous function (we often pick y0(x) ≡ y0) which we take as the initial approximation of the unknown solution; we then define y1(x) as

y1(x) = y0 + ∫_{x0}^{x} f(t, y0(t)) dt.
We take this y1(x) as our next approximation and substitute it for y(x) on the right side of the integral equation, calling the result y2(x). Continuing in this way, the (m + 1)st approximation ym+1(x) is obtained from ym(x) by means of the relation

ym+1(x) = y0 + ∫_{x0}^{x} f(t, ym(t)) dt,   m = 0, 1, 2, ....
If the sequence {ym(x)} converges uniformly to a continuous function y(x) in some interval J containing x0, and for all x ∈ J the points (x, ym(x)) ∈ D, then, since uniform convergence permits taking the limit under the integral sign, we may pass to the limit on both sides to obtain

y(x) = y0 + ∫_{x0}^{x} f(t, y(t)) dt,
so that y(x) is the desired solution.
Example: The initial value problem y' = −y, y(0) = 1 is equivalent to solving the integral equation

y(x) = 1 − ∫_{0}^{x} y(t) dt.

Let y0(x) ≡ 1, to obtain

y1(x) = 1 − x,   y2(x) = 1 − x + x²/2,   ...,   ym(x) = Σ_{k=0}^{m} (−x)^k/k!.

Recalling Taylor's series expansion of e^{−x}, we see that lim_{m→∞} ym(x) = e^{−x}. The function y(x) = e^{−x} is indeed the solution of the given initial value problem in J = ℝ.
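The iterates in this example can be generated mechanically. Below is a minimal sketch (assuming the sympy library; the helper name picard_iterates is ours, not from the text) which reproduces y1(x), y2(x), ... for y' = −y, y(0) = 1 and confirms that they are the partial sums of the series for e^{−x}.

```python
# Picard iteration, symbolically: y_{m+1}(x) = y0 + ∫_{x0}^{x} f(t, y_m(t)) dt.
import sympy as sp

x, t = sp.symbols('x t')

def picard_iterates(f, x0, y0, m):
    """First m Picard iterates for y' = f(x, y), y(x0) = y0, with y_0(x) ≡ y0."""
    y = sp.sympify(y0)                 # initial approximation y_0(x) ≡ y0
    iterates = [y]
    for _ in range(m):
        y = y0 + sp.integrate(f(t, y.subs(x, t)), (t, x0, x))
        iterates.append(sp.expand(y))
    return iterates

# y' = -y, y(0) = 1: the iterates are the partial sums of the series for e^{-x}
for k, yk in enumerate(picard_iterates(lambda s, y: -y, 0, 1, 4)):
    print(f"y_{k}(x) =", yk)
```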
An important characteristic of this method is that it is constructive; moreover, bounds on the difference between the iterates and the solution are easily available. Such bounds are useful for approximating solutions and also in the study of qualitative properties of solutions. Sufficient conditions for the uniform convergence of the sequence {ym(x)} to the unique solution y(x) of the integral equation, or equivalently of the initial value problem, are continuity of f(x, y) together with a uniform Lipschitz condition in y (the Picard–Lindelöf theorem); the uniqueness half of this statement is taken up below.
Definition: A function y(x) defined in J is said to be an ε-approximate solution of the DE y' = f(x, y) if (i) y(x) is continuous for all x in J, (ii) for all x ∈ J the points (x, y(x)) ∈ D, (iii) y(x) has a piecewise continuous derivative in J, which may fail to be defined only at a finite number of points, say, x1, x2, ..., xk, and (iv) |y'(x) − f(x, y(x))| ≤ ε for all x ∈ J, x ≠ xi, i = 1, 2, ..., k.
The existence of an ε-approximate solution is proved in the following theorem.
Theorem: Let f(x, y) be continuous in the closed rectangle S : |x − x0| ≤ a, |y − y0| ≤ b, and hence there exists an M > 0 such that |f(x, y)| ≤ M for all (x, y) ∈ S. Then for any ε > 0, there exists an ε-approximate solution y(x) of the DE y' = f(x, y) in the interval Jh : |x − x0| ≤ h, where h = min{a, b/M}, such that y(x0) = y0.
Proof: Since f(x, y) is continuous in the closed rectangle S, it is uniformly continuous in this rectangle. Thus, for a given ε > 0 there exists a δ > 0 such that

|f(x, y) − f(x1, y1)| ≤ ε   (9.2)

for all (x, y), (x1, y1) in S whenever |x − x1| ≤ δ and |y − y1| ≤ δ.
We shall construct an ε-approximate solution in the interval x0 ≤ x ≤ x0 + h; a similar process will define it in the interval x0 − h ≤ x ≤ x0. For this, we divide the interval x0 ≤ x ≤ x0 + h into m parts x0 < x1 < ··· < xm = x0 + h such that

xi − xi−1 ≤ min{δ, δ/M},   i = 1, 2, ..., m.   (9.3)
Next we define a function y(x) in the interval x0 ≤ x ≤ x0 + h by the recursive formula
y(x) = y(xi−1)+(x−xi−1)f(xi−1, y(xi−1)), xi−1 ≤ x ≤ xi, i = 1, 2, . . . , m. (9.4)
Obviously, this function y(x) is continuous and has a piecewise continuous derivative y'(x) = f(xi−1, y(xi−1)), xi−1 < x < xi, i = 1, 2, ..., m, which fails to be defined only at the points xi, i = 1, 2, ..., m − 1. Since in each subinterval [xi−1, xi], i = 1, 2, ..., m the function y(x) is a straight line, to prove that (x, y(x)) ∈ S it suffices to show that |y(xi) − y0| ≤ b for all i = 1, 2, ..., m. For this, in (9.4) let i = 1 and x = x1, to obtain
|y(x1) − y0| = (x1 − x0)|f(x0, y0)| ≤ Mh ≤ b.
Now let the assertion be true for i = 1, 2, ..., k − 1, where k ≤ m; then from (9.4), we find
y(x1) − y0 = (x1 − x0)f(x0, y0)
y(x2) − y(x1)= (x2 − x1)f(x1, y(x1))
···
y(xk) − y(xk−1)= (xk − xk−1)f(xk−1, y(xk−1))
and hence,

y(xk) − y0 = Σ_{i=1}^{k} (xi − xi−1) f(xi−1, y(xi−1)),

which gives

|y(xk) − y0| ≤ M Σ_{i=1}^{k} (xi − xi−1) = M(xk − x0) ≤ Mh ≤ b.
Finally, if xi−1 < x < xi, then from (9.4) and (9.3) we have |x − xi−1| ≤ δ and

|y(x) − y(xi−1)| = (x − xi−1)|f(xi−1, y(xi−1))| ≤ M(x − xi−1) ≤ δ,

and hence from (9.2) we find

|y'(x) − f(x, y(x))| = |f(xi−1, y(xi−1)) − f(x, y(x))| ≤ ε
for all x ∈ Jh, x ≠ xi, i = 1, 2, ..., m − 1. This completes the proof that y(x) is an ε-approximate solution of the DE y' = f(x, y). This method of constructing an approximate solution is known as the Cauchy–Euler method.
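The construction (9.4) is easy to carry out on a computer. The sketch below (our own illustration; it assumes a uniform partition, although any partition satisfying (9.3) works) builds the Euler polygon for y' = −y, y(0) = 1 and samples the defect |y'(x) − f(x, y(x))| at interior points of the subintervals; the defect shrinks as the partition is refined, as the proof predicts.

```python
# Cauchy-Euler polygon per formula (9.4), with a uniform partition (an assumption).
def euler_polygon(f, x0, y0, h, m):
    """Piecewise-linear approximate solution on [x0, x0 + h] with m steps."""
    step = h / m
    nodes, slopes = [(x0, y0)], []
    for _ in range(m):
        x, y = nodes[-1]
        slopes.append(f(x, y))                            # slope on [x_{i-1}, x_i]
        nodes.append((x + step, y + step * slopes[-1]))   # (9.4) at the next node
    def polygon(x):
        i = min(int((x - x0) / step), m - 1)              # subinterval containing x
        xi, yi = nodes[i]
        return yi + (x - xi) * slopes[i]                  # (9.4) between the nodes
    return polygon, nodes, slopes

f = lambda x, y: -y
y, nodes, slopes = euler_polygon(f, 0.0, 1.0, 1.0, 50)
# sample the defect |y'(x) - f(x, y(x))| at the midpoint of each subinterval
defect = max(abs(slopes[i] - f(nodes[i][0] + 0.01, y(nodes[i][0] + 0.01)))
             for i in range(50))
print(f"sampled defect ≈ {defect:.3g}")
```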
Theorem (Peano's Existence Theorem): Let the conditions of the preceding theorem be satisfied. Then the initial value problem has at least one solution in Jh.
Proof: Once again we shall give the proof only in the interval x0 ≤ x ≤ x0 + h. Let {εm} be a monotonically decreasing sequence of positive numbers such that εm → 0. For each εm we use the preceding theorem to construct an εm-approximate solution ym(x). Now, as in Theorem 9.1, for any two points x and x∗ in [x0, x0 + h] it is easy to prove that
|ym(x) − ym(x∗)| ≤ M|x − x∗|
and from this it follows that the sequence {ym(x)} is equicontinuous. Further, for each x in [x0, x0 + h], we have |ym(x)| ≤ |y0| + b, and hence the sequence {ym(x)} is also uniformly bounded. Therefore, the Ascoli–Arzelà theorem is applicable, and the sequence {ym(x)} contains a subsequence {ymp(x)} which converges uniformly in [x0, x0 + h] to a continuous function y(x). To show that the function y(x) is a solution of the integral equation, we define
em(x) = y'm(x) − f(x, ym(x)) at the points where y'm(x) exists, and em(x) = 0 otherwise.
Thus, it follows that

ym(x) = y0 + ∫_{x0}^{x} [f(t, ym(t)) + em(t)] dt   (9.5)

and |em(x)| ≤ εm. Since f(x, y) is continuous in S and ymp(x) converges to y(x) uniformly in [x0, x0 + h], the function f(x, ymp(x)) converges to f(x, y(x)) uniformly in [x0, x0 + h]. Further, since εmp → 0, the functions emp(x) converge to zero uniformly in [x0, x0 + h]. Thus, by replacing m by mp in (9.5) and letting p → ∞, we find that y(x) is a solution of the integral equation.
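Numerically, the convergence asserted here can be watched by refining the partition, which corresponds to letting εm → 0. A small illustration (ours; it rebuilds the polygon nodes directly and compares against the known solution e^{−x} of y' = −y, y(0) = 1):

```python
import math

def euler_values(f, x0, y0, h, m):
    """Nodes (x_i, y(x_i)) of the m-step Euler polygon on [x0, x0 + h]."""
    step, pts = h / m, [(x0, y0)]
    for _ in range(m):
        x, y = pts[-1]
        pts.append((x + step, y + step * f(x, y)))
    return pts

# the sup-distance of the polygons to the limit e^{-x} shrinks with the step
for m in (10, 100, 1000):
    err = max(abs(y - math.exp(-x))
              for x, y in euler_values(lambda x, y: -y, 0.0, 1.0, 1.0, m))
    print(f"m = {m:4d}:  sup-error ≈ {err:.2e}")
```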
If in a domain D the function f(x, y) is continuous, then for every point (x0, y0) in D there is a rectangle S such that the initial value problem has a solution y(x) in Jh. Since S lies in D, by applying this existence result again at the point where the solution leaves S, we can extend the region in which the solution exists. For example, the function y(x) = 1/(1 − x) is the solution of the problem y' = y², y(0) = 1. Clearly, this solution exists in (−∞, 1). For this problem S : |x| ≤ a, |y − 1| ≤ b, max_S y² = (1 + b)², and h = min{a, b/(1 + b)²}. Since b/(1 + b)² ≤ 1/4, we can (independently of the choice of a) take h = 1/4. Thus, Corollary 9.2 gives the existence of a solution y1(x) only in the interval |x| ≤ 1/4. Now consider the continuation of y1(x) to the right, obtained by finding a solution y2(x) of the problem y' = y², y(1/4) = 4/3. For this new problem S : |x − 1/4| ≤ a, |y − 4/3| ≤ b,
and max_S y² = (4/3 + b)². Since b/(4/3 + b)² ≤ 3/16, we can take h = 3/16. Thus, y2(x) exists in the interval |x − 1/4| ≤ 3/16. This ensures the existence of the solution

y(x) = y1(x) for −1/4 ≤ x ≤ 1/4,   y(x) = y2(x) for 1/4 ≤ x ≤ 7/16,
in the interval −1/4 ≤ x ≤ 7/16. This process of continuation of the solution can be used further to the right of the point (7/16, 16/9), or to the left of the point (−1/4, 4/5). In order to establish how far the solution can be continued, we need the following lemma.
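This continuation, and its eventual breakdown at x = 1, can also be observed numerically. The sketch below (illustrative; the step count 100000 is an arbitrary choice) applies Euler's method to y' = y², y(0) = 1 and compares with the exact solution 1/(1 − x) at the continuation points found above and near the blow-up point.

```python
def euler(f, x0, y0, x_end, m):
    """Forward Euler approximation of y(x_end) for y' = f(x, y), y(x0) = y0."""
    step = (x_end - x0) / m
    x, y = x0, y0
    for _ in range(m):
        y += step * f(x, y)
        x += step
    return y

f = lambda x, y: y * y
for x_end in (0.25, 0.4375, 0.9, 0.99):        # 7/16 = 0.4375
    approx = euler(f, 0.0, 1.0, x_end, 100_000)
    exact = 1.0 / (1.0 - x_end)                # the solution blows up as x -> 1
    print(f"x = {x_end:.4f}:  euler ≈ {approx:9.3f},  1/(1 - x) = {exact:9.3f}")
```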
Lemma: Let f(x, y) be continuous in the domain D and let sup_D |f(x, y)| ≤ M. Further, let the initial value problem have a solution y(x) in an interval J = (α, β). Then the limits lim_{x→α+} y(x) = y(α + 0) and lim_{x→β−} y(x) = y(β − 0) exist.
Proof: For α < x1 < x2 < β, the integral equation gives

|y(x2) − y(x1)| = |∫_{x1}^{x2} f(t, y(t)) dt| ≤ M(x2 − x1).
Therefore, y(x2) − y(x1) → 0 as x1, x2 → α+. Thus, by the Cauchy criterion of convergence limx→α+ y(x) exists. A similar argument holds for limx→β− y(x).
Theorem: Let f(x, y) be continuous and satisfy a uniform Lipschitz condition in S. Then the initial value problem has at most one solution in |x − x0| ≤ a.
Proof: In Theorem 8.1 the uniqueness of solutions of the initial value problem is proved in the interval Jh; however, it is clear that Jh can be replaced by the interval |x − x0| ≤ a.
Theorem: Let f(x, y) be continuous in S+ : x0 ≤ x ≤ x0 + a, |y − y0| ≤ b and nonincreasing in y for each fixed x in x0 ≤ x ≤ x0 + a. Then the initial value problem has at most one solution in x0 ≤ x ≤ x0 + a.
Proof: Suppose y1(x) and y2(x) are two solutions of the initial value problem in x0 ≤ x ≤ x0 + a which differ somewhere in x0 ≤ x ≤ x0 + a. We assume that y2(x) > y1(x) in x1 < x < x1 + ε ≤ x0 + a, while y1(x) = y2(x) in x0 ≤ x ≤ x1; i.e., x1 is the greatest lower bound of the set A consisting of those x for which y2(x) > y1(x). This greatest lower bound exists because the set A is bounded below, by x0 at least. Thus, for all x ∈ (x1, x1 + ε) we have f(x, y1(x)) ≥ f(x, y2(x)), i.e., y'1(x) ≥ y'2(x). Hence, the function z(x) = y2(x) − y1(x) is nonincreasing in (x1, x1 + ε), and since z(x1) = 0 we should have z(x) ≤ 0 in (x1, x1 + ε). This contradiction proves that y1(x) = y2(x) in x0 ≤ x ≤ x0 + a.
Example: The function f(x, y) = |y|^{1/2} sgn y, where sgn y = 1 if y ≥ 0 and −1 if y < 0, is continuous and nondecreasing in y, and the initial value problem y' = |y|^{1/2} sgn y, y(0) = 0 has two solutions, y(x) ≡ 0 and y(x) = x²/4, in the interval [0, ∞). Thus, in the preceding theorem "nonincreasing" cannot be replaced by "nondecreasing."
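Both solutions in this example can be checked symbolically; here is a quick sketch (assuming sympy; we restrict to x > 0, since at x = 0 both sides of the DE vanish for either solution).

```python
import sympy as sp

x = sp.symbols('x', positive=True)
for y in (sp.Integer(0), x**2 / 4):
    lhs = sp.diff(y, x)                        # y'
    rhs = sp.sqrt(sp.Abs(y)) * sp.sign(y)      # |y|^{1/2} sgn y
    print("solves the DE:", sp.simplify(lhs - rhs) == 0, "  for y(x) =", y)
```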
For our next result, we need the following lemma.
Lemma: Let w(z) be a continuous and increasing function in the interval [0, ∞), with w(0) = 0 and w(z) > 0 for z > 0, and such that

∫_{0+} dz/w(z) = ∞,

i.e., ∫_{δ}^{1} dz/w(z) → ∞ as δ → 0+. Let u(x) be a nonnegative continuous function in [0, a]. Then the inequality

u(x) ≤ ∫_{0}^{x} w(u(t)) dt,   0 ≤ x ≤ a,

implies that u(x) ≡ 0 in [0, a].
Proof: Define v(x) = max_{0≤t≤x} u(t) and assume that v(x) > 0 for 0 < x ≤ a. Then u(x) ≤ v(x), and for each x there is an x1 ≤ x such that u(x1) = v(x). From this, we have

v(x) = u(x1) ≤ ∫_{0}^{x1} w(u(t)) dt ≤ ∫_{0}^{x} w(v(t)) dt;

i.e., the nondecreasing function v(x) satisfies the same inequality as u(x) does. Let us set

v̄(x) = ∫_{0}^{x} w(v(t)) dt;

then v̄(0) = 0, v(x) ≤ v̄(x), and v̄'(x) = w(v(x)) ≤ w(v̄(x)). Hence, for 0 < δ < a, we have

∫_{δ}^{a} v̄'(x)/w(v̄(x)) dx ≤ a − δ < a.

However, since

∫_{δ}^{a} v̄'(x)/w(v̄(x)) dx = ∫_{v̄(δ)}^{v̄(a)} dz/w(z),

the divergence of ∫_{0+} dz/w(z) implies that the right side becomes infinite as v̄(δ) → 0 (δ → 0). This contradiction shows that v(x) cannot be positive, so v(x) ≡ 0, and hence u(x) ≡ 0 in [0, a].
Theorem (Osgood's Uniqueness Theorem): Let f(x, y) be continuous in S, and for all (x, y1), (x, y2) ∈ S let it satisfy

|f(x, y1) − f(x, y2)| ≤ w(|y1 − y2|),

where w(z) is as in the above lemma. Then the initial value problem has at most one solution in |x − x0| ≤ a.
Proof: Suppose y1(x) and y2(x) are two solutions of the initial value problem in |x − x0| ≤ a. Then from the integral equation it follows that

|y1(x) − y2(x)| ≤ |∫_{x0}^{x} w(|y1(t) − y2(t)|) dt|.

For any x in [0, a], we set u(x) = |y1(x0 + x) − y2(x0 + x)|. Then the nonnegative continuous function u(x) satisfies the inequality u(x) ≤ ∫_{0}^{x} w(u(t)) dt, and therefore the above lemma implies that u(x) ≡ 0 in [0, a], i.e., y1(x) = y2(x) in [x0, x0 + a]. If x is in [x0 − a, x0], then the proof remains the same, except that we need to define the function u(x) = |y1(x0 − x) − y2(x0 − x)| in [0, a].
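The divergence condition on w(z) is easy to test for concrete choices. A short sympy sketch (illustrative): w(z) = z, the Lipschitz case, satisfies it, while w(z) = z^{1/2} does not; the latter is consistent with the earlier non-uniqueness example y' = |y|^{1/2} sgn y, y(0) = 0.

```python
import sympy as sp

z = sp.symbols('z', positive=True)
for w in (z, sp.sqrt(z)):
    # the lemma requires ∫_{0+} dz/w(z) = ∞; test the integral on (0, 1/2]
    integral = sp.integrate(1 / w, (z, 0, sp.Rational(1, 2)))
    print(f"integral of 1/({w}) over (0, 1/2] = {integral}")
```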