Fredholm Equations with Degenerate Kernels
We have seen that a Fredholm integral equation of the second kind is defined as
$$f(x) = \lambda \int_a^b K(x,y)\,f(y)\,dy + g(x). \tag{3.2}$$
Definition 3.9: The kernel $K(x,y)$ is said to be degenerate (separable) if it can be written as a finite sum of terms, each being a product of a function of $x$ and a function of $y$. Thus,
$$K(x,y) = \sum_{j=1}^{n} u_j(x)\,v_j(y) = u^T(x)\,v(y) = u(x)\cdot v(y) = \langle u, v\rangle, \tag{3.3}$$
where the latter notation is the inner product for finite-dimensional vector spaces (i.e. the dot product).
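A standard illustration (my example, not from the notes): the angle-addition formula
$$\sin(x+y) = \sin x\,\cos y + \cos x\,\sin y$$
shows that $K(x,y) = \sin(x+y)$ is degenerate with $n = 2$, taking $u_1(x) = \sin x$, $u_2(x) = \cos x$, $v_1(y) = \cos y$, $v_2(y) = \sin y$. By contrast, a kernel such as $e^{xy}$ cannot be written as a finite sum of such products, so it is not degenerate.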
Equation (3.2) may be solved by reduction to a set of simultaneous linear algebraic equations, as we shall now show. Substituting (3.3) into (3.2) gives
$$\begin{aligned}
f(x) &= \lambda \int_a^b \left[\,\sum_{j=1}^{n} u_j(x)\,v_j(y)\right] f(y)\,dy + g(x)\\
&= \lambda \sum_{j=1}^{n} u_j(x) \int_a^b v_j(y)\,f(y)\,dy + g(x),
\end{aligned}$$
and letting
$$c_j = \int_a^b v_j(y)\,f(y)\,dy = \langle v_j, f\rangle, \tag{3.4}$$
then
$$f(x) = \lambda \sum_{j=1}^{n} c_j\,u_j(x) + g(x). \tag{3.5}$$
For this class of kernel it is sufficient to find the $c_j$ in order to obtain the solution of the integral equation. Eliminating $f$ between equations (3.4) and (3.5) (i.e. taking the inner product of both sides with $v_i$) gives
$$c_i = \int_a^b v_i(y)\left[\,\lambda \sum_{j=1}^{n} c_j\,u_j(y) + g(y)\right] dy,$$
or, interchanging the summation and integration,
$$c_i = \lambda \sum_{j=1}^{n} c_j \int_a^b v_i(y)\,u_j(y)\,dy + \int_a^b v_i(y)\,g(y)\,dy. \tag{3.6}$$
Writing
$$a_{ij} = \int_a^b v_i(y)\,u_j(y)\,dy = \langle v_i, u_j\rangle, \tag{3.7}$$
and
$$g_i = \int_a^b v_i(y)\,g(y)\,dy = \langle v_i, g\rangle, \tag{3.8}$$
then (3.6) becomes
$$c_i = \lambda \sum_{j=1}^{n} a_{ij}\,c_j + g_i. \tag{3.9}$$
By defining the matrix $A = (a_{ij})$ and the column vectors
$$c = \begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix}, \qquad g = \begin{pmatrix} g_1 \\ g_2 \\ \vdots \\ g_n \end{pmatrix},$$
this equation may be written in matrix notation as
$$c = \lambda A c + g,$$
i.e.
$$(I - \lambda A)\,c = g, \tag{3.10}$$
where $I$ is the identity matrix. This is now simply a linear algebraic system of equations for $c$; once $c$ is found, (3.5) delivers the solution $f(x)$ of the integral equation.
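To make the reduction concrete, here is a minimal numerical sketch (my illustration, not part of the notes; the kernel $\sin(x+y)$, the interval $[0,\pi]$, the forcing $g(x) = x$ and the value $\lambda = 0.1$ are all hypothetical choices). It builds $a_{ij}$ and $g_i$ from (3.7) and (3.8) by quadrature, solves (3.10), and reassembles $f$ via (3.5):

```python
import numpy as np
from scipy.integrate import quad

# Hypothetical degenerate kernel K(x, y) = sin(x + y)
#   = sin(x)cos(y) + cos(x)sin(y), i.e. n = 2 in (3.3).
u = [np.sin, np.cos]         # u_1(x) = sin x,  u_2(x) = cos x
v = [np.cos, np.sin]         # v_1(y) = cos y,  v_2(y) = sin y
a_lim, b_lim = 0.0, np.pi    # interval [a, b] (illustrative choice)
lam = 0.1                    # lambda, small so det(I - lam*A) != 0
g = lambda x: x              # forcing term g(x) (illustrative choice)

n = len(u)
# a_ij = <v_i, u_j>, eq. (3.7), by numerical quadrature
A = np.array([[quad(lambda y: v[i](y) * u[j](y), a_lim, b_lim)[0]
               for j in range(n)] for i in range(n)])
# g_i = <v_i, g>, eq. (3.8)
gvec = np.array([quad(lambda y: v[i](y) * g(y), a_lim, b_lim)[0]
                 for i in range(n)])

# Solve (I - lambda A) c = g, eq. (3.10) ...
c = np.linalg.solve(np.eye(n) - lam * A, gvec)

# ... and reconstruct f(x) = lambda * sum_j c_j u_j(x) + g(x), eq. (3.5)
f = lambda x: lam * sum(cj * uj(x) for cj, uj in zip(c, u)) + g(x)

# Sanity check: residual of the original equation (3.2) at a sample point
x0 = 1.0
residual = f(x0) - lam * quad(lambda y: np.sin(x0 + y) * f(y),
                              a_lim, b_lim)[0] - g(x0)
print(f"residual at x = {x0}: {residual:.2e}")
```

The printed residual checks that the reconstructed $f$ satisfies (3.2) at a sample point; $\lambda$ is kept small so that $\det(I - \lambda A) \neq 0$ and Case (i) of the discussion below applies.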
We therefore need to understand how to solve the canonical system $Ax = b$, where $A$ is a given matrix, $b$ is the given forcing vector and $x$ is the vector to be determined. Let us state an important theorem from linear algebra:
Theorem 3.10 (Fredholm Alternative): Consider the linear system
$$Ax = b, \tag{3.11}$$
where $A$ is an $n\times n$ matrix, $x$ is an unknown $n\times 1$ column vector, and $b$ is a specified $n\times 1$ column vector. We also introduce the related (adjoint) homogeneous problem
$$A^T \hat{x} = 0, \tag{3.12}$$
with $p = n - \operatorname{rank}(A)$ non-trivial linearly independent solutions $\hat{x}_1, \hat{x}_2, \ldots, \hat{x}_p$.
[Reminder: $\operatorname{rank}(A)$ is the number of linearly independent rows (or columns) of the matrix $A$.]
Then the following alternatives hold:
either
(i) $\det A \neq 0$, so that there exists a unique solution to (3.11), given by $x = A^{-1}b$, for each given $b$ (and $b = 0 \Rightarrow x = 0$);
or
(ii) $\det A = 0$, and then
(a) if $b$ is such that $\langle b, \hat{x}_j\rangle = 0$ for all $j$, then there are infinitely many solutions to equation (3.11);
(b) if $b$ is such that $\langle b, \hat{x}_j\rangle \neq 0$ for some $j$, then there is no solution to equation (3.11).
In case (ii)(a) there are infinitely many solutions because the theorem guarantees that we can find a particular solution $x_{PS}$; furthermore, the homogeneous system
$$Ax = 0 \tag{3.13}$$
has $p = n - \operatorname{rank}(A) > 0$ non-trivial linearly independent solutions $x_1, x_2, \ldots, x_p$, so that every vector of the form
$$x = x_{PS} + \sum_{j=1}^{p} a_j\,x_j,$$
where the $a_j$ are arbitrary constants, is a solution.
No proof of this theorem is given. To illustrate it, consider the following simple $2\times 2$ matrix example:
Example 5: Determine the solution structure of the linear system $Ax = b$ when
$$\text{(I)}\ \ A = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}, \qquad \text{(II)}\ \ A = \begin{pmatrix} 1 & 1 \\ 2 & 2 \end{pmatrix}, \tag{3.14}$$
and, in the case of (II), when
$$b = \begin{pmatrix} 1 \\ 2 \end{pmatrix} \qquad \text{and} \qquad b = \begin{pmatrix} 1 \\ 1 \end{pmatrix}. \tag{3.15}$$
(I) Since $\det(A) = 1 \neq 0$, the solution exists for any $b$ and is given by $x = A^{-1}b$.
(II) Here $\det(A) = 0$, so we have to consider solutions of the adjoint homogeneous system, i.e.
$$A^T \hat{x} = 0, \tag{3.16}$$
i.e.
$$\begin{pmatrix} 1 & 2 \\ 1 & 2 \end{pmatrix}\hat{x} = 0. \tag{3.17}$$
This has the single non-trivial linearly independent solution $\hat{x}_1 = (2,\ -1)^T$. It is clear that there should be exactly one such solution, since $p = n - \operatorname{rank}(A) = 2 - 1 = 1$.
Note also that the homogeneous system
$$Ax = 0, \tag{3.18}$$
i.e.
$$\begin{pmatrix} 1 & 1 \\ 2 & 2 \end{pmatrix}x = 0, \tag{3.19}$$
has the single non-trivial linearly independent solution $x_1 = (1,\ -1)^T$. If solutions do exist, they will therefore have the form $x = x_{PS} + a_1 x_1$.
A solution to the problem $Ax = b$ will exist if $\hat{x}_1 \cdot b = 0$. This condition does hold for $b = (1,\ 2)^T$, and so the theorem predicts that a solution will exist. Indeed it does: note that $x_{PS} = (1/2,\ 1/2)^T$, and so $x = x_{PS} + a_1 x_1$ is the infinite set of solutions.
The orthogonality condition does not hold for $b = (1,\ 1)^T$, and so the theorem predicts that no solution exists. This is clear from looking at the system: the left-hand side of the second equation is twice that of the first, but the right-hand sides are not in that ratio.
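The bookkeeping in Example 5 is easy to automate; the following sketch (again my illustration) computes $p$ and a basis for the adjoint null space in case (II), then tests the orthogonality condition for both right-hand sides:

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 1.0],
              [2.0, 2.0]])                 # case (II) of (3.14)

p = A.shape[0] - np.linalg.matrix_rank(A)  # p = n - rank(A) = 1
N = null_space(A.T)                        # basis for A^T xhat = 0;
                                           # single column ~ (2, -1)^T

for b in (np.array([1.0, 2.0]), np.array([1.0, 1.0])):
    # Fredholm Alternative: solvable iff <b, xhat_j> = 0 for every j
    if np.allclose(N.T @ b, 0.0):
        print(b, "-> infinitely many solutions")
    else:
        print(b, "-> no solution")
```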
Now let us apply the Fredholm Alternative theorem to equation (3.10) in order to solve
the problem of degenerate kernels in general.
Case (i): if
$$\det(I - \lambda A) \neq 0, \tag{3.20}$$
then the Fredholm Alternative theorem tells us that (3.10) has a unique solution for $c$:
$$c = (I - \lambda A)^{-1} g. \tag{3.21}$$
Hence (3.2), with degenerate kernel (3.3), has the solution (3.5):
$$f(x) = \lambda \sum_{i=1}^{n} c_i\,u_i(x) + g(x) = \lambda\,(u(x))^T c + g(x),$$
or, from (3.21),
$$f(x) = \lambda\,(u(x))^T (I - \lambda A)^{-1} g + g(x),$$
which may be expressed, from (3.8), as
$$f(x) = \lambda \int_a^b \left[(u(x))^T (I - \lambda A)^{-1} v(y)\right] g(y)\,dy + g(x).$$
Definition 3.11: The resolvent kernel $R(\lambda, x, y)$ is such that the integral representation of the solution,
$$f(x) = \lambda \int_a^b R(\lambda, x, y)\,g(y)\,dy + g(x),$$
holds.
Theorem 3.12: For a degenerate kernel, the resolvent kernel is given by
$$R(\lambda, x, y) = (u(x))^T (I - \lambda A)^{-1} v(y).$$
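As a quick worked instance (my example, not from the notes), take $K(x,y) = xy$ on $[0,1]$, so that $n = 1$ with $u_1(x) = x$, $v_1(y) = y$. Then $A$ is the $1\times 1$ matrix
$$a_{11} = \int_0^1 y\cdot y\,dy = \frac{1}{3},$$
so for $\lambda \neq 3$ Theorem 3.12 gives
$$R(\lambda, x, y) = \frac{xy}{1 - \lambda/3} = \frac{3xy}{3 - \lambda},$$
and the solution of (3.2) is
$$f(x) = \frac{3\lambda x}{3 - \lambda}\int_0^1 y\,g(y)\,dy + g(x).$$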
Case (i) covered the simple situation in which there is a unique solution. Let us now concern ourselves with the case in which the determinant of the matrix on the left-hand side of the linear system is zero.
Case (ii): suppose
$$\det(I - \lambda A) = 0, \tag{3.22}$$
and that the homogeneous equation
$$(I - \lambda A)\,c = 0 \tag{3.23}$$
has $p$ non-trivial linearly independent solutions $c_1, c_2, \ldots, c_p$.
Then the homogeneous form of the integral equation (3.2), i.e.
$$f(x) = \lambda \int_a^b K(x,y)\,f(y)\,dy, \tag{3.24}$$
with degenerate kernel (3.3), has $p$ solutions, from (3.5):
$$f_j(x) = \lambda \sum_{i=1}^{n} c_i^{j}\,u_i(x), \qquad j = 1, 2, \ldots, p, \tag{3.25}$$
where $c_i^{j}$ denotes the $i$th component of $c_j$.
Turning to the inhomogeneous equation (3.10): it has a solution if and only if the forcing term $g$ is orthogonal to every solution $h$ of
$$(I - \lambda A)^T h = 0, \quad \text{i.e.} \quad h^T g = 0, \tag{3.26}$$
or
$$\sum_{i=1}^{n} h_i\,g_i = 0.$$
Hence (3.8) yields
$$\sum_{i=1}^{n} h_i \int_a^b v_i(y)\,g(y)\,dy = 0,$$
which is equivalent to
$$\int_a^b \left(\sum_{i=1}^{n} h_i\,v_i(y)\right) g(y)\,dy = 0.$$
Thus, writing
$$h(y) = \sum_{i=1}^{n} h_i\,v_i(y), \tag{3.27}$$
then
$$\int_a^b h(y)\,g(y)\,dy = 0,$$
which means that $g(x)$ must be orthogonal to $h(x)$ on $[a,b]$.
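Continuing the $K(x,y) = xy$ illustration introduced after Theorem 3.12 (again my example): there $a_{11} = 1/3$, so $\det(I - \lambda A) = 1 - \lambda/3$ vanishes precisely at $\lambda = 3$. At that value the homogeneous equation (3.24) has the non-trivial solution $f_1(x) = x$ (up to a constant multiple), and (3.27) gives $h(y) = y$ (up to scale), so the inhomogeneous equation is solvable if and only if
$$\int_0^1 y\,g(y)\,dy = 0,$$
in which case the solutions differ from one another by multiples of $x$.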
Let us explore the function $h(x)$ a little; we start by expressing (3.26) componentwise as
$$h_i - \lambda \sum_{j=1}^{n} a_{ji}\,h_j = 0.$$
Without loss of generality, assume that all the $v_i(x)$ in (3.3) are linearly independent (since if one is dependent on the others, we may eliminate it and obtain a separable kernel with $n$ replaced by $n-1$). Multiply the $i$th equation in (3.26) by $v_i(x)$ and sum over all $i$ from 1 to $n$:
$$\sum_{i=1}^{n} h_i\,v_i(x) - \lambda \sum_{i=1}^{n}\sum_{j=1}^{n} a_{ji}\,h_j\,v_i(x) = 0,$$
i.e., from (3.7),
$$\sum_{i=1}^{n} h_i\,v_i(x) - \lambda \int_a^b \sum_{i=1}^{n}\sum_{j=1}^{n} h_j\,v_j(y)\,u_i(y)\,v_i(x)\,dy = 0.$$
Using (3.27) and (3.3), we see that this reduces to the integral equation
$$h(x) - \lambda \int_a^b K(y,x)\,h(y)\,dy = 0; \tag{3.28}$$
that is, $h$ satisfies the homogeneous integral equation with the transposed kernel $K(y,x)$, the continuous analogue of the adjoint system (3.12).