Tutorial 4: Numerical Linear Algebra
Edward Neuman
Department of Mathematics
Southern Illinois University at Carbondale
edneuman@siu.edu
This tutorial is devoted to a discussion of the computational methods used in numerical linear
algebra. Topics discussed include matrix multiplication, matrix transformations, numerical
methods for solving systems of linear equations, linear least squares, orthogonality, the singular
value decomposition, the matrix eigenvalue problem, and computations with sparse matrices.
The following MATLAB functions will be used in this tutorial.
Function Description
abs Absolute value
chol Cholesky factorization
cond Condition number
det Determinant
diag Diagonal matrices and diagonals of a matrix
diff Difference and approximate derivative
eps Floating point relative accuracy
eye Identity matrix
fliplr Flip matrix in left/right direction
flipud Flip matrix in up/down direction
flops Floating point operation count
full Convert sparse matrix to full matrix
funm Evaluate general matrix function
hess Hessenberg form
hilb Hilbert matrix
imag Complex imaginary part
inv Matrix inverse
length Length of vector
lu LU factorization
max Largest component
min Smallest component
norm Matrix or vector norm
ones Ones array
pascal Pascal matrix
pinv Pseudoinverse
qr Orthogonal-triangular decomposition
rand Uniformly distributed random numbers
randn Normally distributed random numbers
rank Matrix rank
real Complex real part
repmat Replicate and tile an array
schur Schur decomposition
sign Signum function
size Size of matrix
sqrt Square root
sum Sum of elements
svd Singular value decomposition
tic Start a stopwatch timer
toc Read the stopwatch timer
trace Sum of diagonal entries
tril Extract lower triangular part
triu Extract upper triangular part
zeros Zeros array
4.2 Matrix multiplication
Computation of the product of two or more matrices is one of the basic operations in
numerical linear algebra. The number of flops needed to compute the product of two matrices A and
B can be decreased drastically if the special structure of A and B is exploited. For
instance, if both A and B are upper (lower) triangular, then the product of A and B is an upper
(lower) triangular matrix, and only the entries on and above (below) the diagonal need to be computed.
function C = prod2t(A, B)
% Product C = A*B of two upper triangular matrices A and B.
[m,n] = size(A);
[u,v] = size(B);
if (m ~= n) | (u ~= v)
   error('Matrices must be square')
end
if n ~= u
   error('Inner dimensions must agree')
end
C = zeros(n);
for i=1:n
   for j=i:n
      C(i,j) = A(i,i:j)*B(i:j,j);   % only the nonzero segment contributes
   end
end
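As a quick sanity check, the output of prod2t can be compared with MATLAB's built-in product; with arbitrary test matrices the difference should be at the level of roundoff:
A = triu(randn(5)); B = triu(rand(5));
err = norm(prod2t(A,B) - A*B)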
In the following example a product of two random triangular matrices is computed using the function
prod2t. The number of flops is also determined.
A = triu(randn(4)); B = triu(rand(4));
flops(0)
C = prod2t(A, B)
nflps = flops
C =
-0.4110 -1.2593 -0.6637 -1.4261
0 0.9076 0.6371 1.7957
0 0 -0.1149 -0.0882
0 0 0 0.0462
nflps =
36
For comparison, using MATLAB's "general purpose" matrix multiplication operator *,
the number of flops needed for computing the product of matrices A and B is
flops(0)
A*B;
flops
ans =
128
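The savings can be estimated directly. Counting one flop per scalar multiplication or addition, the inner product for the (i,j) entry in prod2t costs about 2(j - i + 1) flops, so the total work is roughly
sum over i <= j of 2(j - i + 1) = n(n+1)(n+2)/3 ≈ n^3/3,
compared with about 2n^3 flops for the full product; this is broadly consistent with the counts 36 and 128 reported above for n = 4.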
The product of two Hessenberg matrices A and B, where A is lower Hessenberg and B is upper
Hessenberg, can be computed using the function Hessprod.
function C = Hessprod(A, B)
% Product C = A*B, where A and B are lower and
% upper Hessenberg matrices, respectively.
[m, n] = size(A);
C = zeros(n);
for i=1:n
   for j=1:n
      if j < n
         l = min(i,j)+1;   % last index at which both factors can be nonzero
      else
         l = n;
      end
      C(i,j) = A(i,1:l)*B(1:l,j);
   end
end
We will run this function on Hessenberg matrices obtained from the Hilbert matrix H
H = hilb(10);
A = tril(H,1); B = triu(H,-1);
flops(0)
C = Hessprod(A,B);
nflps = flops
nflps =
1039
Using the multiplication operator *, the number of flops used for the same problem is
flops(0)
C = A*B;
nflps = flops
nflps =
2000
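The flops function was removed in MATLAB 6, so on later versions the two approaches can only be compared by timing, e.g. with tic and toc on a larger matrix. Note that the interpreted loops in Hessprod carry significant overhead, so despite performing fewer arithmetic operations it may well run slower than the built-in operator; the sizes below are arbitrary:
H = hilb(500);
A = tril(H,1); B = triu(H,-1);
tic; C1 = Hessprod(A,B); t1 = toc
tic; C2 = A*B; t2 = toc
err = norm(C1 - C2)   % the two products agree to roundoff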
For more algorithms for computing matrix-matrix products, see the subsequent sections of this
tutorial.
4.3 Matrix transformations
The goal of this section is to discuss important matrix transformations that are used in numerical
linear algebra.
On several occasions we will use the function ek(k, n), which generates the kth coordinate vector in the
n-dimensional Euclidean space:
function v = ek(k, n)
% The k-th coordinate vector in the n-dimensional Euclidean space.
v = zeros(n,1);
v(k) = 1;
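For instance, ek(2, 4) returns the second coordinate vector in R^4:
v = ek(2, 4)   % v = [0; 1; 0; 0]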
4.3.1 Gauss transformation
In many problems that arise in applied mathematics one wants to transform a matrix to an upper
triangular one. This goal can be accomplished using the Gauss transformation (synonym:
elementary matrix).
Let m, e_k ∈ R^n. The Gauss transformation M_k ≡ M is defined as M = I - m*e_k^T, where the
vector m, called the Gauss vector, has its first k components equal to zero, and I is the n-by-n
identity matrix. With m chosen as in the function Gaussv below, the product M*x leaves the first k
entries of x unchanged and annihilates all entries below position k. In this section we present two
functions for computations with this transformation. For more information about this
transformation the reader is referred to [3].
function m = Gaussv(x, k)
% Gauss vector m from the vector x and the position
% k (k > 0) of the pivot entry.
n = length(x);
x = x(:);
if ( k < 1 | k >= n )
   error('Index k is out of range')
end
if x(k) == 0
   error('Wrong vector')
end
m = [zeros(k,1); x(k+1:n)/x(k)];
Let M be the Gauss transformation. The matrix-vector product M*b can be computed without
forming the matrix M explicitly. The function Gaussprod implements the well-known formula
M*b = [b(1:k); b(k+1:n) - b(k)*m(k+1:n)] for the product in question.
function c = Gaussprod(m, k, b)
% Product c = M*b, where M is the Gauss transformation
% determined by the Gauss vector m and its column
% index k.
n = length(b);
if ( k < 1 | k > n-1 )
   error('Index k is out of range')
end
b = b(:);
c = [b(1:k); -b(k)*m(k+1:n) + b(k+1:n)];
Let
x = 1:4; k = 2;
m = Gaussv(x,k)
m =
0
0
1.5000
2.0000
Then
c = Gaussprod(m, k, x)
c =
1
2
0
0
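Entries 3 and 4 of x have been annihilated while the first two are unchanged, as expected. To confirm that Gaussprod agrees with forming the transformation explicitly, one can rebuild M from its definition using the helper ek defined earlier; the difference should vanish:
n = length(x);
M = eye(n) - m*ek(k,n)';    % M = I - m*e_k^T, formed explicitly
err = norm(M*x(:) - c)      % should be zero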