Numerical Linear Algebra - Tutorial 4

Edward Neuman
Department of Mathematics
Southern Illinois University at Carbondale
edneuman@siu.edu
This tutorial is devoted to the computational methods used in numerical linear algebra. Topics discussed include matrix multiplication, matrix transformations, numerical methods for solving systems of linear equations, linear least squares, orthogonality, the singular value decomposition, the matrix eigenvalue problem, and computations with sparse matrices.

4.1 MATLAB functions used in this tutorial

The following MATLAB functions will be used in this tutorial.
Function Description
abs Absolute value
chol Cholesky factorization
cond Condition number
det Determinant
diag Diagonal matrices and diagonals of a matrix
diff Difference and approximate derivative
eps Floating point relative accuracy
eye Identity matrix
fliplr Flip matrix in left/right direction
flipud Flip matrix in up/down direction
flops Floating point operation count
full Convert sparse matrix to full matrix
funm Evaluate general matrix function
hess Hessenberg form
hilb Hilbert matrix
imag Complex imaginary part
inv Matrix inverse
length Length of vector
lu LU factorization
max Largest component
min Smallest component
norm Matrix or vector norm
ones Ones array
pascal Pascal matrix
pinv Pseudoinverse
qr Orthogonal-triangular decomposition
rand Uniformly distributed random numbers
randn Normally distributed random numbers
rank Matrix rank
real Complex real part
repmat Replicate and tile an array
schur Schur decomposition
sign Signum function
size Size of matrix
sqrt Square root
sum Sum of elements
svd Singular value decomposition
tic Start a stopwatch timer
toc Read the stopwatch timer
trace Sum of diagonal entries
tril Extract lower triangular part
triu Extract upper triangular part
zeros Zeros array

4.2 Matrix multiplication
Computation of the product of two or more matrices is one of the basic operations of numerical linear algebra. The number of flops needed to compute the product of two matrices A and B can be reduced drastically if the special structure of A and B is exploited properly. For instance, if both A and B are upper (lower) triangular, then the product of A and B is also an upper (lower) triangular matrix.
function C = prod2t(A, B)
% Product C = A*B of two upper triangular matrices A and B.
[m, n] = size(A);
[u, v] = size(B);
if (m ~= n) || (u ~= v)
   error('Matrices must be square')
end
if n ~= u
   error('Inner dimensions must agree')
end
C = zeros(n);
for i=1:n
   for j=i:n
      % Only the entries A(i,i:j) and B(i:j,j) can contribute to C(i,j).
      C(i,j) = A(i,i:j)*B(i:j,j);
   end
end
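As a quick sanity check (an illustrative addition, not part of the original text), prod2t can be compared against MATLAB's built-in multiplication:
A = triu(randn(5));  B = triu(randn(5));
err = norm(prod2t(A, B) - A*B)   % should be zero up to roundoff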
In the following example the product of two random upper triangular matrices is computed using function prod2t. The number of flops is also determined.
A = triu(randn(4));  B = triu(rand(4));
flops(0)
C = prod2t(A, B)
nflps = flops  
C =
   -0.4110   -1.2593   -0.6637   -1.4261
         0    0.9076    0.6371    1.7957
         0         0   -0.1149   -0.0882
         0         0         0    0.0462
nflps =
    36  
For comparison, using MATLAB's "general purpose" matrix multiplication operator *,
the number of flops needed for computing the product of matrices A and B is
flops(0)
A*B;
flops  
ans =
   128  
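A caveat (an added note, not in the original tutorial): the flops function became obsolete in MATLAB 6 and is not available in current releases, so these counts cannot be reproduced on a modern installation. Timing the two approaches with tic and toc is a rough substitute:
A = triu(randn(200));  B = triu(randn(200));
tic;  C1 = prod2t(A, B);  t1 = toc   % structured product
tic;  C2 = A*B;           t2 = toc   % general-purpose product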
The product of two Hessenberg matrices A and B, where A is lower Hessenberg and B is upper Hessenberg, can be computed using function Hessprod.
function C = Hessprod(A, B)
% Product C = A*B, where A and B are the lower and
% upper Hessenberg matrices, respectively.
[m, n] = size(A);
C = zeros(n);
for i=1:n
   for j=1:n
      % Row i of A and column j of B have nonzeros only in their
      % first min(i,j)+1 positions, capped at n.
      if j < n
         l = min(i,j)+1;
      else
         l = n;
      end
      C(i,j) = A(i,1:l)*B(1:l,j);
   end
end
We will run this function on Hessenberg matrices obtained from the Hilbert matrix H:
H = hilb(10);  
A = tril(H,1);   B = triu(H,-1);  
flops(0)  
C = Hessprod(A,B);  
nflps = flops  
nflps =
        1039  
Using the multiplication operator *, the number of flops used for the same problem is
flops(0)  
C = A*B;  
nflps = flops  
nflps =
        2000  
For more algorithms for computing matrix-matrix products, see the subsequent sections of this tutorial.

4.3 Matrix transformations
The goal of this section is to discuss important matrix transformations that are used in numerical
linear algebra.
On several occasions we will use the function ek(k, n), which returns the kth coordinate vector in the n-dimensional Euclidean space:
function v = ek(k, n)
% The k-th coordinate vector in the n-dimensional Euclidean space.
v = zeros(n,1);
v(k) = 1;
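For instance (an added illustration), ek(2, 4) produces the second coordinate vector in the 4-dimensional space:
v = ek(2, 4)'
v =
     0     1     0     0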
4.3.1    Gauss transformation
In many problems that arise in applied mathematics one wants to transform a matrix to an upper
triangular one. This goal can be accomplished using the Gauss transformation (synonym:
elementary matrix).
Let m, e_k ∈ R^n. The Gauss transformation M_k ≡ M is defined as M = I – m*e_k^T. The vector m used here is called the Gauss vector and I is the n-by-n identity matrix. In this section we present two functions for computations with this transformation. For more information about this transformation the reader is referred to [3].
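The following sketch (an illustrative addition, not part of the original text) forms M explicitly from this definition and shows its effect on a vector x, using the function ek defined above:
n = 4;  k = 2;
x = (1:n)';                         % sample vector with x(k) ~= 0
m = [zeros(k,1); x(k+1:n)/x(k)];    % Gauss vector built from x and pivot k
M = eye(n) - m*ek(k,n)'             % M = I - m*e_k^T
M*x                                 % entries k+1,...,n are annihilated
The function Gaussv, defined next, automates the construction of the Gauss vector m.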
function m = Gaussv(x, k)
% Gauss vector m from the vector x and the position
% k (k > 0) of the pivot entry.
if x(k) == 0
   error('Pivot entry x(k) must be nonzero')
end
n = length(x);
x = x(:);
if k > 0 && k < n
   m = [zeros(k,1); x(k+1:n)/x(k)];
else
   error('Index k is out of range')
end
Let M be the Gauss transformation. The matrix-vector product M*b can be computed without
forming the matrix M explicitly. Function Gaussprod implements a well-known formula for the
product in question.
function c = Gaussprod(m, k, b)
% Product c = M*b, where M is the Gauss transformation
% determined by the Gauss vector m and its column
% index k.
n = length(b);
if k < 1 || k > n-1
   error('Index k is out of range')
end
b = b(:);
% The first k entries of b are unchanged; the remaining entries are
% updated by c(i) = b(i) - m(i)*b(k) for i = k+1,...,n.
c = [b(1:k); b(k+1:n) - b(k)*m(k+1:n)];
Let
x = 1:4; k = 2;
m = Gaussv(x,k)  
m =
         0
         0
    1.5000
    2.0000  
Then
c = Gaussprod(m, k, x)  
c =
     1
     2
     0
     0  
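As a cross-check (an added illustration, not from the original tutorial), the same result is obtained by forming M explicitly:
n = length(x);
c2 = (eye(n) - m*ek(k,n)')*x(:)
c2 =
     1
     2
     0
     0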
FAQs on Numerical Linear Algebra - Tutorial 4

1. What is Numerical Linear Algebra?
Ans. Numerical Linear Algebra is a branch of mathematics that focuses on the algorithms and numerical methods used to solve problems related to linear algebra. It involves applying numerical techniques to compute solutions for systems of linear equations, eigenvalues and eigenvectors of matrices, matrix factorizations, and other linear algebra problems. It plays a crucial role in various fields such as computer science, engineering, physics, and data science.
2. What are the applications of Numerical Linear Algebra?
Ans. Numerical Linear Algebra has numerous applications in various fields. Some of the key applications include:
- Solving systems of linear equations: Numerical Linear Algebra techniques are used to solve large systems of linear equations, which arise in diverse fields such as circuit analysis, structural engineering, and optimization problems.
- Image and signal processing: Techniques from Numerical Linear Algebra are applied in image and signal processing tasks like image compression, noise reduction, and image reconstruction.
- Data analysis: It is used in data analysis tasks such as principal component analysis (PCA), linear regression, and data fitting.
- Machine learning: Numerical Linear Algebra is a fundamental component of machine learning algorithms, including matrix factorizations, support vector machines, and deep learning.
- Scientific computing: It is used in scientific simulations and computations involving large matrices, such as finite element analysis and computational fluid dynamics.
3. What are the common numerical methods used in Numerical Linear Algebra?
Ans. There are several numerical methods employed in Numerical Linear Algebra. Some of the commonly used methods include:
- Gaussian elimination: This method is used to solve systems of linear equations by performing row operations on the augmented matrix.
- LU decomposition: It decomposes a square matrix into the product of a lower triangular matrix and an upper triangular matrix, which can be used to efficiently solve systems of linear equations.
- QR decomposition: It decomposes a matrix into the product of an orthogonal matrix and an upper triangular matrix, which is useful in solving least squares problems and eigenvalue computations.
- Singular Value Decomposition (SVD): It decomposes a matrix into the product of three matrices, which is widely used in data analysis, image compression, and dimensionality reduction.
- Iterative methods: These methods involve iteratively refining an initial guess to the solution, such as the Jacobi method, Gauss-Seidel method, and Conjugate Gradient method.
The MATLAB calls for several of these factorizations are sketched after this list.
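A minimal sketch (added for illustration; these are the standard built-in functions listed in the table at the top of this document):
A = pascal(4);            % sample symmetric positive definite matrix
[L, U, P] = lu(A);        % LU factorization with pivoting: P*A = L*U
[Q, R] = qr(A);           % orthogonal-triangular decomposition: A = Q*R
[U2, S, V] = svd(A);      % singular value decomposition: A = U2*S*V'
R2 = chol(A);             % Cholesky factor: A = R2'*R2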
4. Can Numerical Linear Algebra be used to solve eigenvalue problems?
Ans. Yes, Numerical Linear Algebra provides various methods to solve eigenvalue problems. One of the commonly used methods is the Power iteration method, which iteratively computes the dominant eigenvalue and eigenvector of a matrix. Other methods include the QR algorithm, Lanczos algorithm, and Arnoldi iteration. These techniques are crucial in applications such as spectral graph theory, quantum mechanics, and data analysis.
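A minimal power iteration sketch in MATLAB (an added illustration, assuming the matrix has a dominant eigenvalue; not part of the original text):
A = pascal(4);              % sample symmetric matrix
v = rand(4,1);              % random starting vector
for iter = 1:100
   v = A*v;
   v = v/norm(v);           % renormalize at every step
end
lambda = v'*A*v             % Rayleigh quotient approximates the dominant eigenvalue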
5. What is the role of Numerical Linear Algebra in data science?
Ans. Numerical Linear Algebra plays a fundamental role in data science. It provides the necessary tools and techniques to handle and analyze large datasets efficiently. Some of the key roles of Numerical Linear Algebra in data science include:
- Dimensionality reduction: Techniques like Singular Value Decomposition (SVD) and Principal Component Analysis (PCA) are used to reduce the dimensionality of data while preserving important information (see the sketch after this list).
- Linear regression: Numerical Linear Algebra methods are used to perform linear regression, which is a common technique for modeling relationships between variables.
- Matrix factorizations: Matrix factorizations like QR decomposition and SVD are used in collaborative filtering, recommendation systems, and clustering algorithms.
- Optimization: Numerical Linear Algebra techniques are used in solving optimization problems, which are crucial in various data science tasks such as model training, parameter estimation, and model selection.
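A small PCA-via-SVD sketch (an added illustration on synthetic data, not part of the original text):
X = randn(100, 3);                    % synthetic data: 100 observations, 3 variables
Xc = X - repmat(mean(X), 100, 1);     % center each column
[U, S, V] = svd(Xc, 0);               % economy-size SVD; columns of V are principal directions
scores = Xc*V;                        % data expressed in the principal axes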