Short Notes: Algorithms - Divide & Conquer | Computer Science Engineering (CSE)

Algorithms: Divide and Conquer
Divide and Conquer Design Technique:
Divide the problem into smaller but similar sub-problems (divide), solve them (conquer), and combine these solutions to create a solution to the original problem.
• Divide: Break the problem into several sub-problems that are similar to the original problem but smaller in size.
• Conquer: Solve the sub-problems recursively (successively and independently). If the sub-problems are small enough, solve them in brute-force fashion.
• Combine: Combine the solutions to the sub-problems into a solution to the original problem.
Divide & Conquer Algorithms:
• Merge sort
• Quicksort
• Binary search
• Multiplication of large integers
• Matrix multiplication: Strassen’s algorithm
• Closest-pair algorithms
The running time of many divide-and-conquer algorithms can be determined using the master theorem.
Master theorem:
T(n) = a T(n/b) + f(n),
for constants a ≥ 1 and b > 1 with f asymptotically positive, the following statements are true:
Case 1: If f(n) = O(n^(log_b a − ε)) for some ε > 0, then T(n) = Θ(n^(log_b a)).
Case 2: If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) log n).
Case 3: If f(n) = Ω(n^(log_b a + ε)) for some ε > 0 (and a f(n/b) ≤ c f(n) for some c < 1 and all sufficiently large n), then T(n) = Θ(f(n)).
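As a worked example of Case 2, consider merge sort's recurrence T(n) = 2T(n/2) + cn: here a = 2, b = 2, log_b a = 1, and f(n) = Θ(n), so T(n) = Θ(n log n). A small Python sketch (not part of the original notes; the constant c = 1 and base case T(1) = 1 are assumptions) unrolls the recurrence to illustrate that, for n a power of two, T(n) = n log2(n) + n exactly:

```python
import math

def T(n, c=1):
    """Unroll T(n) = 2*T(n/2) + c*n with T(1) = c (merge sort's recurrence)."""
    if n <= 1:
        return c
    return 2 * T(n // 2, c) + c * n

# For n a power of two and c = 1, T(n) equals n*log2(n) + n exactly:
for k in [4, 16, 64, 256]:
    assert T(k) == k * int(math.log2(k)) + k
```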
Maximum and Minimum:
MaxMin(i, j, max, min)
{
    if (i = j) then
        max := min := a[i];
    else if (i = j - 1) then
    {
        if (a[i] < a[j]) then
        { max := a[j]; min := a[i]; }
        else
        { max := a[i]; min := a[j]; }
    }
    else
    {
        mid := (i + j) / 2;
        // Solve the sub-problems.
        MaxMin( i, mid, max, min );
        MaxMin( mid + 1, j, max1, min1 );
        // Combine the solutions.
        if (max < max1) then max := max1;
        if (min > min1) then min := min1;
    }
}
Time Complexity (counting element comparisons):
T(n) = 0             if n = 1
     = 1             if n = 2
     = 2T(n/2) + 2   if n > 2
When n is a power of two, n = 2^k for some positive integer k, then
T(n) = 2T(n/2) + 2
     = 2(2T(n/4) + 2) + 2
     = 4T(n/4) + 4 + 2
     ...
     = 2^(k-1) T(2) + Σ_{1 ≤ i ≤ k-1} 2^i
     = 2^(k-1) + 2^k - 2
     = 3n/2 - 2 = Θ(n)
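The MaxMin pseudocode above can be translated into a runnable Python sketch (a translation, not the notes' original code) that also counts comparisons, confirming the 3n/2 - 2 bound for n a power of two:

```python
def max_min(a, i, j):
    """Divide-and-conquer max/min of a[i..j] (inclusive), mirroring the
    MaxMin pseudocode. Returns (maximum, minimum, comparison_count)."""
    if i == j:                      # one element: no comparisons
        return a[i], a[i], 0
    if i == j - 1:                  # two elements: one comparison
        return max(a[i], a[j]), min(a[i], a[j]), 1
    mid = (i + j) // 2
    max1, min1, c1 = max_min(a, i, mid)
    max2, min2, c2 = max_min(a, mid + 1, j)
    # combining the two halves costs two more comparisons
    return max(max1, max2), min(min1, min2), c1 + c2 + 2

data = [31, 7, 42, -5, 18, 99, 0, 12]          # n = 8, a power of two
mx, mn, comps = max_min(data, 0, len(data) - 1)
assert (mx, mn) == (99, -5)
assert comps == 3 * len(data) // 2 - 2          # 3n/2 - 2 = 10
```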
Matrix Multiplication:
Simple approach of multiplication:
The product of two n x n matrices X and Y is a third n x n matrix Z = XY, with (i, j)th entry:
Z_ij = Σ_{k=1}^{n} X_ik Y_kj
• Input: X, Y: Array(1..n, 1..n) of number
• Output: Z: Array(1..n, 1..n) := X x Y
• Obvious algorithm: Z has n² entries, each of which sums n products
• Break X into X11, X12, X21, X22
• Break Y into Y11, Y12, Y21, Y22
• Break Z into Z11, Z12, Z21, Z22
• Each block of Z is a sum of products of n/2 x n/2 blocks:
Z11 = X11 x Y11 + X12 x Y21
Z12 = X11 x Y12 + X12 x Y22
Z21 = X21 x Y11 + X22 x Y21
Z22 = X21 x Y12 + X22 x Y22
• T(n) = Θ(1) + 8T(n/2) + Θ(n²)
    ° Θ(1) time to partition the matrices
    ° 8 multiplications of blocks of size n/2 x n/2
    ° Θ(n²) time to add n x n matrices
• T(n) = Θ(n³) by the master method
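The 8-multiplication blockwise scheme above can be sketched in Python as follows (an illustrative sketch assuming n is a power of two; the helper names `quad` and `mat_add` are our own, not from the notes):

```python
def mat_add(X, Y):
    """Entrywise sum of two equally sized matrices."""
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def block_multiply(X, Y):
    """Blockwise n x n multiply (n a power of two): 8 recursive
    multiplications of n/2 x n/2 blocks, exactly as in the scheme above."""
    n = len(X)
    if n == 1:
        return [[X[0][0] * Y[0][0]]]
    h = n // 2
    def quad(M):  # split M into its four n/2 x n/2 quadrants
        return ([r[:h] for r in M[:h]], [r[h:] for r in M[:h]],
                [r[:h] for r in M[h:]], [r[h:] for r in M[h:]])
    X11, X12, X21, X22 = quad(X)
    Y11, Y12, Y21, Y22 = quad(Y)
    Z11 = mat_add(block_multiply(X11, Y11), block_multiply(X12, Y21))
    Z12 = mat_add(block_multiply(X11, Y12), block_multiply(X12, Y22))
    Z21 = mat_add(block_multiply(X21, Y11), block_multiply(X22, Y21))
    Z22 = mat_add(block_multiply(X21, Y12), block_multiply(X22, Y22))
    top = [r1 + r2 for r1, r2 in zip(Z11, Z12)]
    bot = [r1 + r2 for r1, r2 in zip(Z21, Z22)]
    return top + bot

assert block_multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]]) == [[19, 22], [43, 50]]
```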
Strassen's Algorithm:
Matrix multiplication is particularly easy to break into sub-problems, because it can be performed blockwise. Divide X into four n/2 x n/2 blocks, and likewise Y:
X = [ A  B ]      Y = [ E  F ]
    [ C  D ]          [ G  H ]
Therefore, the product of X and Y is:
XY = [ AE + BG   AF + BH ]
     [ CE + DG   CF + DH ]
• Uses 7 rather than 8 multiplications of n/2 x n/2 blocks
• Has more additions and subtractions of n/2 x n/2 blocks
XY = [ P5 + P4 - P2 + P6   P1 + P2            ]
     [ P3 + P4             P1 + P5 - P3 - P7  ]
where
P1 = A(F - H)
P2 = (A + B)H
P3 = (C + D)E
P4 = D(G - E)
P5 = (A + D)(E + H)
P6 = (B - D)(G + H)
P7 = (A - C)(E + F)
• T(n) = 7T(n/2) + Θ(n²) for n > 1
• Solution (by the master method): T(n) = Θ(n^(log2 7)) = O(n^2.81)
• Note: there are other matrix multiplication algorithms which run even faster, with T(n) = O(n^2.38)
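The seven products and the combination rule above can be written out as a Python sketch (illustrative only, assuming n is a power of two; the `add`/`sub`/`quad` helpers are our own):

```python
def add(X, Y): return [[a + b for a, b in zip(r, s)] for r, s in zip(X, Y)]
def sub(X, Y): return [[a - b for a, b in zip(r, s)] for r, s in zip(X, Y)]

def strassen(X, Y):
    """Strassen multiply for n x n matrices (n a power of two),
    using the seven products P1..P7 listed above."""
    n = len(X)
    if n == 1:
        return [[X[0][0] * Y[0][0]]]
    h = n // 2
    def quad(M):  # split into the four n/2 x n/2 blocks
        return ([r[:h] for r in M[:h]], [r[h:] for r in M[:h]],
                [r[:h] for r in M[h:]], [r[h:] for r in M[h:]])
    A, B, C, D = quad(X)
    E, F, G, H = quad(Y)
    P1 = strassen(A, sub(F, H))
    P2 = strassen(add(A, B), H)
    P3 = strassen(add(C, D), E)
    P4 = strassen(D, sub(G, E))
    P5 = strassen(add(A, D), add(E, H))
    P6 = strassen(sub(B, D), add(G, H))
    P7 = strassen(sub(A, C), add(E, F))
    Z11 = add(sub(add(P5, P4), P2), P6)   # P5 + P4 - P2 + P6
    Z12 = add(P1, P2)                     # P1 + P2
    Z21 = add(P3, P4)                     # P3 + P4
    Z22 = sub(sub(add(P1, P5), P3), P7)   # P1 + P5 - P3 - P7
    return [r1 + r2 for r1, r2 in zip(Z11, Z12)] + \
           [r1 + r2 for r1, r2 in zip(Z21, Z22)]

assert strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]) == [[19, 22], [43, 50]]
```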
Maximal Subarray:
Divide-and-conquer approach to find the maximal subarray:
maxsub(int[] S; low, high: int)
    if low = high then return (low, high, S(low)); // return (lowIndex, highIndex, sum)
    else
        mid = (low + high) / 2;
        (llow, lhigh, lsum) = maxsub(S, low, mid);
        (rlow, rhigh, rsum) = maxsub(S, mid + 1, high);
        (mlow, mhigh, msum) = middlemaxsub(S, low, mid, high);
    end if;
    return the triple with the highest sum
end maxsub

middlemaxsub(int[] S; low, mid, high: int) return (lowIndex, highIndex, sum)
    start at mid and scan left to find bestleft and leftsum
    start at mid + 1 and scan right to find bestright and rightsum
    return (bestleft, bestright, leftsum + rightsum)
end middlemaxsub
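A runnable Python version of the pseudocode above (a translation, not the notes' original; function names mirror the pseudocode):

```python
def middlemaxsub(S, low, mid, high):
    """Best sum of a subarray that crosses mid (the combine step above)."""
    left_sum, total, best_left = float('-inf'), 0, mid
    for i in range(mid, low - 1, -1):          # scan left from mid
        total += S[i]
        if total > left_sum:
            left_sum, best_left = total, i
    right_sum, total, best_right = float('-inf'), 0, mid + 1
    for j in range(mid + 1, high + 1):         # scan right from mid + 1
        total += S[j]
        if total > right_sum:
            right_sum, best_right = total, j
    return best_left, best_right, left_sum + right_sum

def maxsub(S, low, high):
    """Divide-and-conquer maximal subarray; returns (lowIndex, highIndex, sum)."""
    if low == high:
        return low, high, S[low]
    mid = (low + high) // 2
    left = maxsub(S, low, mid)
    right = maxsub(S, mid + 1, high)
    cross = middlemaxsub(S, low, mid, high)
    return max(left, right, cross, key=lambda t: t[2])

S = [-2, 1, -3, 4, -1, 2, 1, -5, 4]
assert maxsub(S, 0, len(S) - 1) == (3, 6, 6)   # subarray [4, -1, 2, 1]
```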
Closest Pair Algorithm:
Input: P = { p(1), p(2), ..., p(n) }, where p(i) = (x(i), y(i)): a set of n points in the plane.
Output: the distance between the two points that are closest.
Note: the distance DELTA(i, j) between p(i) and p(j) is defined by:
DELTA(i, j) = square root of { (x(i) - x(j))² + (y(i) - y(j))² }
float function closest_pair (P: point set; n: integer)
    float DELTA-LEFT, DELTA-RIGHT;
    float DELTA;
begin
    if n = 2 then
        return distance from p(1) to p(2);
    else
        P-LEFT  := ( p(1), p(2), ..., p(n/2) );
        P-RIGHT := ( p(n/2+1), p(n/2+2), ..., p(n) );
        DELTA-LEFT  := closest_pair( P-LEFT, n/2 );
        DELTA-RIGHT := closest_pair( P-RIGHT, n/2 );
        DELTA := minimum( DELTA-LEFT, DELTA-RIGHT );
        /* Determine whether there are points p(l) in P-LEFT and p(r) in P-RIGHT with
           distance( p(l), p(r) ) < DELTA. If there are such points, set DELTA to the
           smallest such distance. */
        return DELTA;
    end if;
end closest_pair;
The section between the two comment lines is the 'combine' stage of the divide-and-conquer algorithm. If there are points p(l) and p(r) whose distance apart is less than DELTA, then it must be the case that:
• The x-coordinates of p(l) and p(r) differ by at most DELTA.
• The y-coordinates of p(l) and p(r) differ by at most DELTA.
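The outline above, including the strip check in the combine stage, can be sketched in Python (an illustrative sketch; the brute-force cutoff at n ≤ 3 and the helper names are our own choices):

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def closest_pair(points):
    """Distance between the closest pair of points, following the
    divide-and-conquer outline above."""
    pts = sorted(points)                     # sort once by x-coordinate
    def solve(P):
        n = len(P)
        if n <= 3:                           # brute force small cases
            return min(dist(P[i], P[j])
                       for i in range(n) for j in range(i + 1, n))
        mid = n // 2
        mid_x = P[mid][0]
        delta = min(solve(P[:mid]), solve(P[mid:]))
        # combine: only points within delta of the dividing line can
        # form a pair closer than delta
        strip = sorted((p for p in P if abs(p[0] - mid_x) < delta),
                       key=lambda p: p[1])
        for i, p in enumerate(strip):
            for q in strip[i + 1:]:
                if q[1] - p[1] >= delta:     # too far apart in y
                    break
                delta = min(delta, dist(p, q))
        return delta
    return solve(pts)

pts = [(0, 0), (5, 4), (3, 1), (9, 6), (3.5, 1.5)]
assert math.isclose(closest_pair(pts), math.hypot(0.5, 0.5))
```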
Integer Multiplication:
An algorithm for multiplying two n-bit integers using the divide-and-conquer approach is given below:
multiply(x, y)
Input: positive integers x and y, in binary
Output: their product
    n = max(size of x, size of y)
    if n = 1: return xy
    x_L, x_R = leftmost ⌈n/2⌉, rightmost ⌊n/2⌋ bits of x
    y_L, y_R = leftmost ⌈n/2⌉, rightmost ⌊n/2⌋ bits of y
    P1 = multiply(x_L, y_L)
    P2 = multiply(x_R, y_R)
    P3 = multiply(x_L + x_R, y_L + y_R)
    return P1 × 2^n + (P3 - P1 - P2) × 2^(n/2) + P2
T(n) = 3T(n/2) + O(n), which solves to O(n^(log2 3)) ≈ O(n^1.59).
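The three-product trick above (Karatsuba multiplication) can be sketched in Python; this version splits on decimal digits rather than bits for readability, which is our simplification, not the notes' formulation:

```python
def karatsuba(x, y):
    """Karatsuba multiplication with three recursive products,
    following the multiply pseudocode above (decimal digits, not bits)."""
    if x < 10 or y < 10:                      # base case: single digit
        return x * y
    n = max(len(str(x)), len(str(y)))
    half = n // 2
    x_hi, x_lo = divmod(x, 10 ** half)        # split x into high/low halves
    y_hi, y_lo = divmod(y, 10 ** half)
    p1 = karatsuba(x_hi, y_hi)
    p2 = karatsuba(x_lo, y_lo)
    p3 = karatsuba(x_hi + x_lo, y_hi + y_lo)
    # (p3 - p1 - p2) is the cross term x_hi*y_lo + x_lo*y_hi
    return p1 * 10 ** (2 * half) + (p3 - p1 - p2) * 10 ** half + p2

assert karatsuba(1234, 5678) == 1234 * 5678
```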

FAQs on Short Notes: Algorithms-Divide & Conquer - Short Notes for Computer Science Engineering - Computer Science Engineering (CSE)

1. What is the divide and conquer algorithm in computer science engineering?
Ans. The divide and conquer algorithm is a problem-solving approach in computer science engineering that involves breaking down a problem into smaller subproblems, solving them recursively, and then combining the solutions to solve the original problem.
2. How does the divide and conquer algorithm work?
Ans. The divide and conquer algorithm works by dividing the problem into smaller subproblems until they become trivial to solve. These subproblems are then solved independently, and their solutions are combined to solve the original problem. This approach is typically implemented using recursion.
3. What are the advantages of using the divide and conquer algorithm?
Ans. The divide and conquer algorithm offers several advantages. It allows for efficient problem-solving by breaking down complex problems into smaller, more manageable subproblems. It can also lead to parallelism, as the subproblems can be solved concurrently. Additionally, it allows for code reuse, as the same algorithm can be used to solve similar problems by modifying the base case and combining step.
4. What are some examples of problems that can be solved using the divide and conquer algorithm?
Ans. The divide and conquer algorithm can be used to solve various problems. Some examples include sorting algorithms like merge sort and quicksort, searching algorithms like binary search, matrix multiplication, finding the maximum subarray sum, and constructing efficient algorithms for problems like the traveling salesman problem and closest pair of points problem.
5. What is the time complexity of the divide and conquer algorithm?
Ans. The time complexity of the divide and conquer algorithm depends on the specific problem and the efficiency of the recursive steps. In general, if the problem is divided into two subproblems of equal size, and the time complexity of combining the solutions is O(1), then the time complexity can be expressed as O(n log n), where n is the size of the input. However, this can vary for different problems and algorithms.