
Short Notes: Algorithms-Divide & Conquer

Algorithms: Divide and Conquer
Divide and Conquer Design Technique:
Divide the problem into smaller but similar sub-problems (divide), solve them
(conquer), and combine these solutions to create a solution to the original
problem.
• Divide: Break the problem into several sub-problems that are similar to the
original problem but smaller in size.
• Conquer: Solve the sub-problems recursively (successively and
independently). If the sub-problems are small enough, solve them in brute-force
fashion.
• Combine: Combine the solutions to the sub-problems into a solution to the
original problem.
Divide & Conquer Algorithms:
• Merge sort
• Quicksort
• Binary search
• Multiplication of large integers
• Matrix multiplication: Strassen’s algorithm
• Closest-pair algorithms
The running time of many divide-and-conquer algorithms can be analysed with
the master theorem.
Master theorem:
T(n) = a T(n/b) + f(n),
for constants a (>= 1) and b (> 1) with f asymptotically positive,
the following statements are true:
Case 1. If f(n) = O(n^(log_b a - e)) for some e > 0, then T(n) = Theta(n^(log_b a)).
Case 2. If f(n) = Theta(n^(log_b a)), then T(n) = Theta(n^(log_b a) log n).
Case 3. If f(n) = Omega(n^(log_b a + e)) for some e > 0
(and a f(n/b) <= c f(n) for some c < 1 for all n sufficiently large), then
T(n) = Theta(f(n)).
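For the common special case f(n) = Theta(n^d), the three cases can be checked mechanically. The sketch below (the name solve_master is my own, not from the notes) assumes f(n) is a plain polynomial, so the regularity condition of Case 3 holds automatically:

```python
import math

def solve_master(a, b, d):
    """Solve T(n) = a*T(n/b) + Theta(n^d) via the master theorem.

    Returns a string describing Theta(T(n)). Assumes a >= 1, b > 1, d >= 0.
    """
    crit = math.log(a, b)          # critical exponent log_b(a)
    if d < crit:                   # Case 1: f grows slower than n^(log_b a)
        return f"n^{crit:.2f}"
    elif d == crit:                # Case 2: f matches the critical exponent
        return f"n^{d} log n"
    else:                          # Case 3: f dominates
        return f"n^{d}"

# Merge sort: T(n) = 2T(n/2) + Theta(n)   -> Theta(n log n)
print(solve_master(2, 2, 1))       # n^1 log n
# Strassen:  T(n) = 7T(n/2) + Theta(n^2)  -> Theta(n^(log2 7))
print(solve_master(7, 2, 2))       # n^2.81
```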
Maximum and Minimum:
MaxMin(i, j, max, min)
{
    if (i = j) then
        max := min := a[i];
    else if (i = j - 1) then
    {
        if (a[i] < a[j]) then
            { max := a[j]; min := a[i]; }
        else
            { max := a[i]; min := a[j]; }
    }
    else
    {
        mid := (i + j) / 2;
        // Solve the sub-problems.
        MaxMin(i, mid, max, min);
        MaxMin(mid + 1, j, max1, min1);
        // Combine the solutions.
        if (max < max1) then max := max1;
        if (min > min1) then min := min1;
    }
}
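A runnable Python version of the pseudocode above might look like this; since Python lacks reference parameters, it returns the (max, min) pair instead:

```python
def max_min(a, i, j):
    """Return (max, min) of a[i..j] using divide and conquer."""
    if i == j:                          # one element: it is both max and min
        return a[i], a[i]
    if i == j - 1:                      # two elements: a single comparison
        return (a[j], a[i]) if a[i] < a[j] else (a[i], a[j])
    mid = (i + j) // 2
    max1, min1 = max_min(a, i, mid)         # solve the sub-problems
    max2, min2 = max_min(a, mid + 1, j)
    return max(max1, max2), min(min1, min2)  # combine the solutions

data = [22, 13, -5, -8, 15, 60, 17, 31, 47]
print(max_min(data, 0, len(data) - 1))      # (60, -8)
```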
Time Complexity:
T(n) = 0              if n = 1
     = 1              if n = 2
     = 2T(n/2) + 2    if n > 2
When n is a power of two, n = 2^k, for some positive integer k, then
T(n) = 2T(n/2) + 2
     = 2(2T(n/4) + 2) + 2
     = 4T(n/4) + 4 + 2
     ...
     = 2^(k-1) T(2) + sum over 1 <= i <= k-1 of 2^i
     = 2^(k-1) + 2^k - 2
     = 3n/2 - 2 = O(n)
Matrix Multiplication:
Simple approach of multiplication:
The product of two n x n matrices X and Y is a third n x n matrix Z = XY, with (i, j)th
entry:
Z_ij = sum over k = 1 to n of X_ik * Y_kj
• Input: X, Y: Array(1 .. n, 1 .. n) of number
• Output: Z: Array(1 .. n, 1 .. n) := X x Y
• Obvious algorithm: Z has n^2 entries, each of which multiplies n pairs
  ° Z_ij = sum of X_ik * Y_kj
• Break X into X11, X12, X21, X22
• Break Y into Y11, Y12, Y21, Y22
• Break Z into Z11, Z12, Z21, Z22
• Each block of Z is a sum of products of two n/2 x n/2 blocks:
Z11 = X11 x Y11 + X12 x Y21
Z12 = X11 x Y12 + X12 x Y22
Z21 = X21 x Y11 + X22 x Y21
Z22 = X21 x Y12 + X22 x Y22
• T(n) = O(1) + 8T(n/2) + O(n^2)
  ° O(1) time to partition the matrices
  ° 8 multiplications of n/2 x n/2 blocks
  ° O(n^2) time to add n x n matrices
• T(n) = O(n^3) by the master method
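Assuming n is a power of two, the eight-multiplication blockwise recursion above can be sketched in Python (matmul, split, add, and join are illustrative names, not from the notes; this is didactic, not efficient):

```python
def split(M):
    """Split an n x n matrix (list of lists, n even) into four n/2 blocks."""
    h = len(M) // 2
    A = [row[:h] for row in M[:h]]
    B = [row[h:] for row in M[:h]]
    C = [row[:h] for row in M[h:]]
    D = [row[h:] for row in M[h:]]
    return A, B, C, D

def add(P, Q):
    """Entrywise sum of two equal-size matrices."""
    return [[p + q for p, q in zip(pr, qr)] for pr, qr in zip(P, Q)]

def join(Z11, Z12, Z21, Z22):
    """Reassemble four n/2 blocks into one n x n matrix."""
    top = [r1 + r2 for r1, r2 in zip(Z11, Z12)]
    bot = [r1 + r2 for r1, r2 in zip(Z21, Z22)]
    return top + bot

def matmul(X, Y):
    """Divide-and-conquer product of two n x n matrices, n a power of two."""
    if len(X) == 1:                                 # base case: 1x1 product
        return [[X[0][0] * Y[0][0]]]
    X11, X12, X21, X22 = split(X)
    Y11, Y12, Y21, Y22 = split(Y)
    Z11 = add(matmul(X11, Y11), matmul(X12, Y21))   # 8 recursive multiplies
    Z12 = add(matmul(X11, Y12), matmul(X12, Y22))
    Z21 = add(matmul(X21, Y11), matmul(X22, Y21))
    Z22 = add(matmul(X21, Y12), matmul(X22, Y22))
    return join(Z11, Z12, Z21, Z22)

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))   # [[19, 22], [43, 50]]
```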
Strassen's Algorithm:
Matrix multiplication is particularly easy to break into subproblems, because it can
be performed blockwise. Divide X into four n/2 x n/2 blocks, and likewise Y:

X = | A  B |        Y = | E  F |
    | C  D |            | G  H |

Therefore, the product of X and Y is as follows:

XY = | AE + BG   AF + BH |
     | CE + DG   CF + DH |

• Uses 7 rather than 8 multiplications of n/2 x n/2 blocks
• Has more additions and subtractions of n/2 x n/2 blocks
XY = | P5 + P4 - P2 + P6      P1 + P2           |
     | P3 + P4                P1 + P5 - P3 - P7 |

where
P1 = A(F - H)
P2 = (A + B)H
P3 = (C + D)E
P4 = D(G - E)
P5 = (A + D)(E + H)
P6 = (B - D)(G + H)
P7 = (A - C)(E + F)

• T(n) = 7T(n/2) + O(n^2) if n > 1
• Solution (by the master method): T(n) = O(n^(log2 7)) = O(n^2.81)
• Note: There are other matrix multiplication algorithms which can run even
faster, with T(n) = O(n^2.38)
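The seven-product scheme can be written out directly. This Python sketch assumes n is a power of two and uses illustrative helper names (blk, add, sub) that are not from the notes:

```python
def strassen(X, Y):
    """Strassen's product of two n x n matrices, n a power of two (a sketch)."""
    n = len(X)
    if n == 1:
        return [[X[0][0] * Y[0][0]]]
    h = n // 2
    blk = lambda M, r, c: [row[c:c + h] for row in M[r:r + h]]
    A, B, C, D = blk(X, 0, 0), blk(X, 0, h), blk(X, h, 0), blk(X, h, h)
    E, F, G, H = blk(Y, 0, 0), blk(Y, 0, h), blk(Y, h, 0), blk(Y, h, h)
    add = lambda P, Q: [[p + q for p, q in zip(pr, qr)] for pr, qr in zip(P, Q)]
    sub = lambda P, Q: [[p - q for p, q in zip(pr, qr)] for pr, qr in zip(P, Q)]
    # The seven products from the notes.
    P1 = strassen(A, sub(F, H))
    P2 = strassen(add(A, B), H)
    P3 = strassen(add(C, D), E)
    P4 = strassen(D, sub(G, E))
    P5 = strassen(add(A, D), add(E, H))
    P6 = strassen(sub(B, D), add(G, H))
    P7 = strassen(sub(A, C), add(E, F))
    Z11 = add(sub(add(P5, P4), P2), P6)   # P5 + P4 - P2 + P6
    Z12 = add(P1, P2)                     # P1 + P2
    Z21 = add(P3, P4)                     # P3 + P4
    Z22 = sub(sub(add(P1, P5), P3), P7)   # P1 + P5 - P3 - P7
    top = [r1 + r2 for r1, r2 in zip(Z11, Z12)]
    bot = [r1 + r2 for r1, r2 in zip(Z21, Z22)]
    return top + bot

print(strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]))   # [[19, 22], [43, 50]]
```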
Maximal Subarray:
Divide-and-conquer approach to find the maximal subarray:

maxsub(int[] S; low, high: int)
    if low = high then return (low, high, S(low));  // return (lowIndex, highIndex, sum)
    else
        mid = (low + high) / 2;
        (llow, lhigh, lsum) = maxsub(S, low, mid);
        (rlow, rhigh, rsum) = maxsub(S, mid+1, high);
        (mlow, mhigh, msum) = middlemaxsub(S, low, mid, high);
    end if;
    return triple with highest sum
end maxsub

middlemaxsub(int[] S; low, mid, high: int) return (lowIndex, highIndex, sum)
    start at mid and scan left to find bestleft and leftsum
    start at mid+1 and scan right to find bestright and rightsum
    return (bestleft, bestright, leftsum + rightsum)
end middlemaxsub
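The pseudocode above can be filled in as runnable Python; the crossing-subarray scan follows the middlemaxsub outline:

```python
def max_crossing(S, low, mid, high):
    """Best subarray crossing the midpoint: (lowIndex, highIndex, sum)."""
    left_sum, total, best_left = float("-inf"), 0, mid
    for i in range(mid, low - 1, -1):          # scan left from mid
        total += S[i]
        if total > left_sum:
            left_sum, best_left = total, i
    right_sum, total, best_right = float("-inf"), 0, mid + 1
    for j in range(mid + 1, high + 1):         # scan right from mid+1
        total += S[j]
        if total > right_sum:
            right_sum, best_right = total, j
    return best_left, best_right, left_sum + right_sum

def maxsub(S, low, high):
    """Return (lowIndex, highIndex, sum) of a maximal subarray of S[low..high]."""
    if low == high:
        return low, high, S[low]
    mid = (low + high) // 2
    left = maxsub(S, low, mid)
    right = maxsub(S, mid + 1, high)
    middle = max_crossing(S, low, mid, high)
    return max(left, right, middle, key=lambda t: t[2])  # triple with highest sum

S = [13, -3, -25, 20, -3, -16, -23, 18, 20, -7, 12, -5, -22]
print(maxsub(S, 0, len(S) - 1))    # (7, 10, 43)
```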
Closest Pair Algorithm:
Input: P = { p(1), p(2), ..., p(n) } where p(i) = (x(i), y(i)); a set of n points in the
plane.
Output: The distance between the two points that are closest.
Note: The distance DELTA(i, j) between p(i) and p(j) is defined by the expression:
DELTA(i, j) = square root of { (x(i) - x(j))^2 + (y(i) - y(j))^2 }

float function closest_pair (P: point set; n: integer)
float DELTA-LEFT, DELTA-RIGHT;
float DELTA;
begin
    if n = 2 then
        return distance from p(1) to p(2);
    else
        P-LEFT := ( p(1), p(2), ..., p(n/2) );
        P-RIGHT := ( p(n/2+1), p(n/2+2), ..., p(n) );
        DELTA-LEFT := closest_pair( P-LEFT, n/2 );
        DELTA-RIGHT := closest_pair( P-RIGHT, n/2 );
        DELTA := minimum( DELTA-LEFT, DELTA-RIGHT );
        /* Determine whether there are points p(l) in P-LEFT and p(r) in P-RIGHT with
           distance( p(l), p(r) ) < DELTA. If there are such points, set DELTA to be the
           smallest such distance. */
        return DELTA;
    end if;
end closest_pair;
The section between the two comment lines is the 'combine' stage of the divide-and-conquer algorithm.
If there are points p(l) and p(r) whose distance apart is less than DELTA, then it
must be the case that:
• The x-coordinates of p(l) and p(r) differ by at most DELTA.
• The y-coordinates of p(l) and p(r) differ by at most DELTA.
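A complete Python sketch of the algorithm, including that combine stage: only points within DELTA of the dividing line are candidates, and each needs comparing with at most 7 neighbours in y-order (the function and variable names are my own):

```python
import math

def closest_pair(points):
    """Distance between the closest two points; divide and conquer.

    A sketch: points is a list of (x, y) tuples with len(points) >= 2.
    """
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def rec(px):                      # px: points sorted by x-coordinate
        n = len(px)
        if n <= 3:                    # brute force the small cases
            return min(dist(p, q) for i, p in enumerate(px) for q in px[i+1:])
        mid = n // 2
        mid_x = px[mid][0]
        delta = min(rec(px[:mid]), rec(px[mid:]))
        # Combine: only points within delta of the dividing line can beat delta.
        strip = sorted((p for p in px if abs(p[0] - mid_x) < delta),
                       key=lambda p: p[1])
        for i, p in enumerate(strip):
            for q in strip[i+1:i+8]:  # at most 7 neighbours need checking
                if q[1] - p[1] >= delta:
                    break
                delta = min(delta, dist(p, q))
        return delta

    return rec(sorted(points))

pts = [(0, 0), (5, 5), (1, 1), (9, 6), (2, 7)]
print(closest_pair(pts))    # 1.4142... for the pair (0,0)-(1,1)
```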
Integer Multiplication:
Multiplying two n-bit integers using a divide-and-conquer approach (Karatsuba's
algorithm):

function multiply(x, y)
Input: Positive integers x and y, in binary
Output: Their product

    n = max(size of x, size of y)
    if n = 1: return xy
    x_L, x_R = leftmost ceil(n/2), rightmost floor(n/2) bits of x
    y_L, y_R = leftmost ceil(n/2), rightmost floor(n/2) bits of y
    P1 = multiply(x_L, y_L)
    P2 = multiply(x_R, y_R)
    P3 = multiply(x_L + x_R, y_L + y_R)
    return P1 x 2^n + (P3 - P1 - P2) x 2^(n/2) + P2

T(n) = 3T(n/2) + O(n), which solves to O(n^(log2 3)) = O(n^1.59) by the master
theorem.
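A runnable Python version of the three-product recursion. Note one detail the pseudocode glosses over: after splitting at half = floor(n/2) bits, the recombination must shift by 2*half and half rather than literally n and n/2 when n is odd:

```python
def multiply(x, y):
    """Karatsuba multiplication of non-negative integers (a sketch)."""
    n = max(x.bit_length(), y.bit_length())
    if n <= 1:                        # single-bit operands: direct product
        return x * y
    half = n // 2                     # split off the rightmost floor(n/2) bits
    x_L, x_R = x >> half, x & ((1 << half) - 1)
    y_L, y_R = y >> half, y & ((1 << half) - 1)
    P1 = multiply(x_L, y_L)
    P2 = multiply(x_R, y_R)
    P3 = multiply(x_L + x_R, y_L + y_R)   # one product gives both cross terms
    return (P1 << (2 * half)) + ((P3 - P1 - P2) << half) + P2

print(multiply(1234, 5678))    # 7006652
```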
Summary:
(The notes conclude with a table of worst-case, best-case, and average-case
running times for the algorithms above; the table body did not survive
extraction.)
FAQs on Short Notes: Algorithms-Divide & Conquer

1. What's the basic idea behind divide and conquer algorithms and how does it actually work?
Ans. Divide and conquer breaks a problem into smaller independent subproblems, solves each recursively, then combines their solutions into the final answer. It works by dividing the input, conquering each part separately, and merging results efficiently. This approach powers algorithms like merge sort and quicksort, reducing overall time complexity significantly through recursive decomposition.
2. How do merge sort and quicksort use the divide and conquer strategy differently?
Ans. Merge sort divides the array equally, sorts each half, then merges the sorted subarrays, giving consistent O(n log n) performance. Quicksort partitions around a pivot, recursively sorts the partitions, and needs no extra merging step. Quicksort's average case matches merge sort and it sorts in place, but its worst case is O(n^2); merge sort guarantees O(n log n) but requires additional space for merging.
3. Why is divide and conquer better than just solving problems the straightforward way?
Ans. Divide and conquer reduces time complexity through recursive decomposition. For instance, finding the maximum and minimum of an array drops from roughly 2n comparisons to about 3n/2 using this technique. By splitting the work into balanced, independent subproblems, it avoids redundant comparisons and makes algorithms far more efficient on large inputs than naive iteration or brute force.
4. What are the main steps I need to follow when designing a divide and conquer solution?
Ans. Designing a divide and conquer algorithm involves three steps: decide how to divide the problem into independent subproblems of similar structure, establish base cases for termination, and define the combine operation that merges the subproblem solutions. Recurrence relations and recursion trees then give the time complexity.
5. When should I use divide and conquer instead of dynamic programming or greedy algorithms?
Ans. Divide and conquer suits problems whose subproblems are independent (non-overlapping); when subproblems recur, dynamic programming's memoisation pays off instead. Greedy algorithms work when locally optimal choices yield a global optimum; divide and conquer excels at balanced decomposition with an efficient combine step. Binary search, merge sort, and Strassen's matrix multiplication are classic examples where it outperforms the alternatives.