All questions of Asymptotic Analysis of Algorithms for Computer Science Engineering (CSE) Exam

When n = 2^(2k) for some k ≥ 0, the recurrence relation
T(n) = √2 T(n/2) + √n, T(1) = 1
evaluates to :
  • a)
    √(n) (log n + 1)
  • b)
    √(n) (log n )
  • c)
    √(n) log √(n)
  • d)
    n log √(n)
Correct answer is option 'A'. Can you explain this answer?

Ravi Singh answered
Please note that the question asks for the exact solution. The Master Theorem only gives results in asymptotic notation, so we can't apply it here. We can solve this recurrence by simple expansion (the recursion-tree method):
T(n) = √2 T(n/2) + √n
= √2 [√2 T(n/4) + √(n/2)] + √n = 2 T(n/4) + √n + √n          (since √2·√(n/2) = √n)
= 2 [√2 T(n/8) + √(n/4)] + √n + √n = √(2^3) T(n/2^3) + 3√n   (since 2·√(n/4) = √n)
.............................................
= √(2^k) T(n/2^k) + k√n
Putting k = log₂n:
= √(2^(log₂n)) T(1) + (log₂n)·√n = √n + √n·log₂n = √n (log₂n + 1)
Alternate Solution: This can be checked quickly by substitution. T(1) = 1 is given. Putting n = 2 in the recurrence gives T(2) = √2·T(1) + √2 = 2√2 ≈ 2.83. Checking the options at n = 2, only option (A), √2·(log₂2 + 1) = 2√2, matches.

Suppose T(n) = 2T(n/2) + n, T(0) = T(1) = 1. Which one of the following is FALSE?
  • a)
    T(n) = O(n^2)
  • b)
  • c)
    T(n) = Ω(n^2)
  • d)
    T(n) = O(nlogn)
Correct answer is option 'C'. Can you explain this answer?

Yash Patel answered
T(0) = T(1) = 1
T(n) = 2T(n/2) + n
By the Master Theorem (a = 2, b = 2, f(n) = n, case 2), T(n) = Θ(n log n).
If T(n) = Θ(n log n), then it is also O(n log n) and O(n^2), but it is not Ω(n^2). So option (C) is FALSE.

What is the worst case time complexity of insertion sort where position of the data to be inserted is calculated using binary search?
  • a)
    N
  • b)
    NlogN
  • c)
    N^2
  • d)
    N(logN)^2
Correct answer is option 'C'. Can you explain this answer?

Prisha Desai answered
Explanation:

Binary Search for Insertion:
- Binary search finds the correct position for the i-th element in O(logN) comparisons, instead of the O(N) comparisons of a linear scan.

Insertion Time Complexity:
- Once the correct position is found, shifting the elements to make space for the insertion still takes O(N) time in the worst case.
- Thus, each insertion costs O(logN) + O(N) = O(N) in the worst case.

Overall Time Complexity:
- Since this is done for each of the N elements, the overall worst-case time complexity is N · O(N) = O(N^2).
Therefore, the worst-case time complexity of insertion sort with binary search remains O(N^2): binary search reduces the number of comparisons, but not the data movement.

The minimum number of record movements required to merge five files A (with 10 records), B (with 20 records), C (with 15 records), D (with 5 records) and E (with 25 records) is:
  • a)
    165
  • b)
    90
  • c)
    75
  • d)
    65
Correct answer is option 'A'. Can you explain this answer?

Using optimal merge pattern algorithm arrange files in increasing order of records:
D  A   C     B   E
5  10  15   20  25 
Greedily merging the two smallest files each time: 5 + 10 = 15; 15 + 15 = 30; 20 + 25 = 45; 30 + 45 = 75. The minimum number of record movements required = sum of the internal nodes' values = 15 + 30 + 45 + 75 = 165. So, option (A) is correct.

For parameters a and b, both of which are ω(1), T(n) = T(n^(1/a)) + 1, and T(b) = 1. Then T(n) is
  • a)
    Θ(log_a log_b n)
  • b)
    Θ(log_ab n)
  • c)
    Θ(log_b log_a n)
  • d)
    Θ(log_2 log_2 n)
Correct answer is option 'A'. Can you explain this answer?

Prerna Joshi answered
Explanation:
T(n) = T(n^(1/a)) + 1 is a recursion in which each step replaces the input by its a-th root and does constant work.

Unfolding the recurrence:
- After one step the argument is n^(1/a); after k steps it is n^(1/a^k).
- The recursion bottoms out when n^(1/a^k) = b, since T(b) = 1.
- Taking logs: (1/a^k)·log n = log b, so a^k = log n / log b = log_b n.
- Taking log base a: k = log_a log_b n.

Why the Master Theorem does not apply:
- The Master Theorem covers recurrences of the form T(n) = c·T(n/b) + f(n), where the input shrinks by a constant factor. Here the input shrinks to a root of n, so we unfold directly (equivalently, substitute m = log n to get S(m) = S(m/a) + 1).

Conclusion:
- Each of the k levels does Θ(1) work, so T(n) = Θ(log_a log_b n), which is option 'A'.

If b is the branching factor and m is the maximum depth of the search tree, what is the space complexity of greedy search?
  • a)
    O(b+m)
  • b)
    O(bm)
  • c)
    O(b^m)
  • d)
    O(m^m)
Correct answer is option 'C'. Can you explain this answer?

Swati Patel answered
The space complexity of a search algorithm refers to the amount of memory required to execute the algorithm. In the case of greedy search, the space complexity can be determined by considering the number of nodes that need to be stored in memory during the search process.

The space complexity of greedy search can be analyzed by considering two factors: the branching factor (b) and the maximum depth of the search tree (m).

- **Branching Factor (b):** The branching factor represents the maximum number of child nodes that can be generated from a single node in the search tree. In other words, it determines the number of possible choices or actions available at each level of the search tree.

- **Maximum Depth of the Search Tree (m):** The maximum depth of the search tree represents the length of the longest path from the root node to any leaf node in the tree. It determines the maximum number of levels or steps required to reach a solution.

The space complexity of greedy search can be determined by considering the maximum number of nodes that need to be stored in memory at any given time during the search process.

- At each level of the search tree, the number of nodes can be at most equal to the branching factor raised to the power of the level. So, at level 1, there can be at most b^1 nodes, at level 2 there can be at most b^2 nodes, and so on.

- Since the maximum depth of the search tree is m, the total number of nodes in the search tree can be at most b^m.

Therefore, the space complexity of greedy search is given by O(b^m), where O represents the upper bound or worst-case scenario.

Hence, the correct option is (c) O(b^m)

What is recurrence for worst case of QuickSort and what is the time complexity in Worst case?
  • a)
    Recurrence is T(n) = T(n-2) + O(n) and time complexity is O(n^2)
  • b)
    Recurrence is T(n) = T(n-1) + O(n) and time complexity is O(n^2)
  • c)
    Recurrence is T(n) = 2T(n/2) + O(n) and time complexity is O(nLogn)
  • d)
    Recurrence is T(n) = T(n/10) + T(9n/10) + O(n) and time complexity is O(nLogn)
Correct answer is option 'B'. Can you explain this answer?

Harshitha Kaur answered
The worst case of QuickSort occurs when the picked pivot is always the smallest or largest element of the subarray, as happens, for example, on an already sorted array with the last element chosen as pivot. QuickSort then recursively calls one subproblem of size 0 and another of size (n-1), so the recurrence is T(n) = T(n-1) + T(0) + O(n). This can be rewritten as T(n) = T(n-1) + O(n), which solves to O(n^2). A reference implementation:
void exchange(int *a, int *b)
{
  int temp;
  temp = *a;
  *a   = *b;
  *b   = temp;
}
int partition(int arr[], int si, int ei)
{
  int x = arr[ei];
  int i = (si - 1);
  int j;
  for (j = si; j <= ei - 1; j++)
  {
    if(arr[j] <= x)
    {
      i++;
      exchange(&arr[i], &arr[j]);
    }
  }    
  exchange (&arr[i + 1], &arr[ei]);
  return (i + 1);
}    
/* Implementation of Quick Sort
arr[] --> Array to be sorted
si  --> Starting index
ei  --> Ending index
*/
void quickSort(int arr[], int si, int ei)
{
  int pi;    /* Partitioning index */
  if(si < ei)
  {
    pi = partition(arr, si, ei);
    quickSort(arr, si, pi - 1);
    quickSort(arr, pi + 1, ei);
  }
}

Consider the following segment of C-code:
int j, n;
j = 1;
while (j <= n)
    j = j * 2;
The number of comparisons made in the execution of the loop for any n > 0 is:
  • a)
      [logn] + 1
  • b)
    n
  • c)
    [log2 n]
  • d)
    [log2 n] + 2
Correct answer is option 'A'. Can you explain this answer?

Swati Kaur answered
The given C code segment is as follows:

```
int j, n;
j = 1;
while (j <= n)
    j = j * 2;
```

This code initializes `j` to 1 and then doubles it on every iteration, so the loop continues as long as `j <= n`.

1. Execution of the loop:
- After i iterations, `j` = 2^i.
- The loop body executes as long as 2^i ≤ n, so the body runs ⌊log₂n⌋ + 1 times.

2. Counting comparisons:
- The condition `j <= n` is evaluated once before each execution of the body, plus one final evaluation that fails and terminates the loop.
- So the number of comparisons is one more than the number of doublings, i.e., ⌈log₂(n+1)⌉ + 1, conventionally written ⌈log₂n⌉ + 1.

3. Conclusion:
- The number of comparisons is ⌈log₂n⌉ + 1, which is option 'A'.

In the following C function, let n ≥ m
int gcd(int n, int m) {
    if (n % m == 0) return m;
    n = n % m;
    return gcd(m, n);
}
How many recursive calls are made by this function?
  • a)
  • b)
  • c)
  • d)
Correct answer is option 'A'. Can you explain this answer?

The above code is an implementation of the Euclidean algorithm for finding the Greatest Common Divisor (GCD). In the worst case (n and m consecutive Fibonacci numbers) the function makes Θ(log₂n) recursive calls.
Please see http://mathworld.wolfram.com/EuclideanAlgorithm.html for the time-complexity analysis.

Given an unsorted array. The array has the property that every element is at most k positions away from its position in the sorted array, where k is a positive integer smaller than the size of the array. Which sorting algorithm can be easily modified to sort this array, and what is the obtainable time complexity?
  • a)
    Insertion Sort with time complexity O(kn)
  • b)
    Heap Sort with time complexity O(nLogk)
  • c)
    Quick Sort with time complexity O(kLogk)
  • d)
    Merge Sort with time complexity O(kLogk)
Correct answer is option 'B'. Can you explain this answer?

Arka Basak answered
Understanding the Problem
The array has the property that each element is at most k positions away from its sorted position. This allows us to use a more efficient approach than a general-purpose sort.
Why Heap Sort?
A min-heap can be adapted to sort this nearly sorted array efficiently:
- Heap of size k+1: The element that belongs at the front of the output must be among the first k+1 elements of the array, so a min-heap holding a sliding window of k+1 elements always has the next output element at its top.
- K-distance Handling: Pop the minimum, write it to the output, push the next array element into the heap, and repeat until all elements are output.
Time Complexity Breakdown
- Building the Heap: Inserting the first k+1 elements takes O(k) time with bottom-up heap construction.
- Extracting and Refilling: Each of the n elements is pushed and popped exactly once, and each heap operation on a heap of size k+1 takes O(log k) time, so the total is O(n log k).
Final Complexity
The overall time complexity of this modified Heap Sort is O(n log k). This beats the O(n log n) of a general comparison sort when k is small, and is far better than the O(n^2) of a naive quadratic sort on arbitrary input.
Conclusion
Thus, the best choice for sorting an array with the described property is Heap Sort, yielding a time complexity of O(n log k), option 'B'.

Which of the following sorting algorithms has the lowest worst-case complexity?
  • a)
    Merge Sort
  • b)
    Bubble Sort
  • c)
    Quick Sort
  • d)
    Selection Sort
Correct answer is option 'A'. Can you explain this answer?

Sanya Agarwal answered
Worst-case complexities for the above sorting algorithms:
- Merge Sort — O(n log n)
- Bubble Sort — O(n^2)
- Quick Sort — O(n^2)
- Selection Sort — O(n^2)

A list of n strings, each of length n, is sorted into lexicographic order using the merge-sort algorithm. The worst-case running time of this computation is
  • a)
    O(n log n)
  • b)
    O(n^2 log n)
  • c)
    O(n^2 + log n)
  • d)
    O(n^2)
Correct answer is option 'B'. Can you explain this answer?

Shivam Sharma answered
When sorting an array of n integers, the recurrence for the total number of comparisons is

T(n) = 2T(n/2) + Θ(n), where Θ(n) is the number of comparisons needed to merge two sorted subarrays of size n/2,
giving T(n) = Θ(n log₂n) comparisons.

Here, instead of integers whose comparison takes O(1) time, we are given n strings, and comparing two strings of length n takes O(n) time in the worst case. Merge sort still performs Θ(n log₂n) comparisons, but each comparison now costs O(n), so the total worst-case running time is O(n^2 log n).

In general, merge sort makes Θ(n log₂n) comparisons and runs in Θ(n log₂n) time only if each comparison can be done in O(1) time.


What is the size of the smallest MIS(Maximal Independent Set) of a chain of nine nodes?
  • a)
    5
  • b)
    4
  • c)
    3
  • d)
    2
Correct answer is option 'C'. Can you explain this answer?

Ravi Singh answered
A set of vertices is called an independent set if no two vertices in the set are adjacent. A maximal independent set (MIS) is an independent set that is not a subset of any other independent set. The question asks for the smallest MIS. In the chain below, the three marked vertices (the 2nd, 5th and 8th) form a maximal independent set (it is not a subset of any other independent set), and it is the smallest possible one.
o----●----o----o----●----o----o----●----o

Match the following with respect to algorithm paradigms :
  • a)
    (1)
  • b)
    (2)
  • c)
    (3)
  • d)
    (4)
Correct answer is option 'D'. Can you explain this answer?

Yash Patel answered
  1. Merge sort is a Divide and conquer algorithm
  2. Huffman coding is a Greedy approach
  3. Optimal polygon triangulation is Dynamic programming algorithm
  4. Subset sum problem is a Back tracking algorithm
So, option (D) is correct.

Consider the following functions:
f(n) = 2^n
g(n) = n!
h(n) = n^logn
Q. Which of the following statements about the asymptotic behavior of f(n), g(n), and h(n) is true?
  • a)
     f(n) = O(g(n)); g(n) = O(h(n))
  • b)
    f(n) = Ω(g(n)); g(n) = O(h(n))
  • c)
    g(n) = O(f(n)); h(n) = O(f(n))
  • d)
    h(n) = O(f(n)); g(n) = Ω(f(n))
Correct answer is option 'D'. Can you explain this answer?

Abhay Ghoshal answered
According to order of growth: h(n) < f(n) < g(n) (g(n) is asymptotically greater than f(n), and f(n) is asymptotically greater than h(n)). We can easily see this order by taking logs of the three functions:
log h(n) = (log n)·(log n) = (log n)^2,  log f(n) = n,  log g(n) = log(n!),
and (log n)^2 < n < log(n!) for large n. Note that log(n!) = Θ(n log n) by Stirling's approximation.

A recursive function h, is defined as follows :
h(m) = k, if m = 0
= 1, if m = 1
= 2 h(m – 1) + 4h(m – 2), if m ≥ 2
If the value of h(4) is 88 then the value of k is :
  • a)
    0
  • b)
    1
  • c)
    2
  • d)
    -1
Correct answer is option 'C'. Can you explain this answer?

Ravi Singh answered
According to given question:
h(4) = 88
88 = 2 h(3) + 4 h(2)
= 2 [2 h(2) + 4 h(1)] + 4 h(2)
= 8 h(2) + 8 h(1)
= 8 (2 + 4 k) + 8
= 24 + 32 k
 i.e.   k = 2
So, option (C) is correct.

The minimum number of comparisons required to find the minimum and the maximum of 100 numbers is ______________.
  • a)
    148
  • b)
    147
  • c)
    146
  • d)
    140
Correct answer is option 'A'. Can you explain this answer?

Sanya Agarwal answered
Steps to find minimum and maximum element out of n numbers:
1. Pick 2 elements(a, b), compare them. (say a > b)
2. Update min by comparing (min, b)
3. Update max by comparing (max, a)
Therefore, we need 3 comparisons for each pair of elements, giving a total of ⌈3n/2⌉ - 2 comparisons: the very first pair needs only 1 comparison, since it initializes both min and max. The recurrence relation is:
T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + 2 = 2T(n/2) + 2, with T(2) = 1, which solves to T(n) = ⌈3n/2⌉ - 2.
Putting n = 100: (3·100)/2 - 2 = 148, which is the answer.

Consider the following recurrence relation
The value of T(m^2) for m ≥ 1 is
  • a)
    (m/6) (21m - 39) + 4
  • b)
    (m/6) (4m^2 - 3m + 5)
  • c)
    (m/2) (m^2.5 - 11m + 20) - 5
  • d)
    (m/6) (5m^3 - 34m^2 + 137m - 104) + (5/6)
Correct answer is option 'B'. Can you explain this answer?

One easy way to solve this is to try putting different values of m.
For example, we know T(1) = 1. If we put m = 1, only options A and B give the right result.
m = 2:
T(2) = T(1) + 1 = 2
T(3) = T(2) + 1 = 3
T(4) = T(3) + 2 = 5
Both A and B produce 5.
m = 3:
T(9) = T(4) + 2·5 + 1 = 5 + 10 + 1 = 16
Both A and B produce 16.
m = 4:
T(16) = T(9) + 3·7 + 1 = 16 + 21 + 1 = 38
Only B produces 38; A produces 34, which doesn't match.

The solution to the recurrence equation T(2^k) = 3 T(2^(k-1)) + 1, T(1) = 1, is:
  • a)
    2^k
  • b)
    (3^(k+1) - 1)/2
  • c)
    3^(log_2 k)
  • d)
    2^(log_3 k)
Correct answer is option 'B'. Can you explain this answer?

Neha Mishra answered
Solution:

Let S(k) = T(2^k). The recurrence T(2^k) = 3 T(2^(k-1)) + 1 then becomes S(k) = 3·S(k-1) + 1 with S(0) = T(1) = 1.

Pattern of Terms:
S(0) = 1
S(1) = 3·1 + 1 = 4
S(2) = 3·4 + 1 = 13
S(3) = 3·13 + 1 = 40

Finding a Closed-Form Solution:
Unfold the recurrence:
S(k) = 3·S(k-1) + 1
     = 3^2·S(k-2) + 3 + 1
     = ...
     = 3^k·S(0) + 3^(k-1) + ... + 3 + 1
     = 3^k + 3^(k-1) + ... + 3 + 1
     = (3^(k+1) - 1)/2    (sum of a geometric series)

Check: k = 1 gives (9 - 1)/2 = 4 ✓, and k = 2 gives (27 - 1)/2 = 13 ✓.

Therefore T(2^k) = (3^(k+1) - 1)/2, which is option 'B'.

Consider the Quicksort algorithm. Suppose there is a procedure for finding a pivot element which splits the list into two sub-lists each of which contains at least one-fifth of the elements. Let T(n) be the number of comparisons required to sort n elements. Then
  • a)
    T(n) <= 2T(n/5) + n
  • b)
    T(n) <= T(n/5) + T(4n/5) + n
  • c)
    T(n) <= 2T(4n/5) + n
  • d)
    T(n) <= 2T(n/2) + n
Correct answer is option 'B'. Can you explain this answer?

Nishanth Mehta answered
For the case where n/5 elements are in one subset, T(n/5) comparisons are needed for the first subset with n/5 elements, T(4n/5) is for the rest 4n/5 elements, and n is for finding the pivot.

If there are more than n/5 elements in one set then other set will have less than 4n/5 elements and time complexity will be less than T(n/5) + T(4n/5) + n because recursion tree will be more balanced.

Match the following:
  • a)
    (1)
  • b)
    (2)
  • c)
    (3)
  • d)
    (4)
Correct answer is option 'C'. Can you explain this answer?

  1. Prim’s algorithm takes O(E lgV) time.
  2. Bellman-Ford algorithm takes O(VE) time.
  3. Floyd-Warshall algorithm takes O(V^3) time.
  4. Johnson’s algorithm takes O(VE lgV) time.
So, option (C) is correct.

What is the worst-case number of arithmetic operations performed by recursive binary search on a sorted array of size n?
  • a)
    Θ(√n)
  • b)
    Θ(log2(n))
  • c)
    Θ(n^2)
  • d)
    Θ(n)
Correct answer is option 'B'. Can you explain this answer?

Yash Verma answered
Worst-case number of arithmetic operations in recursive binary search:
Binary search is a divide-and-conquer algorithm that searches for a target value within a sorted array. In the worst case the target value is not present, so the recursion continues until the subarray is empty.

Reasoning:
- In each recursive call of binary search, the array is divided into two halves.
- This process continues until the target value is found or the subarray size becomes 0.

Analysis:
- At each step of the recursion, the algorithm performs a constant number of arithmetic operations: computing the mid-point and comparing against the target value.
- The number of recursive calls needed to shrink the range from n to 1 is Θ(log₂(n)).
- Hence the worst-case number of arithmetic operations is Θ(log₂(n)) · O(1) = Θ(log₂(n)).
Therefore, the answer is Θ(log₂(n)).

Consider the following three functions.
f1 = 10^n
f2 = nlogn
f3 = n√n
Which one of the following options arranges the functions in the increasing order of asymptotic growth rate?
  • a)
    f3,f2,f1
  • b)
    f2,f1,f3
  • c)
    f1,f2,f3
  • d)
    f2,f3,f1
Correct answer is option 'D'. Can you explain this answer?

Understanding Asymptotic Growth Rates
To compare the growth rates of the functions f1, f2, and f3, we need to analyze them as n approaches infinity.
Functions Overview
- f1 = 10^n: This function grows exponentially with n.
- f2 = n log n: This function grows faster than linear but slower than any polynomial n^c with c > 1.
- f3 = n√n: This can be rewritten as n^(3/2), a polynomial function.
Comparing Growth Rates
- f2 vs f3: n^(3/2) = n · n^(1/2), and n^(1/2) grows faster than log n, so f3 grows faster than f2 for sufficiently large n.
- f3 vs f1: An exponential function such as 10^n eventually dominates every polynomial, so f1 grows fastest of all.
Conclusion
Arranging the functions in increasing order of growth rate:
- f2 (n log n)
- f3 (n√n = n^(3/2))
- f1 (10^n)
Thus, the correct order is f2, f3, f1, which corresponds to option 'D'.

The following statement is valid. log(n!) = Θ(n log n).
  • a)
    True
  • b)
    False
Correct answer is option 'A'. Can you explain this answer?

The order of growth of log(n!) and n log n is the same for large values of n, i.e., Θ(log n!) = Θ(n log n). This can be easily derived from Stirling's approximation (Stirling's formula): log n! = n log n - n + O(log n).

Consider the recurrence relation a_1 = 8, a_n = 6n^2 + 2n + a_(n-1). Let a_99 = k × 10^4. The value of k is _____. Note: This question was asked as Numerical Answer Type.
  • a)
    190
  • b)
    296
  • c)
    198
  • d)
    200
Correct answer is option 'C'. Can you explain this answer?

Arka Dasgupta answered
To find k in a_99 = k × 10^4, expand the recurrence as a sum.

1. Expanding the recurrence:
a_n = a_1 + Σ_{i=2..n} (6i^2 + 2i)
Since 6·1^2 + 2·1 = 8 = a_1, this simplifies to
a_n = Σ_{i=1..n} (6i^2 + 2i)

2. Closed form:
Using Σ i^2 = n(n+1)(2n+1)/6 and Σ i = n(n+1)/2:
a_n = 6 · n(n+1)(2n+1)/6 + 2 · n(n+1)/2
    = n(n+1)(2n+1) + n(n+1)
    = n(n+1)(2n+2)
    = 2n(n+1)^2
Check: a_1 = 2·1·2^2 = 8 ✓, and a_2 = 2·2·3^2 = 36 = 6·4 + 2·2 + 8 ✓.

3. Evaluating a_99:
a_99 = 2 · 99 · 100^2 = 198 · 10^4

4. Finding k:
Equating a_99 = k × 10^4 gives k = 198, which corresponds to option (C).

Which of the given options provides the increasing order of asymptotic complexity of functions f1, f2, f3 and f4?
f1(n) = 2^n
f2(n) = n^(3/2)
f3(n) = nLogn
f4(n) = n^(Logn)
  • a)
    f3, f2, f4, f1
  • b)
    f3, f2, f1, f4
  • c)
    f2, f3, f1, f4
  • d)
    f2, f3, f4, f1
Correct answer is option 'A'. Can you explain this answer?

Abhiram Goyal answered
f1(n) = 2^n
f2(n) = n^(3/2)
f3(n) = nLogn
f4(n) = n^(Logn)
f3 = nLogn grows the slowest, so it is definitely first in the output. Among the remaining functions, f2 = n^(3/2) is polynomial, while f4 = n^(Logn) is quasi-polynomial and f1 = 2^n is exponential, so f2 comes next. One way to compare f1 and f4 is to take the log of both functions: the order of growth of Log(f1(n)) is Θ(n) and the order of growth of Log(f4(n)) is Θ(Logn * Logn). Since Θ(n) has higher growth than Θ(Logn * Logn), f1(n) grows faster than f4(n). Hence the increasing order is f3, f2, f4, f1, which is option (A).

Chapter doubts & questions for Asymptotic Analysis of Algorithms - 6 Months Preparation for GATE CSE 2025 is part of Computer Science Engineering (CSE) exam preparation. The chapters have been prepared according to the Computer Science Engineering (CSE) exam syllabus. The Chapter doubts & questions, notes, tests & MCQs are made for Computer Science Engineering (CSE) 2025 Exam. Find important definitions, questions, notes, meanings, examples, exercises, MCQs and online tests here.
