Test: Asymptotic Analysis of Algorithms- 1


QUESTION: 1

Given an unsorted array with the property that every element is at most k positions away from its position in the sorted array, where k is a positive integer smaller than the size of the array. Which sorting algorithm can be easily modified to sort this array, and what time complexity is obtainable?

Solution:

1) To sort the array, first create a min-heap from the first k+1 elements and a separate result array.

2) Because every element is at most k positions away from its sorted position, the smallest element of the whole array is guaranteed to be among these first k+1 elements.

3) Remove the smallest element from the min-heap (extract-min) and put it in the result array.

4) Now insert the next element from the unsorted array into the min-heap; the next smallest element is again guaranteed to be in the heap. Perform extract-min and continue this process until no more elements remain in the unsorted array. Finally, extract the remaining elements of the heap one by one to complete the result.

Time complexity:
1) O(k) to build the initial min-heap
2) O((n-k)logk) for the remaining elements, since each extract-min and insert on a heap of size k+1 takes O(logk)
3) O(klogk) to drain the final heap
So overall O(k) + O((n-k)logk) + O(klogk) = O(nlogk).
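
A minimal C sketch of this procedure (the function names sort_k_sorted and sift_down, and the sample data, are illustrative and not part of the original question): it builds a min-heap from the first k+1 elements and then repeatedly extracts the minimum while feeding in the next unread element, writing the sorted output back into the same array.

#include <stdio.h>

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Restore the min-heap property in heap[0..size-1] by sifting index i down. */
static void sift_down(int heap[], int size, int i)
{
    for (;;)
    {
        int smallest = i, l = 2 * i + 1, r = 2 * i + 2;
        if (l < size && heap[l] < heap[smallest]) smallest = l;
        if (r < size && heap[r] < heap[smallest]) smallest = r;
        if (smallest == i) break;
        swap(&heap[i], &heap[smallest]);
        i = smallest;
    }
}

/* O(nlogk): build a heap from the first k+1 elements, then repeatedly
   extract the minimum and insert the next unread element. */
void sort_k_sorted(int arr[], int n, int k)
{
    int heapSize = (k + 1 < n) ? k + 1 : n;
    int heap[heapSize];               /* VLA for brevity; use malloc in real code */
    int i, next, out = 0;

    for (i = 0; i < heapSize; i++) heap[i] = arr[i];
    for (i = heapSize / 2 - 1; i >= 0; i--)
        sift_down(heap, heapSize, i); /* O(k) heap build */

    for (next = heapSize; next < n; next++)
    {
        arr[out++] = heap[0];          /* extract-min goes to the result */
        heap[0] = arr[next];           /* replace root with the next element */
        sift_down(heap, heapSize, 0);  /* O(logk) per element */
    }
    while (heapSize > 0)               /* drain the remaining heap: O(klogk) */
    {
        arr[out++] = heap[0];
        heap[0] = heap[--heapSize];
        sift_down(heap, heapSize, 0);
    }
}

int main(void)
{
    int a[] = {2, 6, 3, 12, 56, 8};    /* every element at most k = 3 away */
    int n = sizeof a / sizeof a[0], i;
    sort_k_sorted(a, n, 3);
    for (i = 0; i < n; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}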

QUESTION: 2

Consider the following program fragment for reversing the digits in a given integer to obtain a new integer. Let n = D1D2…Dm
int n, rev; 
rev = 0; 
while (n > 0) 
{ 
   rev = rev*10 + n%10; 
   n = n/10; 
}

The loop invariant condition at the end of the ith iteration is: 

Solution:

We can get the invariant by tracing an example: let n = 54321. After 2 iterations, rev is 12 and n is 543. In general, at the end of the ith iteration n holds the first m-i digits (D1D2…Dm-i) and rev holds the last i digits in reverse order (DmDm-1…Dm-i+1).
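
A runnable version of the fragment (using n = 54321 from the example, with an added trace) that prints n and rev after each iteration, making the invariant visible:

#include <stdio.h>

int main(void)
{
    int n = 54321, rev = 0, i = 0;   /* n = D1D2...Dm with m = 5 */

    while (n > 0)
    {
        rev = rev * 10 + n % 10;
        n = n / 10;
        i++;
        /* after the i-th iteration: rev holds the last i digits of the
           original number in reverse order, n holds the remaining digits */
        printf("after iteration %d: n = %d, rev = %d\n", i, n, rev);
    }
    return 0;
}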

QUESTION: 3

What is the worst case time complexity of insertion sort where position of the data to be inserted is calculated using binary search?

Solution:

Applying binary search to calculate the position of the data to be inserted doesn't reduce the time complexity of insertion sort. This is because inserting the data at the appropriate position involves two steps:

1. Calculate the position.

2. Shift the elements from the position calculated in step 1 one place to the right, to create a gap where the data will be inserted.

Using binary search reduces the time complexity of step 1 from O(N) to O(logN), but step 2 still takes O(N) time in the worst case. So the overall complexity remains O(N^2).
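
A minimal C sketch of insertion sort that uses binary search to find the insertion position (names are illustrative); step 1 costs O(logN) comparisons per element, but the shifting in step 2 still moves O(N) elements, so the overall worst case stays O(N^2):

#include <stdio.h>

void binary_insertion_sort(int a[], int n)
{
    int i, j, lo, hi, key;
    for (i = 1; i < n; i++)
    {
        key = a[i];

        /* Step 1: binary search for the insertion position in a[0..i-1]. */
        lo = 0;
        hi = i;
        while (lo < hi)
        {
            int mid = lo + (hi - lo) / 2;
            if (a[mid] <= key) lo = mid + 1;
            else               hi = mid;
        }

        /* Step 2: shift a[lo..i-1] one place to the right -- still O(N). */
        for (j = i; j > lo; j--)
            a[j] = a[j - 1];
        a[lo] = key;
    }
}

int main(void)
{
    int a[] = {5, 2, 9, 1, 5, 6};
    int n = sizeof a / sizeof a[0], i;
    binary_insertion_sort(a, n);
    for (i = 0; i < n; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}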

QUESTION: 4

In a modified merge sort, the input array is split at a position one-third of the length (N) of the array. Which of the following is the tightest upper bound on the time complexity of this modified merge sort?

Solution:

The time complexity is given by the recurrence: T(N) = T(N/3) + T(2N/3) + N. The recursion tree does about N work at each level, and its height is set by the larger 2N/3 subproblem, which shrinks to size 1 after logN (base 3/2) levels. Solving the recurrence therefore gives T(N) = N(logN base 3/2), i.e. Θ(NlogN).

QUESTION: 5

In a competition, four different functions are observed. All the functions use a single for loop, and within the for loop the same set of statements is executed. Consider the following for loops:

A) for(i = 0; i < n; i++)
B) for(i = 0; i < n; i += 2)
C) for(i = 1; i < n; i *= 2)
D) for(i = n; i > -1; i /= 2)

If n is the size of the input (positive), which function is the most efficient (if the task to be performed is not an issue)?

Solution:

The time complexity of the first for loop is O(n). The second for loop runs n/2 times, which is still O(n) in asymptotic analysis. The third for loop doubles i on every iteration, so it runs O(logn) times. The fourth for loop doesn't terminate: once i reaches 0, integer division keeps it at 0, which is always greater than -1. Hence loop C is the most efficient.
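
A small illustrative program (n chosen arbitrarily) that counts how many times the body of loops A, B and C executes; loop D is deliberately not run because it would never terminate:

#include <stdio.h>

int main(void)
{
    int i, n = 1000;
    long a = 0, b = 0, c = 0;

    for (i = 0; i < n; i++)  a++;      /* A: n iterations        -> O(n)    */
    for (i = 0; i < n; i += 2) b++;    /* B: n/2 iterations      -> O(n)    */
    for (i = 1; i < n; i *= 2) c++;    /* C: ~log2(n) iterations -> O(logn) */
    /* D: for (i = n; i > -1; i /= 2)  -- once i reaches 0 it stays 0,
       so the condition i > -1 never becomes false. */

    printf("A: %ld, B: %ld, C: %ld\n", a, b, c);
    return 0;
}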

QUESTION: 6

What is the time complexity of the Floyd–Warshall algorithm for calculating all-pairs shortest paths in a graph with n vertices?

Solution:

The Floyd–Warshall algorithm uses three nested loops over the vertices to calculate all-pairs shortest paths, so the time complexity is Θ(n^3).
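
A minimal C sketch of the triple loop (the vertex count V, the INF sentinel and the sample graph are made up for the example); dist starts as the adjacency matrix, with 0 on the diagonal and INF where there is no edge:

#include <stdio.h>

#define V 4
#define INF 1000000            /* large value standing in for "no edge" */

/* The three nested loops over k, i, j give the Θ(n^3) running time. */
void floyd_warshall(int dist[V][V])
{
    int i, j, k;
    for (k = 0; k < V; k++)
        for (i = 0; i < V; i++)
            for (j = 0; j < V; j++)
                if (dist[i][k] + dist[k][j] < dist[i][j])
                    dist[i][j] = dist[i][k] + dist[k][j];
}

int main(void)
{
    int i, j;
    int dist[V][V] = {
        {0,   5,   INF, 10 },
        {INF, 0,   3,   INF},
        {INF, INF, 0,   1  },
        {INF, INF, INF, 0  }
    };
    floyd_warshall(dist);
    for (i = 0; i < V; i++)
    {
        for (j = 0; j < V; j++) printf("%8d", dist[i][j]);
        printf("\n");
    }
    return 0;
}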

QUESTION: 7

Consider the Quicksort algorithm. Suppose there is a procedure for finding a pivot element which splits the list into two sub-lists each of which contains at least one-fifth of the elements. Let T(n) be the number of comparisons required to sort n elements. Then

Solution:

For the case where n/5 elements are in one sub-list, T(n/5) comparisons are needed for that sub-list, T(4n/5) for the remaining 4n/5 elements, and n for finding the pivot (partitioning). If one sub-list contains more than n/5 elements, then the other contains fewer than 4n/5 elements, and the number of comparisons will be less than T(n/5) + T(4n/5) + n because the recursion tree is more balanced. Hence T(n) <= T(n/5) + T(4n/5) + n.

QUESTION: 8

A list of n strings, each of length n, is sorted into lexicographic order using the merge-sort algorithm. The worst case running time of this computation is

Solution:

The recurrence tree for merge sort has height Θ(logn), and O(n^2) work is done at each level of the tree (each level involves O(n) comparisons, and comparing two strings of length n takes O(n) time in the worst case). So the time complexity of this merge sort is O(n^2 logn).

QUESTION: 9

Consider the Quicksort algorithm. Suppose there is a procedure for finding a pivot element which splits the list into two sub-lists each of which contains at least one-fifth of the elements. Let T(n) be the number of comparisons required to sort n elements. Then

Solution:

For the case where n/5 elements are in one sub-list, T(n/5) comparisons are needed for that sub-list, T(4n/5) for the remaining 4n/5 elements, and n for finding the pivot (partitioning). If one sub-list contains more than n/5 elements, then the other contains fewer than 4n/5 elements, and the number of comparisons will be less than T(n/5) + T(4n/5) + n because the recursion tree is more balanced. Hence T(n) <= T(n/5) + T(4n/5) + n.

QUESTION: 10

The minimum number of comparisons required to find the minimum and the maximum of 100 numbers is ______________.

Solution:

Steps to find the minimum and maximum of n numbers:
1. Pick 2 elements (a, b) and compare them (say a > b).
2. Update min by comparing (min, b).
3. Update max by comparing (max, a).
So we need 3 comparisons for every pair of elements, except the very first pair, which needs only 1 comparison (that single comparison determines the initial min and max). The recurrence relation is:
T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + 2 for n > 2, with T(2) = 1, which solves to T(n) = ⌈3n/2⌉ - 2.
Putting n = 100: (3*100/2) - 2 = 148, which is the answer.
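
A runnable sketch of this pairwise scheme with a comparison counter (the test data is made up; n is even here, an odd n would need the last element compared separately). For n = 100 it reports 148 comparisons:

#include <stdio.h>

int main(void)
{
    int a[100], n = 100, comparisons = 0;
    int i, min, max;

    for (i = 0; i < n; i++) a[i] = (i * 37) % 101;   /* arbitrary data */

    /* First pair: a single comparison initialises both min and max. */
    comparisons++;
    if (a[0] < a[1]) { min = a[0]; max = a[1]; }
    else             { min = a[1]; max = a[0]; }

    /* Every remaining pair costs 3 comparisons. */
    for (i = 2; i + 1 < n; i += 2)
    {
        int small, big;
        comparisons++;
        if (a[i] < a[i + 1]) { small = a[i]; big = a[i + 1]; }
        else                 { small = a[i + 1]; big = a[i]; }
        comparisons++;
        if (small < min) min = small;
        comparisons++;
        if (big > max) max = big;
    }

    printf("min = %d, max = %d, comparisons = %d\n", min, max, comparisons);
    return 0;
}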

QUESTION: 11

You have an array of n elements. Suppose you implement quicksort by always choosing the central element of the array as the pivot. Then the tightest upper bound for the worst case performance is

Solution:

For any fixed pivot position, there are input permutations that produce the worst case O(n^2). Choosing the middle element reduces the chance of hitting the worst case on typical inputs, but in the worst case the split can still be maximally unbalanced, so the bound remains O(n^2). Whichever fixed position we take as pivot, first or middle, the worst case is O(n^2), since the pivot's position is fixed. Choosing a random pivot only reduces the probability of encountering the O(n^2) worst case; it does not eliminate it.

QUESTION: 12

Consider the following C-function:
double foo (int n)
{
    int i;
    double sum;
    if (n == 0) return 1.0;
    else
    {
        sum = 0.0;
        for (i = 0; i < n; i++)
            sum += foo (i);
        return sum;
    }
}
Suppose we modify the above function foo() and store the values of foo (i), 0 <= i < n, as and when they are computed. With this modification, the time complexity for function foo() is significantly reduced. The space complexity of the modified function would be:

Solution:

The space complexity of the modified function is O(n): we need an array of size n to store the values foo(0), ..., foo(n-1). The additional space required for recursive calls is O(1), because the values are taken from the stored array rather than recomputed by making function calls again and again.
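
A hedged sketch of the memoised version (the table pointer and its initialisation are assumptions, not part of the original code); each foo(i) is computed once and stored, so the extra space is the Θ(n) table plus O(1) for recursive calls:

#include <stdio.h>
#include <stdlib.h>

static double *table;   /* table[i] holds foo(i) once computed, -1.0 if not */

double foo (int n)
{
    int i;
    double sum;
    if (n == 0) return 1.0;
    if (table[n] >= 0.0) return table[n];   /* already computed: no recursion */

    sum = 0.0;
    for (i = 0; i < n; i++)
        sum += foo (i);                     /* each value is computed only once */
    table[n] = sum;
    return sum;
}

int main(void)
{
    int i, n = 10;
    table = malloc((n + 1) * sizeof *table);
    for (i = 0; i <= n; i++) table[i] = -1.0;

    printf("foo(%d) = %g\n", n, foo(n));    /* foo(n) = 2^(n-1) for n >= 1 */
    free(table);
    return 0;
}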

QUESTION: 13

Two matrices M1 and M2 are to be stored in arrays A and B respectively. Each array can be stored either in row-major or column-major order in contiguous memory locations. The time complexity of an algorithm to compute M1 × M2 will be

Solution:

This is a trick question. Note that the question asks about time complexity, not the actual time taken by the program. For time complexity it doesn't matter how the array elements are stored: we always need to access the same number of elements of M1 and M2 to multiply the matrices, and accessing an array element takes constant, O(1), time. The constants may differ between storage schemes, but the asymptotic complexity does not.
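
A plain triple-loop multiplication sketch (assuming square N x N matrices; names and data are illustrative). Each of the N^3 iterations reads one element of A and one of B, and an array access costs O(1) regardless of row-major or column-major layout, so the complexity is O(N^3) either way:

#include <stdio.h>

#define N 3

void multiply(const double A[N][N], const double B[N][N], double C[N][N])
{
    int i, j, k;
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++)
        {
            C[i][j] = 0.0;
            for (k = 0; k < N; k++)
                C[i][j] += A[i][k] * B[k][j];
        }
}

int main(void)
{
    double A[N][N] = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}};
    double B[N][N] = {{9, 8, 7}, {6, 5, 4}, {3, 2, 1}};
    double C[N][N];
    int i, j;
    multiply(A, B, C);
    for (i = 0; i < N; i++)
    {
        for (j = 0; j < N; j++) printf("%7.1f", C[i][j]);
        printf("\n");
    }
    return 0;
}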

QUESTION: 14

Let A[1, ..., n] be an array storing a bit (1 or 0) at each location, and f(m) is a function whose time complexity is θ(m). Consider the following program fragment written in a C like language:
counter = 0;
for (i = 1; i <= n; i++)
{
      if (A[i] == 1) 
         counter++;
      else {
         f(counter); 
         counter = 0;
      }
}
The complexity of this program fragment is

Solution:

Please note that inside the else branch, f() is called first and then counter is set to 0. The values passed to f() over the whole run sum to at most n (counter only counts 1s, and each 1 is counted at most once), so all calls to f() together take O(n) time and the overall complexity is Θ(n). Consider the following cases:

a) All 1s in A[]: time taken is Θ(n), as only counter++ is executed n times.

b) All 0s in A[]: time taken is Θ(n), as f(0) is called n times.

c) Half 1s followed by half 0s: time taken is Θ(n), as f(n/2) is called once (and f(0) for each remaining 0).

QUESTION: 15

In a permutation a1.....an of n distinct integers, an inversion is a pair (ai, aj) such that i < j and ai > aj. What would be the worst case time complexity of the Insertion Sort algorithm, if the inputs are restricted to permutations of 1.....n with at most n inversions?

Solution:

Insertion sort runs in Θ(n + f(n)) time, where f(n) denotes the number of inversions initially present in the array being sorted: each execution of the inner loop body removes exactly one inversion. Since the inputs are restricted to permutations with at most n inversions, the worst case time complexity is Θ(n).
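
A small sketch (the sample permutation is chosen arbitrarily) that counts the shifts performed by insertion sort; each shift removes exactly one inversion, so on inputs with at most n inversions the total work stays Θ(n):

#include <stdio.h>

int main(void)
{
    int a[] = {2, 1, 3, 5, 4, 7, 6, 8};   /* a permutation with only 3 inversions */
    int n = sizeof a / sizeof a[0];
    int i, j, key;
    long shifts = 0;

    for (i = 1; i < n; i++)
    {
        key = a[i];
        j = i - 1;
        while (j >= 0 && a[j] > key)       /* one shift per inversion removed */
        {
            a[j + 1] = a[j];
            j--;
            shifts++;
        }
        a[j + 1] = key;
    }

    printf("shifts (inversions) = %ld, sorted:", shifts);
    for (i = 0; i < n; i++) printf(" %d", a[i]);
    printf("\n");
    return 0;
}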
