
All questions of Time and Space Complexity for Software Development Exam

Which of the following represents the best-case time complexity of an algorithm?
  • a)
    O(1)
  • b)
    O(n)
  • c)
O(n^2)
  • d)
    O(log n)
Correct answer is option 'A'. Can you explain this answer?

Anand Malik answered
Best-case time complexity of an algorithm

The best-case time complexity of an algorithm represents the minimum amount of time it would take for the algorithm to run on a given input. It indicates the best possible scenario in terms of time efficiency.

Time complexity notations
In computer science, time complexity is often expressed using Big O notation. The Big O notation provides an upper bound on the growth rate of the algorithm's time complexity. The following notations are commonly used:

- O(1): Constant time complexity. The algorithm takes the same amount of time regardless of the size of the input.
- O(n): Linear time complexity. The algorithm's running time increases linearly with the size of the input.
- O(n^2): Quadratic time complexity. The algorithm's running time increases quadratically with the size of the input.
- O(log n): Logarithmic time complexity. The algorithm's running time increases logarithmically with the size of the input.

Explanation of the answer
The best-case time complexity is the minimum amount of time an algorithm can take on an input of a given size. Among the listed notations, O(1) grows the slowest: the running time does not grow with the input at all. Hence the correct answer is option 'A', constant time complexity.

Constant time complexity (O(1))
An algorithm with constant time complexity will always take the same amount of time to execute, regardless of the input size. It means that the execution time does not depend on the size of the input. This is achievable when the algorithm performs a fixed number of operations, regardless of the input size.

Example
An example of an algorithm with constant time complexity is accessing an element from an array by its index. Whether the array has 10 elements or 10,000 elements, accessing a specific index will take the same amount of time.
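To make this concrete, here is a minimal sketch (the array size and the values stored are illustrative) in which the indexed read is a single operation no matter how many elements the array holds:

#include <iostream>
using namespace std;

int main() {
    const int n = 10000;              // illustrative array size
    int* data = new int[n];
    for (int i = 0; i < n; i++) {
        data[i] = i;
    }
    // Reading one element by its index is a single operation: O(1),
    // whether the array holds 10 elements or 10,000.
    cout << data[n - 1] << endl;
    delete[] data;
    return 0;
}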

Conclusion
In summary, the best-case time complexity represents the minimum amount of time an algorithm needs to execute. Of the given options, O(1) (constant time) is the best case, because a constant-time algorithm takes the same amount of time no matter how large the input is.

Which of the following represents the worst-case time complexity of an algorithm?
  • a)
    O(1)
  • b)
    O(n)
  • c)
O(n^2)
  • d)
    O(log n)
Correct answer is option 'C'. Can you explain this answer?

Time Complexity of an Algorithm

The time complexity of an algorithm is a measure of the amount of time it takes to run based on the input size. It helps in determining how the algorithm's performance scales with increasing input size. The worst-case time complexity represents the maximum amount of time the algorithm takes to run for any input of size 'n'.

Options for Worst-Case Time Complexity
The given options are:
a) O(1)
b) O(n)
c) O(n^2)
d) O(log n)

a) O(1)
An algorithm with a time complexity of O(1) means that its running time is constant, regardless of the input size. It implies that the algorithm takes the same amount of time to execute, regardless of the size of the input. This is the best-case scenario for time complexity.

b) O(n)
An algorithm with a time complexity of O(n) means that its running time is directly proportional to the input size 'n'. It implies that the algorithm's execution time increases linearly with the increase in input size. Although it is not the best-case scenario, it is still considered efficient.

c) O(n^2)
An algorithm with a time complexity of O(n^2) means that its running time is proportional to the square of the input size 'n'. The execution time therefore grows quadratically as the input grows: doubling the input size roughly quadruples the running time. This is the least efficient of the listed options for large inputs.

d) O(log n)
An algorithm with a time complexity of O(log n) means that its running time increases logarithmically with the increase in input size 'n'. It implies that the execution time grows slowly even when the input size increases significantly. This is considered an efficient scenario.

Conclusion
The worst-case time complexity represents the maximum execution time an algorithm can have for any given input size. Among the provided options, O(n^2) has the highest growth rate and is considered the worst-case time complexity. As the input size increases, the execution time of an algorithm with O(n^2) complexity increases rapidly, making it inefficient for large input sizes.
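For illustration (a hypothetical sketch, not taken from the question), the pair-counting loop below performs roughly n^2/2 increments, which is exactly the quadratic growth described above:

#include <iostream>
using namespace std;

int main() {
    int n = 1000;                     // illustrative input size
    long long pairs = 0;
    // Two nested loops over the same range: about n*(n-1)/2 iterations,
    // so the running time grows quadratically with n.
    for (int i = 0; i < n; i++) {
        for (int j = i + 1; j < n; j++) {
            pairs++;
        }
    }
    cout << pairs << endl;            // prints 499500 for n = 1000
    return 0;
}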

What is the space complexity of the following code snippet?
int n = 10;
int* arr = new int[n];
for (int i = 0; i < n; i++) {
    arr[i] = i;
}
delete[] arr;
  • a)
    O(1)
  • b)
    O(n)
  • c)
    O(n^2)
  • d)
    O(log n)
Correct answer is option 'A'. Can you explain this answer?

Abhijeet Basu answered
The code allocates an array of n integers with dynamic memory allocation (new int[n]), so the extra memory it uses grows linearly with n. The space complexity is therefore O(n), which corresponds to option 'B'; option 'A' (O(1)) would only be correct if the amount of memory allocated did not depend on n.
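A short sketch (the variable names are illustrative) contrasts the two cases: the scalar accumulator needs a fixed amount of memory, while the heap allocation grows with n:

#include <iostream>
using namespace std;

int main() {
    int n = 10;

    // O(1) space: a fixed number of scalar variables, however large n gets.
    int sum = 0;
    for (int i = 0; i < n; i++) {
        sum += i;
    }

    // O(n) space: new int[n] allocates memory proportional to n.
    int* arr = new int[n];
    for (int i = 0; i < n; i++) {
        arr[i] = i;
    }
    cout << sum << " " << arr[n - 1] << endl;
    delete[] arr;
    return 0;
}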

Which of the following asymptotic notations represents the worst-case time complexity of an algorithm?
  • a)
    O(1)
  • b)
    O(n)
  • c)
    Θ(n)
  • d)
    Ω(n)
Correct answer is option 'B'. Can you explain this answer?

Anil Kumar answered
Big-O notation gives an asymptotic upper bound on an algorithm's running time, which is why it is the notation used to describe the worst case: O(n) states that the time taken grows at most linearly with the input size. By contrast, Ω(n) gives a lower bound (used for the best case) and Θ(n) gives a tight bound that holds from both sides.
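Linear search makes the three notations concrete. In the sketch below (the linearSearch helper is illustrative), the best case is Ω(1), the worst case is O(n), and when the key is absent the running time is Θ(n):

#include <iostream>
#include <vector>
using namespace std;

// Returns the index of key, or -1 if it is not present.
// Best case: key at index 0 -> one comparison (Omega(1)).
// Worst case: key absent    -> n comparisons (O(n), in fact Theta(n)).
int linearSearch(const vector<int>& v, int key) {
    for (int i = 0; i < (int)v.size(); i++) {
        if (v[i] == key) {
            return i;
        }
    }
    return -1;
}

int main() {
    vector<int> v = {7, 3, 9, 1, 5};
    cout << linearSearch(v, 7) << endl;    // best case: found at index 0
    cout << linearSearch(v, 42) << endl;   // worst case: scans all elements
    return 0;
}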

What is the time complexity of the following code snippet?
int count = 0;
for (int i = 1; i <= n; i *= 2) {
    for (int j = 0; j < n; j++) {
        count++;
    }
}
cout << count;
  • a)
    O(n)
  • b)
    O(log n)
  • c)
    O(n log n)
  • d)
    O(n^2)
Correct answer is option 'C'. Can you explain this answer?

Manasa Iyer answered
Understanding the Code Snippet
The provided code snippet consists of nested loops, and analyzing their behavior will help determine the time complexity.

Outer Loop Analysis
- The outer loop iterates with `i` starting from 1 and doubling each time (`i *= 2`).
- This means `i` takes values: 1, 2, 4, 8, ..., up to `n`.
- The number of iterations of the outer loop can be calculated as follows:
- Let `k` be the number of iterations; the last value of `i` is `2^(k-1)`, so `2^(k-1) ≤ n`.
- Taking logarithms base 2, `k ≤ log_2(n) + 1`.
- Thus, the outer loop runs `O(log n)` times.

Inner Loop Analysis
- The inner loop runs from `j = 0` to `j < n`, which means it iterates `n` times for each iteration of the outer loop.

Combining the Two Loops
- For each of the `O(log n)` iterations of the outer loop, the inner loop executes `n` iterations.
- Therefore, the total number of iterations across both loops is:
- Total iterations = (Number of outer loop iterations) * (Number of inner loop iterations)
- This results in: `O(log n) * O(n) = O(n log n)`.

Final Conclusion
The correct time complexity of the given code snippet is:
- **O(n log n)**
Thus, the answer is option 'C'.
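This can be checked empirically with a small sketch (the sizes below are illustrative): for each n, the printed count equals n multiplied by the number of times i can be doubled before it exceeds n:

#include <iostream>
#include <cmath>
using namespace std;

int main() {
    for (int n : {16, 1024, 65536}) {      // illustrative powers of two
        long long count = 0;
        for (int i = 1; i <= n; i *= 2) {
            for (int j = 0; j < n; j++) {
                count++;
            }
        }
        // Outer loop: floor(log2(n)) + 1 iterations; inner loop: n iterations.
        // So count equals n * (floor(log2(n)) + 1), i.e. Theta(n log n).
        long long expected = (long long)n * ((long long)log2(n) + 1);
        cout << n << ": " << count << " = " << expected << endl;
    }
    return 0;
}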

What is the time complexity of the following code snippet?
int count = 0;
for (int i = 1; i <= n; i *= 2) {
    for (int j = 1; j <= i; j++) {
        for (int k = 1; k <= n; k *= 2) {
            count++;
        }
    }
}
cout << count;
  • a)
    O(n)
  • b)
    O(log n)
  • c)
    O(n log n)
  • d)
    O(n^2)
Correct answer is option 'D'. Can you explain this answer?

Manasa Iyer answered
Understanding the Code Snippet
The given code snippet consists of three nested loops, which we will analyze to determine its time complexity.

Outer Loop Analysis
- The outer loop runs with `i` starting from 1 and doubling each iteration until `i` exceeds `n`.
- The iterations for `i` will be: 1, 2, 4, 8, ..., up to the largest power of 2 less than or equal to `n`.
- This results in approximately `log2(n)` iterations.

Middle Loop Analysis
- The middle loop iterates from `j = 1` to `i`.
- The number of iterations depends on the current value of `i`, which varies from 1 to `n`.
- In the worst case, when `i` is at its maximum value (i.e., `n`), the middle loop runs `n` times.

Inner Loop Analysis
- The inner loop runs with `k`, starting from 1 and doubling each iteration until it exceeds `n`.
- Similar to the outer loop, this loop also runs for approximately `log2(n)` iterations.

Combining the Complexity
- The outer loop variable `i` takes only the powers of two 1, 2, 4, ..., up to `n` (about `log2(n) + 1` values), and for each such `i`:
- The middle loop executes `i` times.
- The inner loop executes about `log2(n) + 1` times per middle-loop iteration.
- The total count can therefore be expressed as:
\[
Total = \sum_{j=0}^{\log_2 n} 2^j \cdot (\log_2 n + 1) = (2n - 1)(\log_2 n + 1)
\]
- The geometric sum of the powers of two up to `n` is `2n - 1`, not `n(n+1)/2`, because `i` skips every value that is not a power of two.

Final Complexity
- The overall time complexity of the code snippet is therefore:
\[
O(n \log n)
\]
- Note that this corresponds to option **'C'**; the stated answer 'D' (O(n^2)) would only follow if the outer loop took every value from 1 to `n`, which it does not.
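A quick empirical check (a sketch with illustrative sizes) confirms the closed form above: for n a power of two, the count printed by the snippet equals (log2(n) + 1) * (2n - 1), which grows like n log n rather than n^2:

#include <iostream>
#include <cmath>
using namespace std;

int main() {
    for (int n : {16, 256, 4096}) {        // illustrative powers of two
        long long count = 0;
        for (int i = 1; i <= n; i *= 2) {
            for (int j = 1; j <= i; j++) {
                for (int k = 1; k <= n; k *= 2) {
                    count++;
                }
            }
        }
        // Closed form for n a power of two: (log2(n) + 1) * (2n - 1).
        long long m = (long long)log2(n);
        cout << n << ": " << count << " = " << (m + 1) * (2LL * n - 1) << endl;
    }
    return 0;
}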

What does space complexity in algorithms measure?
  • a)
    It measures the amount of time taken by an algorithm to run.
  • b)
    It measures the efficiency of an algorithm in terms of input size.
  • c)
    It measures the amount of memory used by an algorithm.
  • d)
    It measures the number of steps executed by an algorithm.
Correct answer is option 'C'. Can you explain this answer?

Pankaj Verma answered
Space complexity in algorithms measures the amount of memory used by an algorithm.


Space complexity is an important concept in algorithm analysis as it helps us understand the memory requirements of an algorithm. It refers to the amount of memory an algorithm needs to allocate and use during its execution. This memory can be in the form of variables, data structures, or any other resources that the algorithm requires.

Understanding Space Complexity:

When analyzing the space complexity of an algorithm, we consider the additional memory required apart from the input space. The space complexity is calculated as a function of the input size. It helps us estimate the maximum amount of memory an algorithm may require to solve a problem.

Importance of Space Complexity:

Space complexity is crucial because it allows us to evaluate the efficiency and practicality of an algorithm. By understanding the amount of memory an algorithm consumes, we can make informed decisions about its implementation.

Factors Affecting Space Complexity:

Several factors influence the space complexity of an algorithm:

1. Variables: The space needed to store variables and constants used in the algorithm.
2. Data Structures: The space required by data structures such as arrays, linked lists, trees, or hash tables.
3. Recursion: Recursive algorithms may require additional space to store the recursive function calls in the call stack.
4. Auxiliary Space: Additional space required by temporary variables, buffers, or other resources used during the algorithm's execution.

Measuring Space Complexity:

To determine the space complexity of an algorithm, we count the amount of space used by the algorithm as a function of the input size. We typically express space complexity using Big O notation, such as O(1), O(n), O(n^2), etc. This notation helps us understand how the space requirements grow with the input size.
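As a small illustration of the variable and recursion factors listed above (the function names are hypothetical), both versions below compute the same sum, but the recursive one needs O(n) call-stack space while the iterative one needs only O(1) auxiliary space:

#include <iostream>
using namespace std;

// Recursive version: each call adds a stack frame, and up to n frames
// are live at once, so the auxiliary space is O(n).
long long sumRecursive(int n) {
    if (n == 0) {
        return 0;
    }
    return n + sumRecursive(n - 1);
}

// Iterative version: a fixed number of variables, so O(1) auxiliary space.
long long sumIterative(int n) {
    long long total = 0;
    for (int i = 1; i <= n; i++) {
        total += i;
    }
    return total;
}

int main() {
    cout << sumRecursive(1000) << " " << sumIterative(1000) << endl;  // both print 500500
    return 0;
}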

Conclusion:

In summary, space complexity in algorithms measures the amount of memory used by an algorithm. It helps us evaluate the efficiency of an algorithm in terms of memory consumption and allows us to make informed decisions about its implementation. By understanding the space complexity, we can optimize the memory usage of algorithms and design more efficient solutions to problems.

What is the space complexity of the following code snippet?
void func(int n) {
    if (n <= 0) {
        return;
    }
    cout << n << " ";
    func(n - 1);
    func(n - 1);
}
func(4);
  • a)
    O(n)
  • b)
    O(log n)
  • c)
    O(sqrt(n))
  • d)
    O(n log n)
Correct answer is option 'A'. Can you explain this answer?

Manasa Iyer answered
Understanding Space Complexity
Space complexity measures the amount of memory required by an algorithm in relation to the input size. In recursive functions, it's essential to consider both the space used by the function's stack frames and any additional data structures.

Analyzing the Function
The provided function `func(int n)` performs the following:
- **Base Case**: It checks if `n` is less than or equal to 0. If so, it returns without further action.
- **Recursive Calls**: If `n` is greater than 0, it prints `n` and then makes two recursive calls to itself with `n - 1`.

Space Complexity Breakdown
- **Recursive Call Stack**: Each time `func` is called, a new stack frame is created. The maximum depth of the recursion is determined by the value of `n`. Therefore, the maximum depth of recursion is `n`, leading to `O(n)` stack space.
- **No Additional Data Structures**: The function does not use any additional data structures that would increase space complexity.

Conclusion
Given that the most significant contributor to space complexity in this scenario is the depth of the recursion (the call stack), the overall space complexity of the function is:
- **Final Space Complexity**: `O(n)`
Thus, the correct answer is **option 'A'**.
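An instrumented sketch (the calls and maxDepth counters and the extra depth parameter are added purely for illustration, and the printing of n is omitted) shows the difference between the total number of calls and the stack depth:

#include <iostream>
#include <algorithm>
using namespace std;

long long calls = 0;   // total number of invocations
int maxDepth = 0;      // deepest point reached by the call stack

void func(int n, int depth) {
    calls++;
    maxDepth = max(maxDepth, depth);
    if (n <= 0) {
        return;
    }
    func(n - 1, depth + 1);
    func(n - 1, depth + 1);
}

int main() {
    func(10, 1);
    // For n = 10: calls = 2^11 - 1 = 2047, but max depth = 11,
    // so the call-stack space is O(n) even though the running time is O(2^n).
    cout << "calls = " << calls << ", max depth = " << maxDepth << endl;
    return 0;
}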

What is the space complexity of the following code snippet?
int fib(int n) {
    if (n <= 1) {
        return n;
    }
    return fib(n - 1) + fib(n - 2);
}
int result = fib(5);
  • a)
    O(n)
  • b)
    O(log n)
  • c)
    O(sqrt(n))
  • d)
    O(2^n)
Correct answer is option 'D'. Can you explain this answer?

Codebreakers answered
The code computes Fibonacci numbers with plain recursion, calling itself twice for every value until n <= 1. The total number of calls grows exponentially with n, but that determines the time complexity, O(2^n). At any moment only one chain of calls from the root to a leaf is on the stack, and that chain is at most n frames deep, so the space complexity of this function is O(n), which corresponds to option 'A' rather than the listed 'D'.

What is the time complexity of the following code snippet?
int sum = 0;
for (int i = 1; i <= n; i++) {
    for (int j = 1; j <= n; j = j * 2) {
        sum++;
    }
}
  • a)
    O(n)
  • b)
    O(log n)
  • c)
    O(n^2)
  • d)
    O(n log n)
Correct answer is option 'C'. Can you explain this answer?

Tanuja Mishra answered
The outer loop runs n times. In the inner loop, j doubles in each iteration until it exceeds n, so it runs about log2(n) times for every outer iteration. The total number of iterations is therefore n * log2(n), giving a time complexity of O(n log n), which corresponds to option 'D' rather than the listed 'C'.

What will be the output of the following code?
int func(int n) {
    if (n <= 0) {
        return 1;
    }
    return func(n / 2) + func(n / 2);
}
int result = func(8);
cout << result;
  • a)
    256
  • b)
    128
  • c)
    64
  • d)
    16
Correct answer is option 'A'. Can you explain this answer?

Diksha Sharma answered
The function returns func(n/2) + func(n/2), i.e. 2 * func(n/2), and func(0) returns 1. Starting from 8, the argument is halved four times (8 → 4 → 2 → 1 → 0), so func(8) = 2^4 * func(0) = 16. The printed result is therefore 16, which corresponds to option 'D' rather than the listed 'A'.
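Because this differs from the listed key, it is easy to verify by running the function in a minimal program (the wrapper and the printed trace are illustrative):

#include <iostream>
using namespace std;

int func(int n) {
    if (n <= 0) {
        return 1;
    }
    return func(n / 2) + func(n / 2);
}

int main() {
    // The result doubles each time the argument is halved:
    // func(0) = 1, func(1) = 2, func(2) = 4, func(4) = 8, func(8) = 16.
    for (int n : {0, 1, 2, 4, 8}) {
        cout << "func(" << n << ") = " << func(n) << endl;
    }
    return 0;
}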

What is the time complexity of the following code snippet?
int count = 0;
for (int i = 0; i < n; i++) {
    for (int j = i; j < n; j++) {
        count++;
    }
}
cout << count;
  • a)
    O(n)
  • b)
    O(n^2)
  • c)
    O(n log n)
  • d)
    O(1)
Correct answer is option 'B'. Can you explain this answer?

The outer loop runs n times, and the inner loop runs n - i times. The total number of iterations is 1 + 2 + 3 + ... + n, which is equivalent to (n * (n + 1)) / 2. Hence, the time complexity is O(n^2).

What will be the output of the following code?
int n = 4;
int count = 0;
for (int i = n; i >= 1; i = i / 2) {
    for (int j = 1; j <= i; j++) {
        count++;
    }
}
cout << count;
  • a)
    6
  • b)
    7
  • c)
    8
  • d)
    9
Correct answer is option 'D'. Can you explain this answer?

The outer loop starts at i = n = 4 and halves i each iteration, so it runs with i = 4, 2, 1 (that is, log2(n) + 1 = 3 iterations). The inner loop runs i times for each of these values, so the total count is 4 + 2 + 1 = 7. The output is therefore 7, which corresponds to option 'B' rather than the listed 'D'.

What is the output of the following code snippet?
int count = 0;
for (int i = 1; i <= n; i *= 2) {
    count++;
}
cout << count;
  • a)
    n
  • b)
    log(n)
  • c)
    sqrt(n)
  • d)
    n^2
Correct answer is option 'B'. Can you explain this answer?

The loop doubles i in each iteration until it exceeds n, so the body executes ⌊log2(n)⌋ + 1 times. The printed count therefore grows logarithmically with n, which matches option 'B', log(n).

What will be the output of the following code?
int n = 100;
int count = 0;
while (n > 0) {
    n = n / 2;
    count++;
}
cout << count;
  • a)
    6
  • b)
    7
  • c)
    8
  • d)
    9
Correct answer is option 'B'. Can you explain this answer?

Diksha Sharma answered
The code halves n repeatedly until it reaches 0: 100 → 50 → 25 → 12 → 6 → 3 → 1 → 0. That is ⌊log2(100)⌋ + 1 = 6 + 1 = 7 iterations, so the output is 7.

