
Asymptotic Worst Case Time & Space Complexity

Analysis of Algorithms (Asymptotic Analysis)

Why performance analysis?

Software has many desirable properties: user-friendliness, modularity, security and maintainability. Performance is central because, without acceptable performance, these other qualities are difficult to deliver in practice; performance is the currency with which we buy them. A second practical reason to study performance is simple: faster programs are more usable and often more enjoyable to work with.

Asymptotic Analysis - the central idea

One naive approach to compare two algorithms is to implement both and measure running time on a given machine for various inputs. This empirical approach has several drawbacks:

  1. Different inputs may favour different algorithms: algorithm A may be faster for some inputs while algorithm B is faster for others.
  2. Machine-dependent factors (processor speed, compiler optimisations, libraries, caches) affect observed running time; an algorithm that is faster on one machine may be slower on another for the same input.

Asymptotic analysis addresses these issues by studying how the running time (or space) of an algorithm grows with the input size n, ignoring machine-dependent constants and low-order terms. Asymptotic analysis therefore gives a machine-independent, input-size dependent measure of efficiency.

Example. Searching for a value in a sorted array can be done by linear search (order of growth: linear) or binary search (order of growth: logarithmic). Even if linear search runs on a much faster machine, for sufficiently large n binary search will outperform linear search because logarithmic growth eventually becomes smaller than linear growth regardless of constant factors.

Limitations of asymptotic analysis

Asymptotic analysis is not perfect. Two algorithms with the same asymptotic growth may differ by large constant factors. For example, if Algorithm A takes 1000·n·log n time and Algorithm B takes 2·n·log n time, both are Θ(n log n) but Algorithm B is significantly faster for all realistic n. Also, asymptotic claims are about sufficiently large n; if your application only ever processes small inputs, an asymptotically slower algorithm may be practically better.

Worst, Average and Best Case Analyses

When we analyse an algorithm we may consider three kinds of inputs to determine the running time:

  1. Worst case
  2. Average case
  3. Best case

Worked example: Linear Search

#include <stdio.h>

/* Linearly search x in arr[]. If x is present then return the index,
   otherwise return -1. */
int search(int arr[], int n, int x)
{
    int i;
    for (i = 0; i < n; i++) {
        if (arr[i] == x)
            return i;
    }
    return -1;
}

/* Driver program to test the above function */
int main()
{
    int arr[] = {1, 10, 30, 15};
    int x = 30;
    int n = sizeof(arr) / sizeof(arr[0]);
    printf("%d is present at index %d", x, search(arr, n, x));
    getchar();
    return 0;
}
Worst Case Analysis (usually preferred)

Worst-case analysis provides an upper bound on running time. For the linear search above, the worst case occurs when x is not present (or is at the last position). The function compares x with all n elements, so the worst-case time complexity is Θ(n). Worst-case guarantees are useful because they bound the maximum cost.

Average Case Analysis

Average-case analysis computes the expected running time over a specified distribution of inputs. One must assume or know the input distribution to compute the average. For linear search, if we assume that x is equally likely to be at any of the n positions and also allow the possibility that x is not present with equal probability, then the average requires summing the cost over all these cases and dividing by the number of cases (n + 1). Average-case analysis is informative but often hard because it requires a realistic probability model for inputs.
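Carrying out that computation for linear search under the uniform model just described: finding x at position i costs i comparisons, and an unsuccessful search costs n comparisons, so

average cost = (1 + 2 + ... + n + n) / (n + 1) = (n(n + 1)/2 + n) / (n + 1) = n/2 + n/(n + 1),

which is Θ(n).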


Best Case Analysis

Best-case analysis gives a lower bound on running time. For linear search the best case occurs when x is at the first position, giving Θ(1) time. Best-case bounds are of limited practical use because they do not guarantee performance on typical or adversarial inputs.

Some algorithms have the same asymptotic cost in all cases. For example, Merge Sort performs Θ(n log n) work in best, average and worst cases. Other algorithms exhibit different behaviours depending on the input: QuickSort can be Θ(n^2) in the worst case (for some pivot choices) and Θ(n log n) on average; Insertion Sort is Θ(n) in the best case (already sorted input) and Θ(n^2) in the worst case (reverse sorted input).

Asymptotic Notations

Asymptotic notations are mathematical tools used to express how a function grows with input size n. The three standard notations are:

  1. Θ (Theta) - tight bound (upper and lower)
  2. O (Big O) - asymptotic upper bound
  3. Ω (Big Omega) - asymptotic lower bound

1) Θ-notation

Theta notation describes exact asymptotic behaviour by providing both an upper and a lower bound. Informally, f(n) = Θ(g(n)) if f(n) grows at the same rate as g(n) up to constant factors for sufficiently large n.

Formal definition:

Θ(g(n)) = { f(n) : there exist positive constants c1, c2 and n0 such that 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0 }.

Example. 3n^3 + 6n^2 + 6000 = Θ(n^3). Drop lower-order terms and ignore constant multipliers to get the dominant term.
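One concrete choice of witnesses for this example (a sketch; many other choices work): take c1 = 3, c2 = 5 and n0 = 19. For all n ≥ 19 we have 6n^2 ≤ n^3 and 6000 ≤ n^3, hence 3·n^3 ≤ 3n^3 + 6n^2 + 6000 ≤ 5·n^3, which is exactly what the definition of Θ(n^3) requires.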

2) O-notation (Big O)

Big O notation gives an asymptotic upper bound: f(n) = O(g(n)) means f(n) does not grow faster than g(n) up to a constant factor for large n.

Formal definition:

O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0 }.

Example. Insertion Sort runs in O(n^2) time because its worst-case running time is proportional to n^2. Note that O(n^2) also includes faster behaviours such as linear time; O-notation is an upper bound and is often used when the worst-case cost is of interest.
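To make the point that O(n^2) also admits faster functions concrete: f(n) = 10n + 7 satisfies 0 ≤ 10n + 7 ≤ 17·n^2 for all n ≥ 1 (take c = 17 and n0 = 1), so f(n) = O(n^2) even though f grows only linearly.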

3) Ω-notation (Big Omega)

Big Omega provides an asymptotic lower bound: f(n) = Ω(g(n)) means f(n) grows at least as fast as g(n) up to constant factors for sufficiently large n.

Formal definition:

Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n0 }.

Example. Insertion Sort is Ω(n) because in the best case it runs in linear time. Ω-notation is used less frequently than O-notation because lower bounds (best-case behaviour) are less useful for guaranteeing performance.

Common growth rates (from smallest to largest, for large n)

  • 1 (constant)
  • log log n
  • log n
  • n^c for constant 0 < c < 1 (e.g. √n)
  • n (linear)
  • n·log n
  • n^2, n^3, ... (polynomial)
  • c^n for constant c > 1 (exponential)
  • n! (factorial)

Exercises

Which of the following statements is/are valid?

1. Time Complexity of QuickSort is Θ(n^2)

2. Time Complexity of QuickSort is O(n^2)

3. For any two functions f(n) and g(n), we have f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)).

4. Time complexity of all computer algorithms can be written as Ω(1)

Analysis of Loops and Simple Program Fragments

We summarise common loop patterns and how to reason about their time complexity.

O(1)

  • A block of statements without loops or recursion is O(1).
  • Loops that execute a constant number of times (independent of n) are O(1). Example: for (i = 1; i <= c; i++) { ... } for constant c.

O(n)

  • A loop that increments or decrements its index by a constant amount and runs Θ(n) iterations is O(n). Example: for (i = 1; i <= n; i += c) { ... }.

O(n^c)

  • Nested loops typically multiply their iteration counts. Two nested loops each running Θ(n) times give O(n^2).
  • Example: a double loop in which the inner loop runs Θ(n) times for each value of the outer index results in O(n^2); see the sketch below.
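A minimal C sketch of the nested-loop pattern (the loop bodies are placeholders for any O(1) work):

for (int i = 1; i <= n; i += c) {      /* outer loop: Θ(n) iterations */
    for (int j = 1; j <= n; j += c) {  /* inner loop: Θ(n) iterations per outer iteration */
        /* some O(1) expressions */
    }
}
/* total work: Θ(n) · Θ(n) = O(n^2) */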

O(log n)

  • If the loop variable is multiplied or divided by a constant factor each iteration, the loop runs Θ(log n) times. Example: for (i = 1; i <= n; i *= c) { ... }.
  • Binary Search (iterative) is O(log n); see the sketch below.
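A standard iterative binary search in C (a sketch; assumes arr[] is sorted in ascending order). Each iteration halves the interval under consideration, so the loop runs at most O(log n) times:

/* Return the index of x in sorted arr[0..n-1], or -1 if absent. */
int binarySearch(int arr[], int n, int x)
{
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;  /* like (low+high)/2, but avoids overflow */
        if (arr[mid] == x)
            return mid;                    /* found */
        else if (arr[mid] < x)
            low = mid + 1;                 /* discard the left half */
        else
            high = mid - 1;                /* discard the right half */
    }
    return -1;                             /* not present */
}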

O(log log n)

  • If the loop variable changes by repeated exponentiation or repeated root extraction, the number of iterations can be Θ(log log n). Example: for (i = 2; i <= n; i = pow(i, c)) { ... } for constant c > 1, or loops that take repeated square roots; see the sketch below.
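A minimal C sketch of the c = 2 case (repeated squaring): after k iterations i = 2^(2^k), so the loop terminates once 2^(2^k) > n, i.e. after Θ(log log n) iterations.

#include <stdio.h>

int main(void)
{
    long long n = 1000000000LL;               /* 10^9 */
    int iterations = 0;
    for (long long i = 2; i <= n; i = i * i)  /* i: 2, 4, 16, 256, 65536, ... */
        iterations++;
    printf("%d iterations\n", iterations);    /* prints 5; grows as log log n */
    return 0;
}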

Combining consecutive loops

If loops are consecutive (not nested), their costs add. Example:

for (int i = 1; i <= m; i += c) {
    /* some O(1) expressions */
}
for (int i = 1; i <= n; i += c) {
    /* some O(1) expressions */
}

The total cost is O(m) + O(n) = O(m + n). If m = n this simplifies to O(n).

If/else blocks inside loops

When analysing worst-case time complexity, consider the branches that cause the maximum number of operations. If the control flow is too complex to trace exactly, obtain an upper bound by assuming the most expensive branch is taken on every iteration.

Recursive functions and recurrences

Recursive procedures are analysed by writing a recurrence relation for their running time and solving the recurrence. Standard methods to solve recurrences include substitution, recursion-tree method and the Master Theorem (for divide-and-conquer recurrences of the form T(n) = a·T(n/b) + f(n)).
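As a worked instance (Merge Sort, which fits the divide-and-conquer form directly): T(n) = 2·T(n/2) + Θ(n), so a = 2, b = 2 and f(n) = Θ(n). Here n^(log_b a) = n^(log_2 2) = n, which matches f(n), so the balanced case of the Master Theorem applies and T(n) = Θ(n log n), consistent with the Merge Sort costs quoted earlier.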

Space Complexity

Auxiliary space is the extra temporary space used by an algorithm (excluding the space used by the inputs).

Space complexity of an algorithm is the total space used by the algorithm with respect to input size; it includes auxiliary space plus the space required to store the inputs.

Example. When comparing sorting algorithms by memory usage, auxiliary space is the relevant measure. Merge Sort uses O(n) auxiliary space, while Insertion Sort and Heap Sort use O(1) auxiliary space. However, the space complexity of all these algorithms, considering input storage, is O(n).
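A small C sketch contrasting the two measures (function names are illustrative, not from any particular library): both functions take n input elements, so the total space complexity is O(n) either way, but only the second uses O(n) auxiliary space.

#include <stdlib.h>

/* O(1) auxiliary space: reverse arr in place, using a constant number of temporaries. */
void reverseInPlace(int arr[], int n)
{
    for (int i = 0, j = n - 1; i < j; i++, j--) {
        int tmp = arr[i];
        arr[i] = arr[j];
        arr[j] = tmp;
    }
}

/* O(n) auxiliary space: allocate a second buffer and build a reversed copy. */
int *reversedCopy(const int arr[], int n)
{
    int *out = malloc(n * sizeof *out);  /* extra space proportional to n */
    if (out != NULL)
        for (int i = 0; i < n; i++)
            out[i] = arr[n - 1 - i];
    return out;                          /* caller frees */
}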


FAQs on Asymptotic Worst Case Time & Space Complexity

1. What is the meaning of asymptotic worst-case time complexity in computer science engineering?
Ans. Asymptotic worst-case time complexity in computer science engineering refers to the estimation of the maximum amount of time required by an algorithm to run, as the input size approaches infinity. It helps in analyzing the efficiency and performance of algorithms by providing an upper bound on the execution time.
2. How is asymptotic worst-case space complexity different from time complexity in computer science engineering?
Ans. Asymptotic worst-case space complexity in computer science engineering refers to the estimation of the maximum amount of memory space required by an algorithm to run, as the input size approaches infinity. It measures the amount of memory space used by an algorithm. On the other hand, asymptotic worst-case time complexity measures the amount of time taken by an algorithm to run. Both complexities help in analyzing the efficiency of algorithms, but they focus on different resources.
3. Why is it important to analyze the asymptotic worst-case time complexity of an algorithm?
Ans. Analyzing the asymptotic worst-case time complexity of an algorithm is important because it provides an understanding of how the algorithm's performance scales with increasing input size. It helps in identifying algorithms that are efficient for large inputs and avoids the selection of algorithms that may become impractical as the input size grows. By analyzing the time complexity, engineers can make informed decisions regarding algorithm selection and optimization.
4. How can the asymptotic worst-case time complexity of an algorithm be represented?
Ans. The asymptotic worst-case time complexity of an algorithm is commonly represented using Big O notation. It provides an upper bound on the growth rate of an algorithm's execution time as the input size increases. For example, if an algorithm has a time complexity of O(n), it means that the algorithm's execution time grows linearly with the input size. Similarly, if an algorithm has a time complexity of O(n^2), it means that the execution time grows quadratically with the input size.
5. Can the asymptotic worst-case time complexity of an algorithm change for different inputs?
Ans. Yes, the asymptotic worst-case time complexity of an algorithm can vary for different inputs. An algorithm may have different time complexities depending on the characteristics of the input data. For example, an algorithm may have a different time complexity for sorted input compared to unsorted input. However, when analyzing the asymptotic worst-case time complexity, it is assumed that the input is in the worst-case scenario, which represents the maximum amount of time required for any input of a given size.