Why performance analysis?
Software has many desirable properties: user-friendliness, modularity, security, maintainability. Performance matters because, without acceptable performance, these other qualities are difficult to deliver in practice; performance is the currency with which we buy them. A second, more direct reason to study performance is that faster programs are more usable and often more enjoyable to work with.
One naive approach to comparing two algorithms is to implement both and measure their running times on a given machine for various inputs. This empirical approach has several drawbacks: the results depend on the particular machine, compiler and system load; they vary with whichever inputs happen to be chosen; and both algorithms must be fully implemented before any comparison can be made.
Asymptotic analysis addresses these issues by studying how the running time (or space) of an algorithm grows with the input size n, ignoring machine-dependent constants and low-order terms. Asymptotic analysis therefore gives a machine-independent, input-size dependent measure of efficiency.
Example. Searching for a value in a sorted array can be done by linear search (order of growth: linear) or binary search (order of growth: logarithmic). Even if linear search runs on a much faster machine, for sufficiently large n binary search will outperform linear search because logarithmic growth eventually becomes smaller than linear growth regardless of constant factors.
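As a minimal sketch of the logarithmic alternative (the function name and signature here are ours), binary search halves the candidate range on every iteration, so the loop runs O(log n) times:

// Binary search in a sorted array arr[0..n-1]: return the index of x,
// or -1 if x is not present. Each iteration halves the range [lo, hi].
int binary_search(const int arr[], int n, int x)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;  // same as (lo + hi) / 2, without overflow
        if (arr[mid] == x)
            return mid;
        else if (arr[mid] < x)
            lo = mid + 1;  // x, if present, lies in the right half
        else
            hi = mid - 1;  // x, if present, lies in the left half
    }
    return -1;
}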
Limitations of asymptotic analysis
Asymptotic analysis is not perfect. Two algorithms with the same asymptotic growth may differ by large constant factors. For example, if Algorithm A takes 1000·n·log n time and Algorithm B takes 2·n·log n time, both are Θ(n log n) but Algorithm B is significantly faster for all realistic n. Also, asymptotic claims are about sufficiently large n; if your application only ever processes small inputs, an asymptotically slower algorithm may be practically better.
When we analyse an algorithm we may consider three kinds of inputs to determine the running time: the worst case, the average case and the best case. The linear search routine below serves as the running example for all three:
#include <stdio.h>

// Linearly search x in arr[]. If x is present then return the index,
// otherwise return -1.
int search(int arr[], int n, int x)
{
    int i;
    for (i = 0; i < n; i++) {
        if (arr[i] == x)
            return i;
    }
    return -1;
}

/* Driver program to test above function */
int main()
{
    int arr[] = {1, 10, 30, 15};
    int x = 30;
    int n = sizeof(arr) / sizeof(arr[0]);
    printf("%d is present at index %d", x, search(arr, n, x));
    getchar();
    return 0;
}
Worst-case analysis provides an upper bound on running time. For the linear search above, the worst case occurs when x is not present (or is at the last position). The function compares x with all n elements, so the worst-case time complexity is Θ(n). Worst-case guarantees are useful because they bound the maximum cost.
Average-case analysis computes the expected running time over a specified distribution of inputs, so one must assume or know that distribution. For linear search, suppose x is equally likely to be at any of the n positions, and treat "x is not present" as one further equally likely case; the average is then the total cost over all these cases divided by the number of cases, n + 1. Average-case analysis is informative but often hard, because it requires a realistic probability model for the inputs.
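To make that concrete, approximate the cost of finding x at position i as i comparisons, and count the "not present" case as the (n + 1)-th case (it scans the whole array). A quick derivation then gives:

average cost ≈ (1 + 2 + ... + (n + 1)) / (n + 1) = ((n + 1)(n + 2) / 2) / (n + 1) = (n + 2) / 2 = Θ(n).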
Best-case analysis gives a lower bound on running time. For linear search the best case occurs when x is at the first position, giving Θ(1) time. Best-case bounds are of limited practical use because they do not guarantee performance on typical or adversarial inputs.
Some algorithms have the same asymptotic cost in all cases. For example, Merge Sort performs Θ(n log n) work in best, average and worst cases. Other algorithms exhibit different behaviours depending on the input: QuickSort can be Θ(n^2) in the worst case (for some pivot choices) and Θ(n log n) on average; Insertion Sort is Θ(n) in the best case (already sorted input) and Θ(n^2) in the worst case (reverse sorted input).
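To make the Insertion Sort claim concrete, here is a minimal sketch: on already sorted input the inner loop exits immediately on every pass, so the total work is linear, while on reverse-sorted input the inner loop shifts all i earlier elements, giving 1 + 2 + ... + (n - 1) = Θ(n^2) shifts.

// Insertion sort: Θ(n) best case (sorted input), Θ(n^2) worst case
// (reverse-sorted input).
void insertion_sort(int arr[], int n)
{
    for (int i = 1; i < n; i++) {
        int key = arr[i];
        int j = i - 1;
        // Shift elements larger than key one position to the right.
        // On sorted input, arr[j] > key is false immediately, so this
        // loop costs O(1) per outer iteration.
        while (j >= 0 && arr[j] > key) {
            arr[j + 1] = arr[j];
            j--;
        }
        arr[j + 1] = key;
    }
}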
Asymptotic notations are mathematical tools used to express how a function grows with input size n. The three standard notations are:
Theta notation describes exact asymptotic behaviour by providing both an upper and a lower bound. Informally, f(n) = Θ(g(n)) if f(n) grows at the same rate as g(n) up to constant factors for sufficiently large n.
Formal definition:
Θ(g(n)) = { f(n) : there exist positive constants c1, c2 and n0 such that 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0 }.
Example. 3n^3 + 6n^2 + 6000 = Θ(n^3). Drop lower-order terms and ignore constant multipliers to get the dominant term.
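To see that the definition is satisfied here, one concrete choice of constants (ours, for illustration) is c1 = 3, c2 = 4 and n0 = 21: the lower bound 3n^3 ≤ 3n^3 + 6n^2 + 6000 holds for all n, and the upper bound 3n^3 + 6n^2 + 6000 ≤ 4n^3 holds whenever 6n^2 + 6000 ≤ n^3, which is true for every n ≥ 21.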
Big O notation gives an asymptotic upper bound: f(n) = O(g(n)) means f(n) does not grow faster than g(n) up to a constant factor for large n.
Formal definition:
O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0 }.
Example. Insertion Sort runs in O(n^2) time because its worst-case running time is proportional to n^2. Note that O(n^2) also includes faster behaviours like linear time; O-notation is an upper bound and is often used when the worst-case cost is of interest.
Big Omega provides an asymptotic lower bound: f(n) = Ω(g(n)) means f(n) grows at least as fast as g(n) up to constant factors for sufficiently large n.
Formal definition:
Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n0 }.
Example. Insertion Sort is Ω(n) because in the best case it runs in linear time. Ω-notation is used less frequently than O-notation because lower bounds (best-case behaviour) are less useful for guaranteeing performance.
Which of the following statements is/are valid?
1. Time Complexity of QuickSort is Θ(n^2)
2. Time Complexity of QuickSort is O(n^2)
3. For any two functions f(n) and g(n), we have f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)).
4. Time complexity of all computer algorithms can be written as Ω(1)
We summarise common loop patterns and how to reason about their time complexity.
If loops are consecutive (not nested), their costs add. Example, where c is a positive constant stride:
for (int i = 1; i <= m; i += c) {
    // O(1) work
}
for (int i = 1; i <= n; i += c) {
    // O(1) work
}
The total cost is O(m) + O(n) = O(m + n). If m = n this simplifies to O(n).
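Two further common patterns, sketched here for completeness (the loop bounds m and n are ours): nested loops multiply their iteration counts, and a loop whose counter is scaled by a constant factor runs a logarithmic number of times.

// Nested loops multiply: the body executes m * n times, so O(m * n).
for (int i = 0; i < m; i++) {
    for (int j = 0; j < n; j++) {
        // O(1) work
    }
}

// A counter multiplied by a constant each iteration: the loop runs
// about log2(n) times, so O(log n).
for (int i = 1; i <= n; i *= 2) {
    // O(1) work
}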
When analysing worst-case time complexity, consider the branch choices that cause the maximum number of operations to be executed. If the code is too complex to analyse exactly, obtain an upper bound by assuming the most expensive branch is taken on every iteration, as in the sketch below.
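A minimal sketch of this rule (the function and its task are invented purely for illustration): whether or not the cheap test succeeds at run time, charging every iteration the cost of the expensive branch yields an O(n^2) upper bound.

// For each even element, count how many times it occurs in the array.
// The test is O(1); the expensive branch is an O(n) scan. Charging
// every one of the n iterations the O(n) branch cost bounds the total
// at n * O(n) = O(n^2).
int count_even_occurrences(const int arr[], int n)
{
    int total = 0;
    for (int i = 0; i < n; i++) {
        if (arr[i] % 2 == 0) {
            for (int j = 0; j < n; j++)
                if (arr[j] == arr[i])
                    total++;
        }
        // else: nothing to do, O(1)
    }
    return total;
}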
Recursive procedures are analysed by writing a recurrence relation for their running time and solving the recurrence. Standard methods to solve recurrences include substitution, recursion-tree method and the Master Theorem (for divide-and-conquer recurrences of the form T(n) = a·T(n/b) + f(n)).
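For example, Merge Sort satisfies T(n) = 2·T(n/2) + Θ(n): two half-size subproblems plus a linear-time merge. Applying the Master Theorem with a = 2, b = 2 and f(n) = Θ(n), we have n^(log_b a) = n^(log_2 2) = n, so f(n) matches the critical term and T(n) = Θ(n log n), in agreement with the Merge Sort bound quoted earlier.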
Auxiliary space is the extra temporary space used by an algorithm (excluding the space used by the inputs).
Space complexity of an algorithm is the total space used by the algorithm with respect to input size; it includes auxiliary space plus the space required to store the inputs.
Example. When comparing sorting algorithms by memory usage, auxiliary space is the relevant measure. Merge Sort uses O(n) auxiliary space, while Insertion Sort and Heap Sort use O(1) auxiliary space. However, the space complexity of all these algorithms, considering input storage, is O(n).
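A small sketch of the distinction (both function names are ours): the first routine uses O(1) auxiliary space, while the second allocates a temporary buffer as large as its input and so uses O(n) auxiliary space; counting the input as well, both have O(n) space complexity.

#include <stdlib.h>

// O(1) auxiliary space: only a fixed number of scalar variables
// beyond the input array.
long sum_array(const int arr[], int n)
{
    long total = 0;
    for (int i = 0; i < n; i++)
        total += arr[i];
    return total;
}

// O(n) auxiliary space: allocates a temporary array as large as the
// input. The caller is responsible for freeing the returned buffer.
int *reversed_copy(const int arr[], int n)
{
    int *out = malloc(n * sizeof(int));
    if (out == NULL)
        return NULL;
    for (int i = 0; i < n; i++)
        out[i] = arr[n - 1 - i];
    return out;
}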
Frequently asked questions
1. What is the meaning of asymptotic worst-case time complexity in computer science engineering?
2. How is asymptotic worst-case space complexity different from time complexity in computer science engineering?
3. Why is it important to analyze the asymptotic worst-case time complexity of an algorithm?
4. How can the asymptotic worst-case time complexity of an algorithm be represented?
5. Can the asymptotic worst-case time complexity of an algorithm change for different inputs?