Comprehensive Algorithm Complexity Analysis Cheat Sheet: Big O Notation Guide

Introduction: What is Complexity Analysis and Why It Matters

Complexity analysis is a framework for measuring algorithm efficiency in terms of time (execution speed) and space (memory usage). It provides a mathematical way to describe how the performance of an algorithm scales with input size, regardless of hardware or implementation details.

Why Complexity Analysis Matters:

  • Helps predict performance on large datasets
  • Enables informed algorithm selection for specific problems
  • Serves as a universal language for comparing algorithms
  • Critical for optimizing applications and systems
  • Essential knowledge for technical interviews and professional software development

Core Concepts: Understanding Big O Notation

Asymptotic Notation Types

| Notation | Name | Description | Practical Meaning |
|----------|------|-------------|-------------------|
| O(f(n)) | Big O | Upper bound | Running time grows no faster than f(n) |
| Ω(f(n)) | Big Omega | Lower bound | Running time grows no slower than f(n) |
| Θ(f(n)) | Big Theta | Tight bound | Running time grows exactly as fast as f(n) |
| o(f(n)) | Little o | Strict upper bound | Running time grows strictly slower than f(n) |
| ω(f(n)) | Little omega | Strict lower bound | Running time grows strictly faster than f(n) |

Common Complexity Classes (From Best to Worst)

| Complexity | Name | Description | Example |
|------------|------|-------------|---------|
| O(1) | Constant | Performance is independent of input size | Array access, hash table lookup |
| O(log n) | Logarithmic | Performance increases logarithmically with input | Binary search, balanced BST operations |
| O(n) | Linear | Performance scales linearly with input | Linear search, single pass through array |
| O(n log n) | Linearithmic | Performance is between linear and quadratic | Efficient sorting (merge, heap, quick) |
| O(n²) | Quadratic | Performance scales with the square of input size | Nested loops, bubble sort |
| O(n³) | Cubic | Performance scales with the cube of input size | Simple matrix multiplication |
| O(2ⁿ) | Exponential | Work doubles with each additional input element | Recursive Fibonacci, generating subsets |
| O(n!) | Factorial | Performance grows factorially with input | Generating permutations, brute-force TSP |
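
To make the first few classes in the table above concrete, here is a small sketch in Python; the function names are illustrative, not standard library APIs:

```python
def constant_lookup(arr):          # O(1): one operation regardless of len(arr)
    return arr[0]

def linear_search(arr, target):    # O(n): up to one comparison per element
    for i, value in enumerate(arr):
        if value == target:
            return i
    return -1

def binary_search(arr, target):    # O(log n): halves the search range each step
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        if arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def all_pairs(arr):                # O(n²): nested loops over the same input
    return [(a, b) for a in arr for b in arr]
```

Note that `binary_search` assumes its input is already sorted; on unsorted data only `linear_search` is correct.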

Key Rules for Big O Calculation

  1. Drop Constants: O(2n) → O(n), O(500) → O(1)
  2. Drop Lower-Order Terms: O(n² + n) → O(n²), O(n + log n) → O(n)
  3. Variables in Complexity: If an algorithm processes inputs of sizes m and n, the complexity may be expressed as O(m × n)
  4. Consider Worst Case: By default, Big O represents worst-case performance
  5. Variables Matter: O(n + m) is not the same as O(n × m)
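
Rule 5 in practice, as a minimal sketch (function names are illustrative): sequential passes over two inputs add their sizes, while nested loops multiply them.

```python
def concat_scan(a, b):        # one pass over each list: O(n + m)
    total = 0
    for x in a:               # n iterations
        total += x
    for y in b:               # m iterations
        total += y
    return total

def cross_pairs(a, b):        # nested loops over both lists: O(n × m)
    return [(x, y) for x in a for y in b]
```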

Step-by-Step Process for Analyzing Algorithm Complexity

  1. Identify the Input Size(s)

    • Determine what variable(s) represent the input size (n, m, etc.)
    • For multiple inputs, track each size separately
  2. Count Basic Operations

    • Identify the fundamental operations (comparisons, assignments, arithmetic)
    • Determine how many times each operation is executed
    • Focus on operations inside the innermost loops
  3. Express as a Function of Input Size

    • Write a function that relates operation count to input size
    • Account for best, worst, and average cases when applicable
  4. Simplify Using Big O Rules

    • Remove constants: O(10n) → O(n)
    • Keep only the highest-order term: O(n² + 3n) → O(n²)
    • Combine variables appropriately: O(n + m) or O(n × m)
  5. Verify Your Analysis

    • Test with small examples
    • Consider edge cases (empty input, single element, etc.)
    • Cross-check with known complexities of similar algorithms
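
As a worked example of the steps above, consider checking an array for duplicates (function names are illustrative):

```python
def has_duplicate_quadratic(arr):
    # Step 1: input size n = len(arr).
    # Step 2: the comparison arr[i] == arr[j] is the basic operation.
    # Step 3: it runs n(n-1)/2 times in the worst case (no duplicate found).
    # Step 4: O(n²/2 - n/2) simplifies to O(n²).
    n = len(arr)
    for i in range(n):
        for j in range(i + 1, n):
            if arr[i] == arr[j]:
                return True
    return False

def has_duplicate_linear(arr):
    # Trading O(n) extra space for O(1)-average set lookups gives O(n) time.
    seen = set()
    for value in arr:
        if value in seen:
            return True
        seen.add(value)
    return False
```

Step 5 (verification): both versions agree on small inputs such as an empty list, a single element, and a list whose only duplicate is at the end.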

Complexity Analysis by Algorithm Type

Sorting Algorithms

| Algorithm | Best Case | Average Case | Worst Case | Space Complexity | Stable | Notes |
|-----------|-----------|--------------|------------|------------------|--------|-------|
| Bubble Sort | O(n) | O(n²) | O(n²) | O(1) | Yes | Simple but inefficient |
| Selection Sort | O(n²) | O(n²) | O(n²) | O(1) | No | Always makes n(n-1)/2 comparisons |
| Insertion Sort | O(n) | O(n²) | O(n²) | O(1) | Yes | Efficient for small or nearly sorted data |
| Merge Sort | O(n log n) | O(n log n) | O(n log n) | O(n) | Yes | Divide-and-conquer approach |
| Quick Sort | O(n log n) | O(n log n) | O(n²) | O(log n) | No | Fast in practice; the worst case is made unlikely by randomized pivot selection |
| Heap Sort | O(n log n) | O(n log n) | O(n log n) | O(1) | No | In-place sorting with guaranteed performance |
| Radix Sort | O(nk) | O(nk) | O(nk) | O(n+k) | Yes | Non-comparison sort; k is the number of digits |
| Counting Sort | O(n+k) | O(n+k) | O(n+k) | O(n+k) | Yes | Non-comparison sort; k is the range of input values |
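
The O(n log n) bound for merge sort can be seen directly in a short sketch: the recursion produces about log n levels of splitting, and each level does O(n) total merge work.

```python
def merge_sort(arr):
    # log n levels of splitting, each doing O(n) merge work: O(n log n) total.
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # the O(n) merge step
        if left[i] <= right[j]:               # <= keeps equal keys in order (stable)
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```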

Data Structure Operations

| Data Structure | Access | Search | Insertion | Deletion | Space | Notes |
|----------------|--------|--------|-----------|----------|-------|-------|
| Array | O(1) | O(n) | O(n) | O(n) | O(n) | Contiguous memory |
| Linked List | O(n) | O(n) | O(1)* | O(1)* | O(n) | *Given a pointer to the node; O(n) to locate it first |
| Stack | O(n) | O(n) | O(1) | O(1) | O(n) | LIFO principle |
| Queue | O(n) | O(n) | O(1) | O(1) | O(n) | FIFO principle |
| Hash Table | N/A | O(1)* | O(1)* | O(1)* | O(n) | *Average/amortized; O(n) worst case with many collisions |
| Binary Search Tree | O(log n)* | O(log n)* | O(log n)* | O(log n)* | O(n) | *If balanced; O(n) worst case for a degenerate tree |
| AVL/Red-Black Tree | O(log n) | O(log n) | O(log n) | O(log n) | O(n) | Self-balancing BSTs |
| B-Tree | O(log n) | O(log n) | O(log n) | O(log n) | O(n) | Self-balancing; well suited to disk storage |
| Heap | O(1)* | O(n) | O(log n) | O(log n) | O(n) | *O(1) access applies to the max/min element only |
| Trie | O(k) | O(k) | O(k) | O(k) | O(n×k) | k is the key length; good for strings |
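
The heap row above maps directly onto Python's standard `heapq` module, which implements a binary min-heap on a plain list:

```python
import heapq

items = [7, 2, 9, 4]
heap = []
for x in items:
    heapq.heappush(heap, x)   # O(log n) per insertion

smallest = heap[0]            # O(1) peek at the minimum element
popped = heapq.heappop(heap)  # O(log n) removal of the minimum
```

Searching for an arbitrary element still requires scanning the underlying list, which is why the Search column is O(n).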

Graph Algorithms

| Algorithm | Time Complexity | Space Complexity | Notes |
|-----------|-----------------|------------------|-------|
| Breadth-First Search | O(V + E) | O(V) | Uses a queue; finds shortest paths in unweighted graphs |
| Depth-First Search | O(V + E) | O(V) | Uses a stack or recursion; helpful for topological sorting |
| Dijkstra's Algorithm | O(V²) with an array, O((V+E) log V) with a binary heap | O(V) | Shortest paths in weighted graphs with non-negative edges |
| Bellman-Ford | O(V × E) | O(V) | Works with negative weights; detects negative cycles |
| Floyd-Warshall | O(V³) | O(V²) | All-pairs shortest paths |
| Prim's Algorithm | O(E log V) | O(V) | Minimum spanning tree |
| Kruskal's Algorithm | O(E log E), equivalently O(E log V) | O(V) | Minimum spanning tree |
| Topological Sort | O(V + E) | O(V) | Only works on DAGs (directed acyclic graphs) |
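
The O(V + E) bound for BFS follows because each vertex enters the queue at most once and each edge is examined at most once. A minimal sketch, assuming the graph is given as an adjacency-list dict:

```python
from collections import deque

def bfs_distances(graph, start):
    # Each vertex is enqueued once and each edge examined once: O(V + E).
    dist = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:          # visit each vertex only once
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist
```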

Dynamic Programming Problems

| Problem Type | Time Complexity | Space Complexity | Example Problems |
|--------------|-----------------|------------------|------------------|
| 1D DP | O(n) to O(n²) | O(n) or O(1)* | Fibonacci, Maximum Subarray, Climbing Stairs |
| 2D DP | O(n × m) | O(n × m) or O(min(n, m))* | Longest Common Subsequence, Edit Distance, Knapsack |
| Grid-based DP | O(n × m) | O(n × m) or O(m)* | Unique Paths, Minimum Path Sum |
| Interval DP | O(n³) | O(n²) | Matrix Chain Multiplication, Optimal BST |
| Tree DP | O(n) | O(n) or O(height) | Diameter of Binary Tree, House Robber III |
| Bitmask DP | O(n × 2ⁿ) | O(2ⁿ) | Traveling Salesman Problem, Subset Sum |

*With space optimization techniques
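
A sketch of the 1D DP row with the space optimization applied, using the Climbing Stairs problem (ways(n) = ways(n-1) + ways(n-2)):

```python
def climbing_stairs(n):
    # Keeping only the last two values reduces space from O(n) to O(1)
    # while time stays O(n).
    prev, cur = 1, 1          # ways to reach step 0 and step 1
    for _ in range(2, n + 1):
        prev, cur = cur, prev + cur
    return cur
```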

Common Complexity Analysis Challenges and Solutions

Challenge 1: Recursive Algorithm Analysis

Problem: Determining the complexity of recursive algorithms is difficult.

Solution:

  • Use the Master Theorem for divide-and-conquer algorithms
  • Write and solve recurrence relations
  • Draw recursion trees to visualize calls

Master Theorem for Recurrences of Form T(n) = aT(n/b) + f(n)

Where a ≥ 1, b > 1, and f(n) is asymptotically positive:

  1. If f(n) = O(n^(log_b(a)-ε)) for some ε > 0, then T(n) = Θ(n^(log_b a))
  2. If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) × log n)
  3. If f(n) = Ω(n^(log_b(a)+ε)) for some ε > 0, and if a×f(n/b) ≤ c×f(n) for some c < 1, then T(n) = Θ(f(n))
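
For the common special case f(n) = Θ(n^d), the three cases reduce to comparing d with log_b(a); the regularity condition in case 3 holds automatically for polynomials. A small sketch of that comparison (the function name is illustrative):

```python
import math

def master_theorem(a, b, d):
    # Solves T(n) = a·T(n/b) + Θ(n^d) by comparing d with log_b(a).
    crit = math.log(a, b)                     # the critical exponent log_b(a)
    if d < crit:
        return f"Θ(n^{crit:g})"               # case 1: recursion dominates
    if d == crit:
        return f"Θ(n^{d:g} log n)"            # case 2: balanced
    return f"Θ(n^{d:g})"                      # case 3: f(n) dominates
```

For example, merge sort satisfies T(n) = 2T(n/2) + Θ(n), so a = 2, b = 2, d = 1 falls under case 2 and gives Θ(n log n).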

Challenge 2: Amortized Analysis

Problem: Some operations have varying costs depending on previous operations.

Solution: Use amortized analysis techniques:

  • Aggregate Method: Total cost averaged over sequence of operations
  • Accounting Method: Charge extra for cheap operations to pay for expensive ones
  • Potential Method: Define a potential function to measure “stored work”
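
The aggregate method on a doubling dynamic array is the classic example: n appends trigger copies of size 1 + 2 + 4 + … < 2n, so total cost is O(n) and the amortized cost per append is O(1), even though a single resizing append costs O(current size). A sketch that tallies only the copy overhead:

```python
def total_copy_cost(n_appends):
    # Simulate a dynamic array that doubles its capacity when full,
    # counting how many element copies the resizes cause in total.
    copies, capacity = 0, 1
    for size in range(1, n_appends + 1):
        if size > capacity:           # resize: copy all existing elements
            copies += capacity
            capacity *= 2
        # the append itself costs 1; only the copy overhead is tallied here
    return copies
```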

Challenge 3: Multiple Variables

Problem: Algorithms with multiple input parameters.

Solution:

  • Express complexity in terms of all relevant variables
  • Consider the relationship between variables
  • Determine which variables dominate performance
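
For instance, intersecting two lists of sizes n and m can be done in O(n + m) time rather than O(n × m), with extra space bounded by the smaller input (the function name is illustrative):

```python
def common_elements(a, b):
    # Build a set from the smaller list: O(min(n, m)) space,
    # then scan the larger: O(n + m) time overall instead of O(n × m).
    small, large = (a, b) if len(a) <= len(b) else (b, a)
    lookup = set(small)
    return [x for x in large if x in lookup]
```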

Challenge 4: Space Complexity

Problem: Overlooking memory usage.

Solution:

  • Track auxiliary space (extra space used beyond input)
  • Account for recursion stack space
  • Consider in-place optimization opportunities
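
Recursion stack space is the most commonly overlooked cost. Both functions below compute the same sum, but the recursive one uses O(n) auxiliary space for stack frames while the iterative one uses O(1):

```python
def sum_recursive(arr, i=0):
    # Each nested call adds a stack frame: O(n) auxiliary space,
    # even though no explicit extra data structure is allocated.
    if i == len(arr):
        return 0
    return arr[i] + sum_recursive(arr, i + 1)

def sum_iterative(arr):
    # Same result with O(1) auxiliary space.
    total = 0
    for x in arr:
        total += x
    return total
```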

Best Practices and Practical Tips

For Algorithm Design

  • Start Simple: Begin with a working solution before optimizing
  • Pre-processing: Sometimes spending O(n) time to organize data saves more time later
  • Data Structure Selection: Choose data structures based on the most frequent operations needed
  • Space-Time Tradeoff: Consider trading memory for speed (and vice versa) depending on constraints
  • Problem Reduction: Relate new problems to known problems with established solutions
  • Divide and Conquer: Break complex problems into smaller, manageable subproblems
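
The pre-processing tip in practice: paying O(n log n) once to sort enables O(log n) membership queries afterwards via Python's standard `bisect` module, instead of O(n) per query (the `contains` helper is illustrative):

```python
import bisect

data = [42, 7, 19, 3, 88]
data.sort()                               # one-time O(n log n) cost

def contains(sorted_list, target):
    i = bisect.bisect_left(sorted_list, target)   # O(log n) binary search
    return i < len(sorted_list) and sorted_list[i] == target
```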

For Complexity Analysis

  • Focus on Dominant Factors: Identify the most expensive operations
  • Consider All Cases: Analyze best, average, and worst-case scenarios
  • Account for Hidden Costs: Be aware of library function costs (e.g., Python’s sort is O(n log n))
  • Document Assumptions: State what factors your analysis depends on
  • Empirical Verification: Test with varying input sizes to confirm analytical predictions
  • Watch for Logarithmic Factors: They significantly improve scalability (O(n) vs O(n log n))

Optimization Strategies

  • Algorithm Selection: Sometimes changing the algorithm approach entirely gives the best improvements
  • Early Termination: Exit loops as soon as a solution is found
  • Caching/Memoization: Store and reuse results of expensive function calls
  • Lazy Evaluation: Compute values only when needed
  • Preprocessing: Organize data upfront to enable faster operations later
  • Parallel Processing: Divide work across multiple cores when applicable
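
Caching/memoization in one line of standard-library Python: `functools.lru_cache` collapses the O(2ⁿ) naive Fibonacci recursion to O(n), since each distinct argument is computed once and then reused.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Without the cache this recursion is O(2ⁿ); with it, O(n).
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```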

Performance Visualization

Growth Rate Comparison

To visualize the dramatic differences between complexity classes, consider the time required for an algorithm to process inputs of increasing size:

| Input Size (n) | O(1) | O(log n) | O(n) | O(n log n) | O(n²) | O(2ⁿ) | O(n!) |
|----------------|------|----------|------|------------|-------|-------|-------|
| 10 | 1 | 3 | 10 | 33 | 100 | 1,024 | 3,628,800 |
| 20 | 1 | 4 | 20 | 86 | 400 | 1,048,576 | 2.4×10¹⁸ |
| 50 | 1 | 6 | 50 | 282 | 2,500 | 1.1×10¹⁵ | 3.0×10⁶⁴ |
| 100 | 1 | 7 | 100 | 664 | 10,000 | 1.3×10³⁰ | 9.3×10¹⁵⁷ |
| 1,000 | 1 | 10 | 1,000 | 9,966 | 1,000,000 | 1.1×10³⁰¹ | — |

*Values represent relative number of operations, not actual time units

Resources for Further Learning

Books

  • “Introduction to Algorithms” by Cormen, Leiserson, Rivest, and Stein
  • “Algorithm Design Manual” by Steven Skiena
  • “Algorithms” by Robert Sedgewick and Kevin Wayne
  • “Grokking Algorithms” by Aditya Bhargava (beginner-friendly)

Online Courses

  • MIT OpenCourseWare: “Introduction to Algorithms”
  • Coursera: “Algorithms Specialization” by Stanford University
  • edX: “Algorithms and Data Structures” by Microsoft

Websites and Interactive Tools

  • LeetCode, HackerRank, and CodeSignal for practice problems
  • VisuAlgo for algorithm visualization
  • Big-O Cheat Sheet (bigocheatsheet.com)
  • GeeksforGeeks for algorithm explanations

Research Papers

  • “A Note on Two Problems in Connexion with Graphs” by Edsger W. Dijkstra
  • “An Efficient Algorithm for Determining Whether a Given Binary Tree is a BST” by Valiente

Online Communities

  • Stack Overflow for specific questions
  • Computer Science Stack Exchange for theoretical discussions
  • r/algorithms and r/compsci on Reddit for community discussions