Master Theorem Explained

The Master Theorem is a formula for determining asymptotic ($\Theta$) bounds for recurrence relations of the form:

$T(n) = aT(n/b) + f(n)$

where:

  • $T(n)$ is the time complexity of the problem of size $n$.
  • $a \geq 1$ is the number of subproblems in the recursion.
  • $b > 1$ is the factor by which the subproblem size is reduced in each recursive call.
  • $f(n)$ is a function that represents the cost of the work done outside the recursive calls (e.g., the cost of dividing the problem and combining the subproblem solutions).

The Master Theorem is particularly useful for analyzing the time complexity of divide-and-conquer algorithms.

Algorithm

The Master Theorem doesn't provide an algorithm in the traditional sense, but rather a method for determining the time complexity of a recursive algorithm given its recurrence relation. The "algorithm" is the application of the theorem itself.

Here's how to apply the Master Theorem:

  1. Identify $a$, $b$, and $f(n)$: Determine the values of $a$, $b$, and the function $f(n)$ from the given recurrence relation.
  2. Compare $f(n)$ with $n^{\log_b a}$: Compare the asymptotic growth rate of $f(n)$ with that of $n^{\log_b a}$.
  3. Apply the appropriate case: Based on the comparison, apply one of the three cases of the Master Theorem to determine the time complexity $T(n)$.

Mathematics

The Master Theorem is based on the idea of comparing the cost of the work done at each level of the recursion tree with the cost of the work done in its leaves. The tree has $\log_b n$ levels and $a^{\log_b n} = n^{\log_b a}$ leaves, so the term $n^{\log_b a}$ represents the total cost of the work done at the leaves.

The theorem has three cases:

  1. Case 1: If $f(n) = O(n^{\log_b a - \epsilon})$ for some constant $\epsilon > 0$, then $T(n) = \Theta(n^{\log_b a})$.
    • In this case, the work done at the leaves dominates the overall cost.
  2. Case 2: If $f(n) = \Theta(n^{\log_b a})$, then $T(n) = \Theta(n^{\log_b a} \log n)$.
    • In this case, the work done at each level is roughly the same, and the total cost is the cost at one level multiplied by the number of levels (which is $\log_b n$).
  3. Case 3: If $f(n) = \Omega(n^{\log_b a + \epsilon})$ for some constant $\epsilon > 0$, and if $af(n/b) \leq cf(n)$ for some constant $c < 1$ and all sufficiently large $n$ (the regularity condition), then $T(n) = \Theta(f(n))$.
    • In this case, the work done at the root of the recursion dominates the overall cost. The regularity condition ensures that the cost decreases geometrically as we go down the recursion tree.

Time and Memory Complexity

The Master Theorem itself does not have time or memory complexity in the traditional sense. It's a tool for *analyzing* the time complexity of recursive algorithms. The time complexity it provides is the result of the analysis.

  • Time Complexity: The output of the Master Theorem is the time complexity, $T(n)$, of the algorithm described by the recurrence relation. This will be one of the theta cases described above: $\Theta(n^{\log_b a})$, $\Theta(n^{\log_b a} \log n)$, or $\Theta(f(n))$.
  • Memory Complexity: The Master Theorem doesn't directly tell us about the memory complexity. The memory complexity of the algorithm being analyzed would need to be determined separately, often by considering the recursion depth and the amount of memory used at each level of the recursion.

Easy Problems

Problem 1: Binary Search (Recurrence Analysis)

Problem Statement (Hypothetical): Analyze the time complexity of the classic binary search algorithm using the Master Theorem. Binary search works on a sorted array: at each step it halves the search range and continues in one half.

Input Example:

Binary Search on an array of size n.

Output Example:

Time Complexity: $\Theta(\log n)$

C++ Code (for Binary Search - for context, not directly related to Master Theorem):


#include <iostream>
#include <vector>

int binarySearch(const std::vector<int>& arr, int target) {
    int left = 0;
    int right = arr.size() - 1;

    while (left <= right) {
        int mid = left + (right - left) / 2;
        if (arr[mid] == target) {
            return mid;
        } else if (arr[mid] < target) {
            left = mid + 1;
        } else {
            right = mid - 1;
        }
    }
    return -1;
}

int main() {
    std::vector<int> arr = {2, 5, 8, 12, 16, 23, 38, 56, 72, 91};
    int target = 23;
    int index = binarySearch(arr, target);
    std::cout << "Index of " << target << ": " << index << std::endl;
    return 0;
}

Master Theorem Analysis:

The recurrence relation for Binary Search is: $T(n) = T(n/2) + O(1)$.

Here, $a = 1$, $b = 2$, and $f(n) = O(1)$.

$n^{\log_b a} = n^{\log_2 1} = n^0 = 1$.

Since $f(n) = \Theta(1) = \Theta(n^{\log_b a})$, we are in Case 2 of the Master Theorem.

Therefore, $T(n) = \Theta(n^{\log_b a} \log n) = \Theta(1 \cdot \log n) = \Theta(\log n)$.

Problem 2: Merge Sort (Recurrence Analysis)

Problem Statement (Hypothetical): Analyze the time complexity of the Merge Sort algorithm using the Master Theorem. Merge sort divides the array into two halves, recursively sorts them, and merges the sorted halves.

Input Example:

Merge Sort on an array of size n.

Output Example:

Time Complexity: $\Theta(n \log n)$

C++ Code (for Merge Sort - for context):


#include <iostream>
#include <vector>

void merge(std::vector<int>& arr, int left, int mid, int right) {
    int n1 = mid - left + 1;
    int n2 = right - mid;

    std::vector<int> L(n1);
    std::vector<int> R(n2);

    for (int i = 0; i < n1; i++)
        L[i] = arr[left + i];
    for (int j = 0; j < n2; j++)
        R[j] = arr[mid + 1 + j];

    int i = 0, j = 0, k = left;
    while (i < n1 && j < n2) {
        if (L[i] <= R[j]) {
            arr[k] = L[i];
            i++;
        } else {
            arr[k] = R[j];
            j++;
        }
        k++;
    }

    while (i < n1) {
        arr[k] = L[i];
        i++;
        k++;
    }

    while (j < n2) {
        arr[k] = R[j];
        j++;
        k++;
    }
}

void mergeSort(std::vector<int>& arr, int left, int right) {
    if (left < right) {
        int mid = left + (right - left) / 2;
        mergeSort(arr, left, mid);
        mergeSort(arr, mid + 1, right);
        merge(arr, left, mid, right);
    }
}

int main() {
    std::vector<int> arr = {12, 11, 13, 5, 6, 7};
    mergeSort(arr, 0, arr.size() - 1);
    for (int num : arr)
        std::cout << num << " ";
    std::cout << std::endl;
    return 0;
}

Master Theorem Analysis:

The recurrence relation for Merge Sort is: $T(n) = 2T(n/2) + O(n)$.

Here, $a = 2$, $b = 2$, and $f(n) = O(n)$.

$n^{\log_b a} = n^{\log_2 2} = n^1 = n$.

Since $f(n) = \Theta(n) = \Theta(n^{\log_b a})$ (the merge step always takes linear time), we are in Case 2 of the Master Theorem.

Therefore, $T(n) = \Theta(n^{\log_b a} \log n) = \Theta(n \log n)$.
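The Case 2 result can be cross-checked with the recursion tree: level $i$ has $2^i$ subproblems of size $n/2^i$, each with merge cost $c \cdot n/2^i$, so every level costs $\Theta(n)$ and there are $\log_2 n + 1$ levels:

```latex
T(n) = \sum_{i=0}^{\log_2 n} 2^i \cdot c\,\frac{n}{2^i}
     = \sum_{i=0}^{\log_2 n} c\,n
     = c\,n\,(\log_2 n + 1)
     = \Theta(n \log n)
```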

Intermediate Problems

Problem 3: Karatsuba Multiplication (Recurrence Analysis)

Problem Statement (Hypothetical): Analyze the time complexity of the Karatsuba multiplication algorithm using the Master Theorem. Karatsuba multiplication divides the n-digit numbers into smaller parts and performs three multiplications of n/2 digit numbers.
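The three-multiplication trick rests on a simple identity. Writing the inputs as $x = a \cdot 10^{h} + b$ and $y = c \cdot 10^{h} + d$ with $h = n/2$:

```latex
xy = ac \cdot 10^{2h} + (ad + bc) \cdot 10^{h} + bd,
\qquad ad + bc = (a+b)(c+d) - ac - bd
```

so only the three half-size products $ac$, $bd$, and $(a+b)(c+d)$ are needed, giving $a = 3$ in the recurrence.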

Input Example:

Karatsuba multiplication of two n-digit numbers.

Output Example:

Time Complexity: $\Theta(n^{\log_2 3})$

C++ Code (for Karatsuba - for context):


#include <iostream>
#include <string>
#include <algorithm>

std::string addStrings(const std::string& a, const std::string& b) {
    std::string result = "";
    int carry = 0;
    int i = a.length() - 1;
    int j = b.length() - 1;

    while (i >= 0 || j >= 0 || carry > 0) {
        int digitA = (i >= 0) ? a[i--] - '0' : 0;
        int digitB = (j >= 0) ? b[j--] - '0' : 0;
        int sum = digitA + digitB + carry;
        carry = sum / 10;
        result += std::to_string(sum % 10);
    }
    std::reverse(result.begin(), result.end());
    return result;
}

std::string subtractStrings(const std::string& a, const std::string& b) {
    std::string result = "";
    int borrow = 0;
    int i = a.length() - 1;
    int j = b.length() - 1;

    while (i >= 0 || j >= 0) {
        int digitA = (i >= 0) ? a[i--] - '0' : 0;
        int digitB = (j >= 0) ? b[j--] - '0' : 0;
        int diff = digitA - digitB - borrow;
        if (diff < 0) {
            diff += 10;
            borrow = 1;
        } else {
            borrow = 0;
        }
        result += std::to_string(diff);
    }
    std::reverse(result.begin(), result.end());
    size_t firstNonZero = result.find_first_not_of('0');
    if (firstNonZero == std::string::npos) {
        return "0";
    }
    return result.substr(firstNonZero);
}

std::string multiplyByPowerOf10(const std::string& s, int n) {
    if (s == "0") return "0";
    std::string result = s;
    for (int i = 0; i < n; ++i) {
        result += '0';
    }
    return result;
}

std::string karatsuba(const std::string& x, const std::string& y) {
    // Base case: fall back to built-in multiplication. Note this assumes
    // the remaining operands fit in a long long (fine for modest inputs).
    if (x.length() <= 1 || y.length() <= 1) {
        return std::to_string(std::stoll(x) * std::stoll(y));
    }

    size_t n = std::max(x.length(), y.length());
    size_t half = n / 2;

    std::string a = (x.length() > half) ? x.substr(0, x.length() - half) : "0";
    std::string b = (x.length() > half) ? x.substr(x.length() - half) : x;
    std::string c = (y.length() > half) ? y.substr(0, y.length() - half) : "0";
    std::string d = (y.length() > half) ? y.substr(y.length() - half) : y;


    std::string p1 = karatsuba(a, c);
    std::string p2 = karatsuba(b, d);
    std::string p3 = karatsuba(addStrings(a, b), addStrings(c, d));

    std::string step1 = multiplyByPowerOf10(p1, 2 * half);
    std::string step2 = multiplyByPowerOf10(subtractStrings(subtractStrings(p3, p1), p2), half);
    std::string finalResult = addStrings(addStrings(step1, step2), p2);

    return finalResult;
}

int main() {
    std::string x = "1234";
    std::string y = "5678";
    std::string result = karatsuba(x, y);
    std::cout << result << std::endl;
    return 0;
}

Master Theorem Analysis:

The recurrence relation for Karatsuba multiplication is: $T(n) = 3T(n/2) + O(n)$.

Here, $a = 3$, $b = 2$, and $f(n) = O(n)$.

$n^{\log_b a} = n^{\log_2 3} \approx n^{1.585}$.

Since $f(n) = O(n)$ and $n = O(n^{\log_2 3 - \epsilon})$ for any $0 < \epsilon \leq \log_2 3 - 1 \approx 0.585$ (e.g., $\epsilon = 0.5$), we are in Case 1 of the Master Theorem.

Therefore, $T(n) = \Theta(n^{\log_b a}) = \Theta(n^{\log_2 3}) \approx \Theta(n^{1.585})$.

Problem 4: Divide and Conquer Matrix Multiplication (Recurrence Analysis)

Problem Statement (Hypothetical): Consider a divide-and-conquer algorithm for matrix multiplication that divides each $n \times n$ matrix into four $n/2 \times n/2$ submatrices and performs eight multiplications of these submatrices, along with some additions. The time taken for the additions is $O(n^2)$. Analyze the time complexity of this algorithm using the Master Theorem.

Input Example:

Divide and conquer matrix multiplication of two n x n matrices.

Output Example:

Time Complexity: $\Theta(n^3)$

C++ Code (for Divide and Conquer Matrix Multiplication - for context, this is complex to implement fully):


#include <iostream>
#include <vector>

// This is a simplified representation.  A full implementation is quite involved.
// It's included to give context to the Master Theorem analysis.

// Assume we have functions to:
// - splitMatrix(matrix, submatrices)
// - multiplySubmatrices(submatrices)
// - combineSubmatrices(submatrices, result)

// The core recursive function would look something like this (pseudocode):
// void multiplyMatrices(matrix A, matrix B, matrix C) {
//     if (A is small) {
//         // Perform standard matrix multiplication
//         return;
//     }
//     // 1. Divide A and B into submatrices
//     // 2. Recursively multiply submatrices (8 multiplications)
//     // 3. Combine the results to form C
// }

int main() {
    //  Illustrative example.  Full matrix multiplication is not performed here.
    std::cout << "Matrix multiplication analysis using Master Theorem" << std::endl;
    return 0;
}

Master Theorem Analysis:

The recurrence relation for this divide-and-conquer matrix multiplication is: $T(n) = 8T(n/2) + O(n^2)$.

Here, $a = 8$, $b = 2$, and $f(n) = O(n^2)$.

$n^{\log_b a} = n^{\log_2 8} = n^3$.

Since $f(n) = O(n^2) = O(n^{3 - \epsilon})$ for some $\epsilon > 0$ (e.g., $\epsilon = 1$), we are in Case 1 of the Master Theorem.

Therefore, $T(n) = \Theta(n^{\log_b a}) = \Theta(n^3)$.

Hard Problem

Problem 5: Strassen's Matrix Multiplication (Recurrence Analysis)

Problem Statement (Hypothetical): Strassen's algorithm is a more efficient divide-and-conquer algorithm for matrix multiplication. It divides each $n \times n$ matrix into four $n/2 \times n/2$ submatrices but performs only *seven* multiplications of these submatrices, along with some additions and subtractions. The time taken for the additions and subtractions is still $O(n^2)$. Analyze the time complexity of Strassen's algorithm using the Master Theorem.

Input Example:

Strassen's matrix multiplication of two n x n matrices.

Output Example:

Time Complexity: $\Theta(n^{\log_2 7})$

C++ Code (for Strassen's - for context, this is very complex):


#include <iostream>
#include <vector>

//  Simplified representation.  A full implementation is extremely involved.
//  Included to give context to the Master Theorem analysis.

//  Strassen's algorithm involves a complex series of matrix additions, subtractions,
//  and seven recursive multiplications.

//  The core recursive function would have a structure like this (pseudocode):
//  void strassenMultiply(matrix A, matrix B, matrix C) {
//      if (A is small) {
//          // Perform standard matrix multiplication
//          return;
//      }
//      // 1. Divide A and B into submatrices
//      // 2. Calculate seven product matrices (M1 to M7) using a specific formula
//      // 3. Combine M1 to M7 to form the submatrices of C
//  }

int main() {
    // Illustrative example.  Full Strassen's algorithm not implemented here.
    std::cout << "Strassen's Matrix multiplication analysis using Master Theorem" << std::endl;
    return 0;
}

Master Theorem Analysis:

The recurrence relation for Strassen's algorithm is: $T(n) = 7T(n/2) + O(n^2)$.

Here, $a = 7$, $b = 2$, and $f(n) = O(n^2)$.

$n^{\log_b a} = n^{\log_2 7} \approx n^{2.807}$.

Since $f(n) = O(n^2)$ and $n^2 = O(n^{\log_2 7 - \epsilon})$ for any $0 < \epsilon \leq \log_2 7 - 2 \approx 0.807$ (e.g., $\epsilon = 0.5$), we are in Case 1 of the Master Theorem.

Therefore, $T(n) = \Theta(n^{\log_b a}) = \Theta(n^{\log_2 7}) \approx \Theta(n^{2.807})$.
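For completeness, here is the standard formulation of the seven products that replace the eight naive submatrix multiplications; each $M_i$ is one recursive $n/2 \times n/2$ multiplication, and everything else is $O(n^2)$ additions and subtractions, which is exactly why $a = 7$ in the recurrence:

```latex
\begin{aligned}
M_1 &= (A_{11} + A_{22})(B_{11} + B_{22}) &
M_2 &= (A_{21} + A_{22})\,B_{11} \\
M_3 &= A_{11}\,(B_{12} - B_{22}) &
M_4 &= A_{22}\,(B_{21} - B_{11}) \\
M_5 &= (A_{11} + A_{12})\,B_{22} &
M_6 &= (A_{21} - A_{11})(B_{11} + B_{12}) \\
M_7 &= (A_{12} - A_{22})(B_{21} + B_{22}) & & \\
C_{11} &= M_1 + M_4 - M_5 + M_7 &
C_{12} &= M_3 + M_5 \\
C_{21} &= M_2 + M_4 &
C_{22} &= M_1 - M_2 + M_3 + M_6
\end{aligned}
```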