CS 312 Lecture 18
Substitution method for recurrence relations

Here is another way to compute the asymptotic complexity: guess the answer (in this case, O(n lg n)), and plug it directly into the recurrence relation. By looking at what happens, we can see whether the guess was correct or whether it needs to be increased to a higher order of growth (or can be decreased to a lower order). This works as long as the recurrence equations are monotonic in n, which is usually the case. By monotonic, we mean that increasing n does not cause the right-hand side of any recurrence equation to decrease.

For example, consider our recurrence relation for merge sort. To show that T(n) is O(n lg n), we need to show that T(n) ≤ kn lg n for large n and some choice of k. Define F(n) = n lg n, so we are trying to show that T(n) ≤ kF(n). This turns out to be true if we can plug kF(n) into the recurrence relation for T(n) and show that the recurrence equations hold as "≥" inequalities. Here, we plug the expression kn lg n into the merge-sort recurrence relation:

kn lg n ≥ 2k(n/2) lg (n/2) + c4n
        = kn lg (n/2) + c4n
        = kn (lg n − 1) + c4n
        = kn lg n − kn + c4n
        = kn lg n + (c4 − k)n

Can we pick a k that makes this inequality come true for sufficiently large n? Certainly; it holds if k ≥ c4. Therefore T(n) is O(n lg n). In fact, we can make the two sides exactly equal by choosing k = c4, which tells us that T(n) is Θ(n lg n) as well.
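For intuition, we can also check the inequality numerically. Here is a minimal SML sketch, assuming for illustration that c4 = 1 and T(1) = 1 (neither value comes from the analysis above); with these choices, k = 2 is large enough at the sizes tried below:

(* T(n) = 2 T(n/2) + c4 n, with the illustrative choices c4 = 1, T(1) = 1 *)
fun t n = if n <= 1 then 1.0
          else 2.0 * t (n div 2) + real n

fun nLgN n = real n * Math.ln (real n) / Math.ln 2.0

(* each check should come out true: T(n) <= 2 n lg n at these sizes *)
val checks = List.map (fn n => (n, t n <= 2.0 * nLgN n)) [16, 1024, 65536]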

More generally, if we want to show that a recurrence relation solution is O(F(n)), we show that we can choose k so that for each recurrence equation with kF(n) substituted for T(n), LHS ≥ RHS for all sufficiently large n. If we want to show that a recurrence relation solution is Θ(F(n)), we need to show that there is also a (possibly different) k such that LHS ≤ RHS for all sufficiently large n. In the case above, it happens that we can choose the same k.

Why does this work? It's really another use of strong induction where the proposition to be proved is that T(n) ≤ kF(n) for all sufficiently large n. We ignore the base case because we can always choose a large enough k to make the inequality work for small n. Now we proceed to the inductive step. We want to show that T(n+1) ≤ kF(n+1) assuming that for all m ≤ n we have T(m) ≤ kF(m). We have

T(n+1)   =   2T((n+1)/2) + c4(n+1)   ≤   2kF((n+1)/2) + c4(n+1)   ≤   kF(n+1)

so by transitivity T(n+1) ≤ kF(n+1). The middle inequality follows from the induction hypothesis T((n+1)/2) ≤ kF((n+1)/2) and from the monotonicity of the recurrence equation. The last step is what we showed by plugging kF(n) into the recurrence and checking that it holds for any sufficiently large n.

To see another example, we know that any function that is O(n lg n) is also O(n^2), though not Θ(n^2). If we hadn't done the iterative analysis above, we could still verify that merge sort is at least as good as insertion sort (asymptotically) by plugging kn^2 into the recurrence and showing that the inequality holds for it as well:

kn^2 ≥ 2k(n/2)^2 + c4n
     = ½kn^2 + c4n

For sufficiently large n, this inequality holds for any k. Therefore, the algorithm is O(n^2). Because it holds for any k, the algorithm is in fact o(n^2). Thus, we can use recurrences to show upper bounds that are not tight as well as upper bounds that are tight.

On the other hand, suppose we had tried to plug in kn instead of kn^2. Then we'd have:

kn ≥? 2k(n/2) + c4n
    = kn + c4n

Because c4 is positive, the inequality doesn't hold for any k; therefore, the algorithm is not O(n). In fact, we see that the inequality always holds in the opposite direction (<); therefore kn is a strict lower bound on the running time of the algorithm: its running time is more than linear.

Thus, reasonable guesses about the complexity of an algorithm can be plugged into a recurrence and used not only to find the complexity, but also to obtain information about its solution.

Example: Another sorting algorithm

The following function sorts the first two-thirds of a list, then the second two-thirds, then the first two-thirds again:

fun sort3(a: int list): int list =
  case a of
    nil => nil
  | [x] => [x]
  | [x,y] => [Int.min(x,y), Int.max(x,y)]
  | a => let
      val n = List.length(a)
      (* m is two-thirds of n, rounded up *)
      val m = (2*n+2) div 3
      (* sort the first two-thirds of a *)
      val res1 = sort3(List.take(a, m))
      (* sort the second two-thirds: the larger elements of res1
         together with the untouched last third of a *)
      val res2 = sort3(List.drop(res1, n-m) @
                       List.drop(a, m))
      (* sort the first two-thirds again: the smaller elements of res1
         together with the smaller elements of res2 *)
      val res3 = sort3(List.take(res1, n-m) @
                       List.take(res2, 2*m-n))
    in
      res3 @ List.drop(res2, 2*m-n)
    end
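A quick test call (its result assumes the correctness claim discussed below):

(* evaluates to [1, 2, 3, 5, 6, 8, 9] *)
val example = sort3 [6, 3, 9, 1, 8, 2, 5]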

Perhaps surprisingly, this algorithm actually does sort the list. We will leave the proof that it sorts correctly as an exercise to the reader. Its running time, on the other hand, we can derive from its recurrence. The routine does some O(n) work and then makes three recursive calls on lists of length 2n/3. Therefore its recurrence is:

T(n) = cn + 3T(2n/3)

Let's try plugging in possible solutions. How about F(n) = n lg n? Substituting into the right side we have

   cn + 3kF(2n/3)
= cn + 3k(2n/3) lg (2n/3)
= cn + 2kn lg (2n/3)
= cn + 2kn (lg n − lg (3/2))
= cn + 2kn lg n − 2kn lg (3/2)

Because the right-hand side contains the term 2kn lg n, there is no way to choose k to make the left-hand side (kn lg n) at least as large for all sufficiently large n, so the algorithm is not O(n lg n); we must try a higher order of growth.

By plugging in kn^2 and kn^3 for T(n), we find that kn^2 grows strictly more slowly than T(n) and kn^3 grows strictly more quickly. We can solve for the correct exponent x by plugging in kn^x:

   cn + 3k(2n/3)^x
= cn + 3(2/3)^x kn^x

This will be asymptotically less than kn^x as long as 3(2/3)^x < 1, which requires x > log_{3/2} 3 ≈ 2.7095. Define a = log_{3/2} 3. Then we can see that the algorithm is O(n^(a+ε)) for any positive ε. Let's try O(n^a) itself. The RHS after substituting kn^a is cn + 3(2/3)^a kn^a = cn + kn^a ≥ kn^a. This tells us that kn^a is an asymptotic lower bound on T(n): T(n) is Ω(n^a). So the complexity is somewhere between Ω(n^a) and O(n^(a+ε)). It is in fact Θ(n^a).
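The borderline exponent is easy to compute numerically; a small SML check (plain arithmetic, nothing specific to the algorithm):

val a = Math.ln 3.0 / Math.ln 1.5                (* log_{3/2} 3, about 2.7095 *)
val borderline = 3.0 * Math.pow (2.0 / 3.0, a)   (* equals 1.0, up to rounding *)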

To show that the complexity is O(n^a), we need to use a refinement of the substitution method. Rather than trying F(n) = n^a, we will try F(n) = n^a + bn, where b is a constant to be filled in later. The idea is to pick a b so that the bn term will compensate for the cn term that shows up in the recurrence. Because bn is O(n^a), showing that T(n) is O(n^a + bn) is the same as showing that it is O(n^a). Substituting kF(n) for T(n) in the RHS of the recurrence, we obtain:

   cn + 3kF(2n/3)
= cn + 3k((2n/3)^a + b(2n/3))
= cn + 3k(2n/3)^a + 3kb(2n/3)
= cn + kn^a + 2kbn
= kn^a + (2kb + c)n

The substituted LHS of the recurrence is kn^a + kbn, which is at least kn^a + (2kb + c)n as long as kb ≥ 2kb + c, that is, b ≤ −c/k. There is no requirement that b be positive, so choosing k = 1 and b = −c satisfies the recurrence. Therefore T(n) = O(n^a + bn) = O(n^a), and since T(n) is both O(n^a) and Ω(n^a), it is Θ(n^a).
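As a sanity check on this Θ(n^a) bound, we can evaluate the recurrence numerically. This is an illustration, not a proof; the constant c = 1 and the base case T(n) = 1 for n ≤ 2 are arbitrary choices:

val a = Math.ln 3.0 / Math.ln 1.5                (* the exponent a = log_{3/2} 3 *)

(* T(n) = cn + 3 T(2n/3) with the illustrative choices c = 1, T(n) = 1 for n <= 2 *)
fun t n = if n <= 2 then 1.0
          else real n + 3.0 * t (2 * n div 3)

(* the ratios T(n) / n^a should stay within constant bounds as n grows *)
val ratios = List.map (fn n => t n / Math.pow (real n, a)) [1000, 10000, 100000]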


Lower bounds on sorting performance

It turns out that no sorting algorithm that works by comparing elements can have asymptotic running time lower than n lg n, and thus, other than constant factors in running time, merge sort is as good an algorithm as we can expect for sorting general data. Its constant factors are also pretty good, so it's a useful algorithm in practice. We can see that Ω(n lg n) time is needed by thinking about sorting a list of n distinct numbers. There are n! = n×(n−1)×(n−2)×...×3×2×1 possible lists, and the sorting algorithm needs to map all of them to the same sorted list by applying an appropriate inverse permutation. For general data, the algorithm must make enough observations about the input list (by comparing list elements pairwise) to determine which of the n! permutations was given as input, so that the appropriate inverse permutation can be applied to sort the list. Each comparison of two elements to see which is greater generates one bit of information about which permutation was given; at least lg(n!) bits of information are needed. Therefore the algorithm must make at least lg(n!) comparisons, so it takes Ω(lg(n!)) time. It is easy to see that n! is O(n^n); note that lg(n^n) = n lg n. With a bit more difficulty a stronger result can be shown: lg(n!) is Θ(n lg n). Therefore merge sort is not only much faster than insertion sort on large lists, it is actually optimal to within a constant factor! This shows the value of designing algorithms carefully.
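To get a feel for how lg(n!) compares with n lg n, here is a small SML computation (illustrative only):

(* lg(n!) computed as a sum of logarithms, to avoid computing n! itself *)
fun lgFact n = if n <= 1 then 0.0
               else Math.ln (real n) / Math.ln 2.0 + lgFact (n - 1)

fun nLgN n = real n * Math.ln (real n) / Math.ln 2.0

(* the ratio lg(n!) / (n lg n) climbs slowly toward 1 as n grows *)
val ratios = List.map (fn n => lgFact n / nLgN n) [10, 100, 1000]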

Note: there are sorting algorithms for specialized inputs that have better than O(n lg n) performance: for example, radix sort. This is possible because radix sort doesn't work by comparing elements pairwise; it extracts information about the permutation by using the element itself as an index into an array. This indexing operation can be done in constant time and on average extracts lg n bits of information about the permutation. Thus, radix sort can be performed using O(n) time, assuming that the list is densely populated by integers or by elements that can be mapped monotonically and densely onto integers. By densely, we mean that the largest integer in a list of length n is O(n) in size. By monotonically we mean that the ordering of the integers is the same as the ordering of the corresponding data to be sorted. In general we can't find a dense monotonic mapping, so Θ(n lg n) is the best we can do for sorting arbitrary data.
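To make the indexing idea concrete, here is a minimal SML sketch of counting sort, the simplest instance of this approach (it is not full radix sort; the function countingSort and its bound argument are ours for illustration). It assumes the keys are integers in the range [0, bound), with bound being O(n) for the O(n) running time claimed above:

fun countingSort (bound: int) (a: int list): int list =
  let
    val counts = Array.array (bound, 0)
    (* tally each element by using it directly as an array index *)
    val () = List.app
               (fn x => Array.update (counts, x, Array.sub (counts, x) + 1)) a
    (* read the tallies back out, from the largest key down to the smallest *)
    fun emit (key, count, acc) = List.tabulate (count, fn _ => key) @ acc
  in
    Array.foldri emit [] counts
  end

(* e.g. countingSort 10 [6, 3, 9, 1, 8, 2, 5] evaluates to [1, 2, 3, 5, 6, 8, 9] *)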

The master method

The “master method” is a cookbook method for solving recurrences that is very handy for dealing with many recurrences seen in practice. Suppose you have a recursive function that makes a recursive calls and reduces the problem size by at least a factor of b on each call, and suppose that, apart from the recursive calls, a call on a problem of size n takes time h(n).

We can visualize this as a tree of calls, where the nodes in the tree have a branching factor of a. The top node has work h(n) associated with it, the next level has work h(n/b) associated with each node, the next level h(n/b^2), and so on. The tree has log_b n levels, so the total number of leaves in the tree is a^(log_b n) = n^(log_b a).
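The identity a^(log_b n) = n^(log_b a) can be spot-checked numerically; here is a quick SML computation using the arbitrary values a = 3, b = 3/2, n = 1000:

fun logBase (b, x) = Math.ln x / Math.ln b
val lhs = Math.pow (3.0, logBase (1.5, 1000.0))    (* a^(log_b n) *)
val rhs = Math.pow (1000.0, logBase (1.5, 3.0))    (* n^(log_b a); agrees with lhs up to rounding *)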

The time taken is just the sum of the terms h(n/b^i) at all the nodes. What this sum looks like depends on how the asymptotic growth of h(n) compares to the asymptotic growth of the number of leaves. There are three cases:

1. h(n) is O(n^(log_b a − ε)) for some ε > 0. Then the work at the leaves dominates, and the total time is Θ(n^(log_b a)).

2. h(n) is Θ(n^(log_b a)). Then the work is roughly the same at each of the log_b n levels, and the total time is Θ(n^(log_b a) lg n).

3. h(n) is Ω(n^(log_b a + ε)) for some ε > 0 (and a·h(n/b) ≤ c·h(n) for some constant c < 1 and all large n). Then the work at the root dominates, and the total time is Θ(h(n)).

If we apply the master method to the sort3 algorithm, where a = 3, b = 3/2, and h(n) = cn, we see easily that we are in case 1, so the algorithm is O(n^(log_{3/2} 3)), agreeing with the analysis above.