CS312 Lecture 13: Complexity and analysis of algorithms

  • Asymptotic Analysis
  • Worst-Case and Average-Case Analysis
  • Order of Growth and Big-O Notation
  • Comparing Orders of Growth
  • Analyzing Running Times of Simple Procedures
  • Analysis of Red-Black Trees

    Asymptotic Analysis

    When analyzing the running time or space usage of programs, we usually try to estimate the time or space as a function of the input size. For example, when analyzing the worst-case running time of a function that sorts a list of numbers, we will be concerned with how long it takes as a function of the length of the input list. For instance, we say the standard insertion sort takes time T(n), where T(n) = c*n^2 + k for some constants c and k. In contrast, merge sort takes time T'(n) = c'*n*log_2(n) + k'.

    The asymptotic behavior of a function f(n) (such as T(n)=c*n or S(n)=c*n^2, etc.) refers to the growth of f(n) as n gets large. We typically ignore small values of n, since we are usually interested in estimating how slow the program will be on large inputs. A good rule of thumb is: the slower the asymptotic growth rate, the better the algorithm (although this is often not the whole story).

    By this measure, a linear algorithm (i.e., f(n)=d*n+k) is always asymptotically better than a quadratic one (e.g., f(n)=c*n^2+q). That is because for any given (positive) c, k, d, and q there is always some n at which the magnitude of c*n^2+q overtakes d*n+k. For moderate values of n, the quadratic algorithm could very well take less time than the linear one, for example if c is significantly smaller than d and/or k is significantly smaller than q. However, the linear algorithm will always be better for sufficiently large inputs. Remember to THINK BIG when working with asymptotic rates of growth.
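
    To make that concrete with some made-up constants: if the linear algorithm takes 100n steps and the quadratic one takes 2n^2 steps, the quadratic algorithm is faster for n < 50, the two tie at n = 50, and by n = 1000 the quadratic algorithm is already 20 times slower.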

    Worst-Case and Average-Case Analysis

    When we say that an algorithm runs in time T(n), we mean that T(n) is an upper bound on the running time that holds for all inputs of size n. This is called worst-case analysis. The algorithm may very well take less time on some inputs of size n, but it doesn't matter. If an algorithm takes T(n)=c*n^2+k steps on only a single input of each size n and only n steps on the rest, we still say that it is a quadratic algorithm.

    A popular alternative to worst-case analysis is average-case analysis. Here we do not bound the worst case running time, but try to calculate the expected time spent on a randomly chosen input. This kind of analysis is generally harder, since it involves probabilistic arguments and often requires assumptions about the distribution of inputs that may be difficult to justify.

    Order of Growth and Big-O Notation

    In estimating the running time of insert_sort (or any other program) we don't know what the constants c and k are. We know that they are constants of moderate size, but beyond that their exact values are not important; we have enough evidence from the asymptotic analysis to know that merge_sort (see below) is faster than the quadratic insert_sort, even though the constants may differ somewhat. (This does not always hold; the constants can sometimes make a difference, but it has been a good enough rule of thumb that it is quite widely applied.)

    We may not even be able to measure the constant c directly. For example, we may know that a given expression of the language, such as an if expression, takes a constant number of machine instructions, but we may not know exactly how many. Moreover, the same sequence of instructions executed on a Pentium IV will take less time than on a Pentium II (although the difference will be roughly a constant factor). So these estimates are usually only accurate up to a constant factor anyway. For these reasons, we usually ignore constant factors in comparing asymptotic running times.

    Computer scientists have developed a convenient notation for hiding the constant factor. We write O(n) (read: ''order n'') instead of ''cn for some constant c.'' Thus an algorithm is said to be O(n) or linear time if there is a fixed constant c such that for all sufficiently large n, the algorithm takes time at most cn on inputs of size n. An algorithm is said to be O(n^2) or quadratic time if there is a fixed constant c such that for all sufficiently large n, the algorithm takes time at most cn^2 on inputs of size n. O(1) means constant time.

    Polynomial time means n^O(1), or n^c for some constant c. Thus any constant, linear, quadratic, or cubic (O(n^3)) time algorithm is a polynomial-time algorithm.

    This is called big-O notation. It captures the important differences in the asymptotic growth rates of functions.

    One important advantage of big-O notation is that it makes algorithms much easier to analyze, since we can conveniently ignore low-order terms. For example, an algorithm that runs in time

    10n^3 + 24n^2 + 3n log n + 144

    is still a cubic algorithm, since

    10n^3 + 24n^2 + 3n log n + 144
    <= 10n^3 + 24n^3 + 3n^3 + 144n^3
    <= (10 + 24 + 3 + 144)n^3
    = O(n^3).

    Of course, since we are ignoring constant factors, any two linear algorithms will be considered equally good by this measure. There may even be some situations in which the constant is so huge in a linear algorithm that even an exponential algorithm with a small constant may be preferable in practice. This is a valid criticism of asymptotic analysis and big-O notation. However, as a rule of thumb it has served us well. Just be aware that it is only a rule of thumb--the asymptotically optimal algorithm is not necessarily the best one.

    Some common orders of growth seen often in complexity analysis are

    O(1) constant
    O(log n) logarithmic
    O(n) linear
    O(n log n) "n log n"
    O(n^2) quadratic
    O(n^3) cubic
    n^O(1) polynomial
    2^O(n) exponential

    Here log means log_2, the logarithm to the base 2, although it doesn't really matter since logarithms to different bases differ by a constant factor. Note also that 2^O(n) and O(2^n) are not the same!
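
    For example, 4^n = 2^(2n) is 2^O(n), but it is not O(2^n), since the ratio 4^n / 2^n = 2^n is unbounded.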

    Comparing Orders of Growth

    Definition
    Let f and g be functions from positive integers to positive integers. We say f is O(g) (read: ''f is order g'') if there exists a fixed constant c such that for all n,

    f(n) <= cg(n).

    Equivalently, f is O(g) if the function f(n)/g(n) is bounded above.

    We say f is o(g) (read: ''f is little-o of g'') if for every real constant c > 0, for all but perhaps finitely many n,

    f(n) < cg(n).

    Equivalently, f is o(g) if the function f(n)/g(n) tends to 0 as n tends to infinity.

    Here are some examples: n + 5 is O(n) (take c = 6); n is o(n^2), since n/n^2 = 1/n tends to 0; 3n^2 + 2n is O(n^2) but not o(n^2), since the ratio tends to 3; and n log n is o(n^2) but not O(n).

    We now introduce some convenient rules, stated without proof, for manipulating expressions involving order notation:

    1. cn^m = O(n^k) for any constant c and any m <= k.
    2. O(f(n)) + O(g(n)) = O(f(n) + g(n)).
    3. O(f(n))O(g(n)) = O(f(n)g(n)).
    4. O(cf(n)) = O(f(n)) for any constant c.
    5. c is O(1) for any constant c.
    6. log_b n = O(log n) for any base b.
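
    As a small worked example of these rules in action: 3n^3 + 5n is O(n^3), since by rule 1 both 3n^3 and 5n are O(n^3), and by rule 2 their sum is O(n^3 + n^3), which is O(n^3) by rule 4.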

    Analyzing Running Times of Simple Procedures

    Now we can use these ideas to analyze the asymptotic running time of SML functions. The use of order notation can greatly simplify our task here. We assume that the primitive operations of our language, such as arithmetic operations and pattern matching, all take constant time (which they do).

    Consider the following multiplication routine:

    (* times1(a,b) computes a*b by repeated addition; assumes b >= 0 *)
    fun times1 (a:int, b:int):int = 
      if (b = 0) then 0 else a + times1(a,b-1)

    What is the order of growth of the time required by times1 as a function of n, where n is the magnitude of the parameter b? Note that the ''size'' of a number can be measured either in terms of its magnitude or in terms of the number of digits (the space it takes to write the number down). Often the number of digits is used, but here we use the magnitude. Note that it takes only about log_10 x digits to write down a number of magnitude x, thus these two measures are very different. Make sure you know which one is being used.

    We assume that all the primitive operations in the times1 function (if, +, =, and -) and the overhead for function calls take constant time. Thus if n = 0, the routine takes constant time. If n > 0, the time taken on an input of magnitude n is constant time plus the time taken by the recursive call on n-1. Thus the running time T(n) of times1 is a solution of

    T(n) = T(n-1) + O(1) for n > 0
    T(0) = O(1)

    This is called a recurrence relation. It simply states that the time to multiply a number a by another number b of size n > 0 is the time required to multiply a by a number of size n-1 plus a constant amount of work (the primitive operations performed). Furthermore, the time to multiply a by zero is a constant (only constant-time primitives are involved). In other words, there are constants c and d such that T(n) satisfies

    T(n) = T(n-1) + c for n > 0
    T(0) = d

    This more specific recurrence relation has a unique closed form solution, namely

    T(n) = d + cn

    which is O(n), so the algorithm is linear in the magnitude of its second argument. One can obtain this equation by generalizing from small values of n, then prove that it is indeed a solution to the recurrence relation by induction on n.
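
    For instance, unwinding the recurrence from the bottom up shows the pattern:

    T(1) = T(0) + c = d + c
    T(2) = T(1) + c = d + 2c
    T(3) = T(2) + c = d + 3c
    ...
    T(n) = d + cn

    and induction on n confirms that this is indeed the solution.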

    There are many other functions that behave in a similar way, for instance, appending two lists:

    fun append(x:'a list, y:'a list):'a list =
       case x of
          [] => y
         |h::t => h::append(t,y)
    
    In this case, n will be the length of the list x, and the analysis of append is essentially the same as the analysis we did for times1.
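
    Under the same constant-time assumptions, the recurrence is identical in form (with n the length of x):

    T(n) = T(n-1) + O(1) for n > 0
    T(0) = O(1)

    so append runs in time O(n), linear in the length of its first argument.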

    Now consider the following procedure for multiplying two numbers:

    (* times2(a,b) computes a*b; assumes b >= 0 and that even, double,
       and half are constant-time helper functions *)
    fun times2(a:int, b:int):int = 
        if (b = 0) then
          0
        else if even(b) then
          times2(double(a), half(b))
        else
          a + times2(a, b-1)

    Again we want an expression for the running time in terms of n, the magnitude of the parameter b. We assume that the double and half operations, like the standard primitives, take constant time (they could be implemented in constant time using arithmetic shifts). The recurrence for this problem is more complicated than the previous one:

    T(n) = T(n-1) + O(1) if n > 0 and n is odd
    T(n) = T(n/2) + O(1) if n > 0 and n is even
    T(0) = O(1)

    We somehow need to figure out how often the first versus the second branch of this recurrence is taken. It's easy if n is a power of two, i.e., if n = 2^m for some integer m. In this case, the first branch (the odd case) is only taken when n = 1, because 2^m is even except when m = 0, i.e., when n = 1. Note further that T(1) = O(1), because T(1) = T(0) + O(1) = O(1) + O(1) = O(1). Thus, for this special case we get the recurrence

    T(n) = T(n/2) + O(1) if n > 1 and n is a power of 2
    T(1) = O(1)

    or

    T(n) = T(n/2) + c for n > 1 and n is a power of 2
    T(1) = d

    for some constants c and d. The closed form solution of this is, for powers of 2,

    T(n) = d + c log_2 n

    which is O(log n).
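
    As a quick check, this closed form does satisfy the recurrence: T(1) = d + c*log_2 1 = d, and T(n) = T(n/2) + c = (d + c*log_2(n/2)) + c = d + c*(log_2 n - 1) + c = d + c*log_2 n.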

    What if n is not a power of 2? The running time is still O(log n) even in this more general case. Intuitively, this is because if n is odd, then n-1 is even, so on the next recursive call the input will be halved. Thus the input is halved at least once in every two recursive calls, which is all you need to get O(log n).

    A good way to handle this formally is to charge the cost of a call to times2 on an odd input to the recursive call on an even input that must immediately follow it. We reason as follows: on an even input n, the cost is the cost of the recursive call on n/2 plus a constant, or

    T(n) = T(n/2) + O(1)

    as before. On an odd input n, we recursively call the procedure on n-1, which is even, so we immediately call the procedure again on (n-1)/2. Thus the total cost on an odd input is the cost of the recursive call on (n-1)/2 plus a constant. In this case we get

    T(n) = T((n-1)/2) + O(1)

    In either case,

    T(n) <= T(n/2) + O(1)

    which has the solution O(log n). This approach is more or less the same as explicitly unwinding the branch of the conditional that handles odd inputs:

    fun times2(a:int, b:int):int = 
        if (b = 0) then
          0
        else if even(b) then
          times2(double(a), half(b))
        else
          a + times2(double(a), half(b-1))

    then analyzing the rewritten program, without actually doing the rewriting.
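
    As a small example, trace the calls the original times2 makes for b = 9: the second argument takes the values 9, 8, 4, 2, 1, 0, so only six calls are made, roughly 2*log_2 9, whereas times1 would make ten calls (b = 9, 8, ..., 0).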

    A typical example of an O(n log n) algorithm is merge sort. This algorithm takes a list of elements and sorts it by splitting the list into two halves, sorting each half recursively, and then merging the two sorted halves back together. The following is a possible implementation of mergesort:

    fun mergesort(l:int list):int list =
      let
        fun length(li:int list):int =
          case li of
            [] => 0
           |x::xs => 1 + length(xs)

        (* split(li,s) returns li with its first s elements removed,
           paired with those first s elements in reverse order *)
        fun split(li:int list, s:int):(int list)*(int list) =
          let
            fun splitaux(la:int list, lb:int list, acum:int) =
              if (acum >= s) orelse (la = nil) then (la,lb)
              else splitaux(tl(la), hd(la)::lb, acum+1)
          in
            splitaux(li,[],0)
          end

        (* merge two sorted lists into a single sorted list *)
        fun merge(l1:int list, l2:int list):int list =
          case (l1,l2) of
            ([],_) => l2
           |(_,[]) => l1
           |(x::xs,y::ys) => if (x < y) then x::merge(xs,l2) else y::merge(l1,ys)

        (* sort l, where n is the length of l *)
        fun mergesortaux(l:int list, n:int):int list =
          case l of
            [] => []
           |[x] => [x]
           |x::y::[] => if (x < y) then l else y::x::[]
           |_ =>
             let
               val half = n div 2
               val (l1,l2) = split(l, half)
             in
               merge(mergesortaux(l1, n - half), mergesortaux(l2, half))
             end
      in
        mergesortaux(l, length(l))
      end
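
    As a quick sanity check on the implementation above, mergesort [5,3,8,1] evaluates to [1,3,5,8], and mergesort [] evaluates to [].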
    
    The analysis of this program is similar in spirit to that of times2: we write down a recurrence for the running time and unwind it. Splitting and merging a list of length n each take time proportional to n, and the two recursive calls sort lists of roughly half the length, so for some constants c and d,

    T(0) = T(1) = d
    T(n) = 2*T(n/2) + c*n
         = 2*(2*T(n/4) + c*n/2) + c*n
         = 4*T(n/4) + c*n + c*n
         ...
         = 2^k*T(n/2^k) + k*c*n
         ...
         = n*T(1) + c*n*log_2 n

    which is O(n log n). In general, when we find a recurrence of the form T(n) = a*T(n/b) + c*n, the solution is O(n^(log_b a)) if a > b, O(n log n) if a = b, and O(n) if a < b. For mergesort, a = b = 2, which gives O(n log n), matching the calculation above.

    Analysis of Red-Black Trees

    A balanced tree, as you recall, is a tree that tries to approximate the shape of a complete tree. A complete tree of n nodes has depth at most log n, so the basic tree operations (find, insert, and delete) take only O(log n) time rather than the potentially linear time required for an unbalanced tree. To review, a red-black tree is a binary search tree in which every node is colored red or black, every leaf (empty node) is black, no red node has a red child, and every root-leaf path contains the same number of black nodes.

    These requirements force the longest root-leaf path to be no more than twice the length of the shortest such path, since the shortest possible path is a string of black nodes, and the longest possible path with the same number of black nodes uses alternating red nodes as extensions. But how does this guarantee that we can perform tree operations in O(log n) time? We'd like to prove that the height of a red-black tree with n internal nodes is at most 2 log(n+1), which we can do by induction.

    First define the black-height of a node x (denoted bh(x)) to be the number of black nodes on any path from, but not including, x down to a black leaf. Just as the height of a tree is the maximum number of edges in any root-leaf path, you can think of the black-height of x as the number of edges leading to black children on any path from x down to a leaf.
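
    To make the definition concrete, here is a minimal SML sketch of a red-black tree type and its black-height (the datatype and function names here are illustrative, not part of any course code):

    datatype color = Red | Black

    (* a Leaf is an empty node, considered black; a Node carries a color,
       a left subtree, an int key, and a right subtree *)
    datatype rbtree = Leaf
                    | Node of color * rbtree * int * rbtree

    fun isBlack(Leaf) = true
      | isBlack(Node(Black,_,_,_)) = true
      | isBlack(Node(Red,_,_,_)) = false

    (* black-height: the number of black nodes on a path from a node
       (exclusive) down to a leaf; if the red-black invariants hold, every
       such path gives the same count, so following the left spine suffices *)
    fun blackHeight(Leaf) = 0
      | blackHeight(Node(_, l, _, _)) =
          blackHeight(l) + (if isBlack(l) then 1 else 0)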

    We want to show that the subtree rooted at any node x contains at least 2^(bh(x)) - 1 internal (non-leaf) nodes. We'll do this by induction on the height of x.

    If the height of x is 0, then x is a black leaf and must have black-height 0, and 2^(bh(x)) - 1 = 2^0 - 1 = 0. The subtree rooted at x has at least 0 internal nodes, so we've proven the base case.

    For the inductive step, consider a node x with positive height. (So x is an internal node.) Each child of x has black-height bh(x) or bh(x) - 1, depending on whether the child is red or black. Since the height of a child of x is less than the height of x, the inductive hypothesis says that each child has at least 2^(bh(x)-1) - 1 internal nodes, so the subtree rooted at x has at least 2 * (2^(bh(x)-1) - 1) + 1 = 2^(bh(x)) - 1 internal nodes. This proves the claim.

    We have shown that the subtree rooted at any node x contains at least 2^(bh(x)) - 1 internal nodes. Now let h be the height of a tree. The black-height of the root must be at least h/2 (why?). Applying our result to the root, we see that n >= 2^(h/2) - 1. Algebraic manipulation yields h <= 2 log (n+1), which is what we wanted.
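
    Spelling out that algebraic manipulation: from n >= 2^(h/2) - 1 we get n + 1 >= 2^(h/2), so log_2(n + 1) >= h/2, i.e., h <= 2 log_2(n + 1).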

    Thus the height of a red-black tree containing n nodes is O(log n), so we can find an arbitrary element in O(log n) time.

    Charging one operation to another (bounding the number of times one thing can happen by the number of times that another thing happens) is a common technique for analyzing the running time of complicated algorithms.

    Order notation is a useful tool, and should not be thought of as just a theoretical exercise. For example, the practical difference in running times between the linear times1 and the logarithmic times2 is noticeable even for moderate values of n.

    The key points in this handout are:

      • Running times are analyzed asymptotically, as a function of input size, usually in the worst case.
      • Big-O notation hides constant factors and low-order terms, making it easy to compare orders of growth.
      • Recurrence relations describe the running times of recursive functions such as times1, times2, append, and mergesort.
      • Charging one operation to another is a useful technique for analyzing complicated algorithms.
      • Red-black trees have height O(log n), so the basic tree operations take O(log n) time.


    CS312  © 2002 Cornell University Computer Science