18. Heaps and Priority Queues

In the previous lecture, we saw how to leverage the branching structure of a tree to design a data structure, a BST, that can (when actively balanced) support fast add(), remove(), and contains() queries. Today, we’ll introduce another data structure built atop a binary tree: the heap. Heaps provide \(O(1)\) access to the “smallest” or “largest” element that they store, making them a useful data structure for implementing a new ADT called a priority queue. We’ll also see how we can use a heap to define another efficient sorting algorithm, heap sort.

Priority Queues

In an emergency room, there are often more patients who require treatment at any given time than there are resources (bed spaces, operating rooms, hospital staff, etc.) to treat them. For this reason, a queue of patients will often form in a waiting room, and patients are removed from this queue (i.e., admitted to the ER) once there is space for them. This paradigm, adding patients to a collection (the waiting room) and systematically removing them one at a time, is reminiscent of some earlier data structures that we studied, stacks and queues. However, neither of these offers an appropriate solution in this case. Selecting patients based on their arrival order does not ensure the best overall health outcomes. Instead, patients should be admitted based on their urgency to receive care; a patient experiencing cardiac arrest should take priority over a patient with a sprained ankle. To address this, emergency rooms triage incoming patients, assigning them a priority value that induces an ordering over the patients in the waiting room. As soon as resources are freed, the next patient to be admitted should be the one with highest priority (ignoring any application-specific complicating factors like the inability to put certain patients in certain rooms, etc.).

Let’s design an ADT that models the behaviors of a hospital waiting room, which we’ll call a PriorityQueue: a queue-like structure that enforces a notion of priority. Similar to stacks and queues, there are three primary operations: add()ing a new element (now associated with a particular priority), remove()ing the element with the highest priority, and peek()ing at the highest-priority element. Finally, we’ll add an isEmpty() method that allows us to check if the priority queue contains any elements.

/** A queue of elements of type T that are removed in priority order. */
public interface PriorityQueue<T> {
  /**
   * Adds the given `elem` to the queue associated with the given `priority`.
   */
  void add(T elem, double priority);

  /**
   * Removes and returns the element from the queue with the highest priority. 
   * Requires that this queue is not empty.
   */
  T remove();  

  /**
   * Returns the element from the queue with the highest priority without 
   * removing it. Requires that this queue is not empty.
   */
  T peek();

  /**
   * Returns `true` if there are currently no elements stored in this queue, 
   * otherwise returns `false`.
   */
  boolean isEmpty();
}

List-Based Priority Queues

Because of its similarity to stacks and queues, it may be appealing to implement the PriorityQueue ADT using a List. When we add new elements to the priority queue, they can go at the end of the list. When we want to access the highest-priority element (either to peek() at it or to remove() it), we can search over the list elements. Take some time to determine the complexity of each PriorityQueue operation under this design.

add() complexity

add() complexity: We are adding an element to the end of a List. This has \(O(1)\) amortized time complexity if this List is implemented with a dynamic array (such as Java's ArrayList) and \(O(1)\) worst-case time complexity if this List is implemented with a linked chain (such as Java's LinkedList).

isEmpty() complexity

isEmpty() complexity: We can access the List's size() and return whether it is equal to 0. This requires \(O(1)\) time.

peek() complexity

peek() complexity: To locate the element with highest priority, we must perform a linear scan over all entries in the list. We can accomplish this by establishing a loop invariant similar to the argmin() method we developed at the start of the course. This will have \(O(N)\) time complexity, where \(N\) is the number of elements in the priority queue.

remove() complexity

remove() complexity: Similar to peek(), we must perform a linear scan to locate the element with highest priority, which requires \(O(N)\) time. Removing this element from the list requires an additional \(O(N)\) time in a dynamic array implementation (due to the memory shifting) or \(O(1)\) in a linked chain implementation (provided we maintain a reference to this highest-priority element). Overall, this gives an \(O(N)\) runtime.
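
For concreteness, the linear scan inside peek() for this unsorted-list design might look like the following sketch. The Entry pairing of an element with its priority and its elem()/priority() accessors are illustrative names, not part of the lecture code; Exercise 18.2 asks you to build the full class.

// Sketch only: assumes a field `entries` of type ArrayList<Entry<T>>, where the
// hypothetical record Entry<T>(T elem, double priority) pairs an element with its priority.
public T peek() {
  assert !entries.isEmpty();
  Entry<T> best = entries.get(0);
  // Loop invariant: `best` has the highest priority among entries[..i).
  for (int i = 1; i < entries.size(); i++) {
    if (entries.get(i).priority() > best.priority()) {
      best = entries.get(i);
    }
  }
  return best.elem();
}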

While adding elements to this priority queue implementation is fast, the linear searching in peek() and remove() hurts the overall performance. One way to improve this is to maintain a sorted order invariant on the entries of the list. When we add a new element to this sorted list, we will need to determine its sorted position. Take some time to determine the complexity of each PriorityQueue operation under this alternate design.

add() complexity

add() complexity: As noted above, we will need to determine the correct sorted position of the new element based on its priority. We consider two possibilities based on the List implementation. If the List is implemented with a dynamic array, we can perform an \(O(\log N)\) binary search to locate this sorted position. However, actually inserting the element at this position will have an \(O(N)\) worst-case time complexity because of memory shifting. Alternatively, if the List is implemented with a linked chain, the search for the sorted position will have an \(O(N)\) worst-case complexity; losing the random access guarantee makes a linear scan the most efficient search procedure. In either case, add() has an \(O(N)\) worst-case runtime complexity.

isEmpty() complexity

isEmpty() complexity: We can access the List's size() and return whether it is equal to 0. This requires \(O(1)\) time.

peek() complexity

peek() complexity: Since the list maintains a sorted order by priority, the highest priority element will be at one end of the list: we can choose whichever end lends itself to a more efficient implementation. For a dynamic array, an ascending sort will place the highest-priority element at the end of the list, where it can be accessed in \(O(1)\) time. For a linked chain, a descending sort will place the highest-priority element at the start of the list, where it can be accessed in \(O(1)\) time.

remove() complexity

remove() complexity: Removing the last element of a List backed by a dynamic array requires \(O(1)\) time (amortized with dynamic downward resizing). Removing the first element of a List backed by a linked chain requires \(O(1)\) time.
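
As one illustration of the dynamic-array variant of this sorted design, the following sketch (again with illustrative names; Exercise 18.2 covers a full implementation) uses a binary search to locate the insertion index for a new entry in a list kept in ascending priority order. The final entries.add(index, ...) call is where the \(O(N)\) shifting cost appears.

// Sketch only: `entries` is an ArrayList of hypothetical Entry<T>(elem, priority) records,
// kept in ascending priority order (highest priority at the end of the list).
public void add(T elem, double priority) {
  int lo = 0;
  int hi = entries.size();
  // Loop invariant: entries[..lo) have priority <= `priority`;
  //                 entries[hi..] have priority > `priority`.
  while (lo < hi) {
    int mid = (lo + hi) / 2;
    if (entries.get(mid).priority() <= priority) {
      lo = mid + 1;
    } else {
      hi = mid;
    }
  }
  entries.add(lo, new Entry<>(elem, priority)); // O(N) worst-case shift
}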

These implementations make the peek() and remove() operations fast at the expense of making the add() operation slow. Is there a way that we can get good performance for all the operations? We have seen that (actively balanced) BSTs allow us to both add() and remove() elements while maintaining an order invariant; however, active balancing takes a lot of work (and falls outside of our scope). Next, we’ll introduce another tree-based data structure, a max heap, that maintains a different order invariant that is particularly well-suited to locating its largest element. Moreover, this looser order invariant will allow us to more easily enforce an additional balance constraint. In the end, we’ll find that a max heap based PriorityQueue can support add()ing and remove()ing elements in \(O(\log N)\) time and peek()ing in \(O(1)\) time.

Max Heaps

A (binary) max heap is a binary tree that maintains two additional invariants, a shape invariant and an order invariant. Its shape invariant requires that the heap’s elements form a nearly complete binary tree.

Definition: Nearly Complete Binary Tree

In a nearly complete binary tree, every level, except possibly the last (deepest) level, has the maximum possible number of nodes (i.e., there are \(2^i\) nodes with depth \(i\) for all \(i\) less than the tree's height). If the last level is incomplete, its nodes appear in its leftmost positions.

In the following figure, the first tree is nearly complete. The second tree is not nearly complete because it has height 3 but only 3 nodes at depth 2. While the third tree fills all but its last level, it is not nearly complete because its last level is not filled from the left.

The heap order invariant imposes a condition on each parent-child connection in the tree.

Definition: Max Heap Order Invariant

In a max heap, the value of every node (excluding the root) is less than or equal to the value of its parent.

In the following figure, the tree shown on the left is a max heap, while the tree shown on the right is not a max heap; the max heap order invariant is violated because the node 10 is the child of the lesser node 8.

The heap order invariant implies a stronger condition about the values of its nodes; each node is less than or equal to all of its ancestors and greater than or equal to all of its descendants. In particular, the heap element with the maximum value will be at the root of this tree (explaining the name of the max heap).

Representing a Max Heap

To develop a MaxHeap class, we must first decide on its state representation. As with all of our other data structures, the MaxHeap will be a generic class with a type parameter T that represents the type of elements that it stores. For the heap order invariant to make sense, we must be able to compare elements of type T, which we can enforce with the generic type bound T extends Comparable<T>.

MaxHeap.java

/**
 * A (binary) max heap storing elements of type T.
 */
public class MaxHeap<T extends Comparable<T>> { ... }

While we can represent the heap using the same recursive linked structure that we developed in our earlier lectures (making MaxHeap a subclass of BinaryTree), a simpler representation is actually possible. Because of the heap’s shape invariant, there is only one possible shape that a heap can have for each size. If a heap contains 10 elements, we know that it must have the shape shown above. It must have height at least 3, since only 7 elements can fit in a binary tree with height 2. Then, it must fill its top 3 levels, with its root node at depth 0, two nodes at depth 1, and four nodes at depth 2. Its final three nodes must all be at depth 3, occupying the three leftmost positions. Any deviation from this would cause it not to be a nearly complete binary tree.

We can, therefore, store the elements in a List whose indices refer to specific positions in the binary tree. We’ll number the positions from top to bottom in the tree and from left to right within each level, which is often referred to as a level-order traversal of the tree. This correspondence between the tree and List (which we’ll visualize as a row of boxes, like an array) representations of a heap is illustrated below.

We’ll use an ArrayList<T> for this backing storage to practice working with a Java library data structure as a client.

MaxHeap.java

/**
 * The backing storage of this heap. Entries are ordered according to a 
 * level-order traversal of a nearly complete binary tree and must satisfy 
 * the max heap order invariant.
 */
private final ArrayList<T> heap;

To “navigate” the tree in this ArrayList representation, it will be helpful to have helper methods that can return the indices of the parent(), leftChild(), and rightChild() of the node at a particular index i in this heap. Take some time to study the indices in the above picture to determine the formulas for computing these parent and child indices. Then, use this to complete the definition of these helper methods.

parent(), leftChild(), and rightChild() helper methods

MaxHeap.java

/**
 * Returns the index that would correspond to the parent of the heap entry at
 * index `i`. Requires that `i > 0` (the root node does not have a parent).
 */
private static int parent(int i) {
  assert i > 0;
  return (i - 1) / 2;
}

/**
 * Returns the index that would correspond to the left child of the heap entry 
 * at index `i`. Requires that `i >= 0`.
 */
private static int leftChild(int i) {
  assert i >= 0;
  return 2 * i + 1;
}

/**
 * Returns the index that would correspond to the right child of the heap entry 
 * at index `i`. Requires that `i >= 0`.
 */
private static int rightChild(int i) {
    assert i >= 0;
    return 2 * i + 2;
}
The index of the left child of a node at index \(i\) is always \(2i + 1\), one more than twice the node's own index. For example, the left child of the node at index 0 is at index 1, the left child of the node at index 1 is at index 3, and the left child of the node at index 4 is at index 9.

The index of the right child of a node at index \(i\) is always \(2i + 2\), two more than twice the node's own index. For example, the right child of the node at index 0 is at index 2, the right child of the node at index 1 is at index 4, and the right child of the node at index 3 is at index 8.

The index of the parent of any node is either 0.5 or 1 less than half that node's index, depending on whether the index is odd or even. The single formula (i-1)/2 handles both cases, since integer division discards the fractional part. For example, the parent of the node at index 4 is at index \(\lfloor \tfrac{4-1}{2} \rfloor = \lfloor 1.5 \rfloor = 1\), and the parent of the node at index 7 is at index \(\lfloor \tfrac{7-1}{2} \rfloor = \lfloor 3 \rfloor = 3\).

Notice that we have defined these private helper methods to be static since they do not depend on the state of the heap; they merely evaluate a mathematical expression that depends on their input parameter.
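
As a quick sanity check on these formulas, the following standalone test-style sketch (the class name is made up, and it is not part of MaxHeap) verifies that the parent formula inverts both child formulas for the first several indices:

public class HeapIndexCheck {
  public static void main(String[] args) {
    // Run with assertions enabled (`java -ea HeapIndexCheck`).
    for (int i = 0; i < 16; i++) {
      int left = 2 * i + 1;   // index of the left child of node `i`
      int right = 2 * i + 2;  // index of the right child of node `i`
      // Applying the parent formula to either child index recovers `i`.
      assert (left - 1) / 2 == i;
      assert (right - 1) / 2 == i;
    }
    System.out.println("parent() inverts leftChild() and rightChild()");
  }
}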

To complete our state representation, we can add an assertInv() method to enforce the max heap order invariant and a MaxHeap() constructor that initializes an empty heap (and asserts the invariant).

MaxHeap.java

/**
 * Asserts that this heap satisfies the max heap order invariant.
 */
private void assertInv() {
  for (int i = 1; i < heap.size(); i++) {
    assert heap.get(i).compareTo(heap.get(parent(i))) <= 0;
  }
}

/**
 * Constructs an initially empty max heap.
 */
public MaxHeap() {
  heap = new ArrayList<>();
  assertInv();
}

We can immediately give the definitions of some simpler heap methods. For example, we can get the size() of the heap by returning heap.size(), and we can peek() at the largest element in the heap by returning its root, accessible through heap.getFirst().

MaxHeap.java

/**
 * Return the number of elements currently contained in this heap.
 */
public int size() {
  return heap.size();
}

/**
 * Returns the largest element from this heap without removing it. Requires 
 * that the heap is not empty.
 */
public T peek() {
  assert size() > 0;
  return heap.getFirst();
}

The two remaining methods, add() and remove(), will require more work to maintain both heap invariants.

The add() Method

Next, let’s consider the add() method.

/**
 * Adds the given `elem` to this heap.
 */
public void add(T elem) { ... }

If we naively add elem to the end of the heap list, this will not compromise the shape invariant. The fact that our array representation lists the elements in level order means that the new element will be placed in the leftmost available spot of the bottom level of the tree (or in the leftmost spot of a new level if the lowest level was previously full). However, if elem is sufficiently large, this may violate the max heap order invariant. For example, if we add 12 to the end of the max heap that we’ve been considering, we end up in the following state.

Note that we will continue to draw out the tree representation of a heap (even though its underlying storage is a list), as this makes it easier to reason about the structure of the heap and check the order invariant. In this case, we see that the order invariant is violated because 12 is greater than its parent node 8.

We must find an efficient way to restore the order invariant. We can start by fixing the violation that we just introduced. 12 cannot be a child of the smaller node 8, and we can correct this by swapping 8 and 12. Let’s extract this subroutine into a swap() helper method.

MaxHeap.java

/**
 * Swaps the entries at indices `i` and `j` in this heap. Requires that 
 * `0 <= i < heap.size()` and `0 <= j < heap.size()`.
 */
private void swap(int i, int j) {
  T temp = heap.get(i);
  heap.set(i, heap.get(j));
  heap.set(j, temp);
}

After performing this swap(), our heap is left in the following state:

We have not yet restored the order invariant; 12 is still greater than its parent 10. However, we have moved the problem up higher in the heap. 12 and 8 are now in the correct relative order, as are 12 and its new left child 5. In fact, such a swap can never introduce a problem lower in the tree. Consider the following picture:

If the max heap order invariant was violated by the addition of node \(c\), this must be because \(a < c\). If the heap order invariant held prior to the addition of \(c\), then \(b \leq a\). Combining these inequalities, we find that \(b < c\), so the order invariant is re-established among these three nodes by swapping \(a\) and \(c\). Similar reasoning applies if the violation was between \(a\) and \(b\) rather than \(a\) and \(c\).

Now, we can repeat our reasoning from above to continue restoring the order invariant. 12 is greater than its parent 10, and we can remove this violation by swapping 12 and 10. This leaves our heap in the following state:

Now, 12 is less than its parent 15, which signals to us that the order invariant has been restored. As we explained above, all the new (parent, child) relationships that we formed below 12 satisfy the order invariant, and now 12’s relationship with its parent satisfies it as well. All the other (parent, child) relationships remain unchanged.

We can extract this subroutine, which restores the heap order invariant by repeatedly swapping the new element upward with its parent, into a private helper method bubbleUp() that takes in the index where the new element currently resides. We can formulate this method recursively. There are two stopping conditions for bubbleUp(): either the new element has bubbled all the way up to the root (in which case, it is the maximum element), or the new element is no larger than its parent. Otherwise, we swap the new element (at index i) with its parent and then recurse on the parent index to continue bubbling.

MaxHeap.java

/**
 * Repeatedly swaps the element in index `i` of the heap with its parent until 
 * it resides at the heap root or it is less than its new parent.
 */
private void bubbleUp(int i) {
  if (i == 0) {
    return; // we've reached the root, no more swapping is necessary
  }
  int p = parent(i);
  if (heap.get(i).compareTo(heap.get(p)) > 0) { // parent smaller than `i`
    swap(i, p);
    bubbleUp(p); // tail-recursion, can be rewritten as while-loop to save space
  }
}

Then, our add() method simply needs to call bubbleUp() after adding elem to the end of the heap list.

MaxHeap.java

/**
 * Adds the given `elem` to this heap.
 */
public void add(T elem) {
  heap.add(elem);
  bubbleUp(heap.size() - 1);
  assertInv();
}

To reason about the complexity of add(), let’s start by analyzing its helper methods. swap() only affects two entries of the ArrayList, so it runs in \(O(1)\) time. Notice that our bubbleUp() operation performs at most one swap per level of the tree, so it runs in \(O(\textrm{height})\) time. Since the heap is nearly complete, it is a balanced binary tree, and its height will be \(O(\log N)\), in fact exactly \(\lfloor \log_2 N \rfloor\). Thus, bubbleUp() runs in \(O(\log N)\) time. This dominates the runtime of add(), which also runs in \(O(\log N)\) time. As written, the space complexity of add() (not counting the possibility of an array resize) is dominated by the \(O(\log N)\) recursive depth of bubbleUp(). Since bubbleUp() is tail-recursive, it is easy to re-implement it non-recursively using a loop, which results in an \(O(1)\) space complexity (see Exercise 18.7).

The remove() Method

By the max heap order invariant, the return value is the element at index 0 of heap. After storing this return value, we’ll need to remove it from the heap and restore both the shape and order invariants. It is not a good idea to simply remove the first entry from heap and shift down the remaining entries. Since each list index refers to a particular location in the tree representation of the heap, this shifting will scramble all the elements in a way that can introduce multiple violations to the heap order invariant. We’ve highlighted the two violations in the following figure.

Instead, we’d like a way to remove the old root 15 while minimally disrupting the rest of the tree. We can do this by copying the last list element to the first index (making it the new root) and then removing it from the end of the list. This results in a heap with the correct set of elements in which the only potential order invariant violations involve the new root element. Then, to restore the heap invariant, we can do a similar process to bubbleUp(), swapping this new root element downward in the heap until the order invariant is restored. We’ll call this subroutine a bubbleDown(). The following animation steps through the full remove() procedure.


The code for our remove() and bubbleDown() methods is given below.

MaxHeap.java

/**
 * Removes and returns the largest element from this heap. Requires that the 
 * heap is not empty.
 */
public T remove() {
  assert size() > 0;
  swap(0, heap.size() - 1);
  T removed = heap.removeLast();
  bubbleDown(0);
  assertInv();
  return removed;
}

/**
 * Repeatedly swaps the element in index `i` of this heap with its larger child
 * until it resides at a heap leaf or it is larger than all of its children.
 */
private void bubbleDown(int i) {
  if (leftChild(i) >= heap.size()) {
    return; // `i` is a leaf
  }
  int c = leftChild(i); // index of larger child node
  if (rightChild(i) < heap.size() && heap.get(rightChild(i)).compareTo(heap.get(c)) > 0) {
    // `i` has a right child that is larger than its left child
    c = rightChild(i);
  }
  if (heap.get(i).compareTo(heap.get(c)) < 0) {
    swap(i, c);
    bubbleDown(c); // tail-recursion, can be rewritten as while-loop to save space
  }
}

Similar to bubbleUp(), our bubbleDown() method does a constant amount of work (up to two element comparisons and one swap) per level of the tree, so it runs in \(O(\textrm{height}) = O(\log N)\) time. This dominates the runtime of remove(), which also runs in \(O(\log N)\) time. The space complexity is dominated by the \(O(\log N)\) recursive depth of bubbleDown(), and we can again use a non-recursive implementation to achieve an \(O(1)\) space complexity.

Implementing a Priority Queue

We can use our MaxHeap class as the basis for an efficient implementation of a PriorityQueue, which we’ll call a MaxHeapPriorityQueue.

MaxHeapPriorityQueue.java

/** A priority queue implementation using composition with a max heap. */
public class MaxHeapPriorityQueue<T> implements PriorityQueue<T> { ... }

This class will use a MaxHeap to represent its state. Each node in the heap will need to store both an element (of type T) and its priority (which is used to order the elements). We can collect these two components in a (static) nested record class, Entry<T>. To add these Entrys to a heap, they will need to be Comparable, and we’ll define the compareTo() method to compare the priorities of the entries.

MaxHeapPriorityQueue.java

public class MaxHeapPriorityQueue<T> implements PriorityQueue<T> {
  /**
   * Represents an association of a `priority` to an `elem`. `Entry`s are compared 
   * using their priorities.
   */
  private record Entry<T>(T elem, double priority) implements Comparable<Entry<T>> {
    @Override
    public int compareTo(Entry<T> other) {
      return (int) Math.signum(priority - other.priority);
    }
  }

  /** The max heap backing this queue. */
  private final MaxHeap<Entry<T>> heap;

  /** Constructs an empty priority queue. */
  public MaxHeapPriorityQueue() {
    heap = new MaxHeap<Entry<T>>();
  }

  // ... priority queue methods 
}

Now that we have established a composition relationship between this priority queue class and the MaxHeap class, we can define the PriorityQueue methods to call the respective MaxHeap methods. Take some time to complete these definitions before looking at our implementation.

PriorityQueue methods

MaxHeapPriorityQueue.java

@Override
public void add(T elem, double priority) {
  heap.add(new Entry<>(elem, priority));
}

@Override
public boolean isEmpty() {
  return heap.size() == 0;
}

@Override
public T peek() {
  assert !isEmpty();
  return heap.peek().elem;
}

@Override
public T remove() {
  assert !isEmpty();
  return heap.remove().elem;
}

The runtimes of these methods directly relate to those of the MaxHeap methods. The isEmpty() and peek() methods both run in \(O(1)\) time. The add() and remove() methods both run in \(O(\log N)\) time. We have achieved our desired complexities.
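
To tie this back to the emergency-room motivation, here is a brief usage sketch. This is hypothetical client code, and the priority values are made up for illustration; note that PriorityQueue here is the interface from this lecture, not java.util.PriorityQueue.

// A hypothetical triage example using the classes defined above.
PriorityQueue<String> waitingRoom = new MaxHeapPriorityQueue<>();
waitingRoom.add("sprained ankle", 2.0);
waitingRoom.add("cardiac arrest", 9.5);
waitingRoom.add("broken arm", 5.0);

// Patients are admitted in descending priority order.
while (!waitingRoom.isEmpty()) {
  System.out.println(waitingRoom.remove());
}
// Prints "cardiac arrest", then "broken arm", then "sprained ankle".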

Sometimes, we might like the ability to update the priorities of elements while they are in the priority queue. For example, this will be necessary for an efficient implementation of Dijkstra’s shortest path algorithm in graphs, which we will study in a few lectures. We’ll discuss soon how to add support for these priority updates while maintaining the same complexity guarantees.

Heap Sort

To conclude today’s lecture, we’ll explore how we can use a heap to develop a new, efficient sorting algorithm. The idea for this algorithm is very straightforward. First, we’ll form a max heap out of all the elements that we wish to sort. We’ll call this the heapify() step. Then, we can remove the elements from this max heap one by one. The first element to be removed will be the largest, followed by the second-largest, followed by the third-largest, etc. We extract the elements in descending sorted order.

We can perform both of these steps in-place within an array if we modify our heap methods to manipulate an array range rather than an ArrayList. Then, both steps amount to a single pass over the array, which we can reason about with a loop invariant.

Forward Pass: heapify()

First, we’ll build up the heap from the front of the array to the end. Initially, we have no knowledge about the contents of the array.

After finishing this forward pass, we’d like the array’s elements to form a valid binary max heap.

During the ith iteration, our invariant will be that the range a[..i) forms a valid binary max heap.

We can initialize i = 0 to establish the invariant (since a[..0) is the empty range, which is trivially a valid (empty) binary max heap; the code below starts at i = 1 instead, which also establishes the invariant because a one-element range is trivially a max heap), and we can guard the loop on the condition i < a.length (since a[..a.length) is the entire array a). To make progress in each iteration, we add() a[i] to the heap; this causes the heap to grow to occupy a[..i+1), allowing us to increment i and preserve the loop invariant.

The main structure for heapify() is shown below. The full implementation, which adapts the logic from earlier in the lecture to work on array ranges, is provided with the lecture release code.

HeapSort.java

/** Rearranges the entries of `a` so that they correspond to a valid binary max heap. */
private static <T extends Comparable<T>> void heapify(T[] a) {
  /* Loop invariant: `a[..i)` is a valid binary max heap. */
  for (int i = 1; i < a.length; i++) {
      add(a, i);
  }
}

/**
 * Adds `a[i]` to the heap `a[..i)` by performing a `bubbleUp()` operation 
 * starting at index `i`. Requires that `i < a.length`.
 */
private static <T extends Comparable<T>> void add(T[] a, int i) { ... }
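
One possible way to fill in this array-range add() helper is sketched below. It simply mirrors the bubbleUp() logic from earlier, using the index formulas directly on the array; it is a sketch under that assumption, not necessarily the lecture release code.

/**
 * Adds `a[i]` to the heap `a[..i)` by bubbling it up toward the root until it is
 * no larger than its parent. Sketch only; mirrors `MaxHeap.bubbleUp()`.
 */
private static <T extends Comparable<T>> void add(T[] a, int i) {
  if (i == 0) {
    return; // reached the root, no more swapping is necessary
  }
  int p = (i - 1) / 2; // index of the parent of `a[i]`
  if (a[i].compareTo(a[p]) > 0) { // parent is smaller than the new element
    T temp = a[i];
    a[i] = a[p];
    a[p] = temp;
    add(a, p); // continue bubbling up from the parent's position
  }
}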

An alternate, more efficient version of heapify() is discussed in Exercise 18.8. The overall runtime of heap sort remains the same for this alternate approach, since it is dominated by the backward pass that we will discuss next.

Backward Pass: Iterative Removal

Once we have a max heap, we can iteratively remove the largest element, using these removed elements to fill the array from the back. This will result in a sorted array since the removals happen in descending sorted order. Our precondition for this backward pass is that the entire array is a valid binary max heap.

After finishing the backward pass, our array should be sorted.

During the ith iteration (counting backward), our invariant will be that the range a[..i) forms a valid binary max heap and the range a[i..] is sorted and greater than or equal to a[0] (the maximum element in the heap).

Using these array diagrams, we should initialize i = a.length and guard the loop on the condition i > 0. To make progress in each iteration, we swap a[0] and a[i-1], which removes the largest element a[0] from the now-smaller heap range a[..i-1), and then bubble down the new root. This allows us to decrement i and preserve the loop invariant.

HeapSort.java

/**
 * Sorts the entries of `a` by first turning them into a heap and then removing 
 * the entries sequentially in descending sorted order.
 */
public static <T extends Comparable<T>> void heapSort(T[] a) {
  heapify(a);

  /* Loop invariant: `a[..i)` is a valid binary max heap. 
   *                 `a[i..]` is sorted and `>= a[0]`. 
   */
  for (int i = a.length; i > 0; i--) {
      remove(a, i);
  }
}

/**
 * Relocates the maximum entry of the max heap `a[..i)` to `a[i-1]` and restores 
 * the heap order invariant on `a[..i-1)`. Requires that `i > 0`.
 */
public static <T extends Comparable<T>> void remove(T[] a, int i) { ... }
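
Similarly, the array-range remove() helper might be filled in as follows. This sketch mirrors the bubbleDown() logic from earlier (with an extra `size` parameter marking the end of the heap range); the official release code may differ. After heapify(a), calling remove(a, i) for i from a.length down to 1 leaves a sorted.

/**
 * Relocates the maximum entry of the max heap `a[..i)` to `a[i-1]` and restores
 * the heap order invariant on `a[..i-1)`. Sketch only; requires that `i > 0`.
 */
public static <T extends Comparable<T>> void remove(T[] a, int i) {
  swap(a, 0, i - 1);        // move the maximum to its final sorted position
  bubbleDown(a, 0, i - 1);  // restore the order invariant on the shrunken heap `a[..i-1)`
}

/** Swaps `a[i]` and `a[j]`. */
private static <T> void swap(T[] a, int i, int j) {
  T temp = a[i];
  a[i] = a[j];
  a[j] = temp;
}

/** Bubbles `a[i]` down within the heap `a[..size)` until the order invariant holds. */
private static <T extends Comparable<T>> void bubbleDown(T[] a, int i, int size) {
  int left = 2 * i + 1;
  if (left >= size) {
    return; // `a[i]` is a leaf of the heap `a[..size)`
  }
  int c = left; // index of the larger child
  int right = 2 * i + 2;
  if (right < size && a[right].compareTo(a[c]) > 0) {
    c = right;
  }
  if (a[i].compareTo(a[c]) < 0) {
    swap(a, i, c);
    bubbleDown(a, c, size);
  }
}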

Visualizing the Heap Sort Algorithm

The following animation walks through one invocation of the heap sort algorithm to sort an array of 7 elements.


If you focus on the array entries in this animation, they appear to be jumping around erratically, especially during the forward pass. For this reason, heap sort seems like a somewhat “magical” sorting algorithm; the “magic” is just carefully managing and reasoning about different invariants.

Overall, the runtime of heap sort is dominated by the \(O(N)\) calls to bubbleUp() and bubbleDown() during the forward and backward passes (respectively). Since these bubbling operations have an \(O(\log N)\) runtime, the runtime of heap sort is \(O(N \log N)\). Since the heap construction is handled in-place within the array, heap sort has an \(O(1)\) space complexity (when bubbleUp() and bubbleDown() are implemented iteratively, otherwise \(O(\log N)\)). Heap sort is not a stable sorting algorithm, which we ask you to verify in Exercise 18.9.

Main Takeaways:

  • A priority queue is an ADT that removes its elements in descending priority order. We can give an efficient implementation of a priority queue using a max heap data structure.
  • A max heap is a binary tree with a shape invariant and an order invariant. The shape invariant dictates that a heap must be a nearly complete binary tree, which allows us to represent it using an array.
  • The max heap order invariant dictates that a parent node must always be at least as large as any of its children.
  • To maintain the order invariant while add()ing and remove()ing elements from the heap, we use bubbleUp() and bubbleDown() operations to swap elements. Both of these helper methods do a constant amount of work per level of the tree, so they have an \(O(\log N)\) runtime.
  • Heap sort is an \(O(N \log N)\) sorting algorithm that adds and then removes all elements from a max heap.

Exercises

Exercise 18.1: Check Your Understanding
(a)
True or False: Recall the BST that we defined in the lecture notes. When ignoring the empty nodes, it is impossible for a BST to represent a max heap.
(b)

Consider the following sequence of operations performed on an initially empty MaxHeap<Integer> heap.

heap.add(1);
heap.add(3);
heap.add(0);
heap.add(4);
heap.add(2);
heap.remove();
Draw the resulting tree. What is the rightmost node on the bottom level of the tree?
(c)

Consider the following sequence of operations performed on an initially empty MaxHeap<Integer> heap.

for (int i = 0; i < 128; i++) {
  heap.add(i);
}

What is the value of the root node of the tree?

The value at the root node of the tree is 127.

(d)
Consider a max heap of size 31 stored in a list heap. Which of the following value(s) of k would guarantee that heap[4] >= heap[k]?
Exercise 18.2: PriorityQueue Implementations
In the above lecture notes, we stated the time complexities of possible implementations of a PriorityQueue ADT with a list. Implement the list-based PriorityQueues.
(a)
/** A priority queue backed by a list. */
public class ListPriorityQueue<T> implements PriorityQueue<T> { ... }
(b)
/** A priority queue backed by a sorted list. */
public class SortedListPriorityQueue<T> implements PriorityQueue<T> { ... }
Exercise 18.3: Tree to List Conversions
(a)
Convert [42, 29, 18, 14, 7, 18, 12, 11, 5] to its binary tree representation.
(b)

Complete the definition of the following method, which converts a list to its nearly complete binary tree representation.

/**
 * Constructs and returns the nearly-complete binary tree whose level-order 
 * traversal is the given `list`.
 */
static <T> BinaryTree<T> listToTree(ArrayList<T> list) { ... }
(c)
Convert the following tree to a list.
(d)

Complete the definition of the following method, which converts a nearly complete binary tree to its list representation. View Exercise 16.7 as a hint.

/**
 * Converts the given nearly complete binary `tree` to its list representation.
 */
static <T> ArrayList<T> treeToList(BinaryTree<T> tree) { ... }
Exercise 18.4: Verify Heaps
For each of the following binary trees, determine whether it represents a max heap. If it doesn't, state which invariant is violated.
(a)
(b)
(c)
A character with higher priority is lexicographically later.
(d)

Implement the following instance method that checks whether this given binary tree is a max heap. Consider writing two helper methods to check the two invariants of a max heap. For the order invariant, consider using a Queue to keep track of the visited nodes. What can you guarantee when you find a null child?

BinaryTree.java

/** Returns whether `this` represents a max heap. */
public boolean isMaxHeap() { ... }

Exercise 18.5: Element Order Statistics
Consider a max heap priority queue as shown in the lecture notes with \(N\) elements.
(a)
How many indices of the list can hold the largest element? The second-largest? The third-largest?
(b)
Given \(1\le k\le \log_2N\), describe a condition on an index of the list that would imply it could hold the \(k\)-th largest element. Hint: View the heap as a binary tree and consider the relationship between node depth and ordering.
(c)

Complete the definition of the following method that returns the \(k\)-th smallest element. This method should have a runtime strictly better than \(O(N\log N)\) when \(k \leq \log N\).

/**
 * Returns the k-th smallest element in `list`. Formally, this method returns
 * the smallest element `x` in `list` such that there are at least `k-1` elements
 * in `list` <= `x`, excluding `x` itself.
 */
static <T extends Comparable<T>> T kthSmallest(CS2110List<T> list, int k) { ... } 
Hint: Store the \(k\) smallest elements seen so far in a max heap.
Exercise 18.6: Min Heap
We often associate priority with the largest values. However, sometimes, smaller values should be treated as higher priority. A (binary) min heap maintains the same shape invariant as a max heap. Its order invariant enforces that every child is at least as large as its parent.
/** A binary min heap. */
public class MinHeap<T extends Comparable<T>> { ... }
(a)
Again, we’ll use a backing list for our min heap. Add any fields needed to represent our min heap, their specifications, and a constructor to initialize them.
(b)
Modify the helper methods bubbleUp() and bubbleDown() to satisfy the new order invariant.
(c)
Implement the rest of the methods as seen in MaxHeap.
Exercise 18.7: Iterative Bubbling
Re-implement the bubbleUp() and bubbleDown() methods iteratively.
Exercise 18.8: Improved Heapify
Our implementation of heapify starts from the beginning of the list, iteratively adding elements to the heap. It runs in \(O(N\log N)\) time, where \(N\) is the number of elements in the heap. We'll explore how building the heap from the bottom-up can improve this runtime.
(a)

At a high level, the improved heapify algorithm starts at the bottom-rightmost non-leaf node of the binary heap and performs any necessary bubbleDown() operations to establish the max heap order invariant on its subtree. Then, it repeats this for each node, moving toward the start of the list. Implement this improvedHeapify() method. Be sure to document your loop with a loop invariant comment.

private static <T extends Comparable<T>> void improvedHeapify(T[] a) { ... }
Let \(h\) denote the height of the heap when represented as a binary tree. Then, for a particular node, let \(i\) denote the number of connections needed to reach its furthest leaf. For example, the root node has \(i=h\).
(b)
What is the maximum number of nodes that can have a particular value of \(i\)? Express your answer in terms of \(i\) and \(h\). We’ll call this quantity \(C(i,h)\).
(c)
In the worst case, what is the maximum number of swaps that will be needed to restore the max heap order invariant for a subtree rooted at a node with a particular value of \(i\)? Express your answer in terms of \(i\). We’ll call this quantity \(W(i)\).
(d)

The worst-case time complexity of improvedHeapify() is

\[ \sum_{i=0}^{h} C(i, h) \cdot W(i) \]

Show that this is in the order of \(O(N)\). You might find the following fact useful as an upper bound:

\[ \sum_{i=0}^\infty\frac{i}{2^i} = 2 \]
Exercise 18.9: HeapSort Stability
Prove by counterexample that heapSort() is not stable. That is, choose a list that when sorted will not result in a stable sorted order. It may help to use multiple colors to distinguish different occurrences of equivalent elements.
(a)
Draw the contents of this list after each iteration of the for-loop when running heapify().
(b)
Draw the contents of this list after each iteration of the for-loop when running heapSort(). Show that this is not a stable sort.
Exercise 18.10: Min-max Heap
In many applications, we need fast access not just to the smallest element or the largest element, but to both extremes. Min-max heaps can be used to implement the double-ended priority queue ADT, used in external sorting when the number of elements to be sorted cannot fit into memory. Min-max heaps satisfy the same shape invariant as before. However, their ordering invariant requires that nodes at even depths store the smallest elements within their respective subtrees, while nodes at odd depths store the largest elements within theirs. The following is an example of a min-max heap represented as a binary tree.
/** A binary min-max heap. */
public class MinMaxHeap<T extends Comparable<T>> {
  // fields and constructor

  /** Adds `elem` to this heap. */
  public void add(T elem) { ... }

  /**
   * Return the number of elements currently contained in this heap.
   */
  public int size() { ... }

  /**
   * Returns the largest element from this heap without removing it. Requires 
   * that the heap is not empty.
   */
  public T peekMax() { ... }

  /**
   * Returns the smallest element from this heap without removing it. Requires 
   * that the heap is not empty.
   */
  public T peekMin() { ... }

  /**
   * Removes and returns the largest element from this heap. Requires that the 
   * heap is not empty.
   */
  public T removeMax() { ... }

  /**
   * Removes and returns the smallest element from this heap. Requires that the 
   * heap is not empty.
   */
  public T removeMin() { ... }
}
(a)
As with all of our other heaps, we’ll be representing a min-max heap as a list. Define the fields, specifications, and constructors to initialize a min-max heap.
(b)
Implement size(), peekMin(), and peekMax(). All these should run in \(O(1)\) time.
(c)

Implement bubbleUpMin() and bubbleUpMax(). Take note that we are now comparing a node and its grandparent. We’ll be making heavy use of these helper methods for insertion and deletion. Consider creating a helper method to compute the index of a node’s grandparent.

/**
 * Repeatedly swaps the element in index `i` of the heap with its grandparent 
 * until it resides at the heap root or it is greater than its new grandparent.
 * Requires `i` is on an even level and `0 <= i < size()`.
 */
private void bubbleUpMin(int i) { ... }

/**
 * Repeatedly swaps the element in index `i` of the heap with its grandparent 
 * until it resides at depth 1 or it is less than its new grandparent.
 * Requires `i` is on an odd level and `0 <= i < size()`.
 */
private void bubbleUpMax(int i) { ... }
(d)
Taking inspiration from the last subproblem, write specifications for and implement bubbleDownMin().
(e)

The insertion algorithm closely mirrors that of a max heap. We first insert the new element to the last index of our list. We take note of whether this node is on an even or odd level and swap with its parent if needed. For instance, if the new element is on an odd level and it is less than its parent, we must swap the two. Then, bubble up (min/max depending on level parity) the element in the last index.

public void add(T elem) { ... }
Consider adding a new field nextDepth that is the depth of the next node to be inserted. This value can be updated after insertions and removals with the expression: \(\lfloor \log_2(N+1) \rfloor\), where \(N\) is the number of elements in the heap.
(f)
The removal methods are effectively the same as those of a max heap. Replace the root node with the last node in the list. Invoke bubbleDownMin() on the root. Beware of the edge case where the root node does not have grandchildren.
(g)
What is the asymptotic runtime complexity of each method?
Exercise 18.11: \(d\)-ary Heap
Heaps need not be represented as a binary tree. They can also be represented as a \(d\)-ary tree, where each node has at most \(d\) children. The shape invariant remains, but the order invariant generalizes to stipulate that a node must be at least as large as all of its children.
(a)
Suppose \(d=3\). Convert the max heap used in section 18.2 to a ternary heap. Repeat for \(d=4,5\).
(b)
What happens to the height as \(d\) approaches \(N\)? Which operations become more efficient as we increase \(d\)? Which ones become less efficient?