We played with an animation of several sorting algorithms to see their loop invariants in action and to compare their relative performance. We also saw the difference between worst-case, best-case, and average-case running time.
We briefly discussed heap sort. Heap sort is like selection sort, except that instead of performing a linear-time search at each step to find the largest remaining value, it builds a max-heap and uses it to find that value in O(log n) time. There are two steps to heap sort. First, we build a heap in the array, using a loop with the following invariant:
                           i
    a: [ max-heap property | ? ]
Then we repeatedly remove the largest element and place it at the end of the array, maintaining the invariant:

                           i
    a: [ max-heap property | largest elements, sorted ]
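As a rough sketch of how the two loops fit together (this is one possible Python implementation, not the project's required one; the names `sift_up` and `sift_down` are my own):

```python
def sift_up(a, i):
    # Bubble a[i] up until its parent is at least as large,
    # restoring the max-heap property for a[:i+1].
    while i > 0:
        parent = (i - 1) // 2
        if a[parent] >= a[i]:
            return
        a[parent], a[i] = a[i], a[parent]
        i = parent

def sift_down(a, i, n):
    # Push a[i] down within a[:n] until both children are no larger,
    # assuming each child of i is already the root of a valid max-heap.
    while True:
        largest = i
        left, right = 2 * i + 1, 2 * i + 2
        if left < n and a[left] > a[largest]:
            largest = left
        if right < n and a[right] > a[largest]:
            largest = right
        if largest == i:
            return
        a[i], a[largest] = a[largest], a[i]
        i = largest

def heap_sort(a):
    n = len(a)
    # Step 1: grow the heap one element at a time.
    # Invariant: a[:i] satisfies the max-heap property, a[i:] is unexamined.
    for i in range(1, n):
        sift_up(a, i)
    # Step 2: repeatedly move the max (the root) to the end and shrink the heap.
    # Invariant: a[:i] is a max-heap, a[i:] holds the largest elements, sorted.
    for i in range(n - 1, 0, -1):
        a[0], a[i] = a[i], a[0]
        sift_down(a, 0, i)
    return a
```

Each `sift_up` and `sift_down` call walks at most the height of the heap, so both loops run in O(n log n) total.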
You are implementing the two heap operations (adding an element to a heap, and removing the maximum element from a heap) for your project.
You won't be tested on this material, but showing that certain problems are impossible is an important idea in computer science.
We discussed lower bounds on sorting, arguing that any sorting algorithm that is only allowed to compare elements of the array and swap them must take Ω(n log n) time in the worst case (Ω is like O, except that saying f is O(g) is like saying f ≤ g, while saying f is Ω(g) is like saying f ≥ g).
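A quick numerical sanity check of the counting step of that argument (this snippet is illustrative only, not the full proof): a comparison sort must distinguish all n! possible input orderings, each comparison has two outcomes, so at least ⌈log₂(n!)⌉ comparisons are needed, and that quantity grows like n log n.

```python
import math

def min_comparisons(n):
    # A comparison sort must distinguish all n! orderings; each comparison
    # has two outcomes, so at least ceil(log2(n!)) comparisons are needed.
    return math.ceil(math.log2(math.factorial(n)))

for n in (4, 16, 64):
    # Compare the lower bound with n * log2(n) to see they track each other.
    print(n, min_comparisons(n), math.ceil(n * math.log2(n)))
```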
See sections 5.1 and 5.2 of this document for a description of the argument we gave.
We briefly touched on linear-time sorts, which use additional information about the input to improve on the O(n log n) bound. For example, bucket sort uses the assumption that the values are evenly distributed within some range to achieve expected linear-time sorting.
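A minimal sketch of bucket sort, assuming the values lie in a known range [lo, hi) and are roughly uniformly distributed (the function name and parameters here are my own, not from the notes):

```python
def bucket_sort(values, lo, hi):
    # Assumes values are (roughly) uniformly distributed in [lo, hi).
    # With n buckets, each bucket then holds O(1) elements on average,
    # so the total expected work is linear.
    n = len(values)
    if n == 0:
        return []
    buckets = [[] for _ in range(n)]
    for v in values:
        # Map v linearly onto a bucket index in [0, n).
        idx = min(int((v - lo) / (hi - lo) * n), n - 1)
        buckets[idx].append(v)
    out = []
    for b in buckets:
        out.extend(sorted(b))  # each bucket is tiny on average, so this is cheap
    return out
```

If the uniformity assumption fails (say, all values land in one bucket), the running time degrades to that of the per-bucket sort, which is why the linear bound is only an expected-case guarantee.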