Binomial and Fibonacci Heaps, continued

Lazy Merge

We can improve the amortized time of merges by a slight modification of the data structure. Instead of keeping the trees B(i) in an array, we keep them in a doubly linked list. Also, all children of any node are linked together in a doubly linked list. Now to merge 2 heaps in O(1) time, just concatenate the lists, compare the two min pointers, and let the new min pointer point to the smaller one.

The trouble with this is that we no longer have at most one tree of each rank. For example, we could create N heaps of one element each, then merge them all; we would end up with a heap consisting of N copies of B(0). This won't bother us until we have to do a deletemin. When that happens, we take the time to clean up the data structure.

To do a deletemin, we remove the min element, then concatenate its list of children into the main heap. Now we need to find the new min. We create an array and form an eager-merge version of the heap: we pick the trees off the list one by one and enter them into the array. If there is already a tree in the array of that rank, we link the 2 trees to get a tree of the next rank, then try to enter that one into the array, and so on. We keep linking until we find an empty slot in the array, then we store the tree there. We enter all the trees in the list into the array like this. While we are going through the trees, we look for the new min and remember where it is; it is at the root of some tree, since all the trees are heap ordered.

This single operation could take as much as linear time. However, it takes only O(log n) time amortized. To see this, recall our credit invariant: every tree in existence has $1 in its account, which we can use to pay for linking. For each tree on the list, entering it into the array takes constant time plus time proportional to the number of links we did for that tree. Thus if we started with m trees in the list and did k links in all, the total time spent is O(m+k). By our credit invariant each tree carries $1, so we have at least $k on hand to pay for the links, and we will be fine provided m+k is O(k + log n). But if we start with m trees and do k links in all, we have exactly m-k trees when we are done, since each link leaves us with one less tree. Also, when we are done all the ranks are distinct, so m-k is no more than the max rank, which is O(log n). Thus m+k = 2k + (m-k) = O(k + log n).
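As a concrete illustration, here is a minimal Python sketch of the lazy merge and the deletemin cleanup just described. It is a toy under stated assumptions, not the real structure: Python lists stand in for the doubly linked root and child lists (so the concatenations below are not actually O(1)), a dict stands in for the rank-indexed array, and the names LazyHeap, link, and deletemin are invented for this sketch.

    class Node:
        def __init__(self, key):
            self.key = key
            self.rank = 0
            self.children = []   # stands in for the doubly linked child list

    def link(a, b):
        """Link two trees of equal rank: the root with the larger key
        becomes a child of the other, giving a tree of the next rank."""
        if b.key < a.key:
            a, b = b, a
        a.children.append(b)
        a.rank += 1
        return a

    class LazyHeap:
        def __init__(self):
            self.trees = []      # stands in for the doubly linked root list
            self.min = None

        def insert(self, key):
            n = Node(key)
            self.trees.append(n)
            if self.min is None or key < self.min.key:
                self.min = n

        def merge(self, other):
            # Lazy merge: concatenate the root lists and compare the
            # two min pointers; no linking happens here.
            self.trees += other.trees
            if other.min is not None and (self.min is None or
                                          other.min.key < self.min.key):
                self.min = other.min

        def deletemin(self):
            # Remove the min element, then concatenate its children
            # into the main root list.
            m = self.min
            self.trees.remove(m)
            self.trees += m.children
            # Cleanup: enter the trees into a rank-indexed table,
            # linking whenever two trees collide at the same rank.
            slots = {}
            for t in self.trees:
                while t.rank in slots:
                    t = link(t, slots.pop(t.rank))
                slots[t.rank] = t
            self.trees = list(slots.values())
            # Find the new min while scanning the surviving roots.
            self.min = min(self.trees, key=lambda t: t.key, default=None)
            return m.key

    # Example: build two heaps, merge lazily, then clean up on deletemin.
    h1, h2 = LazyHeap(), LazyHeap()
    for x in (5, 3, 8): h1.insert(x)
    for x in (4, 1): h2.insert(x)
    h1.merge(h2)
    print(h1.deletemin(), h1.deletemin())   # prints: 1 3

Notice that merge does no linking at all; every link is deferred to deletemin, where the credit argument above pays for it.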
Fibonacci Heaps

We can do arbitrary deletes in amortized O(log n) time.

delete(h,i) -- delete node i from heap h. This is done by cutting the subtree rooted at i out of the tree in which it appears in heap h, forming a heap h' from the children of i, and concatenating h' onto h. We charge an extra $1 to this operation and save it with the parent of i for later use.

The problem with this is that our previous analysis depended on the fact that the size of a tree is exponential in its rank, i.e. the rank of a tree is the log of its size; but if we start cutting a lot of subtrees out, we will get scraggly trees and this will no longer be true. Thus we limit the number of cuts of children of any node to 1. If we ever want to cut a second child out of a node j, then we also cut j from its parent; if j is the second child of its parent to be cut, we cut j's parent from j's grandparent, and so on up the tree. These are called cascading cuts. We pay for all these cuts with the $1 saved at each of these nodes when its first child was cut out. Thus the amortized cost of this operation, even with cascading cuts, is constant. We also need to find the new min, which takes O(log n) time (this matters when i was the min element).

Now we will be fine provided the restriction to one child cut per node means that the trees are still exponential in their ranks, so that all of our previous analysis still holds. It does: under this rule the ith child added to any node always has rank at least i-2, and the smallest tree of rank n you can get by cutting as many subtrees as possible without violating the restriction has size f_n, the nth Fibonacci number. Since f_n grows as phi^n, where phi is the golden ratio (1+sqrt(5))/2 ~ 1.618..., the size is still exponential in the rank.
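Here is a matching Python sketch of delete with cascading cuts, under the same toy representation as before (list-based root and child lists, invented names cut and delete). Each node carries a mark bit that plays the role of the $1 saved when the node loses its first child.

    class Node:
        def __init__(self, key):
            self.key = key
            self.rank = 0
            self.marked = False   # True once this node has lost one child
            self.parent = None
            self.children = []

    def cut(roots, x):
        """Cut x out of its parent's child list and put it on the
        root list; if the parent has already lost a child, cut it
        too, and so on up the tree (cascading cuts)."""
        p = x.parent
        if p is None:
            return                 # x is already a root
        p.children.remove(x)
        p.rank -= 1
        x.parent = None
        x.marked = False           # roots are never marked
        roots.append(x)
        if p.parent is not None:
            if not p.marked:
                p.marked = True    # first child lost: save the $1 here
            else:
                cut(roots, p)      # second child lost: spend it, cascade

    def delete(roots, x):
        """delete(h,i): cut the subtree rooted at x out of its tree,
        then concatenate x's children onto the root list."""
        cut(roots, x)              # detach x; may trigger cascading cuts
        roots.remove(x)            # discard x itself
        for c in x.children:       # form h' from x's children and meld it
            c.parent = None
            c.marked = False
            roots.append(c)
        # (if x was the min, a new min is found as in deletemin)

The mark bit is exactly the credit of the analysis: the first cut below a node sets it (deposits the $1), and the cascading cut that removes the node itself clears and spends it.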
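To see where the Fibonacci numbers come from: since the ith child added to a node has rank at least i-2, the smallest possible subtree of a rank-k node satisfies S(0) = 1, S(1) = 2, and S(k) = 2 + S(0) + ... + S(k-2). As a quick sanity check of the bound, the sketch below computes S(k) from this recurrence and compares it against phi^k; in the usual indexing S(k) comes out as the (k+2)nd Fibonacci number, matching the f_n above up to a shift of the index.

    from math import sqrt

    def min_size(k):
        """Smallest subtree size of a rank-k node when each node may
        lose at most one child: the ith child has rank >= i-2."""
        S = [1, 2]                 # ranks 0 and 1
        for r in range(2, k + 1):
            S.append(2 + sum(S[:r - 1]))
        return S[k]

    phi = (1 + sqrt(5)) / 2        # the golden ratio, ~1.618
    for k in range(8):
        print(k, min_size(k), round(phi ** k, 2))
    # the sizes 1, 2, 3, 5, 8, 13, 21, 34 are Fibonacci numbers and
    # keep pace with phi^k, so size is still exponential in rank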