
3.1 Compaction

HAC computes usage information for both objects and frames as described in Section 3.2, and uses this information to select a victim frame F to compact, and also to identify which of F's objects to retain and which to discard. Then it moves retained objects from F into frame T, the current target for retained objects, laying them out contiguously to avoid fragmentation (see Figure 2).

Indirection table entries for retained objects are corrected to point to their new locations; entries for discarded objects are modified to indicate that the objects are no longer present in the cache; and the reference counts of the objects referenced by discarded objects are decremented. If all retained objects fit in T, the compaction process ends and F can be used to receive the next fetched page; this case is shown in Figure 2(a). If some retained objects do not fit in T, F becomes the new target and the remaining objects are compacted inside F to make all the available free space contiguous. Then another frame is selected for compaction and the process is repeated for that frame. This case is shown in Figure 2(b).
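The core of this step can be sketched as a small C function; the `Frame` type, the offset/size arrays, and `compact_into` are hypothetical names for illustration, not the structures used in the actual implementation. Retained objects from the victim F are copied contiguously into the target T until T fills; objects that do not fit stay behind, and F becomes the new target.

```c
/* Minimal sketch of HAC-style compaction (hypothetical names/layout).
   Retained objects from victim frame F are laid out contiguously in
   target frame T; on overflow, the caller makes F the new target. */
#include <string.h>

#define FRAME_SIZE 64   /* illustrative frame size in bytes */

typedef struct {
    char data[FRAME_SIZE];
    int  used;          /* bytes already occupied by retained objects */
} Frame;

/* Copy retained objects (given as offset/size pairs inside F) into T.
   Returns how many objects fit; the remainder must stay in F. */
int compact_into(Frame *T, const Frame *F,
                 const int *off, const int *size, int n)
{
    int moved = 0;
    for (int i = 0; i < n; i++) {
        if (T->used + size[i] > FRAME_SIZE)
            break;      /* T is full: F becomes the new target frame */
        memcpy(T->data + T->used, F->data + off[i], (size_t)size[i]);
        /* a real implementation would also update the object's
           indirection-table entry to its new (frame, offset) here */
        T->used += size[i];
        moved++;
    }
    return moved;
}
```

Copying objects contiguously, as above, is what keeps the free space in T in one piece and avoids fragmentation.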

Figure 2: Compacting objects in a frame

This compaction process preserves locality: retained objects from the same disk page tend to be located close together in the cache. Preserving locality is important because it takes advantage of any spatial locality that the on-disk clustering algorithm may be able to capture.

An important issue in hybrid caching is handling the situation where the disk page P of an object o is fetched and o is already in use, cached in frame F. HAC handles this situation in a simple and efficient way. No processing is performed when P is fetched. Since the copy of o in F is installed in the indirection table, o's copy in P will not be installed or used. If there are many such unused objects in P, its frame will be a likely candidate for compaction, in which case all its uninstalled copies will simply be discarded. If instead F is compacted, its copy of o is moved to P (if o is retained) instead of being compacted as usual. In either case, we avoid both extra work and foreground overhead. In contrast, the eager approach in [KK94] copies o into P when P is fetched; this increases the miss penalty because the object is copied in the foreground, and wastes effort if P is compacted soon after but o is not evicted.
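The lazy treatment of duplicate copies can be sketched with a toy indirection table; `ITEntry` and `install_page` are hypothetical names, and the real table maps oids rather than small array indices. The point is that a page fetch installs only objects that have no installed copy; a duplicate copy in the fetched page is left uninstalled and costs nothing until its frame is compacted.

```c
/* Sketch of HAC's lazy handling of duplicate object copies
   (hypothetical names; not the actual indirection-table code). */
#define MAX_OIDS 16

typedef struct {
    int frame;    /* frame holding the installed copy */
    int offset;
    int present;  /* 1 if this oid has an installed copy in the cache */
} ITEntry;

/* On fetching a page into `frame`, install only objects whose oid is
   not already in the indirection table; existing copies keep winning,
   so no copying is done in the foreground. */
void install_page(ITEntry it[], const int oids[], int n, int frame)
{
    for (int i = 0; i < n; i++) {
        ITEntry *e = &it[oids[i]];
        if (!e->present) {
            e->frame  = frame;
            e->offset = i;
            e->present = 1;
        }
        /* else: the newly fetched copy stays uninstalled; if this
           frame is compacted later, such copies are just discarded */
    }
}
```

Because nothing is installed for the duplicate, discarding it later is free, which is exactly why this approach avoids the foreground copying cost of the eager scheme in [KK94].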


Miguel Castro, Atul Adya, Barbara Liskov, and Andrew Myers

Copyright ©1997 by the Association for Computing Machinery