CS 414 Homework 4: Solution set

 

1. Suppose that we run the LRU paging algorithm with some fixed partition size D on a reference string, and then run the Working Set (WS) algorithm on the identical reference string, with a working set size equal to D (the same value).  You may assume that the actual working set of the program fits into memory.

a) True or false (explain why): ∀t, S_WS(t) ⊆ S_LRU(t)

This is true.  Both algorithms page things in on demand, so the main difference is that WS pages OUT some pages that LRU keeps in memory.  Thus, in general, the memory state of the WS algorithm will be a subset of the state of the LRU algorithm.

b) True or false (explain why): WS and LRU have the same hit ratio.

False.  Since WS pages out some pages that LRU keeps in memory, WS can have a lower hit ratio (if the program tries to reference these missing pages).
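To make this concrete, here is a small simulation sketch (the reference string, the helper names `lru_hits` and `ws_hits`, and the particular model of the working set as "pages touched in the last D references" are all illustrative assumptions, not part of the problem statement).  With D = 4, LRU keeps all four pages resident after warm-up, while WS misses on pages whose last use fell outside the window:

```python
from collections import OrderedDict

def lru_hits(refs, frames):
    """Fixed-partition LRU: keep the `frames` most recently used pages."""
    mem = OrderedDict()              # keys = resident pages, in LRU order
    hits = 0
    for p in refs:
        if p in mem:
            hits += 1
            mem.move_to_end(p)       # mark as most recently used
        else:
            if len(mem) == frames:
                mem.popitem(last=False)   # evict least recently used
            mem[p] = None
    return hits

def ws_hits(refs, window):
    """Working Set: resident pages = those referenced in the last `window` refs."""
    hits = 0
    for t, p in enumerate(refs):
        resident = set(refs[max(0, t - window):t])   # WS just before this reference
        if p in resident:
            hits += 1
    return hits

refs = list("ababcdcdabab")
print(lru_hits(refs, 4))   # 8 hits out of 12
print(ws_hits(refs, 4))    # 6 hits out of 12
```

On this string, the pages seen by WS at every instant are a subset of the pages LRU holds (illustrating part a), yet WS scores fewer hits (illustrating part b): when the program returns to `a` after four references to `c` and `d`, WS has already paged `a` out while LRU still holds it.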

c) Suppose that your computer will be used to run a single program – no multitasking or sharing – and that it lets you pick the paging algorithm it will use.  If you care only about maximum performance, would you tell the O/S to use the LRU or the WS algorithm?  Explain.

In this situation, it is hard to see an advantage to using WS.  WS tries to reduce the number of pages in use on behalf of an application, freeing memory for use by other programs or for other purposes, such as buffering on behalf of the disk (the inode cache and disk buffer pool).  But in the worst case, WS would still need D pages, so this benefit can’t really be exploited in the case described.  And the WS heuristic is just that – a heuristic.  In some cases, LRU would have a lower miss ratio.  LRU is the better choice.

 

2. Suppose that a computer is to be dedicated to running a single, non-threaded program continuously.  The true working set of the program is slightly larger than physical memory.  For example, perhaps the program really needs 100 pages but the physical memory only has 95 pages available.  You must choose between the LRU fixed memory partition algorithm and the working set algorithm.  In both cases, you would set the memory partition size equal to the total number of available physical pages.  How would you expect the runtime paging behavior of the two alternatives to differ?

This question is very similar to question 1, but now we know that the true working set won’t fit in the available set of pages, D.  Under these conditions, we want to ask which algorithm will come closest.  Realistically, the ideal solution would probably be something like “least frequently used” (LFU), but this isn’t an option.  Under the stated conditions, WS will probably do worse than LRU because it generates a potentially high rate of page-out operations: it will use a working-set window that is smaller than the true working set size, so we know that it will be trying to page out “active” pages fairly often, perhaps continuously.  LRU will also do some paging, but one might hope that it will approximate the behavior of an LFU algorithm, and in any case it won’t do all the extra work of paging things out unnecessarily.

 

3. Suppose that a program has been running on a computer for several years without problems, using a virtual memory policy with a fixed partition of k memory pages.  You move it to a much faster computer but keep the same fixed partition size.  On the new machine, however, the program appears to be thrashing!  Explain how this could happen.

In general, a program thrashes when the perceived overhead due to paging is too high compared to the perceived rate at which useful work is being done.  So, if we take a program from a machine with a slow CPU and a fast disk and move it to a machine with a faster CPU and a relatively slower disk, the same program may seem to thrash even with the identical amount of memory.
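A little back-of-the-envelope arithmetic shows the effect (all of the numbers here – CPU speeds, fault rate, disk service time – are made up for illustration; only the 100x CPU speedup with an unchanged disk matters):

```python
# Hypothetical numbers: same program, same fault rate per instruction,
# but the CPU got 100x faster while the disk stayed the same.
fault_service_ms = 10.0                 # assumed: one page fault costs ~10 ms of disk time
instructions_between_faults = 100_000   # assumed fault rate, identical on both machines

for name, mips in [("old machine", 10), ("new machine", 1000)]:
    compute_ms = instructions_between_faults / (mips * 1000)   # ms of CPU work per fault
    paging_fraction = fault_service_ms / (fault_service_ms + compute_ms)
    print(f"{name}: {paging_fraction:.0%} of wall-clock time spent paging")
```

With these assumed numbers the old machine spends 50% of its time paging, which might pass unnoticed, while the new machine spends 99% of its time paging – the same program, same partition size, and same fault rate now looks like thrashing because the useful work between faults finishes 100x sooner.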

 

4. Suppose that a factory builds widgets.  Each widget is built from sub-widgets, and so forth, in a sort of tree structure.  The nodes are machines that produce things.  The ones at the leaves consume raw materials and output sub-assemblies.  The root node is the machine that makes the widget itself.  Now, suppose that you are designing a producer-consumer structure to interconnect the various stages of the factory.  You’ll put a separate instance of the buffer on each “edge” of the tree.  For example, a conveyor belt that can hold a maximum of 3 sub-widgets of type A is like a bounded buffer with capacity 3.  Once it fills up, the producer has to wait (if this is confusing, think of the check-out system at the supermarket).

 

For example, perhaps two sub-widgets, one of type A and one of type B, are required to construct a widget.  The sub-widget generator for type A will produce A’s and put them in the buffer between A and the root.  And the sub-widget generator for type B will produce B’s and put them in a different buffer leading to the root.  The widget assembly machine at the root will take one A and then one B, assemble one widget, put it in a box for shipment, and then repeat.  (This example is for a factory with just 3 “nodes” and 2 “edges” but in general, the factory may be much more complex – an arbitrary tree, but with a single root node.)

 

Do you need to worry about deadlock?  Explain why.

 

No, we don’t need to worry about deadlock.  In this situation, we create a tree of producer-consumer relationships: processes linked by bounded buffers.  The only waiting in such a system happens along edges of the tree: a producer waits on its consumer when the buffer between them fills up, and a consumer waits on its producer when that buffer is empty.  A single edge can never carry waits in both directions at once, because a buffer with capacity at least 1 cannot be simultaneously full and empty.  So at any instant the wait-for graph uses each tree edge in at most one direction, and since the tree itself has no cycles, the wait-for graph can’t have one either.  Cyclic wait is a necessary condition for deadlock, so deadlock cannot arise.
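The 3-node, 2-edge example from the question can be sketched directly with Python threads, using `queue.Queue` as the bounded buffer (the names `sub_widget_maker`, `assembler`, and the widget count are invented for illustration; this is a sketch, not a general factory framework):

```python
import queue
import threading

NUM_WIDGETS = 5

buf_a = queue.Queue(maxsize=3)   # conveyor belt A -> root, capacity 3
buf_b = queue.Queue(maxsize=3)   # conveyor belt B -> root, capacity 3

def sub_widget_maker(buf, kind):
    """Leaf node: produce sub-widgets, blocking when the belt is full."""
    for i in range(NUM_WIDGETS):
        buf.put((kind, i))

def assembler(out):
    """Root node: take one A, then one B, blocking when a belt is empty."""
    for _ in range(NUM_WIDGETS):
        a = buf_a.get()
        b = buf_b.get()
        out.append((a, b))       # one assembled widget, boxed for shipment

shipped = []
threads = [
    threading.Thread(target=sub_widget_maker, args=(buf_a, "A")),
    threading.Thread(target=sub_widget_maker, args=(buf_b, "B")),
    threading.Thread(target=assembler, args=(shipped,)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(shipped))              # 5: all widgets assembled, no deadlock
```

Note that the producers do block (five items won’t fit on a belt of capacity 3), and the assembler blocks whenever a belt is momentarily empty – but every such wait points along a tree edge, so the run always completes.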