Homework #2    Solutions

 

1)

            var count: shared integer;

            INIT (int value) {
                region count when true do count = value;
            }

            WAIT () {
                region count when (count > 0) do count = count - 1;
            }

            SIGNAL () {
                region count when true do count = count + 1;
            }
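
            For concreteness, here is a minimal Python sketch of the same counting semaphore (an illustration, not part of the required solution). It uses threading.Condition to play the role of the region/when construct: the condition's lock gives mutual exclusion over count, and wait_for() re-evaluates the guard each time the region is retried.

                import threading

                class CCRSemaphore:
                    def __init__(self, value: int):
                        self._count = value                 # shared integer count
                        self._cond = threading.Condition()  # guards the critical region

                    def wait(self):
                        with self._cond:                                  # region count ...
                            self._cond.wait_for(lambda: self._count > 0)  # ... when count > 0
                            self._count -= 1                              # do count = count - 1

                    def signal(self):
                        with self._cond:             # region count when true
                            self._count += 1         # do count = count + 1
                            self._cond.notify()      # wake one waiter to re-test the guard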

2)        

            An entry in the page table consists of the frame number (where the page resides in physical memory) plus some extra bits for protection and other bookkeeping.  An entry in the inverted page table consists of the process identifier (pid) and the (logical) page number (i.e., which page of which process is mapped to that frame), plus similar extra bits.

 

            In a virtual-memory system, the number of pages per process (i.e., the size of each process's logical address space) is usually much larger than the number of frames in memory (i.e., the size of the physical address space).  In that case, each entry of the inverted page table needs more bits than an entry of the standard page table.  However, if no virtual memory is used and the logical address space of each process is smaller than the physical address space, then the answer depends on whether the pid field is wide enough to compensate for the difference.
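
            To make the size comparison concrete, here is a small worked example in Python (the parameter values are assumptions chosen for illustration, not given in the problem):

                import math

                page_size      = 4 * 1024     # 4 KB pages -> 12 offset bits
                virtual_bits   = 32           # 32-bit virtual addresses -> 2^20 pages/process
                physical_bytes = 1 * 1024**3  # 1 GB of physical memory -> 2^18 frames
                pid_bits       = 12           # assumed width of the pid field

                page_bits  = virtual_bits - int(math.log2(page_size))     # 20
                frame_bits = int(math.log2(physical_bytes // page_size))  # 18

                print("standard page table entry :", frame_bits, "bits + flags")            # 18 + flags
                print("inverted page table entry :", pid_bits + page_bits, "bits + flags")  # 32 + flags

            With these numbers the inverted entry is wider (20 page bits exceed 18 frame bits even before the pid is added), matching the virtual-memory case described above.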

 

3a)

            Both LFU and MFU are stack algorithms.  Before the i-th access, let A_i be the set of pages held by a memory of size n and B_i the set held by a memory of size n+1, and assume inductively that A_j ⊆ B_j for all j ≤ i.  If the i-th access faults in A_i but not in B_i, the referenced page p is already in B_i, so B_{i+1} = B_i; meanwhile the most (least) frequently used page a is evicted from A_i, giving A_{i+1} = (A_i - {a}) + {p} ⊆ B_i = B_{i+1}.  If both A_i and B_i fault, it can be shown that the page evicted from B_i is not present in A_i (in fact, it is the page evicted from the smaller memory at the previous fault), and hence again A_{i+1} ⊆ B_{i+1}.  (A fault in B_i but not in A_i is impossible, since A_i ⊆ B_i.)  Thus both are stack algorithms.  (Note that we assume a deterministic way of resolving ties.  Otherwise, Belady's anomaly could still occur.)
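
            The inclusion property can also be checked empirically.  The following sketch (an illustration, not part of the proof) simulates LFU with a deterministic tie-breaking rule on a random reference string and asserts A_i ⊆ B_i after every access:

                import random

                def lfu_run(refs, frames):
                    """Simulate LFU; ties broken deterministically by smaller page number."""
                    mem, freq, history = set(), {}, []
                    for p in refs:
                        freq[p] = freq.get(p, 0) + 1           # global frequency count
                        if p not in mem:                       # page fault
                            if len(mem) == frames:
                                mem.remove(min(mem, key=lambda q: (freq[q], q)))
                            mem.add(p)
                        history.append(frozenset(mem))         # memory contents after access
                    return history

                random.seed(0)
                refs = [random.randrange(8) for _ in range(2000)]
                for n in range(1, 7):
                    small, big = lfu_run(refs, n), lfu_run(refs, n + 1)
                    assert all(a <= b for a, b in zip(small, big))  # A_i is a subset of B_i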

 

3b)

            The clock algorithm is not a stack algorithm and exhibits Belady’s anomaly, because it easily degenerates to FIFO: once every resident page has its reference bit set, the hand clears all the bits and evicts the oldest page, exactly as FIFO would.  Consider the reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5.  This is the same example given in the textbook for FIFO, and Belady’s anomaly can be seen for the clock algorithm as well: 9 faults with 3 frames of memory, but 10 faults with 4 frames.
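
            This can be verified with a short simulation (a sketch, assuming the common clock variant in which the reference bit is set on load and on every hit):

                def clock_faults(refs, frames):
                    pages = [None] * frames   # page held in each slot
                    ref   = [0] * frames      # reference (second-chance) bits
                    hand, faults = 0, 0
                    for p in refs:
                        if p in pages:
                            ref[pages.index(p)] = 1      # hit: grant a second chance
                            continue
                        faults += 1
                        while ref[hand]:                 # clear bits until a victim is found
                            ref[hand] = 0
                            hand = (hand + 1) % frames
                        pages[hand], ref[hand] = p, 1    # replace victim with new page
                        hand = (hand + 1) % frames
                    return faults

                refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
                print(clock_faults(refs, 3))   # 9 faults
                print(clock_faults(refs, 4))   # 10 faults: more frames, more faults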

 

4)

            Virtual memory is transparent to the programmer, unlike overlays, which are managed entirely by the user.  The price of this transparency is that virtual memory invokes the OS (the page-fault handler) and therefore incurs performance overhead.  Furthermore, because overlays are managed entirely by the user, the performance degradation caused by page faults can largely be avoided by always keeping exactly the required space for a process in memory, and no more.  In addition, virtual memory adds an extra level of indirection to address mapping.  A dynamic loading scheme, by contrast, has to check on every function call whether the callee is already in memory.  Since the units of allocation in dynamic loading are functions, their sizes are variable, so external fragmentation problems can arise.  However, dynamic loading can be done entirely in user space, without any of the performance overhead of invoking the kernel.
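
            As a toy illustration of the per-call check in dynamic loading (a sketch; load_from_disk is a hypothetical stand-in for reading a routine off disk):

                loaded = {}                                   # routines currently in memory

                def load_from_disk(name):
                    # hypothetical stand-in for fetching the routine from disk
                    return {"square": lambda x: x * x}[name]

                def call(name, *args):
                    if name not in loaded:                    # checked on every call ...
                        loaded[name] = load_from_disk(name)   # ... loaded only on first use
                    return loaded[name](*args)

                print(call("square", 5))   # first call loads the routine
                print(call("square", 6))   # later calls only pay the lookup cost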

 


5a)

            Direct access under linked allocation must be performed by sequentially traversing all the preceding blocks, whereas under contiguous allocation the correct disk address can be computed directly.  Moreover, since adjacent blocks are allocated contiguously, and most modern hard disks have special designs (such as interleaving) that make sequential access to contiguous blocks efficient (less seek and rotational latency), the contiguous allocation scheme performs better than the linked allocation scheme.
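
            A toy sketch of the difference in locating block i of a file (an illustration only): contiguous allocation computes the address in O(1), while linked allocation must chase next-pointers through i blocks.

                def contiguous_block(start: int, i: int) -> int:
                    return start + i                 # one address calculation

                def linked_block(start: int, i: int, next_ptr: dict) -> int:
                    block = start
                    for _ in range(i):               # i disk reads just to find the block
                        block = next_ptr[block]
                    return block

                next_ptr = {7: 42, 42: 13, 13: 99}   # a scattered linked file
                assert contiguous_block(100, 3) == 103
                assert linked_block(7, 3, next_ptr) == 99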

 

5b)       The advantage of reduced seek time does not carry over to memory management, because main-memory access time is approximately the same whether or not the addresses are contiguous.  Although recent advances in main-memory design have made sequential access somewhat faster, the speedup is only a small factor, compared with the roughly three orders of magnitude saved in seek time by sequential disk access.  Thus contiguous memory allocation does not have the same great benefits as contiguous disk allocation.

 

5c)

            Contiguous allocation makes changing the size of a file by deleting or inserting blocks very difficult.  One way to solve the insertion problem is to allocate in chunks of large contiguous blocks and use some form of mapping with a small table to find the correct disk address for a given block, a scheme similar to paging or segmentation in memory.  However, this does not solve the deletion problem.  For deletion, we may temporarily mark blocks as deleted and run a cleanup process infrequently to compact them away.
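
            A minimal sketch of the chunk-mapping idea (an illustration, with an assumed chunk size of 64 blocks): a small per-file table maps a logical block number to a disk address, so a file can grow by adding a table entry instead of shifting data.

                CHUNK_BLOCKS = 64                              # assumed chunk size, in blocks

                def disk_address(extent_table, logical_block):
                    chunk  = logical_block // CHUNK_BLOCKS     # which chunk of the file
                    offset = logical_block %  CHUNK_BLOCKS     # position inside the chunk
                    return extent_table[chunk] + offset        # chunk start + offset

                extents = [5000, 9216, 1280]                   # start block of each chunk
                assert disk_address(extents, 70) == 9216 + 6   # block 70 lives in chunk 1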

 

5d)

            Contiguous allocation is very efficient for storing multimedia files, especially since they do not change in size or content and are always accessed sequentially.

 

6a)

Average time     = (1 – page-fault rate)*(memory access time) + (page-fault rate)*(page-fault service time)

= (1 – 10^-6)*(200*10^-9) + (10^-6)*(20*10^-3) = 2*10^-7 + 2*10^-8 – 2*10^-13 ≈ 2.2*10^-7 s = 220 ns (page-fault rate = 10^-6).

= (1 – 10^-3)*(200*10^-9) + (10^-3)*(20*10^-3) = 2*10^-7 + 2*10^-5 – 2*10^-10 ≈ 2.02*10^-5 s = 20.2 μs (page-fault rate = 10^-3).

 

6b)

Average time     = (1 – 10^-6)*(20*10^-6 + 200*10^-9) + (10^-6)*(20*10^-3) = 2*10^-5 + 2*10^-7 + 2*10^-8 – 2*10^-11 – 2*10^-13 ≈ 2.02*10^-5 s = 20.2 μs.

 

6c)                              

Average time     = (1 – 10^-6)*(200*10^-9) + (10^-6)*(20*10^-3) = 2*10^-7 + 2*10^-8 – 2*10^-13 ≈ 2.2*10^-7 s = 220 ns.

 

6d)

Average time     = (1 – 10^-6)*(200*10^-9) + (10^-6)*(40*10^-3) = 2*10^-7 + 4*10^-8 – 2*10^-13 ≈ 2.4*10^-7 s = 240 ns.
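
The four averages can be reproduced with a few lines of Python (a sketch of the formula in 6a; overhead_s models the per-access cost from part b):

    def eat(p, access_s, fault_s, overhead_s=0.0):
        # effective access time = (1 - p)*(access + overhead) + p*(fault service)
        return (1 - p) * (access_s + overhead_s) + p * fault_s

    MEM, FAULT = 200e-9, 20e-3
    print(eat(1e-6, MEM, FAULT))          # 6a/6c: ~2.2e-7 s  = 220 ns
    print(eat(1e-3, MEM, FAULT))          # 6a:    ~2.02e-5 s = 20.2 μs
    print(eat(1e-6, MEM, FAULT, 20e-6))   # 6b:    ~2.02e-5 s = 20.2 μs
    print(eat(1e-6, MEM, 2 * FAULT))      # 6d:    ~2.4e-7 s  = 240 ns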

 

6e)

The above calculations show that a higher page-fault rate significantly degrades the performance of a virtual-memory system, that any per-access cost (such as bookkeeping for the page-replacement algorithm) can have a significant impact on overall performance, and that swapping out dirty pages introduces extra overhead.