CS414 Homework 4.  Due in class on Thursday November 15.

 

1.         Consider the page reference string shown below.  Assuming that Δ = 3 (for the demand paging algorithm, Δ is the memory size in frames; for the working set algorithm, Δ is the working set window size), fill in the tables for (a) the LRU and (b) the WS algorithms.  Time starts at 0, and we’ve filled in the first three entries to “warm start” the algorithms.

 

(a) LRU

 t    R(t)    S(t)      In(t)   Out(t)
 0     1      1           1       Ø
 1     2      1 2         2       Ø
 2     5      1 2 5       5       Ø
 3     3      2 5 3       3       1
 4     2      5 3 2       Ø       Ø
 5     6      3 2 6       6       5
 6     3      2 6 3       Ø       Ø
 7     1      6 3 1       1       2
 8     5      3 1 5       5       6
 9     3      1 5 3       Ø       Ø
10     2      5 3 2       2       1
11     5      3 2 5       Ø       Ø
12     3      2 5 3       Ø       Ø
13     1      5 3 1       1       2
14     4      3 1 4       4       5
15     1      3 4 1       Ø       Ø
16     5      4 1 5       5       3
17     4      1 5 4       Ø       Ø
18     5      1 4 5       Ø       Ø
19     4      1 5 4       Ø       Ø
20     5      1 4 5       Ø       Ø
21     4      1 5 4       Ø       Ø
22     3      5 4 3       3       1

 

(b) WS

 t    R(t)    S(t)      In(t)   Out(t)
 0     1      1           1       Ø
 1     2      1 2         2       Ø
 2     5      1 2 5       5       Ø
 3     3      2 5 3       3       1
 4     2      5 3 2       Ø       Ø
 5     6      3 2 6       6       5
 6     3      2 6 3       Ø       Ø
 7     1      6 3 1       1       2
 8     5      3 1 5       5       6
 9     3      1 5 3       Ø       Ø
10     2      5 3 2       2       1
11     5      3 2 5       Ø       Ø
12     3      2 5 3       Ø       Ø
13     1      5 3 1       1       2
14     4      3 1 4       4       5
15     1      4 1         Ø       3
16     5      4 1 5       5       Ø
17     4      1 5 4       Ø       Ø
18     5      4 5         Ø       1
19     4      5 4         Ø       Ø
20     5      4 5         Ø       Ø
21     4      5 4         Ø       Ø
22     3      5 4 3       3       Ø
 

(c)  For each of the two algorithms, compute the hit ratio starting from time t = 3 (i.e. skipping the part that we filled in for you).

 

HR_LRU = 11/20 and HR_WS = 11/20 (each algorithm faults 9 times on the 20 references from t = 3 through t = 22).
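
If you want to check the tables (or rerun them with a different Δ), the short Python sketch below simulates both algorithms on this reference string, computing S(t), In(t), and Out(t) at each step, and reproduces the 11/20 hit ratios.  It is not part of the assignment, and all of the function and variable names are our own.

    # A minimal sketch of both algorithms.  DELTA plays both roles from the
    # problem statement: the memory size for LRU and the window size for WS.
    refs = [1, 2, 5, 3, 2, 6, 3, 1, 5, 3, 2, 5, 3, 1, 4, 1, 5, 4, 5, 4, 5, 4, 3]
    DELTA = 3
    WARM = 3   # references before t = 3 are ignored, per the warm-start convention

    def lru(refs, frames):
        """LRU demand paging; S is kept in LRU order, least recently used first."""
        s, rows = [], []
        for r in refs:
            if r in s:                          # hit: just move r to the MRU end
                s.remove(r)
                page_in, page_out = None, None
            else:                               # fault: evict the LRU page if memory is full
                page_in = r
                page_out = s.pop(0) if len(s) == frames else None
            s.append(r)
            rows.append((list(s), page_in, page_out))   # (S(t), In(t), Out(t))
        return rows

    def ws(refs, window):
        """Working set: keep exactly the pages referenced in the last `window` references."""
        rows, prev = [], set()
        for t, r in enumerate(refs):
            cur = set(refs[max(0, t - window + 1): t + 1])
            page_in = r if r not in prev else None      # fault iff r was not in W(t-1)
            rows.append((sorted(cur), page_in, sorted(prev - cur)))
            prev = cur
        return rows

    def hit_ratio(rows, warm):
        tail = rows[warm:]
        return sum(1 for _, page_in, _ in tail if page_in is None) / len(tail)

    print("LRU hit ratio:", hit_ratio(lru(refs, DELTA), WARM))   # 11/20 = 0.55
    print("WS  hit ratio:", hit_ratio(ws(refs, DELTA), WARM))    # 11/20 = 0.55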

 

 

2.         Suppose LRU is used to page a mapped file k pages long, which your program scans twice from start to end sequentially.  Thus the application visits pages 0, 1, …, k-1 of the file, then revisits pages 0, 1, …, k-1.  For simplicity, assume that this is the only paged segment and that the paging algorithm is configured to use Δ frames of memory.

a.          If Δ ≥ k, what hit rate will we obtain using LRU?

 

The first k references will all page-fault, after which the remainder will all be hits.  But normally (as in problem 1) we “warm start” the algorithm by ignoring its behavior until it actually starts doing demand paging.  In that case the hit ratio is computed only over the last k references and will be 1.0.

 

We’ll also accept 0.5 from people who count the start-up references as page faults.  In class we agreed that hit ratios should ignore the start-up period, but perhaps you missed class, and we won’t count that against you.

  

b.         If Δ < k, what hit rate will we obtain using LRU?

 

0.0.  Because Δ < k, each page is referenced again only after k-1 other distinct pages have been touched, so LRU has always evicted it by then; every reference (in both scans) is a fault.

  

c.          Would Denning’s WS algorithm give a different result for 2.a and 2.b using the same values of Δ, but where Δ is now the working set window size?  Explain briefly.

 

WS would behave identically to LRU in both cases.  When Δ ≥ k, every page of the file is referenced again within Δ references of its previous use (the gap between successive uses of a page is exactly k ≤ Δ), so it never leaves the working set; when Δ < k, each page goes untouched for more than Δ references and is therefore dropped from the working set shortly before it is referenced again.  Basically, this example is a worst-case scenario for both algorithms.
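
Purely as an illustration (again, not part of the assignment), the self-contained sketch below runs the two-scan experiment; k = 8 is an arbitrary choice, and the helper names are ours.  It confirms a second-scan hit rate of 1.0 when Δ ≥ k and 0.0 when Δ < k, identically for LRU and WS.

    # Two sequential scans of a k-page file; delta is the memory size for LRU
    # and the window size for WS, as in the problem.

    def second_scan_hits_lru(k, delta):
        """Number of hits on the second scan when LRU manages `delta` frames."""
        frames, hits = [], 0
        for t, page in enumerate(list(range(k)) * 2):
            if page in frames:
                frames.remove(page)
                hits += t >= k                  # count hits only on the second scan
            elif len(frames) == delta:
                frames.pop(0)                   # evict the least recently used page
            frames.append(page)
        return hits

    def second_scan_hits_ws(k, delta):
        """Number of hits on the second scan when WS uses a window of `delta` references."""
        refs, hits = list(range(k)) * 2, 0
        for t in range(k, 2 * k):
            window = set(refs[max(0, t - delta):t])   # W(t-1): the delta references before t
            hits += refs[t] in window
        return hits

    k = 8
    for delta in (k, k - 1):                    # one case with delta >= k, one with delta < k
        print(delta, second_scan_hits_lru(k, delta), second_scan_hits_ws(k, delta))
    # prints "8 8 8" (all 8 second-scan references hit) and "7 0 0" (none do)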

 

3.         Modern computers tend to have a lot of memory – for example, a gigabyte of main memory will probably be common within a few years, and this is enough to hold a million 1-kbyte pages.  Very few programs are this large.  You’re being interviewed by Microsoft’s operating systems group and they want to know your opinions on virtual memory in this new world.

a.          Give a simple argument for eliminating virtual memory – a few lines long[1].

 

Virtual memory has a lot of costs: the hardware itself is slowed down (chip area that could go into a larger L1 cache is dedicated to the TLB and translation logic, translating addresses takes time, and TLB misses incur a delay), page tables consume main memory (and can get large), and the O/S has to spend time deciding what to page out.  Page faults themselves are expensive.  So if we could get rid of the whole mess we would get a big performance win.  This is especially true for computers that basically run just a single program, and more and more computers are dedicated, special-purpose devices.

 

b.         Give a simple argument for keeping virtual memory – again a few lines long.

 

By now we know how to keep the costs down, and virtual memory lets us work with conceptually huge address spaces without actually using huge amounts of real memory.  That fast computer will need vast amounts of memory just to drive its high-resolution display and to run the many applications that keep it connected to the network and happy.  Libraries and other utilities are growing even faster than memories are, and component architectures will only exacerbate the trend.  So we need virtual memory, or this incredibly fast computer will turn out to be able to run only one or two programs at a time.

 

 



[1] Hint: your answer should focus on the main issues – the strongest pro and con you can come up with.