CS414 Prelim 2. November 11, 2003

5 questions, 20 points each.  75 minutes (10:10-11:25)


1.      One way to implement a multi-threaded Web site application is to structure it in stages, as follows. 


     A stage consists of a set of one or more threads that all perform the same task.  For example, an "input" stage might have a set of threads that accept incoming requests from the network; these would be turned into events by the runtime library.  A "search" stage might be a set of one or more threads that look for information useful in performing a requested task, and an "output" stage might have one or more threads that build web pages to send back to the users who issued requests.  


     Threads in one stage communicate to threads in another stage by sending small messages called "events", and there is an event-queue in between any two stages that ever communicate.  An event queue is unidirectional (one stage produces events and the other consumes them) and has the capacity to hold some bounded number of events.   Threads don't "wait" for responses: a thread typically sits in a loop, waiting for an event to come into its stage, processing that event, perhaps generating additional events or sending messages over the network, and then looping.  Threads don't talk directly to other threads.


      In what follows, we'll assume that threads execute in a shared address space within a single heavy-weight process and that they use monitor-style synchronization when accessing shared variables.


     (a) Using monitors, write a class called event_queue with methods "enqueue(event e)" and "event dequeue()" so that the greatest possible degree of concurrency is achieved.  The application will create an instance of the event_queue class between each pair of stages that needs one.  The event_queue constructor should accept one argument, an integer giving the size of the associated event_queue buffer.
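One possible shape for such a class (a sketch, not the official answer key) is the classic bounded buffer.  Python's threading primitives stand in for a monitor here: one lock guards the buffer, and two condition variables let waiting producers and waiting consumers be signalled independently, which permits more concurrency than a single wait-set.

```python
import threading
from collections import deque

class EventQueue:
    """Sketch of a monitor-style bounded event queue for part (a).
    The single lock plays the role of the monitor lock; the two
    conditions (not_full / not_empty) are its wait-sets."""

    def __init__(self, size):
        self.buf = deque()
        self.size = size
        lock = threading.Lock()                  # the monitor lock
        self.not_full = threading.Condition(lock)
        self.not_empty = threading.Condition(lock)

    def enqueue(self, e):
        with self.not_full:                      # enter the monitor
            while len(self.buf) >= self.size:
                self.not_full.wait()             # block while the buffer is full
            self.buf.append(e)
            self.not_empty.notify()              # wake one waiting consumer

    def dequeue(self):
        with self.not_empty:                     # enter the monitor
            while not self.buf:
                self.not_empty.wait()            # block while the buffer is empty
            e = self.buf.popleft()
            self.not_full.notify()               # wake one waiting producer
            return e
```

Note the while-loops around wait(): a signalled thread must re-check the condition after reacquiring the monitor lock, since another thread may have run in between.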


      (b) Suppose that the application can have processing loops, so that stage A sends events to stage B, stage B to stage C, etc., and eventually events "triggered" by stage A might get sent back to stage A.  Can a deadlock arise?  Explain why or why not, making reference to the "necessary conditions for deadlock" as they apply to this application.

2.      In class we learned about IP addressing in the Internet.  For each of the following, write a single sentence to indicate if the claim is true or false, and why.

a.    Claim: "Suppose that machine A is communicating with machine B on the Internet.  There are situations in which the IP address used by A to refer to B might be different from the IP address used by B to refer to itself." 


b.    Claim: "No matter how fast an Internet connection is, if a file is fairly small, TCP might not send data at the maximum data rate for that connection."


c.    Claim: "When sending a long file over the Internet, TCP increases the sending data rate until it detects packet loss."

d.  Claim: "When TCP does detect a packet loss (as in part c), it reduces the data rate to the last rate at which data successfully got through, and then holds the sending rate constant thereafter."

e.  Claim: "Sometimes, Internet addressing would allow machine A to make a connection to machine B, but won't allow machine B to connect to machine A."

f.   Claim: "When people talk about "round trip" data rates and delays in the Internet, they are incorrectly adding together the outgoing and the incoming numbers.  Since these numbers can be very different, it is better to measure the two delays separately (if one can do so) and quote two numbers: the "uplink" data rate and delay, and the "downlink" rate and delay."

g. Claim: "Processes on machines connected by a network link that interact by exchanging messages can experience race conditions and should use monitors or semaphores to synchronize access to shared data."

h.  Claim: "Because the DNS caches name-to-IP-address data, if an application updates this mapping there may be a period during which messages continue to be sent to the old IP address."

i.  Claim: "Internet routing adapts in milliseconds if load changes on a link or if a router fails.  As a result, if there is a path from machine A to machine B in the Internet, messages from A to B can be counted on to get through despite disruptions.  Message loss occurs only during periods when there is no route between A and B."

j.  Claim: "A lossy link, like a wireless link, can cause TCP to misbehave by shrinking its adaptive window and hence running very slowly."
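Claims (c), (d), and (j) all turn on how TCP adapts its sending rate.  As background for reasoning about them, here is a toy additive-increase/multiplicative-decrease model of a TCP-like congestion window (an illustration only, not real TCP; the parameter names and the set of "loss rounds" are invented for the example):

```python
def aimd(rounds, loss_rounds, increase=1, decrease=0.5, start=1):
    """Toy AIMD model of a TCP-like congestion window.
    loss_rounds is the set of rounds at which a loss is detected.
    Returns the window size observed after each round."""
    w = start
    history = []
    for r in range(rounds):
        if r in loss_rounds:
            w = max(1, w * decrease)   # multiplicative decrease on loss
        else:
            w = w + increase           # additive increase otherwise
        history.append(w)
    return history
```

Even in this simplified model, the window does not stay constant after a loss: it is cut back and then immediately resumes probing upward, which is worth keeping in mind when judging claim (d).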


3.         Below are two memory reference tableaus for the same reference string.  We assume a fixed-size memory partition of size D = 4 pages. 

a.    Fill in the two tables, one for the least recently used (LRU) policy and the other for least frequently used (LFU).  We have filled in the entries for t=1 through t=4.


b.  Now fill in the third table.  This one is for the Working Set policy with working-set size D = 4 pages.


c.  If your goal was to implement a practical algorithm that achieves the smallest practical miss ratio, and you had to choose between Working Set and Least Frequently Used, which algorithm would you pick, and why?  Assume that the machine will only run one program and therefore that the memory partition size is a constant.


d.  Same question as in (c), but now assume that the machine will run a great many programs and that we would also like to make the best possible use of the available memory.
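A tableau like the ones above can be checked mechanically.  Here is a sketch of an LRU simulator for a fixed partition (the reference string in the usage below is a made-up example, not the one from the exam's tables):

```python
def lru_misses(refs, frames):
    """Simulate LRU replacement with a fixed partition of `frames`
    page frames and return the miss (page-fault) count.  The resident
    pages are kept ordered from least- to most-recently used."""
    resident = []              # front = least recently used
    misses = 0
    for p in refs:
        if p in resident:
            resident.remove(p)     # hit: refresh this page's recency
        else:
            misses += 1            # miss: fault the page in
            if len(resident) == frames:
                resident.pop(0)    # evict the least recently used page
        resident.append(p)         # p is now most recently used
    return misses
```

For example, lru_misses([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 4) faults on the four cold misses and again on pages 5, 3, 4, and 5, for a total of 8.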


4.  When we link a program to a dynamically linked library, or DLL, we first "map" the file containing the DLL's code into memory, but then we create a separate copy of the DLL's data segment.  Why is this necessary?  After all, since the file is mapped into memory, the data segment is visible.  So why not just use that copy?
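The underlying mechanism can be illustrated with a copy-on-write mapping.  The sketch below uses Python's mmap with ACCESS_COPY (analogous to a private, copy-on-write mapping) to show that writes made through one mapping of a "data segment" do not reach the shared file image, so other processes mapping the same DLL still see the pristine initial data.  The file contents and names here are invented for the example:

```python
import mmap, os, tempfile

# Write a small "data segment" to a file, standing in for a DLL image.
fd, path = tempfile.mkstemp()
os.write(fd, b"INITIAL!")
os.close(fd)

# Map it copy-on-write: stores go to this process's private copy only.
with open(path, "rb") as f:
    m = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_COPY)
    m[:7] = b"CHANGED"            # this "process" updates its data segment
    private_view = bytes(m[:8])   # sees its own modified copy
    m.close()

# The file itself -- the shared image -- is untouched, so another process
# mapping the same DLL would still see the original initialized data.
with open(path, "rb") as f:
    shared_view = f.read()
os.unlink(path)
```

If the mapping were shared instead of private, one process's writes to the DLL's global variables would corrupt every other process's view, which is why each program gets its own copy of the data segment.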