Homework 5

CS 482 - Spring 2005

Due: Friday, Mar 11

Note: Include your Cornell NetID on each section of your homework. This simplifies the process of recording your grades.

Part A

  1. Problem 1a = Problem 4 at the end of Chapter 7.
     Problem 1b = Problem 5 at the end of Chapter 7.

    Decide whether you think the following statements are true or false. If true, give a short explanation. If false, give a counterexample. 

    1. Let G be an arbitrary flow network, with a source s, a sink t, and a positive integer capacity c_e on every edge e. If f is a maximum s-t flow in G, then f saturates every edge out of s with flow. (I.e. for all edges e out of s, we have f(e) = c_e.)
    2. Let G be an arbitrary flow network, with a source s, a sink t, and a positive integer capacity c_e on every edge e; and let (A,B) be a minimum s-t cut with respect to these capacities. Now suppose we add 1 to every capacity; then (A,B) is still a minimum s-t cut with respect to these new capacities {1 + c_e : e in E}.
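    Neither statement asks for code, but a small max-flow routine makes it easy to test your intuition on concrete networks before committing to "true" or "false." Below is a minimal sketch of the Edmonds-Karp variant of Ford-Fulkerson (BFS augmenting paths); the node names and capacities in the example are made up purely for illustration.

    ```python
    from collections import deque

    def max_flow(capacity, s, t):
        """Edmonds-Karp: repeatedly augment along shortest residual paths.
        capacity: dict mapping (u, v) -> positive capacity of directed edge u->v."""
        # Build residual capacities, with reverse edges initialized to 0.
        residual, adj = {}, {}
        for (u, v), c in capacity.items():
            residual[(u, v)] = residual.get((u, v), 0) + c
            residual.setdefault((v, u), 0)
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)
        flow = 0
        while True:
            # BFS for an augmenting path in the residual graph.
            parent = {s: None}
            q = deque([s])
            while q and t not in parent:
                u = q.popleft()
                for v in adj.get(u, ()):
                    if v not in parent and residual[(u, v)] > 0:
                        parent[v] = u
                        q.append(v)
            if t not in parent:
                return flow
            # Trace the path back from t, find the bottleneck, and push flow.
            path, v = [], t
            while parent[v] is not None:
                path.append((parent[v], v))
                v = parent[v]
            bottleneck = min(residual[e] for e in path)
            for (u, v) in path:
                residual[(u, v)] -= bottleneck
                residual[(v, u)] += bottleneck
            flow += bottleneck

    # Hypothetical 4-node network for experimenting with the statements above.
    caps = {('s', 'a'): 3, ('s', 'b'): 2, ('a', 't'): 2, ('b', 't'): 2, ('a', 'b'): 1}
    print(max_flow(caps, 's', 't'))  # -> 4
    ```

    Trying a few small networks like this by hand (and comparing against the code) is a good way to hunt for counterexamples.
    
    
    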

Part B

  1. Problem 17 at the end of Chapter 7.

    You've been called in to help some network administrators diagnose the extent of a failure in their network. The network is designed to carry traffic from a designated source node s to a designated target node t, so we will model it as a directed graph G = (V,E), in which the capacity of each edge is 1, and in which each node lies on at least one path from s to t.

    Now, when everything is running smoothly in the network, the maximum s-t flow in G has value k. However, the current situation -- and the reason you're here -- is that an attacker has destroyed some of the edges in the network, so that there is now no path from s to t using the remaining (surviving) edges. For reasons that we won't go into here, the administrators believe the attacker has destroyed only k edges, the minimum number needed to separate s from t (i.e. the size of a minimum s-t cut); and we'll assume they're correct in believing this.

    The network administrators are running a monitoring tool on node s, which has the following behavior: if you issue the command ping(v), for a given node v, it will tell you whether there is currently a path from s to v. (So ping(t) reports that no path currently exists; on the other hand, ping(s) always reports a path from s to itself.) Since it's not practical to go out and inspect every edge of the network, they'd like to determine the extent of the failure using this monitoring tool, through judicious use of the ping command.

    So here's the problem you face: give an algorithm that issues a sequence of ping commands to various nodes in the network, and then reports the full set of nodes that are not currently reachable from s. You could do this by pinging every node in the network, of course, but you'd like to do it using many fewer pings (given the assumption that only k edges have been deleted). In issuing this sequence, your algorithm is allowed to decide which node to ping next based on the outcome of earlier ping operations.

    Give an algorithm that accomplishes this task using only O(k log n) pings.
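    One practical way to develop and debug a strategy is to simulate the monitoring tool. The sketch below (with a made-up, hypothetical edge list) implements ping(v) as a BFS reachability check over the surviving edges, and keeps a counter so you can audit how many pings any candidate algorithm actually issues.

    ```python
    from collections import deque

    def make_ping(surviving_edges, s):
        """Return a ping function over the surviving directed edges,
        plus a mutable counter recording how many pings were issued."""
        adj = {}
        for u, v in surviving_edges:
            adj.setdefault(u, []).append(v)
        count = [0]
        def ping(v):
            count[0] += 1
            seen = {s}
            q = deque([s])
            while q:
                u = q.popleft()
                if u == v:          # s itself is always reachable
                    return True
                for w in adj.get(u, ()):
                    if w not in seen:
                        seen.add(w)
                        q.append(w)
            return False
        return ping, count

    # Hypothetical example: suppose the attacker removed the edge (a, t).
    edges = [('s', 'a'), ('s', 'b'), ('b', 'a')]
    ping, used = make_ping(edges, 's')
    print(ping('a'), ping('t'), used[0])  # -> True False 2
    ```

    A simulator like this lets you verify both correctness (the reported unreachable set) and the ping count of your algorithm against the O(k log n) target.
    
    
    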

Part C

  1. Problem 29 at the end of Chapter 7.

    Some of your friends have recently graduated and started a small company, which they are currently running out of their parents' garages in Santa Clara. They're in the process of porting all their software from an old system to a new, revved-up system; and they're facing the following problem.

    They have a collection of n software applications, {1, 2,..., n}, running on their old system; and they'd like to port some of these to the new system. If they move application i to the new system, they expect a net (monetary) benefit of b_i > 0. The different software applications interact with one another; if applications i and j have extensive interaction, then the company will incur an expense if they move one of i or j to the new system but not both --- let's denote this expense by x_ij > 0.

    So if the situation were really this simple, your friends would just port all n applications, achieving a total benefit of sum_i b_i. Unfortunately, there's a problem...

    Due to small but fundamental incompatibilities between the two systems, there's no way to port application 1 to the new system; it will have to remain on the old system. Nevertheless, it might still pay off to port some of the other applications, accruing the associated benefit and incurring the expense of the interaction between applications on different systems.

    So this is the question they pose to you: which of the remaining applications, if any, should be moved? Give a polynomial-time algorithm to find a set S subset of {2, 3, ..., n} for which the sum of the benefits minus the expenses of moving the applications in S to the new system is maximized.
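    The problem asks for a polynomial-time algorithm, but while designing one it helps to have a ground truth to compare against. The brute-force checker below (exponential in n, so only usable for tiny instances) evaluates every subset S; the benefit and expense values in the example are made up for illustration.

    ```python
    from itertools import combinations

    def best_port_brute_force(benefit, expense):
        """Try every subset S of {2,...,n}; application 1 always stays put.
        benefit: dict i -> b_i; expense: dict (i, j) -> x_ij with i < j."""
        movable = [i for i in sorted(benefit) if i != 1]
        best_value, best_set = 0, set()   # S = {} is always feasible, value 0
        for r in range(1, len(movable) + 1):
            for subset in combinations(movable, r):
                S = set(subset)
                value = sum(benefit[i] for i in S)
                # Pay x_ij whenever i and j end up on different systems.
                for (i, j), x in expense.items():
                    if (i in S) != (j in S):
                        value -= x
                if value > best_value:
                    best_value, best_set = value, S
        return best_value, best_set

    # Tiny made-up instance: porting 3 alone pays off despite the 1-3 expense.
    benefit = {1: 5, 2: 1, 3: 4}
    expense = {(1, 2): 3, (1, 3): 1, (2, 3): 2}
    print(best_port_brute_force(benefit, expense))  # -> (1, {3})
    ```

    Note that b_1 is never collected, since application 1 cannot move; the checker only uses it as part of the input format. Once your polynomial-time algorithm is written, running both on random small instances is an effective sanity check.
    
    
    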