Principled Analysis & Synthesis of Agent Systems

TASK RACE Challenge Problem

      

Research team

 

Principal Investigator: Bart Selman

Co-Principal Investigators: Carla Gomes and Henry Kautz

Others:  Ramon Bejar and Ioannis Vetsikas
 
 

 

 
Overall Goal and Techniques
 

The basic computational problem in many of the domains TASK addresses is the solution of large sets of interacting, distributed constraints by multiple agents. The central hypothesis underlying our work is that large systems of constraints behave much like physical systems with many interacting components. For example, sets of constraints can undergo phenomena such as phase transitions from being solvable to having no solution. Furthermore, analysis using the mathematical tools of statistical physics yields empirically verifiable predictions that are stronger than those obtainable from a classical theoretical analysis. As the size and complexity of distributed systems grow, we expect this physics-style analysis to become an increasingly critical tool.
 
 

To achieve this goal, we propose to create innovative tools and algorithms that will allow agent systems to solve tasks that would traditionally be out of reach. Our approach builds on our work on:

     Connections between hard computational tasks and models from statistical physics [4, 6]. Our previous work on statistical-physics models of hard combinatorial search problems can be viewed as the analysis of a distributed problem-solving task in which each individual constraint (or small group of constraints) is represented by a single agent. A natural mapping exists from the global behavior of the search to that of a physical system, because in statistical physics global properties are determined by the microscopic interactions of the individual components. We will extend this analysis to incorporate much richer agent models.
    Randomization techniques. Randomization is a powerful mechanism for increasing overall predictability. In classical combinatorial search problems, "rapid restarts" [3] and "portfolio" strategies [2] have been shown to be very effective. We plan to extend these techniques to complex multi-agent systems to increase the overall robustness of the system.
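As a concrete illustration of the restart idea above, the sketch below wraps a randomized search procedure in a fixed-cutoff restart loop. The solver interface and the toy constraint are hypothetical stand-ins, not part of our system; the point is that restarting bounds the effect of unlucky random choices that would otherwise produce very long runs.

```python
import random

def rapid_restarts(run_once, cutoff, max_restarts, seed=0):
    """Restart a randomized search with a fixed cutoff until it succeeds.

    run_once(rng, cutoff) is a hypothetical solver interface: it performs one
    randomized run limited to `cutoff` steps and returns a solution or None.
    Restarting clips the heavy tail of the run-time distribution that makes
    single long runs unpredictable.
    """
    rng = random.Random(seed)
    for restart in range(1, max_restarts + 1):
        solution = run_once(rng, cutoff)
        if solution is not None:
            return solution, restart
    return None, max_restarts

# Toy randomized "solver": guess boolean assignments for the tiny constraint
# x0 != x1 and x2 == x3; each guess counts as one step against the cutoff.
def guess_assignment(rng, cutoff):
    for _ in range(cutoff):
        x = [rng.randint(0, 1) for _ in range(4)]
        if x[0] != x[1] and x[2] == x[3]:
            return x
    return None

solution, restarts_used = rapid_restarts(guess_assignment, cutoff=5, max_restarts=20)
```

In practice the cutoff would be tuned to the run-time distribution of the underlying solver rather than fixed in advance.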

 


Problem Description (RACE Challenge Problem) 

 

 

The RACE problem provides a compelling framework for the study of multi-agent behaviors and their inherent computational complexity. In this problem, the Department of Defense (DOD) needs to formulate and execute plans for transporting personnel and equipment in case of an emergency. A number of companies have signed agreements with the DOD to provide the transportation required in such situations. The agreements involve a certain amount of regular contractual work that the DOD awards to the companies, with the understanding that in case of a crisis the companies provide emergency services in proportion to the contractual work received. The companies provide estimates of what it would cost them to perform each emergency job. After all estimates are in, the DOD goes through a job allocation process in which it tries to assign jobs in a "fair" (balanced) manner to the various companies.

 

Characterizing the complexity of the RACE Allocation Problem

Initial results [1] indicate that the computational complexity of the allocation problem is closely related to the structure of the cost matrix of the problem. The cost matrix specifies, for every pair (company, job), the cost for that company to perform that job. Figure 1 shows a cost matrix for a problem with three companies and five jobs. In our work we have considered two different notions of fair allocation: "min-max fairness" and "lex min-max fairness". An allocation is min-max fair if it minimizes the maximum cost paid by any company, and it is lex min-max fair if its cost vector, sorted in decreasing order, is lexicographically smallest. The figure also shows, with the numbers in red in each column, the best lex min-max fair allocation.

 


 

Figure 1: Cost matrix for a sample problem.
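Since Figure 1's exact numbers are not reproduced here, the following sketch illustrates the two fairness notions on a made-up 3-company, 5-job cost matrix, finding the lex min-max fair allocation by exhaustive enumeration. The matrix values are illustrative assumptions, not those of the figure.

```python
from itertools import product

def lex_minmax_allocation(cost):
    """Exhaustively find the lex min-max fair allocation.

    cost[i][j] is the cost for company i to perform job j.  An allocation
    assigns each job to one company; a company pays the sum of the costs of
    its jobs.  Cost vectors sorted in decreasing order are compared
    lexicographically, so the best allocation first minimizes the largest
    company cost (min-max fairness), then the second largest, and so on.
    Exponential in the number of jobs -- for illustration only.
    """
    n_companies, n_jobs = len(cost), len(cost[0])
    best_alloc, best_vec = None, None
    for alloc in product(range(n_companies), repeat=n_jobs):
        totals = [0] * n_companies
        for job, comp in enumerate(alloc):
            totals[comp] += cost[comp][job]
        vec = sorted(totals, reverse=True)
        if best_vec is None or vec < best_vec:
            best_alloc, best_vec = alloc, vec
    return best_alloc, best_vec

# Hypothetical 3-company, 5-job cost matrix (made-up values, not Figure 1).
cost = [[4, 2, 5, 3, 6],
        [3, 4, 2, 5, 4],
        [5, 3, 4, 2, 3]]
alloc, vec = lex_minmax_allocation(cost)  # vec is the sorted cost vector
```

For this matrix, every lex min-max fair allocation is also min-max fair by construction, since the largest component of the cost vector is minimized first.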

 

The relation between the structure of the cost matrix and the complexity of the allocation problem can be seen, for example, in two boundary cases. In the "uniform bidding" case, all companies declare the same cost for a given job. Figure 2 shows the structure of the cost matrix for this case, where colors represent different costs. Here the decision version of the min-max fairness allocation problem becomes equivalent to bin packing: jobs are items whose sizes are their costs, and companies are bins whose capacity is the cost limit. This case is therefore NP-complete, although good approximation schemes and average-case results are known.
 

 


Figure 2: Structure of the cost matrix for the uniform bidding case
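A minimal sketch of the bin-packing view of the uniform bidding case: assuming every company quotes the same cost for a given job, the min-max decision question "can every company's total be kept at or below a limit K?" reduces to packing items of the jobs' costs into C bins of capacity K. The backtracking check below is illustrative, not the algorithm used in our experiments.

```python
def minmax_decision_uniform(job_costs, n_companies, limit):
    """Decision version of min-max fairness under uniform bidding.

    Every company quotes the same cost for a given job, so asking whether
    some allocation keeps every company's total at or below `limit` is
    exactly bin packing: jobs are items with sizes job_costs, companies are
    bins of capacity `limit`.  Exact backtracking check, exponential in the
    worst case (the problem is NP-complete).
    """
    jobs = sorted(job_costs, reverse=True)  # largest items first prunes faster
    loads = [0] * n_companies

    def place(i):
        if i == len(jobs):
            return True
        tried = set()  # identical current loads are symmetric; try each once
        for b in range(n_companies):
            if loads[b] + jobs[i] <= limit and loads[b] not in tried:
                tried.add(loads[b])
                loads[b] += jobs[i]
                if place(i + 1):
                    return True
                loads[b] -= jobs[i]
        return False

    return place(0)
```

For example, five jobs with costs 4, 3, 3, 2, 2 fit two companies within a limit of 7 (as 4+3 and 3+2+2) but not within 6, since the total cost 14 exceeds 2 x 6.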


In the "uniform cost" case, a company declares the same cost for all jobs. Figure 3 shows the structure of the cost matrix for this case, for which we have a polynomial-time algorithm for the lex min-max allocation problem. This case therefore represents a tractable sub-case of our general allocation problem.

 


Figure 3: Structure of the cost matrix for the uniform cost case
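To illustrate why the uniform cost case is easy: only the number of jobs each company receives matters, so jobs can be assigned one at a time to the company whose total would become smallest. The greedy sketch below targets the min-max objective under that assumption; the polynomial-time lex min-max algorithm mentioned above may refine the tie-breaking, so treat this as illustrative.

```python
import heapq

def uniform_cost_allocation(per_job_cost, n_jobs):
    """Greedy allocation for the uniform cost case (min-max objective).

    Company i quotes per_job_cost[i] for every job, so only job counts
    matter.  Each job goes to the company whose total after receiving it
    would be smallest; a heap keyed on that prospective total gives
    O(J log C) time.  A sketch of the greedy idea, not necessarily the
    paper's lex min-max algorithm.
    """
    # heap entries: (total if this company gets one more job, company, jobs held)
    heap = [(c, i, 0) for i, c in enumerate(per_job_cost)]
    heapq.heapify(heap)
    counts = [0] * len(per_job_cost)
    for _ in range(n_jobs):
        new_total, i, held = heapq.heappop(heap)
        counts[i] = held + 1
        heapq.heappush(heap, (new_total + per_job_cost[i], i, held + 1))
    totals = [c * k for c, k in zip(per_job_cost, counts)]
    return counts, totals

counts, totals = uniform_cost_allocation([2, 3], n_jobs=3)
```

With per-job costs 2 and 3 and three jobs, the first company receives two jobs and the second one job, for totals 4 and 3, matching the min-max optimum for this tiny example.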

 

In our work we have also considered the typical (average) complexity of the problem when the cost matrix has a more general form, by studying the empirical computational complexity of solving the allocation problem when the cost matrix is drawn from random problem distributions; we have also studied the existence of phase transitions in the decision version of the problem. A particular random problem distribution is defined by specifying the model used to generate a cost matrix. For example, in one of the models, given a set of C companies and J jobs, we obtain the cost matrix by selecting, uniformly at random (u.a.r.), exactly c companies for every job; the selected companies are the only ones that can perform the job. The cost for every selected company is also generated u.a.r. from a price interval [LC, UC].

We have detected a phase transition in the decision version of min-max fairness allocation, i.e., is there an allocation such that the maximum cost paid by any company is below a given upper limit cost K? The phase transition occurs as we increase the ratio of the number of companies to the number of jobs. Figure 4 shows the results of an experiment measuring the average complexity of the problem at different values of that ratio. The blue plot shows, for each companies/jobs ratio, the average size of the search tree visited by a systematic search algorithm, and the red plot shows the percentage of instances found to have a solution at that ratio, out of a total sample of 100 instances. We observe a phase transition from unsolvable to solvable instances around a critical value of the ratio (0.56 in this experiment), and around this ratio the computational complexity of solving the instances also peaks.

 

 

Figure 4: Complexity and phase transition in the RACE job allocation problem
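The random model and the phase-transition experiment can be sketched as follows, at toy scale: each job draws c eligible companies uniformly at random, eligible costs are drawn from [LC, UC], and the fraction of solvable instances is measured as the companies/jobs ratio grows. All parameter values here are illustrative, not those used for Figure 4.

```python
import random
from itertools import product

def random_instance(rng, n_companies, n_jobs, c, lc, uc):
    """One instance of the random model: each job draws exactly c eligible
    companies u.a.r.; each eligible company's cost for the job is drawn
    u.a.r. from [lc, uc].  Ineligible (company, job) pairs are None."""
    cost = [[None] * n_jobs for _ in range(n_companies)]
    for j in range(n_jobs):
        for i in rng.sample(range(n_companies), c):
            cost[i][j] = rng.uniform(lc, uc)
    return cost

def solvable(cost, limit):
    """Decision version of min-max fairness: is there an allocation of every
    job to an eligible company keeping each company's total <= limit?
    Brute force over all allocations -- fine at this toy scale."""
    n_companies, n_jobs = len(cost), len(cost[0])
    eligible = [[i for i in range(n_companies) if cost[i][j] is not None]
                for j in range(n_jobs)]
    for alloc in product(*eligible):
        totals = [0.0] * n_companies
        for j, i in enumerate(alloc):
            totals[i] += cost[i][j]
        if max(totals) <= limit:
            return True
    return False

# Sweep the companies/jobs ratio and record the fraction of solvable
# instances; a phase transition shows up as a sharp rise in this fraction.
rng = random.Random(1)
n_jobs, c, lc, uc, limit, samples = 8, 2, 1.0, 2.0, 3.0, 30
fractions = {}
for n_companies in range(2, 9):
    solved = sum(solvable(random_instance(rng, n_companies, n_jobs, c, lc, uc),
                          limit)
                 for _ in range(samples))
    fractions[n_companies / n_jobs] = solved / samples
```

At a ratio of 0.25 (two companies, eight jobs, every cost at least 1) some company must carry at least four jobs, so no instance can stay below the limit of 3; as the ratio approaches 1 the instances become predominantly solvable.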

 

Future Work

We have started to consider the bidding process by which companies submit their costs for performing the various jobs, and to explore the connections between this problem and combinatorial auctions, since in a more general setting the companies will provide costs for particular subsets of jobs rather than for single jobs. In that direction, we have started a collaboration with the Stanford group working on the combinatorial auction test suite CATS [5], in order to better understand the computational complexity of realistic combinatorial auction domains and to apply this knowledge to gain insight into the RACE domain.


 
 
 
 

Presentations

 TASK PI Meeting Workshop. Santa Fe, New Mexico. April 2001. (PowerPoint presentation)
 
 

References

[1] R. Bejar, I. Vetsikas, C. Gomes, H. Kautz, and B. Selman. Structure and Phase Transition Phenomena in the VTC Problem. In TASK PI Meeting Workshop, 2001.

[2] C. P. Gomes and B. Selman. Algorithm Portfolio Design: Theory vs. Practice. In Proceedings of UAI-97, 1997.

[3] C. P. Gomes, B. Selman, and H. Kautz. Boosting Combinatorial Search Through Randomization. In Proceedings of the 15th National Conference on Artificial Intelligence (AAAI-98), Madison, WI, pages 431-437. AAAI Press, 1998.

[4] R. Monasson, R. Zecchina, S. Kirkpatrick, B. Selman, and L. Troyansky. Determining Computational Complexity from Characteristic 'Phase Transitions'. Nature, 400(8), 1999.

[5] K. Leyton-Brown, M. Pearson, and Y. Shoham. Towards a Universal Test Suite for Combinatorial Auction Algorithms. In Proceedings of the ACM Conference on Electronic Commerce (EC-00), 2000.

[6] B. Selman and S. Kirkpatrick. Critical Behavior in the Computational Cost of Satisfiability Testing. Artificial Intelligence, 81(1-2):273-295, 1996.

 
