 


DARPA TASK: Principled Analysis & Synthesis of Agent Systems

Team
Principal Investigator: Bart Selman
Co-Principal Investigators: Carla Gomes and Henry Kautz
Others: Ramon Bejar and Ioannis Vetsikas


Overall Goal and Techniques
The basic computational problem in many of the domains TASK addresses is the solution of large sets of interacting, distributed constraints by multiple agents. The central hypothesis underlying our work is that large systems of constraints behave much like physical systems with many interacting components. For example, sets of constraints can undergo phenomena such as phase transitions from being solvable to having no solution. Furthermore, analysis using the mathematical tools of statistical physics yields empirically verifiable predictions that are stronger than those obtainable from a classical theoretical analysis. As the size and complexity of distributed systems grow, we expect this physics-style analysis to become an increasingly critical tool.
To achieve this goal, we propose to create innovative tools and algorithms that will allow agent systems to solve tasks that would traditionally be out of reach. Our approach builds on our work on:

Connections between hard computational tasks and models from statistical physics [4, 6]. Our previous work on statistical-physics models of hard combinatorial search problems can be viewed as the analysis of a distributed problem-solving task in which each individual constraint (or small group of constraints) is represented by a single agent. A natural mapping exists between the global behavior of searching for a solution and that of a physical system, because in statistical physics global properties are determined by the microscopic interactions of the individual components. We will extend this analysis to incorporate much richer agent models. (A minimal empirical illustration of such a phase transition appears after this list.)
Randomization techniques. Randomization is a powerful mechanism for increasing overall predictability. In classical combinatorial search problems, "rapid restarts" [3] and "portfolio" strategies [2] have been shown to be very effective. We plan to extend these techniques to complex multiagent systems to increase the overall robustness of the system. (A sketch of the restart strategy also appears after this list.)
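
The phase-transition phenomenon referenced above can be reproduced in a few lines. The following sketch (our illustration, not code from the project) estimates the fraction of satisfiable random 3-SAT formulas, the canonical model in the statistical-physics literature cited as [4, 6], as the clause-to-variable ratio grows; the sharp drop near a ratio of roughly 4.3 is the solvable/unsolvable phase transition. All parameters (20 variables, 50 samples per point) are our own choices, picked so the script runs quickly.

```python
import random

def random_3sat(n_vars, n_clauses):
    """Random 3-SAT: each clause picks 3 distinct variables, each negated with prob. 1/2."""
    return [tuple(v if random.random() < 0.5 else -v
                  for v in random.sample(range(1, n_vars + 1), 3))
            for _ in range(n_clauses)]

def satisfiable(clauses):
    """Minimal DPLL: unit propagation plus branching on the first literal."""
    if not clauses:
        return True                     # all clauses satisfied
    if any(len(c) == 0 for c in clauses):
        return False                    # an empty clause cannot be satisfied
    lit = next((c[0] for c in clauses if len(c) == 1), clauses[0][0])
    for choice in (lit, -lit):          # try the literal true, then false
        reduced = [[l for l in c if l != -choice]
                   for c in clauses if choice not in c]
        if satisfiable(reduced):
            return True
    return False

for ratio in (2.0, 3.0, 4.0, 4.3, 5.0, 6.0):
    n = 20
    sat = sum(satisfiable(random_3sat(n, int(ratio * n))) for _ in range(50))
    print(f"clauses/vars = {ratio:.1f}: {sat}/50 satisfiable")
```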
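
Similarly, the restart strategy of [3] can be stated as a small generic wrapper. This is a hedged sketch: `randomized_search` stands in for any randomized (Las Vegas) solver, and its interface here (an instance, a seeded RNG, and a step budget) is our assumption, not an API from the project.

```python
import random

def with_restarts(randomized_search, instance, cutoff, max_restarts=1000):
    """Rapid restarts: rerun a randomized solver with a fresh seed whenever it
    exceeds a fixed step budget. Under heavy-tailed run-time distributions,
    the restarted strategy is far more predictable than a single long run."""
    for attempt in range(max_restarts):
        rng = random.Random(attempt)                      # fresh seed per restart
        solution = randomized_search(instance, rng, step_budget=cutoff)
        if solution is not None:                          # solved within the cutoff
            return solution
    return None                                           # give up after max_restarts

# A portfolio strategy [2] applies the same idea across different algorithms:
# run several solvers (interleaved or in parallel) and keep the first answer.
```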
The RACE problem provides a compelling framework for the study of multiagent behaviors and their inherent computational complexity. In this problem, the Department of Defense (DOD) needs to formulate and execute plans for transporting personnel and equipment in case of an emergency. A number of companies have signed agreements with the DOD to provide the transportation required in such situations. The agreements involve a certain amount of regular contractual work that the DOD awards to the companies, with the understanding that in a crisis the companies provide emergency services in proportion to the contractual work received. The companies provide estimates of what it would cost them to perform each emergency job. After all estimates are in, the DOD goes through a job allocation process in which it tries to assign jobs to the various companies in a "fair" (balanced) manner.
Characterizing the Complexity of the RACE Allocation Problem

Initial results [1] indicate that the computational complexity of the allocation problem is closely related to the structure of the problem's cost matrix. The cost matrix gives, for every (company, job) pair, the cost for that company to perform that job. Figure 1 shows a cost matrix for a problem with three companies and five jobs. In our work we have considered two different notions of fair allocation: "minmax fairness" and "lex minmax fairness". An allocation is minmax fair if it minimizes the maximum cost paid by any company, and lex minmax fair if it has the lexicographically smallest cost vector. The red numbers in each column of the figure show the best lex minmax fair allocation.



Figure 1: Cost matrix for a sample problem. 
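
To make the two fairness notions concrete, here is a brute-force illustration on a small cost matrix of the same shape as Figure 1 (the numbers are our own example, not the figure's data). Exhaustive enumeration is only feasible for tiny instances; it pins down the definitions rather than suggesting an algorithm.

```python
from itertools import product

# cost[i][j] = cost for company i to perform job j (illustrative values only)
cost = [
    [4, 2, 7, 1, 5],
    [3, 6, 2, 4, 2],
    [5, 3, 3, 2, 6],
]
n_companies, n_jobs = len(cost), len(cost[0])

def cost_vector(allocation):
    """Total cost per company, sorted in decreasing order for lexicographic comparison."""
    totals = [0] * n_companies
    for job, company in enumerate(allocation):
        totals[company] += cost[company][job]
    return sorted(totals, reverse=True)

# allocation[j] = company performing job j; enumerate all C^J allocations.
# The lexicographically smallest decreasing cost vector identifies the lex
# minmax fair allocation; its first entry is the minmax value (the smallest
# achievable maximum cost paid by any company).
best = min(product(range(n_companies), repeat=n_jobs), key=cost_vector)
print("lex minmax allocation:", best, "with cost vector", cost_vector(best))
```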

We can see the relation between the structure of the cost matrix and the complexity of the allocation problem in, for example, two boundary cases. In the "uniform bidding" case, all companies declare the same cost for a given job. Figure 2 shows the structure of the cost matrix for this case, where colors represent different costs. Here the decision version of the minmax fairness allocation problem becomes equivalent to bin packing. This case is therefore NP-complete, although good approximation schemes and average-case results are known.

Figure 2: Cost matrix structure for the uniform bidding case.
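
The bin-packing view can be made explicit. With uniform bidding, job j has a single cost w_j, and the decision question "is there an allocation where no company pays more than K?" is exactly bin packing with C bins of capacity K. The sketch below uses first-fit decreasing, one of the classical approximation heuristics alluded to above; the job costs are our own illustrative numbers.

```python
def ffd_packs(job_costs, n_companies, K):
    """First-fit decreasing: sort jobs by cost, place each in the first company
    (bin) that still has room under the cap K. A True answer certifies a feasible
    allocation; a False answer is not conclusive, since FFD is only a heuristic."""
    loads = [0.0] * n_companies
    for w in sorted(job_costs, reverse=True):
        for i in range(n_companies):
            if loads[i] + w <= K:
                loads[i] += w
                break
        else:
            return False        # no company can take this job without exceeding K
    return True

print(ffd_packs([7, 5, 4, 4, 3, 2], n_companies=3, K=9))   # True: {7,2}, {5,4}, {4,3}
```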
In the "Uniform cost" case, a company declares the same cost for all the jobs. Figure 3 shows the structure of the cost matrix for that case. In that case, we have a polynomial algorithm for the lex minmax allocation problem. So, this case represents a tractable subcase of our general allocation problem.

Figure 3: Cost matrix structure for the uniform cost case.
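
The page does not spell out the polynomial algorithm, so the following is our sketch of why this subcase is easy, for the simpler minmax objective: when company i pays the same cost c_i for every job, its total is (number of jobs assigned) times c_i, and handing each successive job to the company whose new total would be smallest is optimal for such unit-size jobs by a standard exchange argument. The lex minmax version requires more care; this is an illustration, not the algorithm from [1].

```python
import heapq

def uniform_cost_minmax(unit_costs, n_jobs):
    """unit_costs[i] = cost company i pays for any single job.
    Greedily hands each job to the company whose new total would be smallest."""
    # Heap entries: (total if this company takes one more job, company, jobs so far)
    heap = [(c, i, 0) for i, c in enumerate(unit_costs)]
    heapq.heapify(heap)
    counts = [0] * len(unit_costs)
    for _ in range(n_jobs):
        _, i, k = heapq.heappop(heap)
        counts[i] = k + 1
        heapq.heappush(heap, (unit_costs[i] * (k + 2), i, k + 1))
    return max(counts[i] * c for i, c in enumerate(unit_costs)), counts

print(uniform_cost_minmax([2, 3, 5], n_jobs=7))   # -> (8, [4, 2, 1])
```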
We have also considered the typical (average) complexity of the problem when the cost matrix has a more general form, by studying the empirical computational complexity of solving the allocation problem when the cost matrix is drawn from random problem distributions, and we have studied the existence of phase transitions in the decision version of the problem. A particular random problem distribution is defined by specifying the model used to generate the cost matrix. For example, in one of the models, given a set of C companies and J jobs, we obtain the cost matrix by selecting, uniformly at random (u.a.r.), exactly c companies for every job; the selected companies are the only ones that can perform the job. The cost for every selected company is also generated u.a.r. from a price interval [LC, UC].

We have detected a phase transition in the decision version of minmax fairness allocation, i.e., the question of whether there is an allocation such that the maximum cost paid by any company is at most a given upper limit K. The phase transition occurs as the ratio of the number of companies to the number of jobs increases. Figure 4 shows the results of an experiment measuring the average complexity of the problem at different values of this ratio. For each companies/jobs ratio, the blue plot shows the average size of the search tree visited by a systematic search algorithm, and the red plot shows the percentage of instances found to have a solution, out of a total sample of 100 instances. We observe a phase transition from unsolvable to solvable instances around a critical value of the ratio (0.56 in this experiment), and around this ratio we also observe a peak in the computational complexity of solving the problem instances.


Figure 4: Complexity and phase transition in the RACE job allocation problem 
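
The experiment behind Figure 4 can be reconstructed in outline. The sketch below generates cost matrices from the stated model (each job doable by exactly c companies chosen u.a.r., with costs u.a.r. in [LC, UC]) and answers the decision question by backtracking, sweeping the companies/jobs ratio. Every concrete parameter here (c = 2, the price interval, K, the instance sizes, 30 samples) is our guess; the page reports only the model and a critical ratio near 0.56.

```python
import random

def random_instance(n_companies, n_jobs, c=2, LC=1.0, UC=10.0):
    """cost[j] is a dict: company index -> cost; only the c companies
    selected u.a.r. can perform job j."""
    return [{i: random.uniform(LC, UC)
             for i in random.sample(range(n_companies), c)}
            for _ in range(n_jobs)]

def decision(instance, n_companies, K, loads=None, job=0):
    """Backtracking search: True iff some allocation keeps every company's
    total cost at most K (the decision version of minmax fairness)."""
    if loads is None:
        loads = [0.0] * n_companies
    if job == len(instance):
        return True
    for company, w in instance[job].items():
        if loads[company] + w <= K:
            loads[company] += w
            if decision(instance, n_companies, K, loads, job + 1):
                return True
            loads[company] -= w          # undo and try the next company
    return False

n_jobs, K, samples = 20, 25.0, 30
for n_companies in range(4, 21, 2):
    solvable = sum(decision(random_instance(n_companies, n_jobs), n_companies, K)
                   for _ in range(samples))
    print(f"companies/jobs = {n_companies / n_jobs:.2f}: {solvable}/{samples} solvable")
```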

Future Work

We have started to consider the bidding process by which companies submit their costs for performing the various jobs, and to consider the connections of this problem with combinatorial auctions, because in a more general setting companies will provide costs for particular subsets of jobs rather than for single jobs. In that direction, we have begun a collaboration with the Stanford group working on the combinatorial auction test suite CATS [5], in order to better understand the computational complexity of realistic combinatorial auction domains and to apply this knowledge to gain insight into the RACE domain.
Presentations

TASK PI Meeting Workshop, Santa Fe, New Mexico, April 2001. (PowerPoint presentation)

References

[1] In TASK PI Meeting Workshop, 2001. (PS file)
[2] In Proceedings of UAI-97, 1997.
[3] In Proceedings of the 15th National Conference on Artificial Intelligence (AAAI-98), Madison, WI, pages 431-437. AAAI Press, 1998.
[4] Nature, 400(8), 1999.
[5] In Proceedings of the ACM Conference on Electronic Commerce (EC-00), 2000.
[6] Artificial Intelligence, 81(1-2):273-295, 1996.

