Compute-Intensive Methods in Artificial Intelligence

Bart Selman
Dept. of Computer Science
Cornell University
selman@cs.cornell.edu
 

Position statement

In the 1970s and 1980s, the success of knowledge-intensive approaches to problem solving eclipsed earlier work on compute-intensive general methods. In recent years, however, compute-intensive methods have made a surprising comeback. One of the most prominent examples is the success of IBM's Deep Blue in defeating Garry Kasparov in the first game of the 1996 ACM Chess Challenge match. The game led Kasparov to exclaim, "I could feel -- I could smell -- a new kind of intelligence across the table." Deep Blue derives its strength mainly from highly optimized search; it evaluates an astonishing 200 million chess positions per second.
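
To make "highly optimized search" concrete, the following is a minimal sketch of alpha-beta game-tree search, the algorithmic core of such chess programs; the actual Deep Blue adds special-purpose hardware, a sophisticated evaluation function, and many other refinements, and the toy tree below is purely illustrative:

    # Minimal alpha-beta game-tree search (illustrative sketch).
    # A game tree is a nested list; leaves are static evaluation
    # scores from the maximizing player's point of view.
    def alpha_beta(node, alpha, beta, maximizing):
        if isinstance(node, (int, float)):    # leaf: return its evaluation
            return node
        if maximizing:
            value = float("-inf")
            for child in node:
                value = max(value, alpha_beta(child, alpha, beta, False))
                alpha = max(alpha, value)
                if alpha >= beta:             # prune: opponent avoids this line
                    break
            return value
        else:
            value = float("inf")
            for child in node:
                value = min(value, alpha_beta(child, alpha, beta, True))
                beta = min(beta, value)
                if alpha >= beta:             # prune: we would avoid this line
                    break
            return value

    # A three-ply toy tree; the search returns 5 without visiting every leaf.
    tree = [[[3, 5], [6, 9]], [[1, 2], [0, -1]]]
    print(alpha_beta(tree, float("-inf"), float("inf"), True))

The cutoffs are what make deep search feasible: lines that one of the players would never permit are discarded without being evaluated.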

Another dramatic development in the compute-intensive approach was the recent computer proof resolving the Robbins problem (McCune, Oct. 1996). The Robbins problem is a well-known problem in Boolean algebra that had been open for over sixty years. The computer proof was found by applying powerful search techniques guided by general search tactics. Unlike previous results in automated theorem proving, several aspects of this proof could be called "creative" by mathematicians' standards.
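
For reference, the problem asks whether every algebra satisfying the following three equations is a Boolean algebra (in LaTeX notation; $n$ denotes complementation and $\lor$ denotes join); McCune's equational prover EQP answered this in the affirmative:

    \begin{align*}
      x \lor y &= y \lor x && \text{(commutativity)} \\
      (x \lor y) \lor z &= x \lor (y \lor z) && \text{(associativity)} \\
      n(n(x \lor y) \lor n(x \lor n(y))) &= x && \text{(Robbins equation)}
    \end{align*}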

Deep Blue's performance and the resolution of the Robbins problem are good examples of how dramatically compute-intensive approaches have improved over just a few years. Such methods have also had notable successes in planning, qualitative reasoning about physical systems, and learning.

Recent developments in general search and reasoning techniques now allow us to solve combinatorial problems with up to ten thousand variables and over one million constraints. The state of the art of only a few years ago (ca. 1990) was about two hundred variables with up to five hundred constraints. This dramatic increase in feasible problem size has led to a series of new applications. For example, current AI planners incorporating the new search methods can synthesize optimal plans containing about two hundred operators; compare this with the previous generation of domain-independent AI planners (ca. 1995), which could synthesize plans of only about ten to twenty steps. Given the wealth of new ideas in this area, it is reasonable to expect that in the near future we will be able to synthesize plans with several thousand operators, thereby opening up a wide range of new practical applications in areas as diverse as protocol (software/hardware) verification, supply-chain management (a powerful extension of traditional scheduling applications), and program synthesis.
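
As an illustration of the kind of general method involved, here is a minimal sketch of local search for Boolean satisfiability in the style of WalkSAT; real solvers add restarts, tuned noise parameters, and highly optimized data structures, and the three-clause formula at the end is just a toy example:

    import random

    # WalkSAT-style local search (illustrative sketch). A clause is a list
    # of nonzero ints: literal k means variable k, -k means its negation.
    def walksat(clauses, n_vars, max_flips=100000, noise=0.5):
        assign = {v: random.choice([True, False]) for v in range(1, n_vars + 1)}

        def sat(lit):
            return assign[abs(lit)] == (lit > 0)

        def num_unsat():
            return sum(1 for c in clauses if not any(sat(l) for l in c))

        for _ in range(max_flips):
            unsat = [c for c in clauses if not any(sat(l) for l in c)]
            if not unsat:
                return assign                     # satisfying assignment found
            clause = random.choice(unsat)         # focus on a violated constraint
            if random.random() < noise:
                var = abs(random.choice(clause))  # random-walk step
            else:
                def cost(v):                      # unsatisfied clauses if v is flipped
                    assign[v] = not assign[v]
                    c = num_unsat()
                    assign[v] = not assign[v]
                    return c
                var = min({abs(l) for l in clause}, key=cost)  # greedy step
            assign[var] = not assign[var]
        return None                               # give up after max_flips

    # Example: (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
    print(walksat([[1, -2], [2, 3], [-1, -3]], 3))

Planning problems, among others, can be encoded as such satisfiability instances, so progress in this kind of general search translates directly into larger feasible plans.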

These developments show that quantitative changes in compute-intensive methods can lead to dramatic qualitative changes in overall performance. Compute-intensive methods promise to alleviate the knowledge acquisition bottleneck in AI by allowing expert-level performance to be achieved with only a minimal amount of domain knowledge. In effect, powerful search and constraint techniques eliminate the need for domain-specific search and reasoning control. Complementary to this approach is the current focus on data-intensive methods, such as data mining techniques, to automatically extract domain structure from large data sets.