Devika Subramanian
Assistant Professor
PhD Stanford University, 1989


The essence of intelligence is performing appropriate action given bounded computational, sensory, and information resources. I view AI as the design and analysis of limited-resource agents that perform tasks in dynamic environments, and my research is focused on understanding the effects of representation choice and resource limitations on the design of such agents.

  1. What principles govern the choice of representations in bounded systems?
  2. How can we design systems that integrate deliberation about the future with acting in the moment?
  3. Can we formally define optimal bounded systems and design them automatically from task and environment descriptions?
  4. Can we construct learning policies for systems that continuously adapt to their environments?

I approach these questions by identifying and rigorously formulating problems and solutions using mathematical tools drawn from several disciplines. Instead of simply specifying theories of intelligent behavior, I design decision procedures that demonstrably generate these behaviors and then apply them to real problems to validate the theories. This has enabled me to formulate computational theories of intelligence and to obtain useful results in other fields.

One aspect of intelligence is the ability to identify distinctions that are relevant to one's goals. My dissertation was among the first work in AI to rigorously formulate and solve the problem of adapting distinctions to a class of tasks. I cast the design of new, more efficient representations as a deductive process driven by the irrelevance principle, which simplifies computation by allowing an agent to make only as many distinctions as its accuracy requirements demand. My theory unifies representation shifts in several disciplines and demonstrates that representations are shaped by computational pressures and can be logically derived from principles of computational economy. For example, my algorithms derived the partition representation of equivalence relations and constructed Thévenin equivalents of circuits from Kirchhoff's and Ohm's laws.
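To illustrate the kind of simplification this licenses (a sketch of mine, not the dissertation's derivation system): for a load attached to a voltage divider, Kirchhoff's and Ohm's laws justify replacing the two-resistor network by a two-element Thévenin equivalent that makes no distinctions the load can observe. The function names below are hypothetical.

```python
def thevenin_divider(vs, r1, r2):
    """Thévenin equivalent of a source vs feeding a divider r1/r2,
    as seen from the node between r1 and r2 (derived from
    Kirchhoff's and Ohm's laws)."""
    vth = vs * r2 / (r1 + r2)        # open-circuit node voltage
    rth = r1 * r2 / (r1 + r2)        # resistance with the source shorted
    return vth, rth

def load_current_full(vs, r1, r2, rl):
    """Load current computed from the full circuit, for comparison."""
    r2_rl = r2 * rl / (r2 + rl)      # r2 in parallel with the load
    v_node = vs * r2_rl / (r1 + r2_rl)
    return v_node / rl

vth, rth = thevenin_divider(10.0, 1000.0, 1000.0)
i_thev = vth / (rth + 500.0)         # current through a 500-ohm load
i_full = load_current_full(10.0, 1000.0, 1000.0, 500.0)
```

The load cannot distinguish the two-element equivalent from the original network: both predict the same load current.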

My PhD student, Adam Webber, used the irrelevance principle to derive optimizations of functional programs. Starting from a definition of repeated computation, he developed a graph-grammar formalism to facilitate the detection and elimination of repeated work. This is one of the first attempts in functional programming to derive optimizations from first principles. Experiments demonstrated the efficacy of the approach: most of the optimizations catalogued by Aho, Sethi, and Ullman, which were developed by hand over a period of 20 years, were rediscovered automatically. The theory also uncovered optimization patterns not commonly found in programs written by humans.
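A minimal sketch of repeated-work detection in the spirit described above, using hash-consing on expression trees (Webber's graph-grammar formalism is far more general; the names and representation here are my own illustration):

```python
def share_subexprs(expr, table=None):
    """Rewrite a nested-tuple expression ('op', arg1, ...) so that
    structurally identical subtrees become one shared object,
    exposing repeated computation for elimination."""
    if table is None:
        table = {}
    if not isinstance(expr, tuple):
        return expr                      # a variable or constant leaf
    canon = tuple(share_subexprs(a, table) for a in expr)
    # First occurrence is stored; later occurrences reuse it.
    return table.setdefault(canon, canon)

# (a*b) + (a*b): both operands collapse to a single shared node,
# so the product need only be computed once.
e = share_subexprs(('+', ('*', 'a', 'b'), ('*', 'a', 'b')))
shared = e[1] is e[2]
```

Once repeated subtrees are shared, eliminating the duplicate work reduces to evaluating each distinct node once.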


University Activities

Professional Activities

Lectures

Publications


Return to:
1994-1995 Annual Report Home Page



Last modified: 26 November 1995 by Denise Moore (denise@cs.cornell.edu).