Computer Science Colloquium
Thursday, March 13, 2003
4:15 PM
B17 Upson Hall
Valentine Kabanets
University of California, San Diego
Derandomizing Probabilistic Algorithms Means Proving Circuit Lower Bounds
What is the power of randomness in algorithm design? On the one hand, many computational problems have very simple and efficient probabilistic
algorithms. On the other hand, certain probabilistic algorithms can be successfully derandomized, i.e., replaced by equivalent deterministic
algorithms, without a significant loss of efficiency. The most famous example is the recent discovery of a deterministic
polynomial-time algorithm for Primality Testing [AKS'02].
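For a flavor of the kind of simple probabilistic algorithm at issue, the sketch below shows the classical Miller-Rabin primality test, the standard randomized counterpart of the problem that [AKS'02] solved deterministically. It is a minimal illustrative version; the function name and the number of rounds are arbitrary choices, not part of the talk.

    import random

    def is_probable_prime(n, rounds=20):
        # Miller-Rabin test: returns False if n is certainly composite,
        # True if n is probably prime (error probability at most 4**-rounds).
        if n < 4:
            return n in (2, 3)
        if n % 2 == 0:
            return False
        # Write n - 1 = 2**s * d with d odd.
        d, s = n - 1, 0
        while d % 2 == 0:
            d //= 2
            s += 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False  # a is a witness that n is composite
        return True

    print(is_probable_prime(561))        # Carmichael number -> False
    print(is_probable_prime(2**61 - 1))  # Mersenne prime -> True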
Can _every_ probabilistic algorithm be derandomized? Recent research suggests that the answer is Yes, since exponential lower bounds on the
circuit complexity of certain computational problems would imply universal derandomization [IW'97]. However, proving even
superpolynomial circuit lower bounds is a notoriously difficult task. Is there any other way to achieve derandomization?
In this talk, I will show that proving circuit lower bounds is indeed necessary. In particular, designing a deterministic polynomial-time
algorithm that tests whether the determinant of a given symbolic matrix
is identically zero is as hard as proving superpolynomial circuit lower bounds. Thus, progress in algorithm design depends on
progress in circuit complexity.
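For context, the standard randomized algorithm for this identity-testing problem follows the Schwartz-Zippel lemma: substitute random values for the variables and compute the resulting numeric determinant; a nonzero value certifies that the symbolic determinant is not identically zero. The sketch below is a minimal illustrative version under that approach; the function names, trial count, and range of random values are arbitrary choices, not from the talk.

    import random
    from fractions import Fraction

    def det(a):
        # Exact determinant of a square matrix of Fractions (Gaussian elimination).
        a = [row[:] for row in a]
        n, sign, prod = len(a), 1, Fraction(1)
        for col in range(n):
            pivot = next((r for r in range(col, n) if a[r][col] != 0), None)
            if pivot is None:
                return Fraction(0)
            if pivot != col:
                a[col], a[pivot] = a[pivot], a[col]
                sign = -sign
            prod *= a[col][col]
            for r in range(col + 1, n):
                factor = a[r][col] / a[col][col]
                for c in range(col, n):
                    a[r][c] -= factor * a[col][c]
        return sign * prod

    def symbolic_det_is_probably_zero(entries, num_vars, trials=20, field_size=10**9):
        # Schwartz-Zippel identity test: if det(M(x1,...,xk)) is not identically zero,
        # its value at a uniformly random point is nonzero with high probability.
        # entries[i][j] is a callable mapping a point (tuple of values) to the (i, j) entry.
        for _ in range(trials):
            point = tuple(Fraction(random.randrange(field_size)) for _ in range(num_vars))
            numeric = [[Fraction(e(point)) for e in row] for row in entries]
            if det(numeric) != 0:
                return False   # determinant is certainly not the zero polynomial
        return True            # determinant is probably identically zero

    # Example: M = [[x, y], [y, x]] has det = x**2 - y**2, not identically zero.
    m = [[lambda p: p[0], lambda p: p[1]],
         [lambda p: p[1], lambda p: p[0]]]
    print(symbolic_det_is_probably_zero(m, num_vars=2))   # expected: False

The open question addressed in the talk is whether any such randomized test can be replaced by a deterministic polynomial-time one without the random evaluation points.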
Joint work with Russell Impagliazzo (UCSD)