Ronitt Rubinfeld
Assistant Professor
ronitt@cs.cornell.edu
http://www.cs.cornell.edu/home/ronitt/ronitt.html
PhD UC Berkeley, 1990
We study the question of how to trust the correctness of a computation. In particular, we consider two models that allow the user to check the result of a specific computation at runtime: program result checking and probabilistically checkable proof systems. Both approaches apply even if the computation is performed remotely or if the code is not available.

Suppose we obtain a program P that purports to compute a function f. Program result checkers allow one to determine at runtime that P is correct on specific inputs by performing a small amount of extra computation. Checkers have been constructed for a large range of functions, including functions from linear algebra, cryptography, data structures, combinatorial optimization, and computational geometry. We have been developing a particularly useful approach for algebraic, linear-algebra, and numerical programs: the idea is to determine a set of properties satisfied by the function f, and to test that P also satisfies the same properties.
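As a minimal sketch of the property-based approach, consider a program claiming to compute a linear function over Z_p: every such function satisfies f(x + y) = f(x) + f(y) (mod p), so a checker can test that P satisfies the same identity on random inputs. (The programs `good` and `bad` below are hypothetical illustrations, not checkers from our work.)

```python
import random

def linearity_test(P, p, trials=100):
    """Test whether P behaves like a linear function over Z_p by
    checking the property P(x + y) = P(x) + P(y) (mod p) on random
    inputs. A violation proves P is not computing a linear function."""
    for _ in range(trials):
        x, y = random.randrange(p), random.randrange(p)
        if P((x + y) % p) != (P(x) + P(y)) % p:
            return False  # caught P violating a property of f
    return True

# A correct multiply-by-constant program passes the property test;
# an affine (hence non-linear) program is caught on the first trial.
good = lambda x: (7 * x) % 101
bad = lambda x: (7 * x + 1) % 101
```

Note that the checker never needs to know which linear function P is supposed to compute, only that the correct f has this property.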
We recently introduced spot checkers, which allow one to quickly determine that P is giving a result that is at least reasonable in some useful sense. For example, it is possible to determine in O(log n) time that a sorting program is giving a result that is 99% correct in terms of edit distance. We have given spot checkers for several problems in data structures and graph theory.
Probabilistically checkable proof (PCP) systems are a way of writing down a proof of a statement in such a
way that a verifier can look at the statement and then at only a constant number of (randomly chosen) locations in the proof in order to be convinced
that the statement is correct. PCPs can be used to convince a user of the correctness of a computation. We have introduced the notion of approximate PCPs, in which the user looks at a small number of randomly chosen locations of both the proof and the statement and is then guaranteed that the proof is almost correct in the following interesting sense: the statement being proven may not be exactly correct, but it is at least approximately correct. For example, if an optimization problem has a solution of value at least v, the user may be satisfied to be convinced that the solution value is at least 0.99v. We have shown that for many optimization problems one can construct approximate PCPs that bound the quality of the solution and in which the total running time of the user is surprisingly small. For example, we give protocols that allow a prover to convince a polylogarithmic-time user of the existence of a large cut in a graph, a large matching in a graph, or a small bin packing.
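To illustrate the flavor of approximate verification for the cut example, consider the simplest sampling idea: given a claimed partition, estimate the fraction of edges crossing it by inspecting only a few random edges. (This is a hypothetical illustration of sampling-based verification; the actual protocols use a written proof and achieve polylogarithmic verifier time with stronger guarantees.)

```python
import random

def estimate_cut_fraction(edges, side, samples=200):
    """Estimate what fraction of edges cross the claimed cut by
    sampling random edges, rather than examining every edge.
    edges: list of (u, v) pairs; side: dict mapping vertex -> 0 or 1."""
    crossing = 0
    for _ in range(samples):
        u, v = random.choice(edges)
        if side[u] != side[v]:
            crossing += 1
    return crossing / samples

# A bipartite example: vertices 0-9 on one side, 10-19 on the other,
# with every edge crossing the partition.
edges = [(i, i + 10) for i in range(10)]
good_cut = {i: (0 if i < 10 else 1) for i in range(20)}
empty_cut = {i: 0 for i in range(20)}
```

By standard sampling bounds, a constant number of samples suffices to distinguish a cut containing 99% of the edges from one containing far fewer, independent of the size of the graph.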
Professional Activities

Program Committee: COLT, Madison, WI, July 1998; Complexity '99, Atlanta, GA, May 1999
Lectures

Learning polynomials: the highly noisy case. IBM Hawthorne, NY, Sept. 1998.
Fast PCPs for optimization problems. IBM Yorktown, NY, Oct. 1998.
—. Computer Science, MIT, Cambridge, MA, Nov. 1998.
—. Computer Science, U.C. Berkeley, CA, Feb. 1999.
—. Computer Science, Stanford, CA, Feb. 1999.
—. Computer Science, U.C. San Diego, CA, March 1999.
—. STOC, Atlanta, GA, May 1999.
Trusting the correctness of computations. NEC, Princeton, NJ, March 1999.
—. IBM, San Jose, CA, Apr. 1999.
