Generating Compiler Optimizations from Proofs


POPL 2010: [pdf] [bibtex] [pptx] [mp4]

Authors: Ross Tate, Michael Stepp, Sorin Lerner

Technical Report: [pdf] [bibtex]


Abstract

We present an automated technique for generating compiler optimizations from examples of concrete programs before and after improvements have been made to them. The key technical insight of our technique is that a proof of equivalence between the original and transformed concrete programs informs us which aspects of the programs are important and which can be discarded. Our technique therefore uses these proofs, which can be produced by translation validation or a proof-carrying compiler, as a guide to generalize the original and transformed programs into broadly applicable optimization rules.

We present a category-theoretic formalization of our proof generalization technique. This abstraction makes our technique applicable to logics besides our own. In particular, we demonstrate how our technique can also be used to learn query optimizations for relational databases or to aid programmers in debugging type errors.

Finally, we show experimentally that our technique enables programmers to train a compiler with application-specific optimizations by providing concrete examples of original programs and the desired transformed programs. We also show how it enables a compiler to learn efficient-to-run optimizations from expensive-to-run super-optimizers.

Proof Generalization

A significant component of this paper is our framework for proof generalization. In the same way that abstract interpretation can be applied to various abstractions to produce iterative program analyses, our framework can be applied to various logics to produce proof generalization algorithms. For example, applying our framework to E-PEGs yields an algorithm for generalizing proofs of equivalence between concrete programs into broadly applicable program optimizations. We have already applied the framework to databases, Hindley-Milner type inference, contracts, and type checking, constructing algorithms for aiding database query optimization, debugging Hindley-Milner-based type inference, debugging contracts, and generalizing types. We want this framework to be easily accessible, and we would be interested to hear about additional applications. We would be glad to help researchers overcome the hurdles of encoding logics categorically or constructing pushout completions, so that they can exploit our framework to solve their own problems. Please feel free to contact us for advice.
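To make the idea concrete, here is a minimal, hypothetical Python sketch (our own illustration, not the paper's E-PEG implementation): a concrete equivalence proof is generalized by replaying its axiom applications on a fully abstracted term, so that each axiom application contributes unification constraints, and whatever the proof never constrains stays a variable. The term encoding and axiom instances below are invented for illustration.

```python
# Hypothetical sketch: generalize a concrete equivalence proof by replaying
# its axiom steps on a fully abstracted term.  Each axiom application adds
# unification constraints; anything left unconstrained generalizes.

class Var:
    """A fresh pattern variable."""
    count = 0
    def __init__(self):
        Var.count += 1
        self.name = f"v{Var.count}"
    def __repr__(self):
        return self.name

def walk(t, s):
    # Follow variable bindings in substitution s.
    while isinstance(t, Var) and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    # Standard first-order unification (no occurs check; fine for this demo).
    a, b = walk(a, s), walk(b, s)
    if a is b:
        return s
    if isinstance(a, Var):
        s[a] = b
        return s
    if isinstance(b, Var):
        s[b] = a
        return s
    if (isinstance(a, tuple) and isinstance(b, tuple)
            and len(a) == len(b) and a[0] == b[0]):
        for x, y in zip(a[1:], b[1:]):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return s if a == b else None

def resolve(t, s):
    # Apply the substitution throughout a term.
    t = walk(t, s)
    if isinstance(t, tuple):
        return (t[0],) + tuple(resolve(x, s) for x in t[1:])
    return t

# The concrete proof rewrites 8 + (8 - 8) to 8 with two axioms:
#   x - x = 0   (on the subterm 8 - 8),  then  x + 0 = x.
# Replay it on the fully abstracted term a + (b - c):
a, b, c = Var(), Var(), Var()
term = ('+', a, ('-', b, c))

s = {}
x = Var()                                    # instance of x - x = 0
s = unify(('-', b, c), ('-', x, x), s)       # forces b = c
y = Var()                                    # instance of x + 0 = x
s = unify(('+', a, ('num', 0)), ('+', y, ('num', 0)), s)  # result is y = a

print("generalized rule:", resolve(term, s), "=>", resolve(y, s))
```

The constants 8 never appear in the constraints, so the learned rule is a + (b - b) => a rather than a rule about the literal 8.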

The "Most" General Proof: Many of the questions we received after our presentation were about how a proof can be "most" general. Several factors determine this.

First, the proof itself is key. A concrete optimization may not have a most general form, but each proof does. However, the most general form of one proof may be more general than the most general form of another. Often this is a complete accident; other times one proof is a slightly better variant of another. For example, any sequential form of a traditional tree-structured proof will actually produce a better generalization than the proof tree itself. In the paper, we show a number of techniques for improving a proof so that it has a better generalization.

Second, the generalization process occurs entirely within a logic. Some logics are more general than others, and more general logics produce more general proofs. In particular, the distinction between axioms and axiom schemas becomes very important in proof generalization. For example, program expression graphs have loop operations such as 'pass', each of which operates at some loop index. One logic might perceive these as a set of operators: pass-1, pass-2, and so on. Another might perceive them as a single operator with a loop-index parameter: pass-i. In the former logic, generalized optimizations will still contain the loop-index constants, whereas in the latter the generalized optimizations will also generalize the loop index. Our implementation uses the latter logic, but in the paper we discuss an even more general logic we could have used.

Third, the axiom set has a significant impact on generalization, because some axioms do not generalize well. Consider the example from our presentation, in which we replace 8+8-8 with 8. In our presentation, we use axioms such as x-x=0 and x+0=x. We could instead have simply used constant folding, but constant folding produces poor generalizations: a folding step pins its operands to specific literal constants. Thus, our implementation avoids constant folding as much as possible, including it in the axiom set only as a last resort.

There are more factors that influence the generalization process, but these three are the most important. In our experience, it was easy to design a good logic and a good axiom set, and the automatically generated proofs were all we hoped for. We hope this proof generalization technique and its applications open interesting new research directions for proof theory.
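The effect of the axiom set can be seen in a small, hypothetical sketch (our illustration, not the paper's implementation). An algebraic axiom such as x - x = 0 only demands that the two operands be equal, so the rule learned from its proof has a variable in that position; a constant-folding step demands specific literals, so the learned rule cannot reach past those constants. The rule encodings below are invented for illustration.

```python
# Hypothetical sketch contrasting how axiom choice affects generalization.
# String leaves in a pattern are pattern variables; everything else must
# match literally.

def rule_from_algebraic_proof():
    # Constraints imposed by x - x = 0 and x + 0 = x: the '-' operands
    # must merely be equal.
    return ('+', 'A', ('-', 'B', 'B')), 'A'      # matches a + (b - b)

def rule_from_folding_proof():
    # Constraints imposed by folding the literal 8 - 8 to 0: both
    # operands must be exactly 8.
    return ('+', 'A', ('-', 8, 8)), 'A'          # matches only a + (8 - 8)

def matches(pattern, term, env=None):
    # Return a binding of pattern variables if the pattern matches, else None.
    env = {} if env is None else env
    if isinstance(pattern, str):                 # pattern variable
        if pattern in env:
            return env if env[pattern] == term else None
        env[pattern] = term
        return env
    if isinstance(pattern, tuple):
        if not (isinstance(term, tuple) and len(term) == len(pattern)
                and term[0] == pattern[0]):
            return None
        for p, t in zip(pattern[1:], term[1:]):
            env = matches(p, t, env)
            if env is None:
                return None
        return env
    return env if pattern == term else None

alg, _ = rule_from_algebraic_proof()
fold, _ = rule_from_folding_proof()
print(matches(alg, ('+', 5, ('-', 3, 3))))   # applies: {'A': 5, 'B': 3}
print(matches(fold, ('+', 5, ('-', 3, 3))))  # None: only literal 8s match
```

The algebraic rule fires on any program of the right shape, while the folding-derived rule applies only to the original constants, which is why folding axioms are a last resort.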

Category Theory

Some have asked us why we abstracted our proof generalization technique at all, and why we used category theory as our abstraction. In fact, we designed the abstract algorithm first, using category theory, and then used it to figure out how to solve our concrete problem. We had gotten stuck on the concrete problem, overwhelmed by the details and the variables, and any solution we could think of seemed arbitrary. To step back and simplify, we decided to phrase our question categorically. This led to a diagram of sources and sinks, so we used pushouts and pullbacks to glue things together. The biggest challenge was devising pushout completions, for which there was no existing standard concept to borrow. The categorical formulation was easy to specify and reason about. Afterwards, we instantiated the abstract processes, such as pushouts, with concrete algorithms, such as unification, to produce our final implementation with strong generality guarantees.
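As a toy illustration of instantiating a pushout with unification (our own sketch, not the paper's machinery): if substitutions are viewed as morphisms between terms, then a span of two substitutions out of a shared term is completed into a pushout square by computing a most general unifier of the two refinements. The terms and substitutions below are invented for illustration.

```python
# Toy illustration of "pushout = unification" over first-order terms.
# Variables are strings; compound terms are tuples headed by an operator.

def subst(t, s):
    # Apply substitution s (a dict on variable names) throughout term t.
    if isinstance(t, str):
        return subst(s[t], s) if t in s else t
    if isinstance(t, tuple):
        return (t[0],) + tuple(subst(x, s) for x in t[1:])
    return t

def mgu(a, b, s=None):
    # Most general unifier; no occurs check (fine for this demo).
    s = {} if s is None else s
    a, b = subst(a, s), subst(b, s)
    if a == b:
        return s
    if isinstance(a, str):
        s[a] = b
        return s
    if isinstance(b, str):
        s[b] = a
        return s
    if (isinstance(a, tuple) and isinstance(b, tuple)
            and len(a) == len(b) and a[0] == b[0]):
        for x, y in zip(a[1:], b[1:]):
            s = mgu(x, y, s)
            if s is None:
                return None
        return s
    return None

shared = ('f', 'x', 'y')                     # common source of the span
left = subst(shared, {'x': ('g', 'u')})      # one refinement: f(g(u), y)
right = subst(shared, {'y': ('g', 'v')})     # another refinement: f(x, g(v))

s = mgu(left, right)                         # glue the two refinements
print("pushout apex:", subst(shared, s))     # f(g(u), g(v))
```

The mgu is the least committal way to make the two refinements agree, which is exactly the universal property a pushout demands; pushout completions run this construction in the other direction, recovering a missing corner of the square.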

We have actually found this process of abstracting to category theory whenever we get stuck to be quite fruitful. Not only does it end up solving our concrete problem, but we end up with a better understanding of our own problem as well as an abstract solution which can be easily adapted to other applications. Thus, our experience suggests that category theory may be useful in constructing actual algorithms, in addition to being useful as a framework for formalization. We would be interested to know of other similar experiences, either positive or negative.

Questions, Comments, and Suggestions

If you have any questions, comments, or suggestions, please e-mail us. We would be glad to know of any opinions people have or any clarifications we should make.