The PRL systems provide support for automated deduction in the style of tactic-oriented theorem provers. We introduced the notion of a tactic-tree to provide this support; it weds the idea of tactics from Edinburgh LCF with the concept of a sequent proof. We also integrated decision procedures into these proofs. The Nuprl prover draws on ideas from many papers on automated reasoning: matching, forward and backward chaining, simplification, and a large term-rewriting package capture some of the best ideas for automating deduction. This section looks at the broader context of our work in order to explain our accomplishments and our future plans.

This field is young, as suggested by the word "automated," but its basis is ancient, as suggested by the word "deduction." Indeed, the spirit of Euclid and Descartes can be seen in the earliest modern results.

Some trace the origins of Automated Deduction (AD) to the Cornell summer meeting in 1957 that brought together a large number of logicians and computer scientists. Others say that it began before that with the 1955 Logic Theorist program of Newell, Shaw and Simon, or with Martin Davis' 1954 implementation of Presburger's decision procedure (which proved that the sum of two even numbers is even).

**Logic Theorist (LT)** was a program to "mimic human reasoning" in proving theorems and was demonstrated on fifty-two theorems from the propositional calculus in *Principia Mathematica*; LT proved thirty-eight of them. The LT prover was very simple; we can describe it now in a half page of tactic code. In a broad sense, LT is framed in terms of Euclid's notion of deductive proof and Pappus' notion of "heuristics." (In about A.D. 300, Pappus wrote about a systematic approach to discovering geometric proofs.) Indeed, Gelernter's 1959 Geometry Machine that came soon after LT draws heavily on the understanding of proof taught in courses on Euclidean deductive geometry.

In 1962, John McCarthy wrote about using computers to check
mathematical proofs.
Quote. In 1963, Paul Abrahams wrote a PhD thesis at MIT under Marvin Minsky that described a program to check theorems in *Principia Mathematica*. Abrahams saw his system as an assembly-language-level coding of proofs, and he considered steps for building a "higher level proof language" analogous to higher level programming languages. As a start he introduced the notion of macro steps that tell the machine how to build proofs.

Martin Davis, on the other hand, implemented a decision procedure in the spirit of Descartes' reduction of geometry to algebra. Responding to the LT results, Hao Wang also coded a decision procedure that proved all of the theorems the Logic Theorist proved and more: 350 theorems of *Principia*, in fact. Wang too was operating in the spirit of Descartes, who imagined an algorithmic way to solve all geometric problems. These algebraic methods underlie the geometry prover of Chou based on the Ritt/Wu algorithm.

These early papers staked out two main regions of the field: one an effort to employ machines to create or discover proofs that humans would make, in the spirit of Euclid and Pappus; the other to decide validity or provability by any algorithmic means, in the spirit of Descartes. We now call such algorithms *decision procedures*.

Hao Wang's 1960 paper **Toward Mechanical Mathematics** presents a vision that is remarkably close to that behind our project. Quote. He also captured the spirit of the field in this wonderful piece: "In one sense it is silly to make machines do what they cannot do. In another sense, this is precisely the exciting thing." Wang also understood the potential for this new field. He said, "The cross fertilization of logic and computers ought to produce in the long run some fundamental change in the nature of all mathematical activity."

Leibniz imagined a universal formal calculus which could express reasoning in any subject, and an algorithmic procedure which could be applied to decide the truth of statements in this calculus. Quote. After Frege produced the predicate calculus in 1879, it became possible to explore the dream of Leibniz. Indeed, Frege refers explicitly to this dream in his booklet.

At the Cornell summer meeting of 1957, Abraham Robinson outlined a
proof procedure for the first order predicate calculus based on
searching for models in the Herbrand universe. This idea started the
"main line" of investigation in AD for fifteen years. There was a
progression of ideas for improving the search through the Herbrand
universe. Prawitz, in 1960, showed how to use "lazy evaluation" to
reduce generation of unnecessary subformulas of the goal. Martin Davis
showed in 1963 how to use Skolem functions rather than parameters for
the terms in the universe and how to check satisfiability by
"matching" Skolem terms in formulas and then invoking the famous
**Davis-Putnam procedure** for deciding satisfiability; matching
turned out to be the same notion as unification. (Doug McIlroy of Bell
Labs built a prover based on these ideas in 1962.) Then, by 1965,
J. Alan Robinson showed how to combine unification and satisfiability
checking into a single rule called "resolution." This kicked off the
main line of AD work on resolution-based theorem proving.
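Davis's "matching" of Skolem terms and Robinson's unification both come down to computing a substitution that makes two terms identical. The following is a minimal sketch of first-order unification with an occurs check; the representation (variables as strings beginning with `?`, compound terms as tuples) and the function names are our own, not taken from any of the provers discussed here.

```python
# Minimal first-order unification sketch (illustrative only).
# Variables are strings starting with '?'; compound terms are tuples
# of the form ('f', arg1, ..., argn).

def is_var(t):
    return isinstance(t, str) and t.startswith('?')

def walk(t, subst):
    """Resolve a variable through the substitution."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(x, t, subst):
    """Occurs check: does variable x appear in term t under subst?"""
    t = walk(t, subst)
    if t == x:
        return True
    if isinstance(t, tuple):
        return any(occurs(x, a, subst) for a in t[1:])
    return False

def unify(t1, t2, subst=None):
    """Return a substitution making t1 and t2 identical, or None."""
    if subst is None:
        subst = {}
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if is_var(t1):
        return None if occurs(t1, t2, subst) else {**subst, t1: t2}
    if is_var(t2):
        return unify(t2, t1, subst)
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        for a, b in zip(t1[1:], t2[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None
```

For example, unifying `f(?x, a)` with `f(b, ?y)` yields the substitution binding `?x` to `b` and `?y` to `a`; resolution then applies such a substitution before combining two clauses into one.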

Siekmann and Wrightson collected the basic papers on AD from 1957 to 1970. The first volume starts with a survey paper by Martin Davis that covers the material discussed above.

The next phase of work beyond that reported in Siekmann and Wrightson must include the discovery of tactic-oriented proof by the Edinburgh LCF group (Milner, Gordon, Wadsworth, and others); the focused attack on proofs by induction characterized by Boyer and Moore; and the natural-deduction-based program verification systems produced at Cornell, Stanford, SRI, and Texas that combined decision procedures and proof checking.

This period also saw advances in solving open mathematical problems. Doug Howe solved the **Girard Paradox.** Citation. Chet Murthy solved **Stolzenberg's problem** of constructivizing the Nash/Williams proof of Higman's Lemma. Citation. William McCune used his EQP prover to solve the Robbins algebra conjecture. Prolog was born, and later Lambda-Prolog. Term rewriting became a major subarea. There are other major advances as well that have not been catalogued and collected. We will not try to do this collecting but instead talk about the work that influenced Nuprl.

Already by 1960 John McCarthy had seen the need for better programming languages for building theorem provers and carrying out other symbolic computing tasks. Lisp grew out of this work and has had an enormous impact on the field (Nuprl up to version 4 is grounded in Common Lisp). By 1960 McCarthy had coded a piece of Wang's prover. He claimed that it took him only two hours to code the I component and that it ran correctly on the fourth try.

A vague notion of tactic appeared in the early writings of McCarthy on theorem proving and was perhaps nurtured in the Stanford LCF project. Quote. However, the modern notion appeared full-blown in the Edinburgh LCF system circa 1979. Citation. Theorem proving in LCF would be done by writing programs in a metalanguage (ML) that used special primitives to manipulate the formulas, terms, and proofs of the object logic. The ML language supported a special style of searching for proofs, namely the top-down refinement style. It provided the special mechanism of tacticals for combining programs that decompose a goal into subgoals in such a way that if the subgoals are provable then the goal is provable.
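The refinement idea above can be sketched in a few lines: a tactic maps a goal to a list of subgoals whose proofs suffice to prove the goal, and tacticals such as THEN and ORELSE combine tactics into larger search strategies. The sketch below (our own names, in Python rather than ML) shows only the goal-decomposition half; LCF tactics also return a "validation" function that rebuilds a proof of the goal from proofs of the subgoals, which we omit.

```python
# A tactic maps a goal to a list of subgoals; an empty list means the
# goal is proved outright.  A tactic signals failure by raising an
# exception.  (Real LCF tactics also return a validation that rebuilds
# the proof; this sketch omits that half.)

class TacticFailure(Exception):
    pass

def then_(t1, t2):
    """THEN tactical: apply t1, then t2 to every resulting subgoal."""
    def tac(goal):
        return [g2 for g1 in t1(goal) for g2 in t2(g1)]
    return tac

def orelse(t1, t2):
    """ORELSE tactical: try t1; if it fails, fall back to t2."""
    def tac(goal):
        try:
            return t1(goal)
        except TacticFailure:
            return t2(goal)
    return tac

def repeat(t):
    """REPEAT tactical: apply t as long as it succeeds."""
    def tac(goal):
        try:
            subgoals = t(goal)
        except TacticFailure:
            return [goal]
        return [g2 for g1 in subgoals for g2 in tac(g1)]
    return tac

# A toy tactic: split a conjunctive goal "A & B" into subgoals A and B.
def split_and(goal):
    if '&' not in goal:
        raise TacticFailure(goal)
    left, right = goal.split('&', 1)
    return [left.strip(), right.strip()]
```

With these definitions, `repeat(split_and)` applied to the goal `"A & B & C"` decomposes it into the subgoals `A`, `B`, and `C`; the key property the tacticals preserve is that provability of the subgoals implies provability of the original goal.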

Boyer and Moore built a system, NqThm, for proving quantifier-free (Nq) theorems (Thm) about terminating pure Lisp functions. Their theory was a version of Church's Primitive Recursive Arithmetic (PRA) in which S-expressions replaced numbers as the primitive data type. The main proof rule is an induction principle on lists. They built a theorem prover that was especially good at automating these inductive proofs. The style of proof is to break up a problem into a sequence of lemmas that the system can automatically prove. Expert users such as Boyer, Moore, Kaufmann, and Shankar could decompose problems into bite-sized pieces for the prover.

The Boyer/Moore prover was a tightly integrated collection of methods for rewriting terms, deciding linear arithmetic and guessing induction formulas. Some of the work with Oyster was to "reconstruct" NqThm as a more open system.

Nuprl's organization of proofs is inspired by the insights from the Logic Theorist. Namely, as we report in the article Writing Programs that Construct Proofs, "Indeed, the concept of a goal or problem used in LT fits extremely well in this context. . . . The idea is that a tactic decomposes a goal G into a list of subgoals, G1, . . . , Gn."

Also, Nuprl includes the use of decision procedures as integral parts of the proof process, especially for deciding statements that humans would find "obvious" and also to cope with combinatorially difficult proof steps. This combination was foreseen by Minsky, but was first realized in program verifiers like PL/CV and the Stanford Pascal Verifier in the 1970s. The Nuprl prover evolved in stages, starting with the work of Constable, Knoblock, and Bates, which Howe substantially expanded. Then Basin developed a rewrite package; Jackson expanded and reorganized the tactics; and now Hickey is leading the complete modular rebuilding of the prover. Kreitz and his students continue to enhance the automatic capabilities.