
# Tactics in ML

Conceptually it is convenient to think of tactics as mappings from proofs to proofs. The actual implementation of tactics, however, is not quite that simple. The actual type of tactics is

```
tactic = proof -> proof list # validation
```

where proof is the type of (possibly incomplete) proofs and

```
validation = proof list -> proof
```

The idea is that a tactic decomposes a goal proof G into a finite list of subgoals, G1, ..., Gn, and a validation, v. The subgoals are degenerate proofs, i.e., proofs which have only the top-level sequent filled in and have no refinement rule or subgoals. We say that a proof P achieves the subgoal G' if the top-level sequent of P is the same as the top-level sequent of G'. The validation, v, takes a list P1, ..., Pn, where each Pi achieves Gi, and produces a proof P that achieves G. (Note that wherever we speak of proofs in the context of ML, we mean possibly incomplete proofs.)
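These types can be illustrated with a small Python sketch. The `Proof` class, the `and_intro` tactic, and the string-based sequents are all hypothetical, invented for this example; they model the shape of the ML types above, not Nuprl's actual implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional, Tuple

@dataclass
class Proof:
    """A possibly incomplete proof: a sequent plus an optional refinement."""
    sequent: str
    rule: Optional[str] = None
    children: List["Proof"] = field(default_factory=list)

    def complete(self) -> bool:
        # a proof is complete when every node has a refinement rule
        return self.rule is not None and all(c.complete() for c in self.children)

# tactic = proof -> proof list # validation
Validation = Callable[[List[Proof]], Proof]
Tactic = Callable[[Proof], Tuple[List[Proof], Validation]]

def and_intro(goal: Proof) -> Tuple[List[Proof], Validation]:
    """Hypothetical tactic: decompose a goal 'A & B' into subgoals A and B."""
    left, right = goal.sequent.split(" & ", 1)
    subgoals = [Proof(left), Proof(right)]        # degenerate proofs
    def validate(achievements: List[Proof]) -> Proof:
        # graft achievements of the subgoals under the original goal
        return Proof(goal.sequent, "and-intro", list(achievements))
    return subgoals, validate
```

Note that if `validate` is given complete proofs of the subgoals, the proof it returns is itself complete, matching the discussion below.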

Thus a tactic completes some portion of a proof and produces the subgoals that it was unable to prove, together with a validation that will take any achievements of those subgoals and produce an achievement of the original goal. If the validation is applied to achievements that are complete proofs, then the result of the validation is a complete proof of the goal G. In particular, if the tactic produces an empty list of subgoals, then the validation applied to the empty list produces a complete proof of the goal.

The advantage of this system becomes apparent when one composes tactics. Since a tactic produces an explicit list of subgoals, a different (or the same) tactic can easily be applied to each of the subgoals; the tactics thus decompose goals in a top-down manner. This makes it easy to write specialized and modular tactics and to write special tacticals for composing tactics. This system has the disadvantage, however, that it does not easily accommodate a bottom-up style of tactic.
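The top-down composition described above can be sketched as a THEN-style tactical. Everything here is hypothetical illustration: goals are bare strings, finished proofs are nested `(rule, goal, children)` tuples, and `then_`, `split_and`, and `hyp` are invented names, not Nuprl's tacticals.

```python
def then_(t1, t2):
    """Tactical: apply t1, then apply t2 to every subgoal t1 produces."""
    def tac(goal):
        subs1, v1 = t1(goal)
        results = [t2(s) for s in subs1]          # run t2 on each subgoal
        flat = [g for subs, _ in results for g in subs]
        def validate(achievements):
            proofs, i = [], 0
            for subs, v2 in results:
                # hand each inner validation its share of the achievements
                proofs.append(v2(achievements[i:i + len(subs)]))
                i += len(subs)
            return v1(proofs)                     # then rebuild the outer step
        return flat, validate
    return tac

# toy tactics over string goals, for illustration only
def split_and(goal):
    if " & " in goal:
        left, right = goal.split(" & ", 1)
        return [left, right], lambda ps: ("and-intro", goal, list(ps))
    return [goal], lambda ps: ps[0]               # identity on other goals

def hyp(goal):
    # closes any goal "by hypothesis"; produces no subgoals
    return [], lambda ps: ("hyp", goal, [])

prove = then_(split_and, hyp)
```

Running `prove` on `"A & B"` leaves no subgoals, so applying the validation to the empty list yields a complete proof tree, as the preceding paragraphs describe.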

The Nuprl system recognizes two kinds of tactics: refinement tactics and transformation tactics. A refinement tactic operates like a derived rule of inference. Applying a refinement tactic results in the name of the tactic appearing as the refinement rule and any unproved subgoals produced by the tactic appearing as subgoals to the refinement. The following are the actual steps that occur when a refinement tactic is invoked.

1. The ML variable prlgoal is associated with the current sequent, which is viewed as a degenerate proof. Note that there may be a refinement rule and subgoals below the sequent but that these are ignored as far as refinement tactics are concerned.

2. The given tactic is applied to prlgoal, producing a (possibly empty) list of unproved subgoals and a validation.

3. The validation is applied to the subgoals.

4. The tactic name is installed as the name of the refinement rule in the proof. The refinement tree that was produced by the validation in the previous step is stored in the proof, and any remaining unproved subgoals become subgoals of the refinement step.

The four steps above assume that the tactic terminates without producing an error or throwing a failure that propagates to the top level. If such an event does occur then the error or failure message is reported to the user and the refinement is marked as bad; this behavior exactly mimics the failure of a primitive refinement rule.
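The four numbered steps above can be sketched in Python. The `Node` class, the `split_and` toy tactic, and `refine_with_tactic` are hypothetical names invented for this sketch; they model the bookkeeping described in the steps, not Nuprl's actual code.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    sequent: str
    rule: Optional[str] = None
    children: List["Node"] = field(default_factory=list)
    hidden_tree: Optional["Node"] = None   # refinement tree built by the validation

def split_and(goal: Node):
    """Toy tactic: split a goal 'A & B' into subgoals A and B."""
    left, right = goal.sequent.split(" & ", 1)
    subgoals = [Node(left), Node(right)]
    return subgoals, lambda ps: Node(goal.sequent, "and-intro", list(ps))

def refine_with_tactic(node: Node, tactic, name: str) -> Node:
    # 1. view the current sequent as a degenerate proof (the ML prlgoal)
    prlgoal = Node(node.sequent)
    # 2. apply the tactic, obtaining subgoals and a validation
    subgoals, validation = tactic(prlgoal)
    # 3. apply the validation to the (still degenerate) subgoals
    node.hidden_tree = validation(subgoals)
    # 4. install the tactic name as the refinement rule; the remaining
    #    unproved subgoals become subgoals of this refinement step
    node.rule = name
    node.children = subgoals
    return node
```

The tree stored in `hidden_tree` corresponds to the proof the user never sees; only the tactic name and the unproved subgoals are displayed.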

Transformation tactics have the effect of mapping proofs to proofs. The result of a transformation tactic is spliced into the proof where the tactic was invoked and takes the place of any previous proof. The following are the steps that occur when a transformation tactic is invoked.

1. The ML variable prlgoal is associated with the proof below, and including, the current sequent.

2. The specified transformation tactic is applied to prlgoal, producing a (possibly empty) list of subgoals and a validation.

3. The validation is applied to the list of subgoals.

4. The proof that is the result of the previous step is grafted into the original proof below the sequent.
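The transformation steps can be sketched the same way. Again, `Node`, the `restart` toy tactic, and `transform_with_tactic` are hypothetical names for illustration; the essential difference from the refinement sketch is that the tactic receives the entire existing subtree, not a degenerate copy of the sequent.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    sequent: str
    rule: Optional[str] = None
    children: List["Node"] = field(default_factory=list)

def restart(proof: Node):
    """Toy transformation tactic: discard the existing subproof and
    start over from the bare goal (it could instead inspect proof.rule
    and proof.children and act on what it finds)."""
    return [Node(proof.sequent)], lambda ps: ps[0]

def transform_with_tactic(node: Node, tactic) -> Node:
    # 1. prlgoal is the proof below, and including, the current sequent,
    #    so the tactic may examine node.rule and node.children
    prlgoal = node
    # 2. apply the tactic, obtaining subgoals and a validation
    subgoals, validation = tactic(prlgoal)
    # 3. apply the validation to the list of subgoals
    result = validation(subgoals)
    # 4. graft the result into the proof in place of the old subtree
    node.rule, node.children = result.rule, result.children
    return node
```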

The key difference between refinement and transformation tactics is that transformation tactics are allowed to examine the subproof that is below the current node whereas refinement tactics are not. The result of a transformation tactic will, in general, depend upon the result of the examination. Since most tactics do not depend on the subproof below the designated node, they may be used as either transformation tactics or refinement tactics; in fact, since a refinement tactic cannot examine the subproof, any refinement tactic may be used as a transformation tactic. The main implementation difference between refinement tactics and transformation tactics is how the result of the tactic is used. In the former the actual proof constructed by the tactic is hidden from the user, and only the remaining unproved subgoals are displayed. In the latter the result is explicitly placed in the proof.


Richard Eaton
Thu Sep 14 08:45:18 EDT 1995