A refinement tactic resembles a derived rule of inference. Such a tactic is invoked by typing its name where a refinement rule is requested by the proof editor. When the tactic is applied, it attempts to prove the current goal. There are three possible outcomes of a tactic invocation.

- If the tactic completely proves the current goal, then the name of the tactic is entered as the rule name and the status of the sequent is changed to *complete*. This refinement generates no subgoals.
- If the tactic produces part of a proof of the current goal, then the tactic is entered as the rule name and the status of the sequent is changed to *incomplete*. Each subgoal left unproved in the proof produced by the tactic is listed as a subgoal of the refinement.
- If the tactic fails, then an error message is displayed in the command window and the status of the sequent is changed to *bad*.
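The three outcomes above can be modeled as a variant type. The following OCaml sketch is illustrative only; the type and constructor names (`Complete`, `Incomplete`, `Bad`) and the `describe` function are assumptions for this example, not part of the actual system.

```ocaml
(* Hypothetical model of the three outcomes of a tactic invocation. *)
type sequent = string                    (* a goal, shown here as text *)

type outcome =
  | Complete                             (* goal fully proved: no subgoals *)
  | Incomplete of sequent list           (* partial proof: unproved subgoals *)
  | Bad of string                        (* failure: an error message *)

(* What a proof editor might record for a refinement step. *)
let describe (tactic_name : string) (result : outcome) : string =
  match result with
  | Complete ->
      tactic_name ^ ": complete, no subgoals"
  | Incomplete subgoals ->
      Printf.sprintf "%s: incomplete, %d subgoal(s)"
        tactic_name (List.length subgoals)
  | Bad msg ->
      tactic_name ^ ": bad (" ^ msg ^ ")"

let () =
  print_endline (describe "Auto" Complete);
  print_endline (describe "Auto" (Incomplete ["x : nat >> x = x"]))
```

Dispatching on such a result is how the editor decides whether to mark the sequent *complete*, list the remaining subgoals, or report an error.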

The behavior of a refinement tactic resembles that of a primitive rule of inference or decision procedure (such as arith). However, although refinement tactics are similar to rules of inference, they should not be thought of as fixed rules: within a tactic, for instance, the goal can be examined and different refinements applied depending on that analysis. In addition, by using the failure mechanism that is primitive to the ML language, a tactic may approach a goal with one technique, find that it leads to a dead end, backtrack, and try another approach. Thus, while the result of a tactic may represent only a few primitive refinement steps, arriving at those steps may have required a substantial amount of computation.
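The failure-and-backtrack behavior described above is conventionally packaged in tactic combinators built on the ML exception mechanism. The OCaml sketch below is a minimal illustration under assumed names: `Tactic_fail`, `orelse`, `trivial`, and `split` are invented for this example and do not reflect the system's actual combinators.

```ocaml
exception Tactic_fail of string

(* A tactic maps a goal to the list of remaining subgoals, raising
   Tactic_fail when its approach leads to a dead end. *)
type goal = string
type tactic = goal -> goal list

(* Try [t1]; if it fails, backtrack and try [t2] on the same goal. *)
let orelse (t1 : tactic) (t2 : tactic) : tactic =
  fun g -> try t1 g with Tactic_fail _ -> t2 g

(* A toy tactic that only handles a trivially true goal ... *)
let trivial : tactic =
  fun g -> if g = "T" then [] else raise (Tactic_fail "not trivial")

(* ... and one that splits any goal into two subgoals. *)
let split : tactic =
  fun g -> [g ^ ".left"; g ^ ".right"]

let () =
  let t = orelse trivial split in
  (* trivial succeeds outright, producing no subgoals *)
  assert (t "T" = []);
  (* trivial fails on "A&B", so orelse backtracks and applies split *)
  assert (t "A&B" = ["A&B.left"; "A&B.right"])
```

Because failure is an ordinary exception, the attempt with `trivial` leaves no trace when it is abandoned; only the refinement steps of the approach that finally succeeds appear in the proof.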

Thu Sep 14 08:45:18 EDT 1995