We think of proofs as a recursive data type whose components are as follows.

- A list of declarations. These represent the hypotheses of the top-level sequent of the proof. They are accessed with the function `hypotheses`, which maps proofs to a list of terms.
- A term. The goal of the top-level sequent. It is accessed with the function `conclusion`.
- A rule. The top-level refinement rule. This will be missing when the goal has not been refined. It is accessed with the function `refinement`. The function fails if there is no refinement.
- A list of proofs. The subgoals of the refinement, if any. They are accessed with the function `children`. The function fails if there is no refinement.
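This four-part structure and its accessors can be modeled with a small sketch. The sketch below is in Python, not Nuprl's ML, and the field names (`hyps`, `goal`, `rule`, `subgoals`) are illustrative choices, not Nuprl's representation.

```python
from dataclasses import dataclass, field
from typing import Optional

# Toy model of a proof node (illustrative; not Nuprl's ML proof type).
@dataclass
class Proof:
    hyps: list                      # declarations: hypotheses of the top-level sequent
    goal: str                       # term: the goal of the top-level sequent
    rule: Optional[str] = None      # refinement rule; absent while unrefined
    subgoals: list = field(default_factory=list)  # child proofs

def hypotheses(p):
    return p.hyps

def conclusion(p):
    return p.goal

def refinement(p):
    # Fails (raises) when the goal has not been refined.
    if p.rule is None:
        raise ValueError("proof has no refinement")
    return p.rule

def children(p):
    # Likewise fails when there is no refinement.
    if p.rule is None:
        raise ValueError("proof has no refinement")
    return p.subgoals
```

On an unrefined proof, `hypotheses` and `conclusion` succeed while `refinement` and `children` fail, mirroring the description above.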

For the ML type `proof` a complement of destructors is available that allows the conclusion, hypotheses (declarations), rule and children to be extracted from a proof. There is only one primitive function in ML that constructs new proof objects: `refine`. The function `refine` maps rules into tactics and forms the basis of all tactics. When supplied with an argument rule and a proof, `refine` performs, in effect, one refinement step upon the sequent of the proof using the given rule. The result is the typical tactic result structure: a list of subgoals paired with a validation. The list of subgoals is the list of children (logically sequents, but represented as degenerate proofs) resulting from the refinement of the sequent with the rule.

The function `refine` is the representation of the actual Nuprl logic in ML, for every primitive refinement step accomplished by a tactic will be performed by applying `refine`. The subgoals of the refinement are calculated from the proof and the rule by the refinement routine, `deduce_children`. Constructing the validation, an ML function, is more complicated. Given achievements of the subgoals, the purpose of the validation is to produce an achievement of the goal. The validation therefore constructs a new proof node whose sequent is the sequent of the original goal, whose refinement rule is the rule supplied as an argument to `refine`, and whose children are the achievements (partial proofs) of the subgoals.
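The behavior of `refine` and its validation can be sketched with a toy rule. The sketch is Python rather than Nuprl's ML, proofs are bare `(hypotheses, goal, rule, children)` tuples, and the single rule here, which splits a conjunction written `"A & B"`, is invented for illustration; Nuprl's real `refine` checks rules against the full logic.

```python
# Toy sketch of refine (illustrative; not Nuprl's implementation).
# A proof node is a (hypotheses, goal, rule, children) tuple.

def deduce_children(rule, proof):
    # Calculate the subgoals of the refinement from the proof and rule.
    hyps, goal, _, _ = proof
    if rule != "and_intro" or " & " not in goal:
        raise ValueError("rule does not apply to this sequent")
    left, right = goal.split(" & ", 1)
    # Subgoals are degenerate (unrefined) proofs of the two conjuncts.
    return [(hyps, left, None, []), (hyps, right, None, [])]

def refine(rule, proof):
    subgoals = deduce_children(rule, proof)
    hyps, goal, _, _ = proof
    def validation(achievements):
        # Build a new proof node: the original sequent, the rule used,
        # and the achievements of the subgoals as children.
        return (hyps, goal, rule, achievements)
    return subgoals, validation
```

Applying the validation to achievements of the two subgoals yields a proof of the original conjunction whose refinement rule is `"and_intro"`.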

The user will probably never construct or use a validation except to apply one to a list of achievements; in practice all the validations one uses will be constructed by `refine` and by standard tacticals. The important thing to know about validations is that they cannot construct incorrect proofs. This is enforced by the strong type structure of ML and by the interface between Nuprl and ML. See [Constable, Knoblock, & Bates 84] for an account of how the correctness of proofs is ensured.

The function `refine_using_prl_rule` is a composition of `refine` and another ML function, `parse_rule_in_context`. This function takes a token (the ML equivalent of a string) and a proof and produces a rule. It uses the same parser that is used when a rule is typed into the proof editor. This allows one to avoid using the explicit rule constructors, although efficiency is lost. Note that the proof argument is used to disambiguate the rules.
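The composition can be sketched as follows. This is Python, not Nuprl's ML, and both the stand-in parser (which merely trims whitespace instead of running Nuprl's rule parser) and the stub `refine` are invented for illustration.

```python
# Sketch of composing a rule parser with refine (illustrative only).
# Proofs are (hypotheses, goal, rule, children) tuples.

def refine(rule, proof):
    # Stub standing in for the basis tactic: no subgoals, and a
    # validation that records the rule on a new proof node.
    hyps, goal, _, _ = proof
    return [], (lambda achievements: (hyps, goal, rule, achievements))

def parse_rule_in_context(token, proof):
    # A real parser would use the proof argument to disambiguate the
    # rule named by the token; this stand-in ignores it and trims.
    return token.strip()

def refine_using_prl_rule(token, proof):
    # The composition: parse the token into a rule, then refine.
    return refine(parse_rule_in_context(token, proof), proof)
```

The convenience is that the caller supplies a rule as text, at the cost of a parse on every application.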

In addition to `refine` there is one other basis tactic, `IDTAC`, the identity tactic. The result of applying `IDTAC` to a proof is one subgoal, the original proof, and a validation that, when applied to an achievement of the proof, returns the achievement. The function `IDTAC` is useful since it may serve as a *no-op* tactic.
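Under the same illustrative conventions as above, the identity tactic is a few lines; this sketch is Python, not Nuprl's ML, and treats a proof as an arbitrary value since `IDTAC` never inspects it.

```python
# Sketch of the identity tactic (illustrative, not Nuprl's ML).
def IDTAC(proof):
    # One subgoal, the original proof itself, and a validation that
    # hands the single achievement straight back.
    def validation(achievements):
        [achievement] = achievements
        return achievement
    return [proof], validation
```

Because the subgoal list is the singleton original proof and the validation is the identity on its achievement, `IDTAC` leaves any proof exactly as it found it.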
