The reason why I said that the word verification may be dangerous is that the principle of verification formulated by the logical positivists in the thirties said that a proposition is meaningful if and only if it is verifiable, or that the meaning of a proposition is its method of verification. Now that is to confuse meaningfulness and truth. I have indeed used the word verifiable and the expression method of verification. But what is equated with verifiability is not the meaningfulness but the truth of a proposition, and what qualifies as a method of verification is a proof that a proposition is true. Thus the meaning of a proposition is not its method of verification. Rather, the meaning of a proposition is determined by what it is to verify it, or what counts as a verification of it.
The next point that I want to bring up is the question: are there propositions which are true, but which cannot be proved to be true? And it suffices to think of mathematical propositions here, like the Goldbach conjecture, the Riemann hypothesis, or Fermat's last theorem. This fundamental question was once posed to me outright by a colleague of mine in the mathematics department, which shows that even working mathematicians may find themselves puzzled by deep philosophical questions. At first sight, at least, there seem to be two possible answers to this question. One is simply, No: there are no unprovable truths, because a proposition is true if and only if it can be proved to be true.
To see this, assume first of all that A is a proposition, or problem. Then the rule

A is true
--------------------------
A can be proved to be true

as well as its inverse

A can be proved to be true
--------------------------
A is true
are both valid. This is the sense of saying that A is true if and only if A can be proved to be true. To justify the first rule, assume that you know its premise, that is, that you have proved that A is true. But, if you have proved that A is true, then you can, or know how to, prove that A is true, which is what you need to know in order to have the right to judge the conclusion. In this step, I have relied on the principle that, if something has been done, then it can be done. To justify the second rule, assume that you know its premise, that is, that you know how to prove the judgement A is true. On that assumption, I have to explain the conclusion to you, which is to say that I have to explain how to verify the proposition A. This is how you do it. First, put your knowledge of the premise into practice. That yields as result a proof that A is true. Now, such a proof is nothing but knowledge how to verify, or a method of verifying, the proposition A. Hence, putting it, in turn, into practice, you end up with a verification of the proposition A, as required. Observe that the inference in this direction is essentially a contraction of two possibilities into one: if you know how to know how to do something, then you know how to do it.
All this is very easy to say, but, if one is at all puzzled by the question whether there are unprovable truths, then it is not an easy thing to make up one's mind about. For instance, it seems, from Heyting's writings on the semantics of intuitionist logic in the early thirties, that he had not arrived at this position at that time. The most forceful and persistent criticism of the idea of a knowledge independent, or knowledge transcendent, notion of truth has been delivered by Dummett, although it seems difficult to find him ever explicitly committing himself in his writings to the view that, if a proposition is true, then it can also be proved to be true. Prawitz seems to be leaning towards this nonrealistic principle of truth, as he calls it, in his paper ``Intuitionistic Logic: A Philosophical Challenge.'' And, in his book Det Osägbara, printed in the same year, Stenlund explicitly rejects the idea of true propositions that are in principle unknowable. The Swedish proof theorists seem to be arriving at a common philosophical position.
Next I have to say something about hypothetical judgements, before I proceed to the final piece, which consists of the explanations of the meanings of the logical constants and the justifications of the logical laws. So far, I have only introduced the two forms of categorical judgement A is a proposition and A is true. The only forms of judgement that I need to introduce, besides these, are forms of hypothetical judgement. Hypothetical means of course the same as under assumptions. The Greek ὑπόθεσις, hypothesis, was translated into Latin as suppositio, supposition, and they both mean the same as assumption. Now, what is the rule for making assumptions, quite generally? It is simple. Whenever you have a judgement in the sense that I am using the word, that is, a judgement in the sense of an instance of a form of judgement, then it has been laid down what you must know in order to have the right to make it. And that means that it makes perfectly good sense to assume it, which is the same as to assume that you know it, which, in turn, is the same as to assume that you have proved it. Why is it the same to assume it as to assume that you know it? Because of the constant tacit convention that the epistemic force, I know..., is there, even if it is not made explicit. Thus, when you assume something, what you do is that you assume that you know it, that is, that you have proved it. And, to repeat, the rule for making assumptions is simply this: whenever you have a judgement, in the sense of an instance of a form of judgement, you may assume it. That gives rise to the notion of hypothetical judgement and the notion of hypothetical proof, or proof under hypotheses.
The forms of hypothetical judgement that I shall need are not so many. Many more can be introduced, and they are needed for other purposes. But what is absolutely necessary for me is to have access to the form
A_{1} true, ..., A_{n} true | A prop,
which says that A is a proposition under the assumptions that A_{1},..., A_{n} are all true, and, on the other hand, the form
A_{1} true, ..., A_{n} true | A true,
which says that the proposition A is true under the assumptions that A_{1}, ..., A_{n} are all true. Here I am using the vertical bar for the relation of logical consequence, that is, for what Gentzen expressed by means of the arrow in his sequence calculus, and for which the double arrow is also a common notation. It is the relation of logical consequence, which must be carefully distinguished from implication. Either we call what stands to the left of the consequence sign the hypotheses, in which case what follows the consequence sign is called the thesis, or we call the judgements that precede the consequence sign the antecedents and the judgement that follows after the consequence sign the consequent. This is the terminology which Gentzen took over from the scholastics, except that, for some reason, he changed ``consequent'' into ``succedent'' and ``consequence'' into ``sequence,'' German Sequenz, usually improperly rendered by sequent in English.
Consider first a judgement of the first form,

A_{1} true, ..., A_{n} true | A prop.
Since I am making the assumptions A_{1} true, ..., A_{n} true, I must be presupposing something here, because, surely, I cannot make those assumptions unless they are judgements. Specifically, in order for A_{1} true to be a judgement, A_{1} must be a proposition, and in order for A_{2} true to be a judgement, A_{2} must be a proposition, but now merely under the assumption that A_{1} is true, ..., and in order for A_{n} true to be a judgement, A_{n} must be a proposition under the assumptions that
A_{1}, ..., A_{n-1} are all true. Unlike in Gentzen's sequence calculus, the order of the assumptions is important here. This is because of the generalization that something being a proposition may depend on other things being true. Thus, for the assumptions to make sense, we must presuppose

A_{1} prop,
A_{1} true | A_{2} prop,
...
A_{1} true, ..., A_{n-1} true | A_{n} prop.
Supposing this, that is, supposing that we know this, it makes perfectly good sense to assume, first, that A_{1} is true, second, that A_{2} is true, ..., finally, that A_{n} is true, and hence

A_{1} true, ..., A_{n} true | A prop
is a perfectly good judgement whatever expression A is, that is, whatever expression you insert into the place indicated by the variable A. And why is it a good judgement? To answer that question, I must explain to you what it is to know such a judgement, that is, what constitutes knowledge, or proof, of such a judgement. Now, quite generally, a proof of a hypothetical judgement, or logical consequence, is nothing but a hypothetical proof of the thesis, or consequent, from the hypotheses, or antecedents. The notion of hypothetical proof, in turn, which is a primitive notion, is explained by saying that it is a proof which, when supplemented by proofs of the hypotheses, or antecedents, becomes a proof of the thesis, or consequent. Thus the notion of categorical proof precedes the notion of hypothetical proof, or inference, in the order of conceptual priority. Specializing this general explanation of what a proof of a hypothetical judgement is to the particular form of hypothetical judgement
A_{1} true, ..., A_{n} true | A prop
that we are in the process of considering, we see that the defining property of a proof of such a judgement is that, when it is supplemented by proofs of the hypotheses, or antecedents, A_{1} true, ..., A_{n} true, it becomes a proof of the thesis, or consequent, A prop.
Consider now a judgement of the second form
A_{1} true, ..., A_{n} true | A true.
For it to make good sense, that is, to be a judgement, we must know, not only

A_{1} prop,
A_{1} true | A_{2} prop,
...
A_{1} true, ..., A_{n-1} true | A_{n} prop,

but also

A_{1} true, ..., A_{n} true | A prop.
Otherwise, it does not make sense to ask oneself whether A is true under the assumptions A_{1} true,..., A_{n} true. As with any proof of a hypothetical judgement, the defining characteristic of a proof
of a hypothetical judgement of the second form is that, when it is supplemented by proofs

A_{1} true, ..., A_{n} true

of the antecedents, it becomes a categorical proof

A true

of the consequent.
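The definition just rehearsed, that a hypothetical proof is something which becomes a categorical proof of the consequent when supplemented by proofs of the antecedents, admits a rough computational gloss along propositions-as-types lines. The sketch below is mine, not the lecture's, and the names in it are purely illustrative:

```python
# Illustrative sketch (not the lecture's own): read proofs as data, and a
# hypothetical proof of "A1 true, A2 true | C true" as a function which,
# when supplemented by proofs of the antecedents, becomes a categorical
# proof of the consequent.

def hypothetical_proof(proof_a1, proof_a2):
    # Supplementing the hypothetical proof with proofs of its antecedents
    # yields a categorical proof of the consequent C.
    return ("proof of C true, built from", proof_a1, proof_a2)

categorical = hypothetical_proof("proof of A1 true", "proof of A2 true")
```

On this reading, the conceptual priority the lecture insists on is visible in the types: the notion of categorical proof (the function's inputs and output) has to be in place before the function itself makes sense.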
I am sorry that I have had to be so brief in my treatment of hypothetical judgements, but what I have said is sufficient for the following, except that I need to generalize the two forms of hypothetical judgement so as to allow generality in them. Thus I need judgements which are, not only hypothetical, but also general, which means that the first form is turned into

A_{1} true, ..., A_{n} true |_{x_{1},..., x_{m}} A prop,

and the second form into

A_{1} true, ..., A_{n} true |_{x_{1},..., x_{m}} A true.
Both of these forms involve a generality, indicated by subscribing the variables that are being generalized to the consequence sign, which must be carefully distinguished from, and which must be explained prior to, the generality which is expressed by means of the universal quantifier. It was only to avoid introducing all complications at once that I treated the case without generality first. Now, the meaning of a hypothetico-general judgement is explained by saying that, to have the right to make such a judgement, you must possess a free variable proof of the thesis, or consequent, from the hypotheses, or antecedents. And what is a free variable proof? It is a proof which remains a proof when you substitute anything you want for its free variables, that is, any expressions you want, of the same arities as those variables. Thus
a proof of a hypothetico-general judgement of the first form is a proof which becomes a categorical proof of the thesis, or consequent, when you first substitute arbitrary expressions a_{1}, ..., a_{m}, of the same respective arities as the variables x_{1}, ..., x_{m}, for those variables, and then supplement it with proofs of the resulting substitution instances of the antecedents. The explanation of what constitutes a proof of a hypothetico-general judgement of the second form is entirely similar.
The difference between an inference and a logical consequence, or hypothetical judgement, is that an inference is a proof of a logical consequence. Thus an inference is the same as a hypothetical proof. Now, when we infer, or prove, we infer the conclusion from the premises. Thus, just as a categorical proof is said to be a proof of its conclusion, a hypothetical proof is said to be a proof, or an inference, of its conclusion from its premises. This makes it clear what is the connection as well as what is the difference between an inference with its premises and conclusion on one hand, and a logical consequence with its antecedents and consequent on the other hand. And the difference is precisely that it is the presence of a proof of a logical consequence that turns its antecedents into premises and consequent into conclusion of the proof in question. For example, if A is a proposition, then
A true | ⊥ true

is a perfectly good logical consequence with A true as antecedent and ⊥ true as consequent, but

A true
------
⊥ true

is not an inference, not a valid inference, that is, unless A is false. In that case only may the conclusion ⊥ true be inferred from the premise A true.
Let us now pass on to the rules of inference, or proof rules, and their semantical explanations. I shall begin with the rules of implication. Now, since I am treating A is a proposition as a form of judgement, which is on a par with the form of judgement A is true, what we ordinarily call formation rules will count as rules of inference, but that is merely a terminological matter. So let us look at the formation rule for implication.
⊃ formation.

          (A true)
A prop     B prop
-----------------
   A ⊃ B prop
This rule says, in words, that if A is a proposition and B is a proposition provided that A is true, then A ⊃ B is a proposition. In the second premise, I might just as well have used the notation for logical consequence
A true | B prop
that I introduced earlier in this lecture, because to have a proof of this logical consequence is precisely the same as to have a hypothetical proof of B prop from the assumption A true. But, for the moment, I shall use the more suggestive notation

(A true)
B prop

in imitation of Gentzen. It does not matter, of course, which of the two notations I employ. The meaning is in any case the same.
Explanation. The rule of implication formation is a rule of immediate inference, which means that you must make the conclusion evident to yourself immediately, without any intervening steps, on the assumption that you know the premises. So assume that you do know the premises, that is, that you know what proposition A is and that you know that B is a proposition under the assumption that A is true. My obligation is to explain to you what proposition A ⊃ B is. Thus I have to explain to you what counts as a verification, or solution, of this proposition, or problem. And the explanation is that what counts as a verification of A ⊃ B is a hypothetical proof
that B is true under the assumption that A is true. In the Kolmogorov interpretation, such a hypothetical proof appears as a method of solving the problem B provided that the problem A can be solved, that is, a method which together with a method of solving the problem A becomes a method of solving the problem B. The explanation of the meaning of implication, which has just been given, illustrates again the rigidity of the order of conceptual priority: the notions of hypothetical judgement, or logical consequence, and hypothetical proof have to be explained before the notion of implication, because, when you explain implication, they are already presupposed.
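On the Kolmogorov reading just given, a verification of an implication is a method which, together with a method of solving the first problem, becomes a method of solving the second. A minimal sketch, with toy problems chosen purely for illustration (the problems and names are mine, not the lecture's):

```python
# Toy problems (illustrative): A is "find a root of x**2 - 4", and
# B is "find a root of (x - 1)**2 - 4". A verification of A ⊃ B is then a
# method turning any solution of A into a solution of B.

def verification_of_A_implies_B(solution_of_A):
    return solution_of_A + 1  # shifting a root of A's equation solves B

solution_of_A = 2                     # indeed 2**2 - 4 == 0
solution_of_B = verification_of_A_implies_B(solution_of_A)
assert (solution_of_B - 1) ** 2 - 4 == 0   # B is solved
```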
Given the preceding explanation of the meaning of implication, it is not difficult to justify the rule of implication introduction.
⊃ introduction.

(A true)
B true
----------
A ⊃ B true
As you see, I am writing it in the standard way, although, of course, it is still presupposed that A is a proposition and that B is a proposition under the assumption that A is true. Thus you must know the premises of the formation rule and the premise of the introduction rule in order to be able to grasp its conclusion.
Explanation. Again, the rule of implication introduction is a rule of immediate inference, which means that you must make the conclusion immediately evident to yourself granted that you know the premises, that is, granted that you possess a hypothetical proof that B is true from the hypothesis that A is true. By the definition of implication, such a proof is nothing but a verification of the proposition A ⊃ B. And what is it that you must know in order to have the right to judge A ⊃ B to be true? You must know how to get yourself a verification of A ⊃ B. But, since you already possess it, you certainly know how to acquire it: just take what you already have. This is all that there is to be seen in this particular rule. Observe that its justification rests again on the principle that, if something has been done, then it can be done.
Next we come to the elimination rule for implication, which I shall formulate in the standard way, as modus ponens, although, if you want all elimination rules to follow the same pattern, that is, the pattern exhibited by the rules of falsehood, disjunction, and existence elimination, there is another formulation that you should consider, and which has been considered by Schroeder-Heister. But I shall have to content myself with the standard formulation in these lectures.
⊃ elimination.

A ⊃ B true    A true
--------------------
       B true
Here it is still assumed, of course, that A is a proposition and that B is a proposition provided that A is true.
Explanation. This is a rule of immediate inference, so assume that you know the premises, that is, that you possess proofs

A ⊃ B true    A true

of them, and I shall try to make the conclusion evident to you. Now, by the definitions of the notion of proof and the notion of truth, the proof of the first premise is knowledge how to verify the proposition A ⊃ B. So put that knowledge of yours into practice. What you then end up with is a verification of A ⊃ B, and, because of the way the implication was defined, that verification is nothing but a hypothetical proof

(A true)
B true

that B is true from the assumption that A is true. Now take your proof of the right premise and adjoin it to the verification of A ⊃ B. Then you get a categorical proof

B true
of the conclusion that B is true. Here, of course, I am implicitly using the principle that, if you supplement a hypothetical proof with proofs of its hypotheses, then you get a proof of its conclusion. But this is in the nature of a hypothetical proof: it is that property which makes a hypothetical proof into what it is. So now you have a proof that B is true, a proof which is knowledge how to verify B. Putting it, in turn, into practice, you end up with a verification of B. This finishes my explanation of how the proposition B is verified.
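The explanation just given, putting the proof of the first premise into practice and adjoining the proof of the second, can be glossed computationally. The sketch is mine, not the lecture's: the proof of the implication behaves as a function, and modus ponens applies it.

```python
# Illustrative: the proof of "A ⊃ B true" is, once put into practice, a
# hypothetical proof, modelled here as a function; modus ponens supplements
# it with a proof of "A true".

def modus_ponens(proof_of_implication, proof_of_A):
    # Adjoining the proof of the right premise to the hypothetical proof
    # yields a categorical proof that B is true.
    return proof_of_implication(proof_of_A)

double = lambda n: 2 * n  # a toy hypothetical proof of B true from A true
proof_of_B = modus_ponens(double, 21)
assert proof_of_B == 42
```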
In the course of my semantical explanation of the elimination rule for implication, I have performed certain transformations which are very much like an implication reduction in the sense of Prawitz. Indeed, I have explained the semantical role of this syntactical transformation. The place where it belongs in the meaning theory is precisely in the semantical explanation, or justification, of the elimination rule for implication. Similarly, the reduction rules for the other logical constants serve to explain the elimination rules associated with those constants. The key to seeing the relationship between the reduction rules and the semantical explanations of the elimination rules is this: to verify a proposition by putting a proof of yours that it is true into practice corresponds to reducing a natural deduction to introductory form and deleting the last inference. This takes for granted, as is in fact the case, that an introduction is an inference in which you conclude, from the possession of a verification of a proposition, that you know how to verify it. In particular, verifying a proposition B by means of a proof that B is true
which ends with an application of modus ponens, corresponds to reducing the proof of the left premise to introductory form
(A true)
B true
----------
A ⊃ B true    A true
--------------------
       B true
then performing an implication reduction in the sense of Prawitz, which yields the proof

A true
⋮
B true

as result, and finally reducing the latter proof to introductory form and deleting its last, introductory inference. This is the syntactical counterpart of the semantical explanation of the elimination rule for implication.
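Under the same programs-as-proofs gloss (mine, not the lecture's), an introduction immediately followed by modus ponens is a redex, and Prawitz's implication reduction is ordinary beta-reduction:

```python
# An implication introduction immediately followed by an elimination
# corresponds to applying a lambda to an argument; the Prawitz reduction
# substitutes the argument for the assumption, i.e. beta-reduces.
unreduced = (lambda a: a + 1)(41)  # ⊃I directly followed by ⊃E: a redex
reduced = 41 + 1                   # the argument substituted into the body
assert unreduced == reduced
```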
The justifications of the remaining logical laws follow the same pattern. Let me take the rules of conjunction next.
& formation.

A prop    B prop
----------------
   A & B prop

Explanation. Assume that you know the premises, that is, that you know what it is to verify A as well as what it is to verify B. Then the proposition A & B is explained by stipulating that a verification of A & B consists of a proof that A is true and a proof that B is true.

& introduction.

A true    B true
----------------
   A & B true

Here the premises of the formation rule are still in force, although not made explicit, which is to say that A and B are still assumed to be propositions.
Explanation. Assume that you know the premises, that is, that you possess proofs

A true    B true

of them. Because of the meaning of conjunction, just explained, this means that you have verified A & B. Then you certainly can, or know how to, verify the proposition A & B, by the principle that, if something has been done, then it can be done. And this is precisely what you need to know in order to have the right to judge A & B to be true.
If you want the elimination rule for conjunction to exhibit the same pattern as the elimination rules for falsehood, disjunction, and existence, it should be formulated differently, but, in its standard formulation, it reads as follows
& elimination.

A & B true    A & B true
----------    ----------
  A true        B true
Thus, in this formulation, there are two rules and not only one. Also, it is still presupposed, of course, that A and B are propositions.
Explanation. It suffices for me to explain one of the rules, say the first, because the explanation of the other is completely analogous. To this end, assume that you know the premise, and I shall explain to you the conclusion, which is to say that I shall explain how to verify A. This is how you do it. First use your knowledge of the premise to get a verification of A & B. By the meaning of conjunction, just explained, that verification consists of a proof that A is true as well as a proof that B is true.
Now select the first of these two proofs. By the definitions of the notions of proof and truth, that proof is knowledge how to verify A. So, putting it into practice, you end up with a verification of A. This finishes the explanations of the rules of conjunction.
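The meaning explanation for conjunction, a verification consisting of a proof of each conjunct, reads computationally as pairing, with the two elimination rules as the projections. This is an illustrative sketch of that reading, not the lecture's own apparatus:

```python
# Illustrative: a verification of A & B is a pair of proofs; the two
# elimination rules select the left and the right component respectively.

def and_intro(proof_of_A, proof_of_B):
    return (proof_of_A, proof_of_B)   # a verification of A & B

def and_elim_left(proof_of_A_and_B):
    return proof_of_A_and_B[0]        # the proof that A is true

def and_elim_right(proof_of_A_and_B):
    return proof_of_A_and_B[1]        # the proof that B is true

p = and_intro("proof of A true", "proof of B true")
assert and_elim_left(p) == "proof of A true"
assert and_elim_right(p) == "proof of B true"
```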
The next logical operation to be treated is disjunction. And, as always, the formation rule must be explained first.
∨ formation.

A prop    B prop
----------------
   A ∨ B prop
Explanation. To justify it, assume that you know the premises, that is, that you know what it is to verify A as well as what it is to verify B. On that assumption, I explain to you what proposition A ∨ B is by saying that a verification of A ∨ B is either a proof that A is true or a proof that B is true.
Thus, in the wording of the Kolmogorov interpretation, a solution to the problem A ∨ B is either a method of solving the problem A or a method of solving the problem B.
∨ introduction.

  A true        B true
----------    ----------
A ∨ B true    A ∨ B true
In both of these rules, the premises of the formation rule, which say that A and B are propositions, are still in force.
Explanation. Assume that you know the premise of the first rule of disjunction introduction, that is, that you have proved, or possess a proof of, the judgement that A is true. By the definition of disjunction, this proof is a verification of the proposition A ∨ B. Hence, by the principle that, if something has been done, then it can be done, you certainly can, or know how to, verify the proposition A ∨ B. And it is this knowledge which you express by judging the conclusion of the rule, that is, by judging the proposition A ∨ B to be true. The explanation of the second rule of disjunction introduction is entirely similar.
∨ elimination.

              (A true)    (B true)
A ∨ B true     C true      C true
----------------------------------
              C true
Here it is presupposed, not only that A and B are propositions, but also that C is a proposition provided that A ∨ B is true. Observe that, in this formulation of the rule of disjunction elimination, C is presupposed to be a proposition, not outright, but merely on the hypothesis that A ∨ B is true. Otherwise it is just like the Gentzen rule.
Explanation. Assume that you know, or have proved, the premises. By the definition of truth, your knowledge of the first premise is knowledge how to verify the proposition A ∨ B. Put that knowledge of yours into practice. By the definition of disjunction, you then end up either with a proof that A is true or with a proof that B is true.
In the first case, join the proof that A is true to the proof that you already possess of the second premise, which is a hypothetical proof that C is true under the hypothesis that A is true. You then get a categorical, or nonhypothetical, proof that C is true. Again, by the definition of truth, this proof is knowledge how to verify the proposition C. So, putting this knowledge of yours into practice, you verify C. In the second case, join the proof that B is true, which you ended up with as a result of putting your knowledge of the first premise into practice, to the proof that you already possess of the third premise, which is a hypothetical proof that C is true under the hypothesis that B is true. You then get a categorical proof that C is true.
As in the first case, by the definition of truth, this proof is knowledge how to verify the proposition C. So, putting this knowledge into practice, you verify C. This finishes my explanation how to verify the proposition C, which is precisely what you need to know in order to have the right to infer the conclusion that C is true.
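The elimination explanation, put the proof of the first premise into practice and then join the result with the matching hypothetical proof, is case analysis on a tagged value. Again, the gloss and the names are mine, offered only as an illustration:

```python
# Illustrative: a verification of A ∨ B is a proof of one of the disjuncts,
# tagged so that elimination can tell which case obtains.

def or_intro_left(proof_of_A):
    return ("left", proof_of_A)

def or_intro_right(proof_of_B):
    return ("right", proof_of_B)

def or_elim(proof_of_A_or_B, c_from_A, c_from_B):
    # c_from_A and c_from_B are hypothetical proofs of C true from
    # A true and from B true respectively; join the relevant one.
    tag, proof = proof_of_A_or_B
    return c_from_A(proof) if tag == "left" else c_from_B(proof)

proof_of_C = or_elim(or_intro_left(3), lambda a: a * 2, lambda b: b + 100)
assert proof_of_C == 6
```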
⊥ formation.

⊥ prop

Explanation. This is an axiom, but not in its capacity of mere figure: to become an axiom, it has to be made evident. And, to make it evident, I have to explain what counts as a verification of ⊥. The explanation is that there is nothing that counts as a verification of the proposition ⊥. Under no condition is it true. Thinking of ⊥ as a problem, as in the Kolmogorov interpretation, it is the problem which is defined to have no solution.
An introduction is an inference in which you conclude that a proposition is true, or can be verified, on the ground that you have verified it, that is, that you possess a verification of it. Therefore, being defined by the stipulation that there is nothing that counts as a verification of it, there is no introduction rule for falsehood.
⊥ elimination.

⊥ true
------
C true

Here, in analogy with the rule of disjunction elimination, C is presupposed to be a proposition, not outright, but merely under the assumption that ⊥ is true. This is the only divergence from Gentzen's formulation of ex falso quodlibet.
Explanation. When you infer by this rule, you undertake to verify the proposition C when you are provided with a proof that ⊥ is true, that is, by the definition of truth, with a method of verifying ⊥. But this is something that you can safely undertake, because, by the definition of falsehood, there is nothing that counts as a verification of ⊥. Hence ⊥ is false, that is, ⊥ cannot be verified, and hence it is impossible that you ever be provided with a proof that ⊥ is true. Observe the step here from the falsity of the proposition ⊥ to the unprovability of the judgement that ⊥ is true. The undertaking that you make when you infer by the rule of falsehood elimination is therefore like saying that you will do something provided that such and such be done, where such and such is something of which you know, that is, are certain, that it cannot be done.
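The safety of the undertaking can be made vivid computationally. In this sketch (mine, not the lecture's), the type of verifications of ⊥ is empty, so the ex falso function can never actually be invoked with a genuine proof:

```python
# Illustrative: nothing counts as a verification of ⊥, so the class below
# can never be instantiated; ex falso quodlibet is then a function whose
# precondition can never be met.

class Falsum:
    """The proposition ⊥: there is no way to construct a verification."""
    def __init__(self):
        raise TypeError("nothing counts as a verification of falsehood")

def ex_falso(proof_of_falsum):
    # Safe to promise anything here: this point is unreachable, since no
    # proof_of_falsum can ever be produced.
    raise AssertionError("unreachable")

# All we can observe is that every attempt to verify ⊥ fails.
try:
    Falsum()
    falsum_verified = True
except TypeError:
    falsum_verified = False
assert falsum_verified is False
```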
Observe that the justification of the elimination rule for falsehood rests only on the knowledge that ⊥ is false. Thus, if A is a proposition, not necessarily ⊥, and C is a proposition provided that A is true, then the inference

A true
------
C true

is valid as soon as A is false. Choosing C to be ⊥, we can conclude, by implication introduction, that A ⊃ ⊥ is true provided that A is false. Conversely, if A ⊃ ⊥ is true and A is true, then, by modus ponens, ⊥ would be true, which it is not. Hence A is false if A ⊃ ⊥ is true. These two facts together justify the nominal definition of ~A, the negation of A, as A ⊃ ⊥, which is commonly made in intuitionistic logic. However, the fact that A is false if and only if ~A is true should not tempt one to define the notion of denial by saying that

A is false

means that

~A is true.

That the proposition A is false still means that it is impossible to verify A, and this is a notion which cannot be reduced to the notions of negation, negation of propositions, that is, and truth. Denial comes before negation in the order of conceptual priority, just as logical consequence comes before implication, and the kind of generality which a judgement may have comes before universal quantification.
As has been implicit in what I have just said, A is false if and only if ~A is true.
Moreover, in the course of justifying the rule of falsehood elimination, I proved that ⊥ is false, that is, that ⊥ is not true. Now, remember that, in the very beginning of this lecture, we convinced ourselves that a proposition is true if and only if the judgement that it is true is provable. Hence, negating both members, a proposition is false if and only if the judgement that it is true cannot be proved, that is, is unprovable. Using this in one direction, we can conclude, from the already established falsity of ⊥, that the judgement that ⊥ is true is unprovable. That is, if you want, an absolute consistency proof: it is a proof of consistency with respect to the unlimited notion of provability, or knowability, that pervades these lectures. And

⊥ is false

is the judgement which expresses the absolute consistency, if I may call it so. By my chain of explanations, I hope that I have succeeded in making it evident.
The absolute consistency brings with it as a consequence the relative consistency of any system of correct, or valid, inference rules. Suppose namely that you have a formal proof in that system of the judgement that ⊥ is true. Because of the absolute consistency, that is, the unprovability of the judgement that ⊥ is true, that formal proof, although formally correct, is no proof, not a real proof, that is. How can that come about? Since a formal proof is a chain of formally immediate inferences, that is, instances of the inference rules of the system, that can only come about as a result of there being some rule of inference which is incorrect. Thus, if you have a formal system, and you have convinced yourself of the correctness of the inference rules that belong to it, then you are sure that the judgement that ⊥ is true cannot be proved in the system. This means that the consistency problem is really the problem of the correctness of the rules of inference, and that, at some stage or another, you cannot avoid having to convince yourself of their correctness. Of course, if you take any old formal system, it may be that you can carry out a metamathematical consistency proof for it, but that consistency proof will rely on the intuitive correctness of the principles of reasoning that you use in that proof, which means that you are nevertheless relying on the correctness of certain forms of inference. Thus the consistency problem is really the problem of the correctness of the rules of inference that you follow, consciously or unconsciously, in your reasoning.
After this digression on consistency, we must return to the semantical explanations of the rules of inference. The ones that remain are the quantifier rules.
∀ formation.

   A(x) prop
---------------
(∀x)A(x) prop
Explanation. The premise of this rule is a judgement which has generality in it. If I were to make it explicit, I would have to write it

|_{x} A(x) prop.

It is a judgement which has generality in it, although it is free from hypotheses. And remember what it is to know such a judgement: it is to possess a free variable proof of it. Now, assume that you do know the premise of this rule, that is, that you possess a free variable proof of the judgement that A(x) is a proposition. On that assumption, I explain the conclusion to you by stipulating that a verification of the proposition (∀x)A(x) consists of a free variable proof that A(x) is true.
By definition, that is a proof in which the variable x may be replaced by anything you want, that is, any expression you want of the same arity as the variable x. Thus, if x is a variable ranging over complete expressions, then you must substitute a complete expression for it, and similarly if it ranges over incomplete expressions of some arity. In the Kolmogorov interpretation, the explanation of the meaning of the universal quantifier would be phrased by saying that (∀x)A(x) expresses the problem of constructing a general method of solving the problem A(x) for arbitrary x.
∀ introduction.

  A(x) true
---------------
(∀x)A(x) true
Here the premise of the formation rule, to the effect that A(x) is a proposition for arbitrary x, is still in force.
Explanation. Again, the premise of this rule is a general judgement, which would read

|_{x} A(x) true
if I were to employ the systematic notation that I introduced earlier in this lecture. Now, assume that you know this, that is, assume that you possess a free variable proof of the judgement that A(x) is true. Then, by the principle that, if something has been done, then it can be done, you certainly can give such a proof, and this is precisely what you must be able, or know how, to do in order to have the right to infer the conclusion of the rule.
∀-elimination.
Here it is presupposed, of course, that A(x) is a proposition for arbitrary x. And, as you see, I have again chosen the usual formulation of the elimination rule for the universal quantifier rather than the one which is patterned upon the elimination rules for falsehood, disjunction, and existence.
Explanation. First of all, observe that, because of the tacit assumption that A(x) is a proposition for arbitrary x, both (∀x)A(x) and A(a) are propositions, where a is an expression of the same arity as the variable x. Now, assume that you know the premise, that is, that you know how to verify the proposition (∀x)A(x), and I shall explain to you how to verify the proposition A(a). To begin with, put your knowledge of the premise into practice. That will give you a verification of (∀x)A(x), which, by the definition of the universal quantifier, is a free variable proof that A(x) is true,
A(x) true
Now, this being a free variable proof means precisely that it remains a proof whatever you substitute for x. In particular, it remains a proof when you substitute a for x so as to get
A(a) true
So now you have acquired a proof that A(a) is true. By the definitions of the notions of proof and truth, this proof is knowledge how to verify the proposition A(a). Thus, putting it into practice, you end up with a verification of A(a), as required.
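The substitution step just described corresponds, under the same illustrative reading, to applying the proof of (∀x)A(x), viewed as a function, to the particular expression a:

```lean
variable (X : Type) (A : X → Prop)

-- Eliminating ∀: a proof of (∀x)A(x) is a function; instantiating the
-- free variable x with a particular expression a is function application.
example (a : X) (h : ∀ x : X, A x) : A a :=
  h a
```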
∃-formation.
Explanation. Just as in the formation rule associated with the universal quantifier, the premise of this rule is really the general judgement

A(x) prop (x)
although I have not made the generality explicit in the formulation of the rule. Assume that you know the premise, that is, assume that you possess a free variable proof
guaranteeing that A(x) is a proposition, and I shall explain to you what proposition (∃x)A(x) is, that is, what counts as a verification of it. The explanation is that a verification of (∃x)A(x) consists of an expression a of the same arity as the variable x and a proof
showing that the proposition A(a) is true. Observe that the knowledge of the premise is needed in order to guarantee that A(a) is a proposition, so that it makes sense to talk about a proof that A(a) is true. In the Kolmogorov interpretation, (∃x)A(x) would be explained as the problem of finding an expression a, of the same arity as the variable x, and a method of solving the problem A(a).
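In the illustrative Lean rendering, the existential quantifier is likewise formed from a propositional function, and the explanation that its verifications are pairs is built into the definition of Exists:

```lean
variable (X : Type) (A : X → Prop)

-- Forming the existential proposition from a propositional function;
-- its proofs will be pairs of a witness and a proof about that witness.
#check ∃ x : X, A x   -- this expression is of type Prop
```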
∃-introduction.
Here, as usual, the premise of the formation rule is still in force, which is to say that A(x) is assumed to be a proposition for arbitrary x.
Explanation. Assume that you know the premise, that is, assume that you possess a proof that A(a) is true,

A(a) true
By the preceding explanation of the meaning of the existential quantifier, the expression a together with this proof make up a verification of the proposition (∃x)A(x). And, possessing a verification of the proposition (∃x)A(x), you certainly know how to verify it, which is what you must know in order to have the right to conclude that (∃x)A(x) is true. Like in my explanations of all the other introduction rules, I have here taken for granted the principle that, if something has been done, then it can be done.
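The verification just described, the expression a paired with the proof that A(a) is true, is literally a pair in the illustrative Lean rendering:

```lean
variable (X : Type) (A : X → Prop)

-- Introducing ∃: the witness a and a proof of A a are paired up
-- by the anonymous constructor to verify the existential proposition.
example (a : X) (h : A a) : ∃ x : X, A x :=
  ⟨a, h⟩
```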
∃-elimination.
Here it is presupposed, not only that A(x) is a proposition for arbitrary x, like in the introduction rule, but also that C is a proposition provided that the proposition (∃x)A(x) is true.
Explanation. First of all, in order to make it look familiar, I have written the second premise in Gentzen's notation

(A(x) true)
C true

rather than in the notation

C true (A(x) true)
but there is no difference whatever in sense. Thus the second premise is really a hypothetico-general judgement. Now, assume that you know the premises. By the definition of the notion of truth, your knowledge of the first premise is knowledge how to verify the proposition (∃x)A(x). Put that knowledge of yours into practice. You then end up with a verification of the proposition (∃x)A(x). By the definition of the existential quantifier, this verification consists of an expression a of the same arity as the variable x and a proof that the proposition A(a) is true.
Now use your knowledge, or proof, of the second premise. Because of the meaning of a hypothetico-general judgement, this proof
is a free variable proof that C is true from the hypothesis that A(x) is true. Being a free variable proof means that you may substitute anything you want, in particular, the expression a, for the variable x. You then get a hypothetical proof
that C is true from the hypothesis that A(a) is true. Supplementing this hypothetical proof with the proof that A(a) is true that you obtained as a result of putting your knowledge of the first premise into practice, you get a proof
that C is true, and this proof is nothing but knowledge how to verify the proposition C. Thus, putting it into practice, you end up having verified the proposition C, as required.
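The whole procedure, splitting the verification of (∃x)A(x) into the expression a and a proof that A(a) is true, and then feeding both to the hypothetico-general proof of the second premise, is captured by the eliminator in the illustrative Lean rendering:

```lean
variable (X : Type) (A : X → Prop) (C : Prop)

-- Eliminating ∃: the verification of (∃x)A(x) yields a witness a and a
-- proof of A a, which are passed to g, the free variable hypothetical
-- proof that C is true from the hypothesis that A(x) is true.
example (h : ∃ x : X, A x) (g : ∀ x : X, A x → C) : C :=
  h.elim g
```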
The promise of the title of these lectures, On the Meanings of the Logical Constants and the Justifications of the Logical Laws, has now been fulfilled. As you have seen, the explanations of the meanings of the logical constants are precisely the explanations belonging to the formation rules. And the justifications of the logical laws are the explanations belonging to the introduction and elimination rules, which are the rules that we normally call rules of inference. For lack of time, I have only been able to deal with the pure logic in my semantical explanations. To develop some interesting parts of mathematics, you also need axioms for ordinary inductive definitions, in particular, axioms of computation and axioms for the natural numbers. And, if you need predicates defined by transfinite, or generalized, induction, then you will have to add the appropriate formation, introduction, and elimination rules for them.
I have already explained how you see the consistency of a formal system of correct inference rules, that is, the impossibility of constructing a proof
that falsehood is true which proceeds according to those rules, not by studying metamathematically the proof figures divested of all sense, as was Hilbert's program, but by doing just the opposite: not divesting them of sense, but endowing them with sense. Similarly, suppose that you have a proof
that a proposition A is true which depends neither on any assumptions nor on any free variables. By the definition of truth and the identification of proof and knowledge, such a proof is nothing but knowledge how to verify the proposition A. And, as I remarked earlier in this lecture, verifying the proposition A by putting that knowledge into practice is the same as reducing the proof to introductory form and deleting the last, introductory inference. Moreover, the way of reducing the proof which corresponds to the semantical explanations, notably of the elimination rules, is precisely the way that I utilized for the first time in my paper on iterated inductive definitions in the Proceedings of the Second Scandinavian Logic Symposium, although merely because of its naturalness, not for any genuine semantical reasons, at that time. But no longer do we need to prove anything, that is, no longer do we need to prove metamathematically that the proof figures, divested of sense, reduce to introductory form. Instead of proving it, we endow the proof figures with sense, and then we see it! Thus the definition of convertibility, or computability, and the proof of normalization have been transposed into genuine semantical explanations which allow you to see this, just as you can see consistency semantically. And this is the point that I had intended to reach in these lectures.