Reading: Pass and Tseng, 6.2

Proofs for propositional logic in the natural deduction style, proof trees, soundness, completeness

introduction and elimination rules, excluded middle, assumption rules, reductio ad absurdum

**Review exercises**

- Without reference to the given rules, write down the \(∧\) elimination rule(s). Choose another operator, and write introduction and elimination rules for it.
- The attached rules don't have the formulas "\(T\)" (which is always true) and "\(F\)" (which is always false). Write introduction and elimination rules for these formulas (hint: they do not all exist; what can you prove using the fact that "true" holds?).
- Convert between the informal proof and the formal proof tree below.
- Build proof trees for various tautologies, such as De Morgan's laws.

In this lecture we model proofs as objects of study. In a subsequent lecture, we will show how proofs are related to semantic truth.

In propositional logic, we can easily check any statement by drawing a truth table. However, this procedure does not extend easily to logics containing statements like "for all \(x\)" and "there exists \(x\)" (this is called first-order logic); proofs provide another way to reason about the truth of statements.
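The truth-table procedure can even be mechanized. As a sketch (in Lean 4, which is my choice of tool here, not part of the reading), the `decide` tactic evaluates a Boolean statement under every assignment, which is exactly what drawing a truth table does:

```lean
-- A truth-table check: `decide` exhaustively evaluates the claim for all
-- 2² assignments of p and q over Bool, just as a truth table would.
example : ∀ p q : Bool, (!(p && q)) = (!p || !q) := by decide
```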

We will study a style of proof system called "natural deduction"; in this style, proof objects will resemble the informal proofs we are now used to.

**Definition:** We say **\(ψ\) is provable from assumptions \(φ_1,φ_2,\dots\)** (written "\(φ_1,φ_2,\dots⊢ψ\)") if there exists a proof tree with \(φ_1,φ_2,\dots⊢ψ\) at the root.

This notation is deliberately similar to but different from \(φ_1,φ_2,\dots⊨ψ\); we will prove that these notions are equivalent in the next lecture.

A proof tree is constructed by starting with premises that are known to hold, and applying valid rules of inference to reach the desired conclusion. A rule of inference has the following form:

You should read the inference rule as "to prove *conclusion* using *assumptions*, one can first prove *premise 1* under the assumptions listed for *premise 1*, then prove *premise 2* under the assumptions listed for *premise 2*, and so on."

For example, the following rule explains how to prove \(φ∧ψ\):

To prove \(φ∧ψ\) under some set of assumptions, it suffices to prove \(φ\) under that set of assumptions, and also to prove \(ψ\) under the same assumptions. Note that I am using "\(\cdots\)" to refer to any set of assumptions.

The name of this rule is "\(∧\) introduction". **Introduction rules** tell you how to prove statements of the corresponding variety. For another example, see the \(∨\) introduction rules in the list of inference rules: these rules encode the fact that to prove \(φ∨ψ\) you can either prove \(φ\) or you can prove \(ψ\).
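One way to see these introduction rules concretely is in a proof assistant; as a sketch (assuming Lean 4, where they appear as the library functions `And.intro`, `Or.inl`, and `Or.inr`; the hypothesis names are mine):

```lean
-- ∧ introduction: from a proof of φ and a proof of ψ, build a proof of φ ∧ ψ.
example (φ ψ : Prop) (hφ : φ) (hψ : ψ) : φ ∧ ψ := And.intro hφ hψ

-- ∨ introduction: to prove φ ∨ ψ, it suffices to prove either disjunct.
example (φ ψ : Prop) (hφ : φ) : φ ∨ ψ := Or.inl hφ
example (φ ψ : Prop) (hψ : ψ) : φ ∨ ψ := Or.inr hψ
```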

This is in contrast to **elimination rules**, which tell you how to *use* a statement in a proof. For example, the first \(∧\) elimination rule shows that if you have a proof of \(φ∧ψ\) then you have a proof of \(φ\) (the other rule says you also have a proof of \(ψ\)).
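In the same Lean 4 sketch, the two \(∧\) elimination rules are the projections `And.left` and `And.right`:

```lean
-- ∧ elimination: a proof of φ ∧ ψ yields a proof of φ (first rule)
-- and a proof of ψ (second rule).
example (φ ψ : Prop) (h : φ ∧ ψ) : φ := And.left h
example (φ ψ : Prop) (h : φ ∧ ψ) : ψ := And.right h
```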

Of note is the \(∨\) elimination rule. This corresponds to case analysis. If I know \(φ_1∨φ_2\), I can't conclude \(φ_1\), because it might not be true. However, if I give a proof of \(ψ\) in the case that \(φ_1\) is true, and I *also* give a proof of \(ψ\) in the case that \(φ_2\) is true, then knowing that either \(φ_1\) or \(φ_2\) is true allows me to conclude that in any case, \(ψ\) holds.
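Continuing the Lean 4 sketch, case analysis is `Or.elim`: it consumes the disjunction together with one proof of \(ψ\) per case (here packaged as the hypothetical arguments `c₁` and `c₂`):

```lean
-- ∨ elimination: from φ₁ ∨ φ₂, a proof of ψ assuming φ₁, and a proof of ψ
-- assuming φ₂, conclude ψ.
example (φ₁ φ₂ ψ : Prop) (h : φ₁ ∨ φ₂) (c₁ : φ₁ → ψ) (c₂ : φ₂ → ψ) : ψ :=
  Or.elim h c₁ c₂
```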

Assumptions are also added in the \(→\) introduction rule. To prove \(φ→ψ\), first assume \(φ\); if you can then prove \(ψ\), then you have proven \(φ→ψ\).
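In the Lean 4 sketch, \(→\) introduction is function abstraction: `fun hφ => ...` adds the assumption \(φ\) under the name `hφ`, which the body may then use:

```lean
-- → introduction: assume φ (binding it as hφ), then prove φ ∧ φ using it.
example (φ : Prop) : φ → φ ∧ φ := fun hφ => And.intro hφ hφ
```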

There are two rules with no premises, so applying them requires no further work: you can always use an assumption (the assumption rule), and you can always conclude that a formula is either true or false (the law of excluded middle). If you think of writing a proof as a recursive process (or of a proof as an inductively defined object), then these two rules are the "base cases". Every complete proof tree must have one of them at the top of every branch.
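Both base cases have direct counterparts in the Lean 4 sketch; excluded middle is a classical axiom there, available as `Classical.em`:

```lean
-- The assumption rule: an assumption proves itself.
example (φ : Prop) (h : φ) : φ := h

-- Excluded middle: φ ∨ ¬φ holds with no premises at all.
example (φ : Prop) : φ ∨ ¬φ := Classical.em φ
```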

Finally, there is the "reductio ad absurdum" rule. This rule says that no matter what \(ψ\) you are trying to prove, if you can derive a contradiction (a proof of some \(φ\) and a proof of \(¬φ\)) from the assumptions you have already made, then you may conclude \(ψ\), and your proof is complete.
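In the Lean 4 sketch, concluding anything from a contradiction is `absurd`; the classical formulation, where you assume \(¬ψ\) and derive a contradiction, is `Classical.byContradiction`:

```lean
-- From a proof of φ and a proof of ¬φ, conclude any ψ at all.
example (φ ψ : Prop) (h : φ) (hn : ¬φ) : ψ := absurd h hn

-- Classical variant: to prove ψ, show that assuming ¬ψ is contradictory.
example (ψ : Prop) (h : ¬ψ → False) : ψ := Classical.byContradiction h
```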

As an example, let's prove \(⊢ ¬(P∧Q) → ¬P∨¬Q\). I will first give an informal proof, and then show how to encode that proof as a tree.

**Claim:** If \(P∧Q\) is false, then either \(P\) is false or \(Q\) is false.

**Proof:** Suppose \(P∧Q\) is false. We wish to show that either \(P\) is false or \(Q\) is false. Now, we know that \(P\) is either true or false. If \(P\) is false, we are done. If not, then we consider the possible cases for \(Q\). If \(Q\) is false, we are done. But if neither \(P\) nor \(Q\) is false, then we have a contradiction, because we know that \(P\) and \(Q\) must both be true, but we assumed that \(P\) and \(Q\) were not both true. Since we have shown \(¬P∨¬Q\) holds in all cases under the assumption \(¬(P∧Q)\), we conclude \(¬(P∧Q) → ¬P ∨ ¬Q\).
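The informal case analysis above can also be transcribed into the Lean 4 sketch: excluded middle on \(P\), then on \(Q\), with the contradiction closing the impossible branch (the theorem name `not_and_imp` is mine):

```lean
-- De Morgan: ¬(P ∧ Q) → ¬P ∨ ¬Q, by excluded middle on P and then on Q.
theorem not_and_imp (P Q : Prop) : ¬(P ∧ Q) → ¬P ∨ ¬Q := fun h =>
  (Classical.em P).elim
    (fun hP =>                                    -- case: P holds
      (Classical.em Q).elim
        (fun hQ => absurd (And.intro hP hQ) h)    -- P and Q both hold: contradiction
        (fun hnQ => Or.inr hnQ))                  -- case: ¬Q; right disjunct
    (fun hnP => Or.inl hnP)                       -- case: ¬P; left disjunct
```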

Formal proof:

Some things to note: this proof tree is large on paper; people don't usually write trees out in full, but I wanted to do so here to show you how it works. Also, we commonly write \(\cdots\) to abbreviate the uninteresting assumptions at various points in the tree, or break the tree up into multiple pieces. For example, we might abbreviate this tree as follows: