In propositional logic, the statements we are proving are completely abstract. To be able to prove programs correct, we need a logic that can talk about the things that programs compute on: integers, strings, tuples, datatype constructors, and functions. We'll enrich propositional logic with the ability to talk about these things, obtaining a version of predicate logic.
The syntax extends propositional logic with a few new expressions:
Formulas f, g ::=
    ⊤                       (* true *)
    ⊥                       (* false *)
    ¬f                      (* negation; syntactic sugar for f ⇒ ⊥ *)
    f ∧ f                   (* conjunction (and) *)
    f ∨ f                   (* disjunction (or) *)
    f ⇒ f                   (* implication (if-then) *)
    ∀x. f                   (* f is true for all x; f can mention x *)
    ∃x. f                   (* there exists some x such that f is true *)
    t_{1} = t_{2}           (* t_{1} is equal to t_{2} *)
    P(t_{1}, ..., t_{n})    (* n-ary predicate (aka relation) P is true for t_{1}, ..., t_{n} *)

Terms t ::=
    c                       (* constants (integers, tuples, other values) *)
    x                       (* variables *)
    fn(t_{1}, ..., t_{n})   (* result of applying n-ary function fn to t_{1}, ..., t_{n} *)
Terms t stand for individual elements of some domain of objects we are reasoning about, such as the natural numbers.
The formula ∀x.f means that the formula f is true for any choice of x. This is called universal quantification, and ∀ is the universal quantifier. The formula ∃x.f denotes existential quantification. It means that the formula f is true for some choice of x, though there may be more than one such x.
It is possible to restrict the range of quantifiers to quantify over some subset of the domain of possible values. For universal quantifiers, we use an implication ⇒, and for existential quantifiers, we use conjunction ∧. For example, if we wanted to say that all positive numbers x satisfy some property Q(x), we could write ∀x.x > 0 ⇒ Q(x). This works because the quantified formula is vacuously true for numbers not greater than 0. To say that there exists a positive number that satisfies Q, we can write ∃x.x > 0 ∧ Q(x).
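Over a finite domain, the semantics of the two encodings can be checked directly. Below is an illustrative sketch in Python, where all() plays the role of ∀, any() plays the role of ∃, and the property Q ("x is even") and the domain are arbitrary choices made for the example:

```python
# Bounded quantification over a finite domain:
# all() models ∀, any() models ∃. Q is an illustrative property ("x is even").
def Q(x):
    return x % 2 == 0

domain = range(-5, 6)  # a small stand-in for the integers

# ∀x. x > 0 ⇒ Q(x): the implication is vacuously true when x <= 0.
forall_pos_Q = all((not (x > 0)) or Q(x) for x in domain)

# ∃x. x > 0 ∧ Q(x): the conjunction restricts witnesses to positive numbers.
exists_pos_Q = any(x > 0 and Q(x) for x in domain)

print(forall_pos_Q)  # False: x = 1 is positive but odd
print(exists_pos_Q)  # True: x = 2 is a positive even number
```

Note that swapping the connectives breaks the encodings: ∀x. x > 0 ∧ Q(x) would wrongly demand that every element be positive, and ∃x. x > 0 ⇒ Q(x) is satisfied vacuously by any non-positive element.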
Using quantifiers, we can express some interesting statements. For example, we can express the idea that a number n is prime in various logically equivalent ways:
Prime(n)  ⇔  ∀m. 1 < m ∧ m < n ⇒ ¬∃k. k*m = n 
⇔  ¬∃m. 1 < m ∧ m < n ∧ ∃k. k*m = n  
⇔  ¬∃m. ∃k. 1 < m ∧ m < n ∧ k*m = n 
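For small n, the three formulations can be compared by brute force. The sketch below bounds the quantifier over k to 0..n, which is safe because k*m = n with m > 1 forces k ≤ n; the function names are illustrative:

```python
def prime1(n):
    # ∀m. 1 < m ∧ m < n ⇒ ¬∃k. k*m = n
    return all(not any(k * m == n for k in range(n + 1))
               for m in range(2, n))

def prime2(n):
    # ¬∃m. 1 < m ∧ m < n ∧ ∃k. k*m = n
    return not any(1 < m and m < n and any(k * m == n for k in range(n + 1))
                   for m in range(n + 1))

def prime3(n):
    # ¬∃m. ∃k. 1 < m ∧ m < n ∧ k*m = n
    return not any(1 < m and m < n and k * m == n
                   for m in range(n + 1) for k in range(n + 1))

# The three formulations agree on every n tested.
assert all(prime1(n) == prime2(n) == prime3(n) for n in range(2, 50))
print([n for n in range(2, 20) if prime1(n)])  # → [2, 3, 5, 7, 11, 13, 17, 19]
```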
Introduction and elimination rules can be defined for universal and existential quantifiers. In the following rules, f(t) refers to f(x) with all free occurrences of the variable x replaced by the term t.
(∀intro)
        Γ ⊢ f(x)
    ─────────────────  (*)
     Γ ⊢ ∀x. f(x)
  (*) This rule can only be applied if x does not occur free in Γ. We can conclude that f holds for all x if we choose an arbitrary x and prove f(x).

(∀elim)
     Γ ⊢ ∀x. f(x)
    ─────────────────
        Γ ⊢ f(t)
  If we've proven f holds for all x, we can conclude that f holds for any given t.

(∃intro)
        Γ ⊢ f(t)
    ─────────────────
     Γ ⊢ ∃x. f(x)
  We can prove that there exists some x with property f by simply producing a t with property f.

(∃elim)
     Γ ⊢ ∃x. f(x)      Γ, f(a) ⊢ g
    ───────────────────────────────  (*)
               Γ ⊢ g
  (*) This rule can be applied only if a does not occur free in Γ or g. This is like ∨ elimination: we can only conclude something from the existence of an x if the conclusion g doesn't depend on which x satisfies f.
The rule (∀elim) specializes the formula f(x) to a particular value t of x. (We require implicitly that t be of the right type to be substituted for x.) Since f holds for all x, it should hold for any particular choice of x, including t. The (∀intro) rule formalizes the type of argument that starts, "Let a be an arbitrary element..." If one can prove a fact f(a) for arbitrarily chosen a, then f(x) holds for all x.
The rule (∃intro) derives ∃x.f(x) because a witness t to the existential has been produced. Intuitively, if f(t) holds for some t, then certainly there exists an x such that f(x) holds. The idea behind rule (∃elim) is that if g can be shown without using any information about the witness x other than f(x), then the mere existence of an element satisfying f is enough to imply g.
The proviso (*) in the (∀intro) and (∃elim) rules is a restriction on the use of the rule. This restriction prevents us from doing unsound reasoning like the following:
    x > 10 ⊢ x > 10               (assumption)
    x > 10 ⊢ ∀x. x > 10           (∀intro; illegal: x occurs free in the assumption)
    ⊢ x > 10 ⇒ ∀x. x > 10        (⇒intro)
This proof says that if a particular x is greater than 10, then every x is greater than 10, something that is clearly false! The problem is the use of ∀intro: we are able to prove that x > 10, but not for an arbitrary x, only for the particular x we had already made assumptions about.
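The flaw can also be seen semantically over a finite domain: the premise holds of one particular value while the universal conclusion fails. A small illustrative check (the domain is an arbitrary choice):

```python
domain = range(0, 20)  # an illustrative finite domain

# A particular x satisfying the assumption x > 10:
x = 11
assert x > 10

# ...but the conclusion of the bogus (∀intro) step is false on this domain:
forall_gt_10 = all(y > 10 for y in domain)
print(forall_gt_10)  # False: e.g. y = 0 is a counterexample
```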
However, it is fine for the variable a to appear in an assumption that is made after the point where (∀intro) is applied. For example, to prove ∀x. x > 1 ⇒ x > 0, we first apply (∀intro), reducing the goal to a > 1 ⇒ a > 0 for a fresh variable a; then (⇒intro) adds the assumption a > 1. That assumption mentions a, but it is introduced after (∀intro), so the side condition is not violated.
Predicate logic allows the use of arbitrary predicates P. Equality (=) is such a predicate. It applies to two arguments; we can read t_{1}=t_{2} as a predicate =(t_{1},t_{2}). But in addition to the rules above for arbitrary predicates, equality has some special properties. The following three rules capture that equality is an equivalence relation: it is reflexive, symmetric, and transitive.
    ─────────────
     Γ ⊢ t = t                               (reflexivity)

     Γ ⊢ t_{1} = t_{2}
    ───────────────────                      (symmetry)
     Γ ⊢ t_{2} = t_{1}

     Γ ⊢ t_{1} = t_{2}    Γ ⊢ t_{2} = t_{3}
    ─────────────────────────────────────────  (transitivity)
     Γ ⊢ t_{1} = t_{3}
Beyond being an equivalence relation, equality preserves meaning under substitution. If two things are equal, substituting one for the other in equal terms results in equal terms. This is known as Leibniz's rule (substitution of equals for equals):

     Γ ⊢ t_{1} = t_{2}
    ─────────────────────────   (Leibniz)
     Γ ⊢ t(t_{1}) = t(t_{2})

Here t(t_{1}) denotes a term t(x) with all free occurrences of x replaced by t_{1}, and similarly for t(t_{2}).
Leibniz's rule can also be applied to show propositions are logically equivalent:

     Γ ⊢ t_{1} = t_{2}
    ───────────────────────────   (Leibniz)
     Γ ⊢ f(t_{1}) ⇔ f(t_{2})
The same idea can be applied completely at the propositional level as well. If we can prove that two formulas are equivalent, they can be substituted for one another within any other formula.

     Γ ⊢ f ⇔ g
    ─────────────────────
     Γ ⊢ h(f) ⇔ h(g)

Here h(f) is a formula containing f as a subformula, and h(g) is the same formula with that occurrence of f replaced by g.
This admissible rule can be very convenient for writing proofs, though anything we can prove with it can be proved using just the basic rules. It can be very handy when there is a large library of logical equivalences to draw upon, because it allows rewriting of deeply nested subformulas.
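The effect of rewriting a nested subformula can be checked by truth table over a finite set of propositional variables. An illustrative sketch, where ¬(p ∧ q) is replaced by the equivalent ¬p ∨ ¬q inside a larger formula (the outer formula h is an arbitrary choice for the example):

```python
from itertools import product

def implies(a, b):
    # Boolean implication: a ⇒ b
    return (not a) or b

# Outer formula h(F) = (F ∧ r) ⇒ (p ∨ r), with the hole F filled two ways:
def h1(p, q, r):  # F = ¬(p ∧ q)
    return implies((not (p and q)) and r, p or r)

def h2(p, q, r):  # F = ¬p ∨ ¬q, the equivalent subformula substituted in
    return implies(((not p) or (not q)) and r, p or r)

# Since ¬(p ∧ q) ⇔ ¬p ∨ ¬q (De Morgan), the two versions agree everywhere.
assert all(h1(p, q, r) == h2(p, q, r)
           for p, q, r in product([False, True], repeat=3))
print("h1 and h2 agree on all 8 assignments")
```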
For reasoning about specific kinds of values, we need axioms that describe how those values behave. For example, the following axioms partly describe the integers and can be used to prove many facts about integers. In fact, they define a more general structure, a commutative ring, so anything proved with them holds for any commutative ring. These axioms are all considered to be implicitly universally quantified.
(x+y)+z = x+(y+z)  (associativity of +) 
x+y = y+x  (commutativity of +) 
(x*y)*z = x*(y*z)  (associativity of *) 
x*y = y*x  (commutativity of *) 
x*(y+z) = x*y+x*z  (distributivity of * over +) 
x + 0 = x  (additive identity) 
x + (-x) = 0  (additive inverse) 
x*1 = x  (multiplicative identity) 
x*0 = 0  (annihilation) 
These rules use a number of functions: +, *, -, 0, and 1 (we can think of 0 and 1 as functions that take zero arguments). These symbols are represented by the metavariable fn in the grammar earlier.
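Because the axioms define a commutative ring, anything proved from them holds in structures other than the integers. As an illustrative sanity check, the sketch below verifies all nine axioms by brute force in Z/6Z (the integers mod 6), a finite commutative ring:

```python
from itertools import product

N = 6  # Z/6Z: integers mod 6, a commutative ring that is not Z

def add(x, y): return (x + y) % N
def mul(x, y): return (x * y) % N
def neg(x):    return (-x) % N

R = range(N)
for x, y, z in product(R, repeat=3):
    assert add(add(x, y), z) == add(x, add(y, z))          # associativity of +
    assert add(x, y) == add(y, x)                          # commutativity of +
    assert mul(mul(x, y), z) == mul(x, mul(y, z))          # associativity of *
    assert mul(x, y) == mul(y, x)                          # commutativity of *
    assert mul(x, add(y, z)) == add(mul(x, y), mul(x, z))  # distributivity
for x in R:
    assert add(x, 0) == x        # additive identity
    assert add(x, neg(x)) == 0   # additive inverse
    assert mul(x, 1) == x        # multiplicative identity
    assert mul(x, 0) == 0        # annihilation
print("all ring axioms hold in Z/6Z")
```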
Proving facts about arithmetic can be tedious. For our purposes, we will write proofs that do reasonable algebraic manipulations as a single step, e.g.:
 Γ ⊢ (x+1)*(x-1) = x*x - 1  (algebra) 
This proof step can be done explicitly using the rules and axioms above, but it takes several steps.
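As a numeric sanity check, an identity of the kind such an (algebra) step licenses can be spot-checked over many integers; the particular identity here is an illustrative choice:

```python
# Spot-check an illustrative algebraic identity of the kind an (algebra)
# step might discharge in one go: (x+1)*(x-1) = x*x - 1.
for x in range(-100, 101):
    assert (x + 1) * (x - 1) == x * x - 1
print("identity holds for all tested x")
```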