3.1 Natural Deduction in SL: Overview

In Chapter 1, we introduced the truth table method, which allowed us to check whether various logical properties were present, such as whether a statement is a tautology or whether an argument is valid. The method in that chapter was semantic, because it relied on the meaning of the symbols, specifically, on whether they were interpreted as true or false. The nice thing about that method was that it was completely mechanical. If you just followed the rules like a robot, you would eventually get the right answer. You didn't need any special insight, and there were no tough decisions to make. The downside to this method was that the tables quickly became way too long. It just isn't practical to make a 32-line table every time you have to deal with five different sentence letters. In this chapter, we are going to introduce a new method for checking for validity and other logical properties.

§ 1 Substitution Instances and Proofs


This time our method is going to be purely syntactic. We won't be at all concerned with what our symbols mean. We are just going to look at the way they are arranged. Our method here will be called a system of natural deduction. When you use a system of natural deduction, you won't do it mechanically. You will need to understand the logical structure of the argument and employ your insight. This is actually one of the reasons people like systems of natural deduction. They let us represent the logical structure of arguments in a way we can understand. Learning to represent and manipulate arguments this way is a core mental skill, used in fields like mathematics and computer programming.

Consider the following two arguments in SL (call them Argument A and Argument B):

$P_{1}$   $ (P \vee Q)$

$P_{2}$   $\neg P$


$\therefore$   $Q$

$P_{1}$   $ (P \to Q)$

$P_{2}$   $ P$


$\therefore$   $Q$

These are both valid arguments. Go ahead and prove that for yourself by constructing the four-line truth tables. These particular valid arguments are examples of important kinds of arguments that are given special names. Argument A is an instance of the form traditionally called disjunctive syllogism: given a disjunction and the negation of one of the disjuncts, the other disjunct follows as a valid consequence. In our system it will be called disjunction elimination ($\vee E$). Argument B makes use of a different valid form, traditionally called modus ponens: given a conditional and its antecedent, the consequent follows as a valid consequence. In our system it will be called conditional elimination ($\to E$).

Both of the arguments above remain valid even if we substitute different sentence letters. You don't even need to run the truth tables again to see that these arguments are valid:

$P_{1}$   $ (A \vee B)$

$P_{2}$   $\neg A$


$\therefore$   $B$

$P_{1}$   $ (A \to B)$

$P_{2}$   $ A$


$\therefore$   $B$

Replacing $P$ with $A$ and $Q$ with $B$ changes nothing (so long as we are sure to replace every $P$ with an $A$ and every $Q$ with a $B$). What's more interesting is that we can replace the individual sentence letters in Argument A and Argument B with longer sentences in SL and the arguments will still be valid, as long as we do the substitutions consistently. Here are two more perfectly valid instances of disjunction and conditional elimination.

$P_{1}$   $((C \wedge D) \vee (E \vee F))$

$P_{2}$   $\neg (C \wedge D)$


$\therefore$   $E \vee F$

$P_{1}$   $ (G \to H) \to (I \vee J)$

$P_{2}$   $ (G \to H)$


$\therefore$   $(I \vee J)$

Again, you can check these using truth tables, although the 16-line truth tables begin to get tiresome. All of these arguments are what we call substitution instances of the same two logical forms. We call them that because you get them by replacing the sentence letters with other sentences, either sentence letters or longer sentences in SL. A substitution instance cannot change the sentential connectives of a sentence, however. The sentential connectives are what make up the logical form of the sentence. We can write these logical forms using fancy Greek letters.

$P_{1}$   $ (\phi \vee \psi)$

$P_{2}$   $\neg \phi$


$\therefore$   $\psi$

$P_{1}$   $ (\phi \to \psi)$

$P_{2}$   $\phi$


$\therefore$   $\psi$

As we explained earlier, the fancy script letters are metavariables. They are a part of our metalanguage and can refer to single sentence letters like $P$ or longer sentences like $A \leftrightarrow (B \wedge (C \vee D))$.

Formally, we can define a sentence form as a sentence in SL that contains one or more metavariables in place of sentence letters. A substitution instance of that sentence form is then a sentence created by consistently substituting sentences for one or more of the metavariables in the sentence form. Here 'consistently substituting' means replacing every instance of a given metavariable with the same sentence: you cannot replace different instances of the same metavariable with different sentences, or leave some instances of a metavariable as they are while replacing others. An argument form is an argument that includes one or more sentence forms, and a substitution instance of the argument form is an argument obtained by consistently substituting sentences for the metavariables throughout the entire argument form.

Once we start identifying valid argument forms like this, we have a new way of showing that longer arguments are valid. Truth tables are fun, but doing the 1024-line truth table for an argument with 10 sentence letters would be tedious. Worse, we would never be sure we hadn't made a little mistake in all those Ts and Fs. And even a correct table gives us no way of knowing why the argument is valid: it offers very little insight into how the premises work together.

The aim of a proof system is to show that particular arguments are valid in a way that allows us to understand the reasoning involved in the argument. Instead of representing all the premises and the conclusion in one table, we break the argument up into steps. Each step is a basic argument form of the sort we saw above, like disjunctive syllogism or modus ponens. Suppose we are given the premises $\neg L \to (J \vee L)$ and $\neg L$ and want to show $J$. We can break this up into two smaller arguments, each of which is a substitution instance of a form we know is valid.

$P_{1}$   $ (\neg L \to (J \vee L))$

$P_{2}$   $\neg L$


$\therefore$   $J \vee L$

$P_{1}$   $J \vee L$

$P_{2}$   $\neg L$


$\therefore$   $J$

The first argument is a substitution instance of modus ponens and the second is a substitution instance of disjunctive syllogism, so we know they are both valid. Notice also that the conclusion of the first argument is the first premise of the second, and the second premise is the same in both arguments. Together, these arguments are enough to get us from $\neg L \to (J \vee L)$ and $\neg L$ to $J$.

These two arguments take up a lot of space, though. To complete our proof system, we need a system for showing clearly how simple steps can combine to get us from premises to conclusions. The system we will use in this book was devised by the American logician Frederic Brenton Fitch (1908--1987). We begin by writing our premises on numbered lines with a bar on the left and a little bar underneath to represent the end of the premises. Then we write 'Want' on the side followed by the conclusion we are trying to reach. If we wanted to write out the arguments above, we would begin like this.
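A plain-text sketch of that beginning, with Fitch's vertical and horizontal bars left out and the layout only approximate, would be:

1   $\neg L \to (J \vee L)$      Premise

2   $\neg L$      Premise      Want: $J$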

We then add the steps leading to the conclusion below the horizontal line, each time explaining off to the right why we are allowed to write the new line. This explanation consists of citing a rule and the prior lines the rule is applied to. In the example we have been working with, we would proceed like this:
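Roughly, the finished chart would look like this, citing the two elimination rules named earlier in the chapter:

1   $\neg L \to (J \vee L)$      Premise

2   $\neg L$      Premise      Want: $J$

3   $J \vee L$      $\to E$ 1, 2

4   $J$      $\vee E$ 3, 2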

The little chart above is a proof that $J$ follows from $\neg L \to (J \vee L)$ and $\neg L$. We will also call proofs like this derivations. Formally, a proof is a sequence of sentences. The first sentences of the sequence are assumptions; these are the premises of the argument. Every sentence later in the sequence follows from earlier sentences by one of the rules of proof. The final sentence of the sequence is the conclusion of the argument.

§ 2 Basic Rules for Sentential Logic


In designing a proof system, we could just start with disjunctive syllogism and modus ponens. Whenever we discovered a valid argument that could not be proved with rules we already had, we could introduce new rules. Proceeding in this way, we would have an unsystematic grab bag of rules. We might accidentally add some strange rules, and we would surely end up with more rules than we need.

Instead, we will develop what is called a system of natural deduction. In a natural deduction system, there will be two rules for each logical operator: an introduction rule and an elimination rule. The introduction rule will allow us to prove a sentence that has the operator we are 'introducing' as its main logical operator. The elimination rule will allow us to prove something given a sentence that has the operator we are 'eliminating' as its main logical operator.

In addition to the rules for each logical operator, we will also have a reiteration rule. If you have already shown something in the course of a proof, the reiteration rule allows you to repeat it on a new line. We can define the rule of reiteration like this:
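Sketched in our plain format (bars again left out), the rule is:

$m$   $\phi$

$n$   $\phi$      R $m$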

This diagram shows how you can add lines to a proof using the rule of reiteration. As before, the script letters represent sentences of any length. The upper line shows the sentence that comes earlier in the proof, and the bottom line shows the new sentence you are allowed to write and how you justify it. An application of the reiteration rule is justified by one line: the line you are reiterating. So the 'R $m$' on line $n$ means that the line is justified by the reiteration rule (R) applied to line $m$. The letters $m$ and $n$ are variables, not real line numbers. In a real proof, they might be lines 5 and 7, or lines 1 and 2, or whatever. When we define the rule, however, we use variables to underscore the point that the rule may be applied to any line that is already in the proof.

Obviously, the reiteration rule will not allow us to show anything new. For that, we will need more rules. The remainder of this section will give six basic introduction and elimination rules. This will be enough to do some basic proofs in SL. Sections 3 and 4 will explain the introduction rules involved in fancier kinds of derivation, called conditional proof and indirect proof. The remaining sections of this chapter will develop our system of natural deduction further and give you tips for working in it.

Conjunction

Think for a moment: What would you need to show in order to prove $E \wedge F$?

Of course, you could show $E \wedge F$ by proving $E$ and separately proving $F$. This holds even if the two conjuncts are not atomic sentences. If you can prove $[(A \vee J) \to V]$ and $[(V \to L) \to (F \vee N)]$, then you have effectively proved $[(A \vee J) \to V] \wedge [(V \to L) \to (F \vee N)].$

So this will be our conjunction introduction rule, which we abbreviate $\wedge I$:
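In the same sketch format, the two versions of the rule are:

$m$   $\phi$

$n$   $\psi$

$\phi \wedge \psi$      $\wedge I$ $m$, $n$

and

$m$   $\phi$

$n$   $\psi$

$\psi \wedge \phi$      $\wedge I$ $m$, $n$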

A line of proof must be justified by some rule, and here we have '$\wedge I$ $m$, $n$.' This means: Conjunction introduction applied to line $m$ and line $n$. Again, these are variables, not real line numbers; $m$ is some line and $n$ is some other line. If you have $K$ on line 8 and $L$ on line 15, you can prove $(K\wedge L)$ at some later point in the proof with the justification '$\wedge I$ 8, 15.'

We have written two versions of the rule to indicate that you can write the conjuncts in either order. Even though $K$ occurs before $L$ in the proof, you can derive $(L \wedge K)$ from them using the right-hand version of $\wedge I$. You do not need to mark this in any special way in the proof.

Now, consider the elimination rule for conjunction. What are you entitled to conclude from a sentence like $E \wedge F$? Surely, you are entitled to conclude $E$; if $E \wedge F$ were true, then $E$ would be true. Similarly, you are entitled to conclude $F$. This will be our conjunction elimination rule, which we abbreviate $\wedge E$:
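Sketched the same way, its two versions are:

$m$   $\phi \wedge \psi$

$\phi$      $\wedge E$ $m$

and

$m$   $\phi \wedge \psi$

$\psi$      $\wedge E$ $m$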

When you have a conjunction on some line of a proof, you can use $\wedge E$ to derive either of the conjuncts. Again, we have written two versions of the rule to indicate that it can be applied to either side of the conjunction. The $\wedge E$ rule requires only one sentence, so we write one line number as the justification for applying it. For example, both of these moves are acceptable in derivations.
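If, for instance, $A \wedge B$ appeared on line 1 of a proof (any conjunction, on any line, would do just as well), each of the following could be written on a later line:

$A$      $\wedge E$ 1

$B$      $\wedge E$ 1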

Some textbooks will only let you use $\wedge E$ on one side of a conjunction. They then make you prove that it works for the other side. We won't do this, because it is a pain in the neck.

Even with just these two rules, we can provide some proofs. Consider this argument.

$[(A\vee B)\to (C \vee D)] \wedge [(E \vee F) \to (G \vee H)] \vdash [(E \vee F) \to (G \vee H)] \wedge [(A\vee B)\to (C \vee D)]$

The main logical operator in both the premise and the conclusion is a conjunction. Since conjunction is symmetric, the argument is obviously valid. In order to provide a proof, we begin by writing down the premise. After the premises, we draw a horizontal line---everything below this line must be justified by a rule of proof. So the proof looks like this:
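In rough sketch form, the opening is just the premise together with a note recording what we want:

1   $[(A\vee B)\to (C \vee D)] \wedge [(E \vee F) \to (G \vee H)]$      Premise      Want: $[(E \vee F) \to (G \vee H)] \wedge [(A\vee B)\to (C \vee D)]$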

From the premise, we can get each of the conjuncts by $\wedge E$. Then, the rule $\wedge I$ requires that we have each of the conjuncts available somewhere in the proof. They can be separated from one another, and they can appear in any order. So by applying the $\wedge I$ rule to lines 3 and 2, we arrive at the desired conclusion.
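Put together, the whole derivation runs roughly like this:

1   $[(A\vee B)\to (C \vee D)] \wedge [(E \vee F) \to (G \vee H)]$      Premise

2   $(A\vee B)\to (C \vee D)$      $\wedge E$ 1

3   $(E \vee F) \to (G \vee H)$      $\wedge E$ 1

4   $[(E \vee F) \to (G \vee H)] \wedge [(A\vee B)\to (C \vee D)]$      $\wedge I$ 3, 2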

This proof is trivial, but it shows how we can use rules of proof together to demonstrate the validity of an argument form. Also: Using a truth table to show that this argument is valid would have required a staggering 256 lines, since there are eight sentence letters in the argument.

Disjunction

If $M$ were true, then $M \vee N$ would also be true. So the disjunction introduction rule $\vee I$ allows us to derive a disjunction if we have one of the two disjuncts:
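In outline, its two versions are:

$m$   $\phi$

$\phi \vee \psi$      $\vee I$ $m$

and

$m$   $\phi$

$\psi \vee \phi$      $\vee I$ $m$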

Like the rule of conjunction elimination, this rule can be applied two ways. Also notice that $\psi$ can be any sentence whatsoever. So the following is a legitimate proof:
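The particular long disjunct below is only an illustration; any sentence of SL whatsoever could take its place:

1   $M$      Premise

2   $M \vee [(A \leftrightarrow B) \wedge (C \vee D)]$      $\vee I$ 1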

This might seem odd. How can we prove a sentence that includes $A$, $B$, and the rest, from the simple sentence $M$---which has nothing to do with the other letters? The secret here is to remember that all the new letters are on just one side of a disjunction, and nothing on that side of the disjunction has to be true. As long as $M$ is true, we can add whatever we want after a disjunction and the whole thing will continue to be true.

Now consider the disjunction elimination rule. What can you conclude from $M \vee N$? You cannot conclude $M$. It might be $M$'s truth that makes $M \vee N$ true, as in the example above, but it might not. From $M \vee N$ alone, you cannot conclude anything about either $M$ or $N$ specifically. If you also knew that $N$ was false, however, then you would be able to conclude $M$.
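This is just the disjunctive syllogism pattern from the start of the chapter, and it becomes our disjunction elimination rule $\vee E$. Roughly, its two versions look like this:

$m$   $\phi \vee \psi$

$n$   $\neg \psi$

$\phi$      $\vee E$ $m$, $n$

and

$m$   $\phi \vee \psi$

$n$   $\neg \phi$

$\psi$      $\vee E$ $m$, $n$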

Conditionals and biconditionals

The rule for conditional introduction is complicated because it requires a whole new kind of proof, called conditional proof. We will deal with this in the next section. For now, we will only use the rule of conditional elimination.

Nothing follows from $M\to N$ alone, but if we have both $M \to N$ and $M$, then we can conclude $N$. This is another rule we've seen before: modus ponens. It now enters our system of natural deduction as the conditional elimination rule $\to E$.
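In outline, the rule is:

$m$   $\phi \to \psi$

$n$   $\phi$

$\psi$      $\to E$ $m$, $n$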

Biconditional elimination ($\leftrightarrow E$) will be a double-barreled version of conditional elimination. If you have the left-hand subsentence of the biconditional, you can derive the right-hand subsentence. If you have the right-hand subsentence, you can derive the left-hand subsentence. This is the rule:
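In the sketch notation used above, its two halves look like this:

$m$   $\phi \leftrightarrow \psi$

$n$   $\phi$

$\psi$      $\leftrightarrow E$ $m$, $n$

and

$m$   $\phi \leftrightarrow \psi$

$n$   $\psi$

$\phi$      $\leftrightarrow E$ $m$, $n$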

§ 3 Conditional Proof


So far we have introduced introduction and elimination rules for the conjunction and disjunction, and elimination rules for the conditional and biconditional, but we have no introduction rules for conditionals and biconditionals, and no rules at all for negations. That's because these other rules require fancy kinds of derivations that involve putting proofs inside proofs. In this section, we will look at one of these kinds of proof, called conditional proof.

Conditional Introduction

Consider the argument: $R\vee F \therefore \neg R \to F$

The argument is valid. You can use a truth table to check it. Unfortunately, we don't yet have a way to prove it in our syntactic system of derivation. To help us see what our rule for conditional introduction should be, we can try to figure out what new rule would let us prove this obviously valid argument.

If we had $\neg R$ as a further premise, we could derive $F$ by the $\vee E$ rule. But sadly, we do not have $\neg R$ as a premise, and we can't derive it directly from the premise we do have---so we cannot simply prove $F$. What we will do instead is start a subproof, a proof within the main proof. When we start a subproof, we draw another vertical line to indicate that we are no longer in the main proof. Then we write in an assumption for the subproof. This can be anything we want. Here, it will be helpful to assume $\neg R$. Our proof should start like this:
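In sketch form, with a '|' marking the lines that belong to the subproof (standing in for the second vertical line), the opening looks like this:

1   $R \vee F$      Premise      Want: $\neg R \to F$

2   |   $\neg R$      Assumption for CD      Want: $F$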

It is important to notice that we are not claiming to have proved $\neg R$. We do not need to write in any justification for the assumption line of a subproof. You can think of the subproof as posing the question: What could we show if $\neg R$ were true? For one thing, we can derive $F$. To make this completely clear, I have annotated line 2 'Assumption for CD,' to indicate that this is an additional assumption we are making because we are using conditional derivation (CD). I have also added 'Want: $F$' because that is what we will want to show during the subderivation. In the future I won't always include all of this information in the annotation, but for now it will help keep us completely clear about what we are doing.

Working within the subproof, we can deduce $F$ by applying the $\vee E$ rule to the premise $R \vee F$ and our assumption $\neg R$. This shows that if we had $\neg R$ as a premise, then we could prove $F$. In effect, we have proven $\neg R \to F$. So the conditional introduction rule $\to I$ will allow us to close the subproof and derive $\neg R \to F$ in the main proof. Notice that the justification for applying the $\to I$ rule is the entire subproof. Usually that will be more than just two lines.
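Carried through, the whole derivation looks roughly like this, with line 4 justified by citing the entire subproof on lines 2--3:

1   $R \vee F$      Premise      Want: $\neg R \to F$

2   |   $\neg R$      Assumption for CD      Want: $F$

3   |   $F$      $\vee E$ 1, 2

4   $\neg R \to F$      $\to I$ 2--3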

Now that we have that example, let's lay out more precisely the rules for subproofs and then give the formal schemes for the rules of conditional and biconditional introduction.

Rule 1 gives you great power. You can assume anything you want, at any time. But with great power comes great responsibility, and rules 2--5 explain what your responsibilities are. Making an assumption creates the burden of starting a subproof, and subproofs must end before the proof is done. (That's why we can't start a subproof on the last line.) Closing a subproof is called discharging the assumptions of that subproof. So we can summarize your responsibilities this way: you cannot complete a proof until you have discharged all of the assumptions introduced in subproofs. Once the assumptions are discharged, you can use the whole subproof as a justification, but not the individual lines. So you need to know, going into the subproof, what you are going to use it for once you get out. As in so many parts of life, you need an exit strategy. With those rules for subproofs in mind, the $\to I$ rule looks like this:
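In the same sketch notation, with '|' marking the subproof:

$m$   |   $\phi$      Assumption

$n$   |   $\psi$

$\phi \to \psi$      $\to I$ $m$--$n$

There may be any number of lines between $m$ and $n$; the rule only requires that the subproof begin with the assumption $\phi$ and end with $\psi$.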

You still might think this gives us too much power. In logic, the ultimate sign that you have too much power is that, given any premise $\phi$, you can prove any conclusion $\psi$. Fortunately, our rules for subproofs don't let us do this. Imagine a proof that looks like this:
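For instance, such an attempt might be sketched like this, with the subproof begun on line 2 never closed:

1   $\phi$      Premise      Want: $\psi$

2   |   $\psi$      Assumption

3   |   $\psi$      R 2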

It may seem as if a proof like this will let you reach any conclusion $\psi$ from any premise $\phi$. But this is not the case. By rule 2, in order to complete a proof, you must close all of the subproofs, and we haven't done that. A subproof is only closed when the vertical line for that subproof ends. To put it another way, you can't end a proof and still have two vertical lines going.

You still might think this system gives you too much power. Maybe we can try closing the subproof and writing $\psi$ in the main proof, like this:
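Roughly, the attempted cheat would look like this, with line 3 back in the main proof but citing a line from inside the closed subproof:

1   $\phi$      Premise      Want: $\psi$

2   |   $\psi$      Assumption

3   $\psi$      R 2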

But this is wrong, too. By rule 5, once you close a subproof, you cannot refer back to individual lines inside it.

When we introduce a subproof, we typically write what we want to derive in the right column, just like we did in the first example in this section. This is just so that we do not forget why we started the subproof if it goes on for five or ten lines. There is no 'want' rule. It is a note to ourselves and not formally part of the proof.

Having an exit strategy when you launch a subproof is crucial. Even if you discharge an assumption properly, you might wind up with a final line that doesn't do you any good. In order to derive a conditional by $\to I$, for instance, you must assume the antecedent of the conditional in a subproof. The last line of the subproof must be the consequent of the conditional, and the whole conditional is the first line after the end of the subproof. Pick your assumptions so that you wind up with a conditional that you actually need. It is always permissible to close a subproof and discharge its assumptions, but it will not be helpful to do so until you get what you want.

Biconditional introduction

Just as the rule for biconditional elimination was a double-headed version of conditional elimination, our rule for biconditional introduction is a double-headed version of conditional introduction. In order to derive $W \leftrightarrow X$, for instance, you must be able to prove $X$ by assuming $W$ and prove $W$ by assuming $X$. The biconditional introduction rule $\leftrightarrow I$ requires two subproofs. The subproofs can come in any order, and the second subproof does not need to come immediately after the first, but schematically the rule works like this:
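In the sketch notation, with each '|' block a separate subproof:

$m$   |   $\phi$      Assumption

$n$   |   $\psi$

$p$   |   $\psi$      Assumption

$q$   |   $\phi$

$\phi \leftrightarrow \psi$      $\leftrightarrow I$ $m$--$n$, $p$--$q$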

§ 4 Indirect Proof (Reductio ad Absurdum)


The last two rules we need to discuss are negation introduction $\neg I$ and negation elimination $\neg E$. As with the rules of conditional and biconditional introduction, we have put off explaining these rules because they require launching subproofs. In the case of negation introduction and elimination, these subproofs are designed to let us perform a special kind of derivation classically known as reductio ad absurdum, or simply reductio.

A reductio in logic is a variation on a tactic we use in ordinary arguments all the time. In arguments we often stop to imagine, for a second, that what our opponent is saying is true, and then realize that it has unacceptable consequences. In so-called 'slippery slope' arguments or 'arguments from consequences,' we claim that doing one thing will lead us to doing another thing which would be horrible. For instance, you might argue that legalizing physician-assisted suicide for some patients might lead to the involuntary termination of lots of other sick people. These arguments are typically not very good, but they have a basic pattern which we can make rigorous in our logical system. These arguments say 'if my opponent wins, all hell will break loose.' In logic, the equivalent of all hell breaking loose is asserting a contradiction. The worst thing you can do in logic is contradict yourself. The equivalent of our opponent being right would be that a sentence we are trying to prove true turns out to be false (or alternatively, that a sentence we are trying to prove false turns out to be true). So, in developing the rules for reductio ad absurdum, we need to find a way to say 'if this sentence were false (or true), we would have to assert a contradiction.'

Here is a simple mathematical argument in English for the conclusion that there is no largest number:

$P_{1}$   Assume there is some greatest natural number. Call it $A$.

$P_{2}$   That number plus one is also a natural number.

$P_{3}$   Obviously, $A+1 > A$.

$P_{4}$   So there is a natural number greater than $A$.

$P_{5}$   This is impossible, since $A$ is assumed to be the greatest natural number.


$\therefore$   There is no greatest natural number.

This argument form is traditionally called a reductio. Its full Latin name is reductio ad absurdum, which means 'reduction to absurdity.' In a reductio, we assume something for the sake of argument---for example, that there is a greatest natural number. Then we show that the assumption leads to two contradictory sentences---for example, that $A$ is the greatest natural number and that it is not. In this way, we show that the original assumption must have been false.

In our system of natural deduction, this kind of proof will be known as indirect proof. The basic rules for negation will allow for arguments like this. If we assume something and show that it leads to contradictory sentences, then we have proven the negation of the assumption. This is the negation introduction $\neg I$ rule:
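In outline, with the last two lines of the subproof forming the explicit contradiction:

$m$   |   $\phi$      Assumption for reductio

$n-1$   |   $\psi$

$n$   |   $\neg \psi$

$\neg \phi$      $\neg I$ $m$--$n$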

For the rule to apply, the last two lines of the subproof must be an explicit contradiction: either the second sentence is the direct negation of the first, or vice versa. We write 'for reductio' as a note to ourselves, a reminder of why we started the subproof. It is not formally part of the proof, and you can leave it out if you find it distracting.

To see how the rule works, suppose we want to prove an instance of the law of double negation: $A$ $\therefore$ $\neg \neg A$.
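One way the proof might go, using only reiteration and the new rule:

1   $A$      Premise      Want: $\neg \neg A$

2   |   $\neg A$      Assumption for reductio

3   |   $A$      R 1

4   |   $\neg A$      R 2

5   $\neg \neg A$      $\neg I$ 2--4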

The $\neg E$ rule will work in much the same way. If we assume $\neg \phi$ and show that it leads to a contradiction, we have effectively proven $\phi$. So the rule looks like this:
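In outline, it differs from $\neg I$ only in what we assume and what we conclude:

$m$   |   $\neg \phi$      Assumption for reductio

$n-1$   |   $\psi$

$n$   |   $\neg \psi$

$\phi$      $\neg E$ $m$--$n$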

One important bit of strategy: sometimes you will launch a subproof right away by assuming the negation of the conclusion of the whole argument. Other times, you will use a subproof to get a piece of the conclusion you want, or some stepping stone toward it.