**Deduction** is a *syntactic* understanding of the notion of inference. In earlier modules we learned about the semantic notion of entailment, symbolized as $\vDash$, where we reason about arguments in terms of the propositions' truth values. Here, we think about inferences in terms of rules of inference that are grounded in the relationship between the symbols themselves.

**Derivability** The symbol $\vdash$ represents the notion of *syntactic derivability*. For any set of SL or PL statements $\Gamma$ and some statement $\phi$, '$\Gamma \vdash \phi$' means 'from the set $\Gamma$, the sentence $\phi$ is derivable using permissible rules of inference.' We will sometimes refer to the process of deriving something as a *derivation* or a *proof*.

**Assumption** The statements above the horizontal line are assumptions: they are accessible for drawing inferences. Our job as the deducer is to find out what *deductively* follows from these given assumptions. In this derivation we were given certain assumptions (premises); not all derivations begin with given assumptions, however. Eventually we will learn to prove statements without given assumptions; we will have to make our own.

**Justification** The numbers and symbols to the right of each line justify how that particular line was derived. For instance, we justify the first two lines by stating that they were given. Lines 3 to 7, however, were the product of applying rules of inference to the lines indicated.

**Formulas** We will also start referring to SL or PL sentences as *formulas*, as opposed to propositions or sentences. The reason for this is to emphasize that in a syntactic context we are treating the symbols as nothing but strings. Put differently, when we say formulas we refer to the symbols themselves, without caring about their meaning at all.

**Formalism** For the sake of convenience, we will express rules of inference using this notation: $(\Gamma, \theta)$, an ordered pair that says: from the finite set of formulas $\Gamma$, we can infer the formula $\theta$. This seems pretty abstract, but you will see that it is really nothing but a straightforward procedure for processing bits of strings.
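As a toy illustration of this pair notation, here is a minimal Python sketch. The encoding and the function name are our own assumptions for the example, not anything from the text: a rule instance is a pair `(gamma, theta)`, and checking whether it may be applied is just checking that every premise in `gamma` is already accessible.

```python
# Toy encoding (ours, for illustration only): formulas are strings or
# nested tuples, e.g. ("and", "A", "B") stands for A ∧ B.

def rule_applies(rule, accessible):
    """A rule instance (gamma, theta) may be used when every formula in
    the premise set gamma is accessible in the derivation so far."""
    gamma, theta = rule
    return gamma <= accessible  # subset test: all premises are available

# The instance of ∧E that takes {A ∧ B} to A:
and_elim_instance = (frozenset({("and", "A", "B")}), "A")

print(rule_applies(and_elim_instance, {("and", "A", "B"), "C"}))  # True
print(rule_applies(and_elim_instance, {"C"}))                     # False
```

Note that nothing here looks at truth values: applying a rule is a purely mechanical check on which strings are available.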

$\wedge E$: $(\{\alpha \wedge \beta\},\alpha)$ and $(\{\alpha \wedge \beta\},\beta)$

**Conjunction Elimination** ($\wedge E$) says that whenever you have a formula of the form $\alpha \wedge \beta$, you are entitled to infer either $\alpha$ or $\beta$. This rule is supposed to mirror the truth table definition of conjunction: $\alpha \wedge \beta$ is true exactly when both conjuncts are true.
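To make the string-processing character of $\wedge E$ concrete, here is a small Python sketch under a toy encoding of our own (not the text's): a conjunction is the tuple `("and", left, right)`, and the rule simply hands back one of its parts.

```python
def and_elim(formula, side):
    """∧E: from α ∧ β, infer α (side='left') or β (side='right').
    Purely syntactic: we only inspect the shape of the tuple."""
    op, left, right = formula
    if op != "and":
        raise ValueError("∧E applies only to conjunctions")
    return left if side == "left" else right

print(and_elim(("and", "A", "B"), "left"))   # A
print(and_elim(("and", "A", "B"), "right"))  # B
```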

$\wedge I$: $(\{\alpha, \beta\},\alpha \wedge \beta)$

**Conjunction Introduction** ($\wedge I$) works in the opposite direction: if you have two separate formulas $\alpha$ and $\beta$, you can put them together with a conjunction. In the example here, we put lines 3 and 6 together using this rule, producing line 7. In this case, $\beta$ takes the form of the complex formula $(B \vee D)$.
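In the same toy encoding as the previous sketch (an assumption of ours, not part of the text), $\wedge I$ is just tuple construction; note how $\beta$ can itself be a complex formula:

```python
def and_intro(alpha, beta):
    """∧I: from the two accessible formulas α and β, infer α ∧ β."""
    return ("and", alpha, beta)

# β may itself be complex, e.g. the disjunction (B ∨ D):
print(and_intro("A", ("or", "B", "D")))  # ('and', 'A', ('or', 'B', 'D'))
```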

$\vee I$: $(\{\alpha\},\alpha \vee \beta)$ and $(\{\beta\},\alpha \vee \beta)$

**Disjunction Introduction** ($\vee I$) is similar to $\wedge I$, except that you only need one formula to start with. The idea is that you are allowed to introduce a disjunction as long as one of the disjuncts is accessible.
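Continuing the same toy encoding (ours, for illustration), $\vee I$ takes a single accessible formula and pairs it with any disjunct we like:

```python
def or_intro(alpha, beta):
    """∨I: from α alone, infer α ∨ β, where β is any formula we choose."""
    return ("or", alpha, beta)

print(or_intro("A", "B"))  # ('or', 'A', 'B')
```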

$\to E$: $(\{\alpha, \alpha \to \beta\}, \beta)$

**Conditional Elimination** ($\to E$) is also known by its Latin name *Modus Ponens*. It states that whenever you have a conditional statement along with its *antecedent*, you are allowed to infer the consequent as well.
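Under the same toy encoding as the earlier sketches (an illustrative assumption, not the text's notation), Modus Ponens is a matching check: the second premise must be exactly the antecedent of the conditional.

```python
def cond_elim(conditional, antecedent):
    """→E (Modus Ponens): from α → β together with α, infer β.
    The check is purely syntactic tuple matching."""
    op, ant, cons = conditional
    if op != "->" or ant != antecedent:
        raise ValueError("→E needs a conditional and its exact antecedent")
    return cons

print(cond_elim(("->", "A", "B"), "A"))  # B
```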