
\chapter{Introduction}\label{ch:intro}

\setcounter{section}{-1}

\section{Building-blocks}

This first section reviews some basic definitions and conventions to
be followed in these notes. 

An \defn{ordered pair} is a set 
\begin{equation*}
  \{\{a\},\{a,b\}\}, 
\end{equation*}
which is denoted
\begin{equation*}
  (a,b).
\end{equation*}
The sole purpose of the definition is to ensure that
\begin{equation*}
  (a,b)=(x,y)\Iff a=x\amp b=y.
\end{equation*}
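As a check outside the formal development, the defining property of ordered pairs can be verified by brute force. Here is a minimal sketch in Python (the helper name `pair` is ours, not the text's), with sets encoded as frozensets:

```python
def pair(a, b):
    # Kuratowski ordered pair (a, b) = {{a}, {a, b}}, encoded with frozensets
    return frozenset([frozenset([a]), frozenset([a, b])])

# Brute-force check of the defining property:
# (a, b) = (x, y) if and only if a = x and b = y.
for a in range(3):
    for b in range(3):
        for x in range(3):
            for y in range(3):
                assert (pair(a, b) == pair(x, y)) == (a == x and b == y)
```

Note that `pair(a, a)` collapses to `{{a}}`, as the definition predicts.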

The \defn{Cartesian product} of sets $A$ and $B$ is
the set
\begin{equation*}
  \{(x,y):x\in A\land y\in B\},
\end{equation*}
denoted
\begin{equation*}
 A\times B.  
\end{equation*}
To express that some set $f$ is a \defn{function} from $A$ to $B$, we can
just write
\begin{equation*}
  f:A\To B.
\end{equation*}
This means $f$ is a subset of $A\times B$ with a certain property,
namely, for every $a$ in $A$, there is a unique $b$ in $B$ such that
$(a,b)\in f$; then we write 
\begin{equation*}
  f(a)=b.  
\end{equation*}
The function $f$ can also be written as
\begin{equation*}
  x\longmapsto f(x).
\end{equation*}
The set of all functions from $A$ to $B$ can be denoted
\begin{equation}\label{eqn:map-set}
  B^A.
\end{equation}
(Some people write ${}^AB$ instead.)

Let $\varN$ be the set of \defn{natural numbers}:
\begin{equation*}
  \varN = \{0,1,2,3,\dots\}
        = \{0,0',0'',0''',\dots\}.
\end{equation*}
It is notationally convenient to treat $0$ as $\emptyset$, and $n'$ as
$n\cup\{n\}$.  Then 
\begin{equation*}
  n=\{0,\dots,n-1\}
\end{equation*}
for all $n$ in $\varN$.  Under this understanding of the natural
numbers, the \defn{$n$th Cartesian power} of $A$ is precisely
\begin{equation*}
  A^n,
\end{equation*}
in the notation introduced on Line~\eqref{eqn:map-set} above.
Thus, the $n$th Cartesian power of $A$ is the set of functions from
$n$ to $A$.  An element of $A^n$ can be written as any one of
\begin{center}
\hfill
  $(a_0,\dots,a_{n-1})$,\hfill
  $i\longmapsto a_i,$\hfill
  $\tuple a$;\hfill\mbox{}
\end{center}
it can be called an \defn{(ordered) $n$-tuple} from $A$.
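The convention $0=\emptyset$ and $n'=n\cup\{n\}$ can likewise be checked mechanically; a minimal Python sketch (the helper name `succ` is ours):

```python
def succ(n):
    return frozenset(n | {n})     # n' = n ∪ {n}

zero = frozenset()                # 0 = ∅
N = [zero]
for _ in range(5):
    N.append(succ(N[-1]))

# n = {0, ..., n-1}: each natural number is the set of its predecessors,
# so in particular n has exactly n elements.
for i in range(len(N)):
    assert N[i] == frozenset(N[:i])
    assert len(N[i]) == i
```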
Note well that 
\begin{equation*}
  A^0=\{\emptyset\}=\{0\}=1; 
\end{equation*}
this is true even if $A$
is empty.  Also, every element of $A^1$ is $\{(0,a)\}$ for some $a$ in
$A$.  So we have a bijection
\begin{equation}\label{eqn:A1}
  x\longmapsto\{(0,x)\}
\end{equation}
from $A$ to $A^1$.
We may sometimes treat this bijection as an \defn{identification:}
that is, we may neglect to distinguish between $a$ and
$\{(0,a)\}$.

For any $m$ and $n$ in $\varN$, we have a bijection
\begin{equation}\label{eqn:concat}
  (\tuple x,\tuple y)\longmapsto \tuple x\concat\tuple y
\end{equation}
from $A^m\times
  A^n$ to $A^{m+n}$.
In this notation, $\tuple a\concat\tuple b$ is the $(m+n)$-tuple
\begin{equation*}
  (a_0,\dots,a_{m-1},b_0,\dots,b_{n-1}); 
\end{equation*}
this is the $(m+n)$-tuple 
 $\tuple c$ such
that
\begin{equation*}
  c_k=
  \begin{cases}
    a_k,&\text{ if }k<m;\\
    b_{k-m},&\text{ if }m\leq k<m+n.
  \end{cases}
\end{equation*}
We shall always treat the bijection on Line~\eqref{eqn:concat} as an
identification; in particular, we shall always write $(\tuple a,\tuple
b)$ instead of $\tuple a\concat\tuple b$.
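Treating $n$-tuples literally as functions from $n$ to $A$, the concatenation formula above can be sketched as follows (the helper names `tup` and `concat` are ours):

```python
def tup(*elts):
    # An n-tuple from A as a function (here, a dict) from {0,...,n-1} to A.
    return {i: a for i, a in enumerate(elts)}

def concat(a, b):
    # c_k = a_k if k < m, and b_{k-m} if m <= k < m + n.
    m, n = len(a), len(b)
    return {k: a[k] if k < m else b[k - m] for k in range(m + n)}

assert concat(tup('a', 'b'), tup('c')) == tup('a', 'b', 'c')
assert concat(tup(), tup('x')) == tup('x')   # A^0 contributes only the empty tuple
```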

An \defn{$n$-ary operation on $A$} is a function from $A^n$ to $A$.
The set of these can be denoted
\begin{equation*}
  A^{A^n}.
\end{equation*}
In particular, a $0$-ary or \defn{nullary} operation on $A$ is an element of
$A^1$; by the bijection in Line~\eqref{eqn:A1} then, we may identify a
nullary operation on $A$ with an element of $A$.  

An \defn{$n$-ary relation on $A$} is a subset of $A^n$; the set of
these is 
\begin{equation*}
  \pow{A^n}. 
\end{equation*}
In particular, a nullary relation on $A$ is a subset of $A^0$, that is, of $1$
(or $\{0\}$); so a nullary relation is $0$ or $1$.

An $n$-ary operation on $A$ is then a (certain kind of) subset of
$A^n\times A$, and this product can be identified with $A^n\times A^1$
and hence with $A^{n+1}$; so an $n$-ary operation on $A$ can be
thought of as an $(n+1)$-ary relation on $A$.  More precisely, if
$f:A^n\to A$, then one may refer to the $(n+1)$-ary relation
\begin{equation*}
  \{(\tuple x,f(\tuple x)):\tuple x\in A^n\}
\end{equation*}
as the \defn{graph} of $f$; but there is a bijection between graphs in
this sense and functions.

\section{Structures}\label{S:structures}

Our fundamental objects of study will be \tech{structures}.  The notion
of a structure provides a way to unify the treatment of many
mathematical ideas.  By our official definition,
a \defn{structure} is an ordered pair $(A,\interpretation)$, also
referred to as $\str A$, where:
\begin{mylist}
  \item
$A$ is a non-empty set, called the \defn{universe} of the structure;
\item
$\interpretation$ is a function, written also
  \begin{equation*}
    s\longmapsto s^{\str A},
  \end{equation*}
whose domain $\lang$ is called the \defn{signature} of the structure;
\item
$s^{\str A}$ is either an element of $A$ or an $n$-ary operation or
  relation on $A$ for some positive $n$, for each $s$ in $\lang$.
\end{mylist}
If $\lang=\{s_0,s_1,\dots\}$, then $\str A$ can be written
\begin{equation*}
  (A,s_0^{\str A}, s_{1}^{\str A},\dots).
\end{equation*}

\begin{examples}{The following are structures:}
  \label{examples:structures} 
    \item
  $(\varN,{}',0)$;
\item
a group $G$, or $(G,\cdot,{}\inv,1)$;
\item
an \emph{abelian} group $G$, or $(G,+,-,0)$;
\item
a unital ring $R$, or $(R,+,-,\cdot,0,1)$;
\item
the ring $\Z$, or $(\Z,+,-,\cdot,0,1)$;
\item
the field $\R$, or $(\R,+,-,\cdot,0,1)$;
\item
the two-element field $\F_2$, or $(\F_2,+,-,\cdot,0,1)$; see
\S~\ref{sect:prop};
\item
a partial order $(\Omega,\leq)$;
\item
the \emph{ordered} field $\R$, or $(\R,+,-,\cdot,0,1,\leq)$;
\item
a vector-space $V$ over a field $K$; here the signature of $V$ is
\begin{equation*}
  \{+,-,0\}\cup\{a\cdot{}:a\in K\}, 
\end{equation*}
where $a\cdot{}$ is the singulary
operation of multiplying by $a$;
\item
the \defn{power-set} structure on a non-empty set $\Omega$, namely
\begin{equation*}
  (\pow{\Omega},\cap,\cup,{}\comp,\emptyset,\Omega,\included); 
\end{equation*}
\item
the \defn{truth-}structure
\begin{equation*}
  (\B,\land,\lor,\lnot{}, 0,1,\models), 
\end{equation*}
where $\B=\{0,1\}$, and
  $\models$ is the binary relation $\{(0,0), (0,1), (1,1)\}$.   (The name
  `truth-structure' is not standard, as far as I know.)
\end{examples}

Note well that
$\B=\{\emptyset,\{\emptyset\}\}=\pow{\{\emptyset\}}=\pow 1$, and the
truth-structure \emph{is} the power-set structure on $1$.
\defn{Propositional logic} studies the truth-structure;
  \defn{model-theory} studies \emph{all} structures.
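That the truth-structure is the power-set structure on $1$ can be verified operation by operation; a sketch in Python (outside the formal text), under the encodings $0=\emptyset$ and $1=\{0\}$:

```python
empty = frozenset()              # 0 = ∅
one = frozenset([empty])         # 1 = {0}, so pow(1) = {0, 1} = B

as_set = {0: empty, 1: one}      # the identification of B with pow(1)
for x in (0, 1):
    assert as_set[1 - x] == one - as_set[x]          # ¬ is complementation
    for y in (0, 1):
        X, Y = as_set[x], as_set[y]
        assert as_set[min(x, y)] == X & Y            # ∧ is ∩
        assert as_set[max(x, y)] == X | Y            # ∨ is ∪
        assert (x <= y) == (X <= Y)                  # ⊨ is ⊆
```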


With $\interpretation$ as above in the structure $(A,\interpretation)$:
\begin{mylist}
  \item
$s^{\str A}$ is the \defn{interpretation} in $\str A$ of $s$;
\item
$s$ is a \defn{symbol} for $s^{\str A}$.
\end{mylist}
So $s$ is one of the following:
\begin{mylist}
  \item
a \defn{constant};
\item
an \defn{$n$-ary function-symbol} for some positive $n$ in $\varN$;
\item
an \defn{$n$-ary predicate}\footnote{Or \tech{relation-symbol.}} for some
positive $n$ in $\varN$.  
\end{mylist}

Since nullary operations on $A$ can be considered as elements of $A$,
a constant can be considered as a nullary function-symbol.

Here are some observations about our definition of \tech{structure}:
\begin{mylist}
  \item
I am following the old convention (used for example in
\cite{MR0409165}) of denoting the universe of a structure by a Roman
letter, and the structure itself by the corresponding Fraktur letter.
Recent writers (as in \cite{MR1924282} or
\cite{MR1800596}) use `calligraphic' letters, not Fraktur:
\begin{center}
  \begin{tabular}{r | c | c | c | c | c | c | c}
For a structure with universe: & $A$ & $B$ & $C$ & \dots & $M$ & $N$ &
\dots \\ \hline
\rule{0mm}{4mm}I write: & $\str A$ & $\str B$ & $\str C$ &
\dots & $\str M$ & $\str N$ & \dots\\ \hline 
\rule{0mm}{4mm}Others may write: & $\mathcal A$ & $\mathcal B$ &
$\mathcal C$ & \dots & $\mathcal M$ & $\mathcal N$ & \dots
  \end{tabular}
\end{center}
Another option (taken in \cite{MR94e:03002}) is to use an ordinary letter
like $A$ for a structure, and then $\operatorname{dom}(A)$ for its
universe.  (Here `dom' stands for \tech{domain}.)
Finally, one might not bother to make a typographical
distinction between a structure and its universe.  Indeed, as
suggested in the examples, the
distinction is not easy to make with standard structures like $\Z$ or
$\R$. 
\item
Similarly, it is not always easy or convenient to distinguish between
a symbol and its interpretation.  A \defn{homomorphism} from a group
$G$ to a group $H$ is usually described as a function $f$ from $G$ to
$H$ such that 
\begin{equation*}
  f(g_0\cdot g_1)=f(g_0)\cdot f(g_1)
\end{equation*}
for all $g_0$ and $g_1$ in $G$.  If we are trying to be precise, we should call
the groups $\str G$ and $\str H$, with group-operations $\cdot^{\str
  G}$ and $\cdot^{\str H}$ respectively, and we should say that $f$ is
such that
\begin{equation*}
  f(g_0\cdot^{\str G} g_1)=f(g_0)\cdot^{\str H} f(g_1)
\end{equation*}
for all $g_0$ and $g_1$ in $G$.  But writing this way soon becomes
tedious. 
\item
In a structure $(A,\interpretation)$, the universe $A$ and the
\defn{interpretation}-function $\interpretation$ work together to
provide interpretations of the
symbols in $\lang$ \emph{as} elements of, or operations or relations on,
a certain set, namely $A$ itself.  That's all a structure is:
something that provides a mathematical 
interpretation for certain symbols.  We shall develop this idea
later.  What makes model-theory interesting is that the same symbols
can have different interpretations.  For example, $\cdot$ in $\Z$ has
different properties from $\cdot$ in $\R$.  Here begins the
distinction between
\defn{syntax} \label{syntax} (abstract rules for working with
symbols) and \defn{semantics} (the meaning of the symbols; see also
Chapter~\ref{ch:prop}).  
\end{mylist}


\section{Propositional logic}
\label{sect:prop}

This section reviews propositional logic; but
the subject will be treated more deeply in Chapter~\ref{ch:prop}.

The signature of the so-called \tech{truth-structure} given among the
Examples~\ref{examples:structures} is
\mbox{$\{\land,\lor,\lnot,0,1,\models\}$.}  The symbol $\models$ is
here a binary predicate (later it will also have other uses).  The
other symbols are function-symbols; we shall call
them \defn{\propositional{} connectives.}\footnote{Or \tech{Boolean
    connectives.}}  We may use additional \propositional{}
connectives.
For example:
\begin{myenum}\setcounter{myenum}{-1}
\item
$0$ and $1$ are nullary connectives;
\item
$\lnot$ is a singulary\footnote{This word is more etymologically
  correct than the more common \Eng{unary.}} connective;
\item
$\land$, $\lor$, $\to$, $\iff$, and $\eor$ are binary connectives.
\end{myenum}
Each of these has a standard interpretation as an operation on $\B$.
These interpretations can be given by \defn{truth-tables:}
\begin{center}
  \begin{tabular}{c|c||c|c|c|c|c|c|c|c}
    $P$ & $Q$ & $0$ & $1$ & $\lnot P$ & $P\land Q$ & $P\lor Q$ &
    $P\to Q$ & $P\iff Q$ & $P\eor Q$ \\\hline
$0$&$0$&$0$&$1$&$1$&$0$&$0$&$1$&$1$&$0$\\
$1$&$0$&   &   &$0$&$0$&$1$&$0$&$0$&$1$\\
$0$&$1$&   &   &   &$0$&$1$&$1$&$0$&$1$\\
$1$&$1$&   &   &   &$1$&$1$&$1$&$1$&$0$
  \end{tabular}
\end{center}
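The truth-tables above can be checked mechanically; a sketch in Python (not part of the text), with $0$ and $1$ as Python integers:

```python
# Rows (P, Q) in the order of the table: (0,0), (1,0), (0,1), (1,1).
rows = [(0, 0), (1, 0), (0, 1), (1, 1)]

# The columns of the truth-table, transcribed from the text:
assert [1 - p          for p, q in rows] == [1, 0, 1, 0]  # ¬P
assert [p & q          for p, q in rows] == [0, 0, 0, 1]  # P ∧ Q
assert [p | q          for p, q in rows] == [0, 1, 1, 1]  # P ∨ Q
assert [(1 - p) | q    for p, q in rows] == [1, 0, 1, 1]  # P → Q
assert [1 - (p ^ q)    for p, q in rows] == [1, 0, 0, 1]  # P ↔ Q
assert [p ^ q          for p, q in rows] == [0, 1, 1, 0]  # P ⊕ Q
```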
Alternatively, we can first understand $\B$
as a two-element unital ring with addition- and multiplication-tables
\begin{center}
\hfill
  \begin{tabular}{c | cc}
$+$ & $0$ & $1$ \\ \hline
$0$ & $0$ & $1$ \\
$1$ & $1$ & $0$
  \end{tabular},\hfill
  \begin{tabular}{c | cc}
$\cdot$ & $0$ & $1$ \\ \hline
$0$ & $0$ & $0$ \\
$1$ & $0$ & $1$
  \end{tabular}.\hfill\mbox{}
\end{center}
Then $\B$ is the ring sometimes denoted $\Z_2$; it is a
\emph{field,} and as such can be denoted $\F_2$.  Then 
\begin{mylist}
  \item
$0$ and $1$ are symbols for themselves;
\item
$\eor$ is another symbol for addition; 
\item
$\land$ is another symbol for multiplication;
\item
the remaining connectives are thus:
\begin{center}
  \begin{tabular}{c|c}
symbol & interpretation\\\hline
$\lnot$ & $x\mapsto x+1$\\
$\lor$ & $(x,y)\mapsto x\cdot y+x+y$\\
$\to$ & $(x,y)\mapsto x\cdot y+x+1$\\
$\iff$ & $(x,y)\mapsto x+y+1$
  \end{tabular}
\end{center}
\end{mylist}
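The field presentation can be verified exhaustively: each connective's truth-table agrees with the stated polynomial over $\F_2$. A sketch in Python, reducing integers modulo $2$:

```python
def f2(n):
    return n % 2   # reduce an integer to its class in the two-element field

for x in (0, 1):
    assert f2(x + 1) == 1 - x                       # ¬x is x + 1
    for y in (0, 1):
        assert f2(x * y) == (x & y)                 # ∧ is multiplication
        assert f2(x + y) == (x ^ y)                 # ⊕ is addition
        assert f2(x * y + x + y) == (x | y)         # ∨ is xy + x + y
        assert f2(x * y + x + 1) == ((1 - x) | y)   # → is xy + x + 1
        assert f2(x + y + 1) == 1 - (x ^ y)         # ↔ is x + y + 1
```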
In general,
a \defn{signature} for propositional logic is a set of \propositional{}
connectives.  Let $\lang$ be such.  The \defn{(propositional)
  formulas} of $\lang$ are certain strings composed of:
\begin{mylist}
  \item
symbols from $\lang$;
\item
\defn{propositional variables} $P_0$, $P_1$, $P_2$, \dots
\end{mylist}
In particular:
\begin{mylist}
  \item
each variable is a formula.
\item
$*\sF_0\dotsb \sF_{n-1}$ is a formula, if $*$ is an $n$-ary connective
  from $\lang$, and the $\sF_i$ are formulas.  (If $n=0$, then $*$ by
  itself is a formula.)
\end{mylist}
(Why we \emph{can} define a set of propositional formulas this way will
be indicated in \S~\ref{sect:prop-form}.)
Usually, if $*$ is binary, then, instead of $*\sF_0\sF_1$, we write
\begin{equation}\label{eqn:infix}
  (\sF_0*\sF_1)
\end{equation}
(thus introducing new symbols: the two parentheses; but we usually do
not write the outermost set of parentheses in a formula).
A formula is \defn{$n$-ary} if\footnote{Alternatively, one may want to
  refer to a 
formula as $n$-ary if it contains no more than $n$ distinct
variables, without worrying about which variables those are.} its
variables belong to the set
$\{P_0,\dots,P_{n-1}\}$.  If a formula is $n$-ary, then its
\defn{arity} is $n$.  An $n$-ary formula $\sF$ can be written as
\begin{equation*}
  \sF(P_0,\dots,P_{n-1}). 
\end{equation*}

As each \propositional{} connective has a standard interpretation as
an operation on $\B$,
so every $n$-ary formula has a standard interpretation as such an
operation, in an obvious
way.  We can say then that the $n$-ary formula
\defn{represents} the $n$-ary operation that is its standard
interpretation.
Notationally, an $n$-ary formula $\sF$ will represent the function 
\begin{equation*}
  \tuple x\longmapsto \widehat \sF(\tuple x)
\end{equation*}
from $\B^n$ to $\B$.  Then the standard interpretations of formulas
can be defined as follows:
\begin{mylist}
  \item
If $k<n$, then the formula $P_k$ is an $n$-ary formula and, as
such, represents 
the operation
\begin{equation*}
  \tuple x\longmapsto x_k
\end{equation*}
from $\B^n$ to $\B$.
(This operation can be denoted $\widehat P_k$.)
\item
If $\sF_0$, \dots, $\sF_{n-1}$ are formulas, each of them
$k$-ary, and if $*$ is an $n$-ary \propositional{} connective in
$\lang$, then
the formula $*\sF_0\dotsb \sF_{n-1}$ represents the function
\begin{equation*}
  \tuple x\longmapsto g(\widehat \sF_0(\tuple x),\dots,\widehat
\sF_{n-1}(\tuple x))
\end{equation*}
from $\B^k$ to $\B$, where $g$ is the standard interpretation of $*$.
\end{mylist}
In particular, if $*$ is $n$-ary, then its standard interpretation is
$\widehat{\sG}$, where $\sG$ is the formula $*P_0\dotsb P_{n-1}$.  When
formulas are written with the notation of Line~\eqref{eqn:infix}, then
their interpretations can be given by truth-tables in the
style shown in the proof of Theorem~\ref{thm:to-not} below.
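The two clauses defining standard interpretations suggest a recursive evaluator. A minimal sketch in Python, with formulas encoded as variable indices or prefix tuples (an encoding of our own devising, not the text's):

```python
# A formula is a variable index k (standing for P_k) or a tuple
# (connective, subformula, ..., subformula) in the text's prefix notation.
OPS = {'¬': lambda p: 1 - p,
       '→': lambda p, q: (1 - p) | q,
       '∧': lambda p, q: p & q}

def interpret(formula, x):
    # P_k represents x ↦ x_k; the formula *F_0...F_{n-1} represents
    # x ↦ g(F_0(x), ..., F_{n-1}(x)), where g interprets *.
    if isinstance(formula, int):
        return x[formula]
    op, *subs = formula
    return OPS[op](*(interpret(s, x) for s in subs))

# The 1-ary formula P_0 → P_0 represents the constant function 1:
assert all(interpret(('→', 0, 0), (e,)) == 1 for e in (0, 1))
```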

The notion that a propositional formula represents an operation will be
developed further in the next chapter in case $\lang$ is
$\{\lnot,\to\}$.  We shall be able to restrict ourselves to this
signature, because it is \tech{adequate}.
In general, a signature for propositional logic is \defn{adequate} if,
for each $n$-ary operation $g$ on
$\B$, there is an $(n+k)$-ary formula $\sF$ of $\lang$ (for some $k$)
such that
\begin{equation*}
  g(\tuple e)=\widehat \sF(\tuple e,\tuple f)
\end{equation*}
for all $\tuple e$ in $\B^n$ and $\tuple f$ in $\B^k$: that is, every
operation on $\B$ is \defn{represented} in $\lang$ by some formula.  
We allow the arity of $\sF$ here to be larger than that of $g$,
since we 
want it to be possible for signatures without nullary connectives to
be adequate. 

The following basic tool for establishing adequacy of a signature was
proved by Emil Post in 1921 \cite{Post}: 

\begin{lemma}\label{lem:Post}
  A signature of propositional logic is adequate, provided that, in
  this signature, the following operations are represented:
  \begin{mylist}
    \item
the constant functions $0$ and $1$;
\item
the ternary function $f$ given by
\begin{center}
  \begin{tabular}{c|c|c|c}
$e_0$ & $e_1$ & $e_2$ & $f(\tuple e)$\\ \hline
$0$&$0$&$0$&$0$\\
$1$&$0$&$0$&$1$\\
$0$&$1$&$0$&$0$\\
$1$&$1$&$0$&$1$\\
$0$&$0$&$1$&$0$\\
$1$&$0$&$1$&$0$\\
$0$&$1$&$1$&$1$\\
$1$&$1$&$1$&$1$
  \end{tabular}.
\end{center}
  \end{mylist}
\end{lemma}

\begin{proof}
  We use induction on the arity of operations.  The nullary operations
  are represented in the signature by assumption.
  Suppose all
  $n$-ary operations are represented, and $g$ is $(n+1)$-ary.  If
  $e\in\B$, let
  $h_e$ be the $n$-ary operation $\tuple x\mapsto g(\tuple x,e)$.
  By definition,
  \begin{equation*}
    f(e_0,e_1,e_2)=
    \begin{cases}
      e_0,&\text{ if }e_2=0;\\
      e_1,&\text{ if }e_2=1.
    \end{cases}
  \end{equation*}
Then for all $\tuple d$ in $\B^n$, we have
\begin{equation*}
  g(\tuple d,e)=h_e(\tuple d)=f(h_0(\tuple d),h_1(\tuple d),e).
\end{equation*}
Thus the function $g$ is
  \begin{equation*}
    (\tuple x,y)\longmapsto f(h_0(\tuple x),h_1(\tuple x),y).
  \end{equation*}
By inductive hypothesis, each of the functions $h_e$ is represented by
some formula
\begin{equation*}
  \sH_e(P_0,\dots,P_{n-1},\dots); 
\end{equation*}
by assumption, $f$ is represented by
some formula $\sF(P_0,P_1,P_2,\dots)$.  Hence
$g$ is represented by
\begin{equation*}
  \sF(\sH_0(P_0,\dots,P_{n-1},\dots),
  \sH_1(P_0,\dots,P_{n-1},\dots),P_n,\dots).
\end{equation*}
By induction, the operations of all arities are represented. 
\end{proof}
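The induction in the proof is effectively an algorithm: given any operation $g$ on $\B$, it builds a formula from the constants and the ternary $f$. A sketch in Python (the names `represent` and `evaluate` are ours):

```python
from itertools import product

def represent(g, n):
    # Post's induction: an n-ary operation g on B, given as a Python
    # function of an n-tuple, is turned into a formula over the symbols
    # '0', '1', and the ternary connective 'f' of the lemma.
    if n == 0:
        return '1' if g(()) == 1 else '0'
    h0 = represent(lambda x: g(x + (0,)), n - 1)   # h_0: x ↦ g(x, 0)
    h1 = represent(lambda x: g(x + (1,)), n - 1)   # h_1: x ↦ g(x, 1)
    return ('f', h0, h1, n - 1)                    # f(H_0, H_1, P_{n-1})

def evaluate(F, x):
    if F in ('0', '1'):
        return int(F)
    if isinstance(F, int):                         # a variable P_k
        return x[F]
    _, A, B, C = F                # f(e0, e1, e2) is e0 if e2 = 0, else e1
    return evaluate((A, B)[evaluate(C, x)], x)

g = lambda x: x[0] ^ x[1]                          # a sample binary operation
F = represent(g, 2)
assert all(evaluate(F, x) == g(x) for x in product((0, 1), repeat=2))
```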

\begin{theorem}\label{thm:to-not}
The propositional signature  $\{\to,\lnot{}\}$ is adequate.
\end{theorem}

\begin{proof}
By the lemma, it is enough to observe:
\begin{mylist}
  \item
  $P_0\to P_0$ represents $1$;
\item
  $\lnot(P_0\to P_0)$ represents $0$;
\item
the operation $f$ as in the lemma is
  represented by the formula 
  \begin{equation*}
      \lnot((\lnot P_2\to P_0)\to \lnot(P_2\to P_1)),
  \end{equation*}
  since its truth-table is
  \begin{center}
    \begin{tabular}{c|c|c|c|c|c|c|c|c|c}
$\lnot$ & $((\lnot$ & $P_2$ & $\to$ & $P_0)$ & $\to$ & $\lnot$ & $(P_2$ &
      $\to$ & $P_1))$\\ \hline
$0$&$1$&$0$&$0$&$0$&$1$&$0$&$0$&$1$&$0$\\
$1$&$1$&$0$&$1$&$1$&$0$&$0$&$0$&$1$&$0$\\
$0$&$1$&$0$&$0$&$0$&$1$&$0$&$0$&$1$&$1$\\
$1$&$1$&$0$&$1$&$1$&$0$&$0$&$0$&$1$&$1$\\
$0$&$0$&$1$&$1$&$0$&$1$&$1$&$1$&$0$&$0$\\
$0$&$0$&$1$&$1$&$1$&$1$&$1$&$1$&$0$&$0$\\
$1$&$0$&$1$&$1$&$0$&$0$&$0$&$1$&$1$&$1$\\
$1$&$0$&$1$&$1$&$1$&$0$&$0$&$1$&$1$&$1$
    \end{tabular}.
  \end{center}
\end{mylist}
(Note that the last formula is \defn{equivalent} to $(\lnot P_2\to
  P_0)\land(P_2\to P_1)$.) 
\end{proof}
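The truth-table computation in the proof can be replayed mechanically; a sketch in Python checking the displayed formula against $f$ on all of $\B^3$:

```python
from itertools import product

def neg(p):     return 1 - p
def impl(p, q): return (1 - p) | q

def f(e0, e1, e2):            # the ternary operation of the lemma
    return e0 if e2 == 0 else e1

# ¬((¬P2 → P0) → ¬(P2 → P1)) has the same truth-table as f:
for p0, p1, p2 in product((0, 1), repeat=3):
    assert neg(impl(impl(neg(p2), p0), neg(impl(p2, p1)))) == f(p0, p1, p2)
```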




\section{Syntax and semantics}

We introduce propositional connectives as a way to understand, and to
make precise, certain parts of ordinary language: namely,
conjunctions and other `structural' words like \Eng{and, or, not,} and
\Eng{if\dots then.} 
For example, we interpret the connectives
$\lnot$ and $\to$ as in the truth-tables
\begin{center}
\hfill
  \begin{tabular}{c|c}
$\lnot$ & $P_0$ \\ \hline
$1$ & $0$ \\
$0$ & $1$
    \end{tabular}
\hfill
\begin{tabular}{c|c|c}
$P_0$ & $\to$ & $P_1$\\ \hline
$0$ & $1$ & $0$\\
$1$ & $0$ & $0$\\
$0$ & $1$ & $1$\\
$1$ & $1$ & $1$
\end{tabular}
\hfill\mbox{}
\end{center}
because:
\begin{mylist}
  \item
we think\footnote{It is possible to think the other way, where
  $0$ is truth and $1$ is falsity; this is done, for example, in
  \cite[Ch.~4, Exercise~3.7, p.~178]{MR83e:04002}.} of $0$ as falsity
  and $1$ as truth;
\item
we take $\lnot$ to stand for a word like \Eng{not} that \emph{negates}
sentences, and we take $\to$ to stand for the locution \Eng{if\dots then;}
\item
in our mathematical writing at any rate, 
\begin{myitem}
  \item
a claim will be true
if and only if its negation is false, and 
\item
an implication
\Eng{If $A$, then $B$} will be false if and only if $A$ is true, but
$B$ is false.
\end{myitem}
\end{mylist}
A \tech{tautology} will be a propositional formula that is `always
true'; that is, the interpretation of an $n$-ary tautology will be the
constant $n$-ary function $\tuple x\mapsto 1$.  In particular, the
propositional formula $P_0\to P_0$ will be a tautology.  This will be
so, because the
sentence \Eng{If $A$, then $A$} is always true; also, from the
truth-table for $P_0\to P_1$ given above, we can construct the truth-table
\begin{center}
  \begin{tabular}{c|c|c}
$P_0$ & $\to$ & $P_0$\\ \hline
$0$ & $1$ & $0$\\
$1$ & $1$ & $1$    
  \end{tabular}.
\end{center}
In short, tautology is a \defn{semantic} notion: it concerns the
`meaning' of formulas.  At least, the notion of tautology concerns the
meaning of formulas when they are thought of as forms of sentences 
of ordinary language.  (The etymology of \Eng{semantic} is discussed
below.)  To express that a
formula is a tautology, we shall write in front of it the symbol
\begin{equation*}
  \models.
\end{equation*}
Whenever a notion is based directly on truth-tables, we shall
consider it to be semantic.
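Checking for tautology amounts to a finite computation over $\B^n$; a sketch in Python (the helper `is_tautology` is hypothetical):

```python
from itertools import product

def is_tautology(F, n):
    # F is an n-ary formula, given here as a Python function B^n -> B;
    # F is a tautology iff its truth-table has only 1s.
    return all(F(*e) == 1 for e in product((0, 1), repeat=n))

impl = lambda p, q: (1 - p) | q
assert is_tautology(lambda p: impl(p, p), 1)         # ⊨ P0 → P0
assert not is_tautology(lambda p, q: impl(p, q), 2)  # P0 → P1 is not
```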

Gottlob Frege is credited with the first precise formulation of an
alternative to the truth-table method for establishing tautologies.
This formulation is given in the \emph{Begriffsschrift}
\cite{MR0263601} of 1879.  A bit of Frege's peculiar notation 
(discussed below in \S~\ref{sect:notation}) survives:  To express that
a formula is a tautology found by Frege's method, we shall write in
front of it the symbol
\begin{equation*}
  \proves.
\end{equation*}
We shall refer to this method as \defn{syntactic,} because it directly
involves the
way that symbols are arranged into formulas, without consideration of
their possible interpretations.  (Again, etymology is discussed
below.)  Forty-two years later, in 1921, in the same paper cited in
the last section as the source of Lemma~\ref{lem:Post}, Emil
Post published a proof  \cite[p.~169]{Post} that
the syntactical method can establish
\emph{all} tautologies:
\begin{equation*}
  \proves\sF\Iff {}\models\sF.
\end{equation*}
The syntactical method is the method of \defn{formal proof.}  It is a
mechanical method, in the sense that a machine can recognize when the
method has been applied successfully.  

The method of formal proof does have its foundation in semantic
considerations, if only because it is designed to establish semantic
facts.  Also, the truth-table method itself is mechanical, so it too
seems to have a right to be called syntactic.  Thus, our
distinction between the syntactic and the semantic is somewhat
arbitrary.  The distinction will be more
profound in the context of first-order logic.

The remainder of this section treats\footnote{With the help of the
  Liddell--Scott lexicon \cite{LSJ}.} the Greek origins of our words
\Eng{syntax} and \Eng{semantics.}

The Greek etymon for
\Eng{syntax,} namely \Gk{<h s'untaxic,
  -ewc,} refers originally to an \emph{arranging,} a \emph{putting
  together in order,} 
especially of soldiers.  In one passage of Plato's \emph{Republic}
\cite[591d]{Shorey}, it is \emph{wealth} that may be arranged.
In that passage, the character of Socrates describes the wise man: 
\begin{quote}
  \Gk{O>uko~un, e>~ipon, ka`i t`hn >en t~h| t~wn qrhm'atwn kt'hsei
  \Gkemph{s'untax'in} te ka`i sumfwn'ian?  ka`i t`on >'ogkon to~u
  pl'hjouc o>uk >ekplhtt'omenoc <up`o to~u t~wn poll~wn makarismo~u
  >'apeiron a>ux'hsei, >ap'eranta kak`a >'eqwn?}

And will it not also be so, I said, with the \Gkemph{arranging} and
harmonizing of his 
possessions?  He will not let himself be dazzled by the felicitations
of the multitude and pile up the mass of his wealth without measure,
involving himself in measureless ills, will he?\footnote{The
  translation is adapted from Shorey's \cite{Shorey}.}
\end{quote}
The arranging implied by \Gk{s'untaxic} can also be \emph{grammatical,}
a putting together of \emph{words.}

The source of \Eng{semantics} is the Greek
adjective \Gk{shmantik'oc, -'h, -'on,} meaning \emph{significant} or
\emph{meaningful.}  Related words include the verb \Gk{shma'inw}
(signify) and the noun \Gk{t`o shme~ion} (sign).  In \emph{On
  Interpretation} \cite[16a19,
  b5]{Aristotle-LI}, Aristotle defines nouns and verbs:
\begin{quote}
  \Gk{>'Onoma m`en o>~un >est`i fwn`h \Gkemph{shmantik`h} kat`a sunj'hkhn
  >'aneu qr'onou, <~hc mhd`en m'eroc >est`i \Gkemph{shmantik`on}
  keqwrism'enon;

<R~hma d'e >esti t`o \Gkemph{prosshma~inon} qr'onon, o<~u m'eroc o>ud`en
  \Gkemph{shma'inei} qwr'ic, ka`i >'estin >ae`i t~wn 
kaj> <et'erou legom'enwn \Gkemph{shme~ion.}}

A noun is a sound, \Gkemph{meaningful} by convention, without [grammatical]
tense, of which no part separately is \Gkemph{meaningful.}

A verb is [a
  sound] \Gkemph{signifying} a tense besides; no part of it is
  \Gkemph{mean}\-\Gkemph{ingful}
%meaningful
separately; it is always a \Gkemph{sign} of \emph{things said of} something.
\end{quote}
The more basic \Gk{t`o s~hma, -atoc,} meaning \emph{sign, mark,
  token,} appears in Homer (\emph{Iliad,} X.465--468):
\begin{quote}
\Gk{<`Wc >~ar'' >ef'wnhsen, ka`i >ap`o <'ejen <uy'os'' >ae'irac\\
j~hken >an`a mur'ikhn; d'eelon d' >ep`i \Gkemph{s~hm'a} t'' >'ejhke\\
summ'aryac d'onakac mur'ikhc t'' >erijhl'eac >'ozouc,\\
m`h l'ajoi a>~utic >i'onte jo`hn di`a n'ukta m'elainan.}

With these words, he took the spoils and set them upon a tamarisk
tree, and they make a \Gkemph{mark} at the place by pulling up reeds
and gathering boughs of tamarisk, that they might not miss it as they
came back through the fleeting hours of darkness.\footnote{Text and Samuel
  Butler's translation are from \url{http://www.perseus.tufts.edu}.}
\end{quote}
