Symbolic logic

This is an old revision of this page, as edited by EdH (talk | contribs) at 16:23, 9 June 2003 (Merge formal logic).

Symbolic logic is primarily concerned with the structure of reasoning. Logicians translate natural language sentences into a symbolic notation, which allows them to investigate complex relationships among the elements of the sentences. Concepts like validity, entailment, consistency and contradiction can be rigorously defined in a formal, symbol-based language, enabling logicians to compose proofs about whether these features are present or absent in defined circumstances.

Symbolic logic is also known as formal logic and mathematical logic (or simply as "logic" by mathematicians). The term "symbolic logic" is sometimes used in contrast with philosophical logic, the latter being concerned with formal descriptions of aspects of natural language that resist easy formalization. On the other hand, "mathematical logic" can refer to a particular discipline within mathematics, having its origin in the need to express the content of the work of Kurt Gödel. That represented a certain shift of emphasis in the idea of metamathematics: symbolic logic studied mathematically is simply further mathematics, even if it admits the interpretation that it is also about mathematics.

The subject was originally described by philosophers (see logic and history of logic) using natural language (words). Because words are often too ambiguous and vague to describe the process of reasoning rigorously, formal language was developed. It is largely used to describe sets (see set theory) and proofs (see proof theory). It has a basic pedagogic value in making clear what a (formal) proof is, and the sense in which set theory provides an adequate foundation for mathematical abstractions.

Logical statements are to be expressed as symbolic strings, in a precise, compact and unambiguous notation (sometimes described as a logical calculus), similar to notations used in mathematics. Axioms, statements which are accepted without proof, are identified, and the valid rules of argumentation (transformation rules) are defined.


The claim made for mathematics, and largely justified during the twentieth century, was that mathematical reasoning can in principle be formalized using a formal language. That is, a complete formalization could be carried out within symbolic logic, if really required. It is not claimed that it has been carried out in detail; that carrying it out would have any particular value except in cases where there is some dispute about what has been proved; or that it should be carried out. Informal usage still dominates the way mathematicians discuss their subject, and mathematical papers use a semi-formal style.

The development of computers raised further questions about what could be done in practice with mathematics as a formal system: for example the machine checking of proofs, as well as the idea that computers could find proofs (automated theorem proving), or store proofs too lengthy to be written out by hand. These ideas depend on an initial formalization.

Proofs by humans can be computer-assisted, posing a different problem: does one simply accept a repeatable machine calculation as 'proof', when in principle it is a statement about a complex computer architecture? One can hardly expect a proof of correctness of the compiler.


The first task in formalization may be to establish some standard form for mathematical statements. They will have to be treated in a uniform way; and the everyday perception of their definiteness must be supported. The stock of mathematical propositions is commonly supposed to have some distinctive properties (for example in relation to false dilemma).

The scheme still used for putting mathematical propositions into a standard formal shape goes back to Gottlob Frege. It has been so successful that it has been adopted elsewhere (at least within analytic philosophy, and as a consequence in some other linguistic discussions) as the definitive notion of propositions, concerning some basic set of 'constants'. The syntax of propositions is both symbolic (meaning notations are used for everything) and formal (meaning that the rules for construction are quite explicit and exhaustive). We expect that the syntax will undertake for us the task of narrowing down amongst strings of symbols to those that could in principle be statements. That is, our propositions are supposed at least grammatically (as we would see it) to be of the type that could in principle be deemed true or false.

Details of such a language are given at first-order predicate calculus.


Here 'first-order' refers to what quantification can be done (i.e. the scope of the quantified variables). In ordinary mathematical practice in abstract algebra one wants to quantify in various ways: over (say) elements of a given group; or over subgroups of a group; or over all groups. These have different logical status, only the first being on the face of it first-order. The first-order theory of groups isn't the same as what is meant by group theory: it has less expressive power. In return it is a theory that has a good model theory. The formalization of existing mathematics in first-order set theory was the initial successful step.


Some samples of symbolic representation and notation.

Lowercase italic letters p, q and r are conventionally used to denote propositions like:

p: 1 + 2 = 3

This defines p as the proposition "1 + 2 = 3", which is true.
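A proposition, having a definite truth value, can be modelled directly as a boolean. The sketch below (variable name chosen to match the text) is purely illustrative:

```python
# The proposition "1 + 2 = 3", modelled as a Python boolean.
p = (1 + 2 == 3)
print(p)  # True
```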

Two propositions can be combined using a conjunction, disjunction or conditional; these are called binary logical operators. Such combined propositions are called compound propositions. For example,

p: 1 + 1 = 2 and "logic is the study of reasoning."

In this case, and is a conjunction. Notice how the two propositions can differ totally from each other.
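The three binary operators can be sketched with Python's boolean operations; note that the classical conditional p → q is definable as ¬p ∨ q. The variable names are illustrative:

```python
p = (1 + 1 == 2)   # an arithmetic proposition
q = True           # "logic is the study of reasoning", taken here as true

conjunction = p and q          # p ∧ q
disjunction = p or q           # p ∨ q
conditional = (not p) or q     # p → q, classically defined as ¬p ∨ q
print(conjunction, disjunction, conditional)  # True True True
```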

In mathematics and computer science, one may want to state a proposition depending on some variables like:

p: n is an odd integer.

This proposition can be either true or false according to the variable n.
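A propositional function like this corresponds naturally to a function returning a boolean, truth depending on the argument. A minimal sketch:

```python
def p(n):
    """The propositional function 'n is an odd integer'."""
    return n % 2 == 1

print(p(3))  # True
print(p(4))  # False
```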

A proposition 'with free variables' is called a propositional function with domain of discourse D. To form an actual proposition, one follows Frege's improvements in representing quantification. "For every n" or "for some n" can be specified by quantifiers: either the universal quantifier or the existential quantifier. For example,

for every n in D, P(n).

This can also be written as:

∀n P(n).

When there are several variables free, the standard situation in mathematical analysis since Weierstrass, the quantifications for all ... there exists or there exists ... such that for all (and more complex analogues) can be expressed.
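Over a finite domain of discourse, universal and existential quantification (including the nested ∀…∃ pattern mentioned above) reduce to checking every element, which Python's `all` and `any` express directly. Domain and predicates below are illustrative:

```python
D = range(1, 11)   # a finite domain of discourse, for illustration

def P(n):
    return n > 0

# Universal quantification: for every n in D, P(n)
forall = all(P(n) for n in D)

# Existential quantification: for some n in D, n is even
exists = any(n % 2 == 0 for n in D)

# Nested quantifiers, as in analysis: for all n in D there exists m in D
# with m > n. False here, since nothing in D exceeds 10.
nested = all(any(m > n for m in D) for n in D)
print(forall, exists, nested)  # True True False
```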


The requirements on true and false as so-called truth-values of propositions are classical. According to Aristotle we admit no contradiction, so that no proposition is both true and false. Also, according to the law of excluded middle, a proposition must be either true or false. In combination, these requirements yield exactly two mutually exclusive truth values: each proposition is either true or false, but never both at the same time.

Of course, if they could be independently true or false, there would be no subject of logic: there are some constraints and formal logic is supposed to tell one as much as possible about those, in the abstract.
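With only two truth values, both classical laws can be checked exhaustively; the whole "truth table" has just two rows. A brute-force sketch:

```python
# Check the two classical laws over every truth value.
for p in (True, False):
    assert p or not p          # law of excluded middle: p or not-p
    assert not (p and not p)   # law of non-contradiction: not (p and not-p)
print("both laws hold for every truth value")
```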

This picture depends both on the possibility and limitations of formalization.

For example, for the following propositions:

(a) "11 is a prime number."
(b) "The discrete mathematics course is a nightmare for computer science majors."
(c) "Taquitos kill 5 billion people each year."

One can say that: (a) is true, (b) is either true or false depending on your perception and (c) is false. But only (a) is subject to a completely formal regime of logic. Here (b) uses a subjective concept that one wouldn't expect to have formal status; and (c) is false empirically rather than on logical grounds alone.

Philosophers have certainly tried to extend formalization further than mathematics; as have those working on Artificial Intelligence by the symbolic route.

From the point of view of the history of mathematical logic, the classical laws on truth values have presented serious problems. Until the work of Kurt Gödel, the attitude to proof theory was the naive one, that the possibility of contradiction in central areas of mathematics was 'inconceivable' and was a technical matter, to which a solution in principle could be found. And until the content of the constructivist view of intuitionistic mathematics was understood, excluded middle was considered essential to preserve the 'definite' character of mathematical statements. The attitudes of David Hilbert on these matters might be characterised as 'good scientist, poor mathematician': his work clarified the foundational questions to the point where he could be proved wrong. These days with the help of model theory one has a better idea of the semantic content of mathematical propositions.


There are many different systems of symbolic logic. Any system of symbolic logic has a number of components: the set of acceptable sentences, called well-formed formulas (or wffs); transformation rules for deriving new formulas from one or more initial formulas; the set of axioms, which is a subset of the set of wffs. The sets of wffs and axioms can be finite or infinite, so long as they are recursive; i.e. so long as there exists a procedure for determining whether any given sentence is a wff or axiom, which could be carried out in a finite number of steps by a device such as a Turing machine (sometimes, it is enough to require that these sets be recursively enumerable).
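The requirement that wff-hood be decidable in finitely many steps is easy to illustrate: for a small propositional language, a recursive check over the formula's structure terminates on every input. The grammar and names below are illustrative, not any standard system:

```python
# A recursive well-formed-formula check for a tiny propositional language.
# Formulas are nested tuples: a proposition letter is a string;
# ("not", f) and ("and", f, g) etc. build compound formulas.

LETTERS = {"p", "q", "r"}
UNARY = {"not"}
BINARY = {"and", "or", "implies"}

def is_wff(f):
    if isinstance(f, str):
        return f in LETTERS
    if isinstance(f, tuple):
        if len(f) == 2 and f[0] in UNARY:
            return is_wff(f[1])
        if len(f) == 3 and f[0] in BINARY:
            return is_wff(f[1]) and is_wff(f[2])
    return False

print(is_wff(("and", "p", ("not", "q"))))  # True
print(is_wff(("and", "p")))                # False: 'and' needs two subformulas
```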

The theorems of the system are the axioms and all the wffs that can be derived from the axioms by a finite number of applications of the transformation rules. In systems of natural deduction the set of axioms is empty and the transformation rules correspond to patterns of reasoning such as modus ponens whose validity, or truth-preserving nature, is easily recognized.
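Deriving theorems by finitely many rule applications can be sketched as forward chaining: start from the axioms and apply modus ponens (from A and "A implies B", derive B) until nothing new appears. Formulas and axioms below are illustrative:

```python
# A minimal forward-chaining sketch of theorem derivation via modus ponens.
# A formula is a string (an atom) or a tuple ("implies", antecedent, consequent).

axioms = {"p", ("implies", "p", "q"), ("implies", "q", "r")}

def theorems(axioms):
    derived = set(axioms)
    changed = True
    while changed:
        changed = False
        for f in list(derived):
            # modus ponens: f is "A implies B" and A is already derived
            if isinstance(f, tuple) and f[0] == "implies" and f[1] in derived:
                if f[2] not in derived:
                    derived.add(f[2])
                    changed = True
    return derived

print("r" in theorems(axioms))  # True: p and p→q give q; q and q→r give r
```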

For a symbolic logic system to be useful it helps to have two additional properties; soundness and completeness. These properties relate the system of symbolic logic to some underlying model. The property of soundness states that if a theorem can be proven in the symbolic system then it is also true in the model. The property of completeness states that if the theorem is true in a model, then there is some proof for it in the associated symbolic logic. A symbolic logic is said to be determined by a model if it is both sound and complete. Logics of this kind are useful for systems of mechanical theorem proving.

Metalogic is the study of different systems of symbolic logic, and their properties. We can not only prove statements in a logical system; we can also prove statements about logical systems. The most important results along these lines are Gödel's incompleteness theorems, which essentially show that every sufficiently powerful consistent logical system contains sentences which can be neither proved nor refuted from within the system.

There are three main types of logical systems which are used: propositional calculus, predicate calculus, and modal logic.

Various alternatives to the classical system have been studied, including:

  • many-valued -- permits sentences to take intermediate truth values in addition to true and false
  • paraconsistent -- permits inconsistent sentences. Does not have ex contradictione quodlibet (from a contradiction anything follows)
  • infinitary -- permits sentences to be infinitely long
  • intuitionistic -- system of logic used in mathematical intuitionism
  • relevant -- has only relevant implication
  • substructural -- systems lacking some of the classical structural rules, and so weaker than the classical system
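One deviation from the list above can be made concrete: in Kleene's strong three-valued logic, a third value U ("unknown") sits between false and true, conjunction takes the minimum under the ordering F < U < T, and negation swaps T and F while fixing U. The encoding below is an illustrative sketch:

```python
# Kleene's strong three-valued logic, with F < U < T encoded as 0 < 1 < 2.
F, U, T = 0, 1, 2

def k3_and(a, b):
    return min(a, b)       # conjunction is the minimum truth value

def k3_not(a):
    return 2 - a           # negation reflects the ordering; U is a fixed point

print(k3_and(T, U) == U)   # True: T and U give U
print(k3_not(U) == U)      # True: not-U is still U
```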

Predicate logic extends propositional calculus to deal with predication and quantification. Again we have a classical system, and various deviations from this system. Most of the different systems of propositional calculus can be extended in this manner to produce systems of predicate calculus. There are two main varieties of systems of predicate calculus: first-order predicate calculus, which allows quantification over objects and is the standard logic used in mathematics (see set theory); and higher-order predicate calculus, which permits in addition quantification over predicates and predication of predicates (see HOL).
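Over a finite domain the contrast between the two varieties can be sketched: first-order quantification ranges over objects, while higher-order quantification over (one-place) predicates can be modelled as quantification over all subsets of the domain. Domain and predicates below are illustrative:

```python
from itertools import chain, combinations

D = {0, 1, 2}   # a tiny finite domain, for illustration

# First-order: there exists x in D with x > 1
first_order = any(x > 1 for x in D)

# Higher-order: there exists a predicate P (a subset of D) holding of
# exactly the even elements of D.
subsets = chain.from_iterable(combinations(sorted(D), r) for r in range(len(D) + 1))
higher_order = any(set(P) == {x for x in D if x % 2 == 0} for P in subsets)

print(first_order, higher_order)  # True True
```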

There exist proofs of the soundness and completeness of both propositional calculus and first-order predicate calculus (see Gödel's completeness theorem). Second-order and higher-order logics are not complete, however; in other words, in all second- and higher-order logics there are sentences which are true in every model but are not provable within the symbolic system.

The term modal logic has two meanings. In a restricted sense, it means the logic of necessity and possibility. But more broadly, it also includes other logics with a similar structure: deontic logics (which deal with ideas of permission and obligation), temporal logics (which deal with the relationships of past, present and future between events), and doxastic logic (which deals with belief). What all these systems have in common is that they involve the application of unary operators (called modal operators) to statements of either propositional or predicate calculus. It is in this broader sense that the term is used here.

The various systems of modal logic include K, M, S4, S5 and B. The systems differ in having different axioms.
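The standard semantics shared by these systems is Kripke's possible-worlds semantics: □p ("necessarily p") holds at a world w exactly when p holds at every world accessible from w. The worlds, accessibility relation and valuation below are an illustrative sketch:

```python
# A small Kripke model: worlds, an accessibility relation, and the set of
# worlds where the proposition p is true. All names are illustrative.
worlds = {"w1", "w2", "w3"}
access = {"w1": {"w2", "w3"}, "w2": {"w3"}, "w3": set()}
holds_p = {"w2", "w3"}

def box_p(w):
    # "necessarily p" at w: p holds at every world accessible from w
    return all(v in holds_p for v in access[w])

print(box_p("w1"))  # True: p holds at both w2 and w3
print(box_p("w3"))  # True: vacuously, since w3 accesses no worlds
```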

See also: fuzzy logic