Context-free grammar

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by Jan Hidders (talk | contribs) at 11:08, 10 August 2002 (derivations and syntax trees). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

A context-free grammar is a formal grammar in which every production rule is of the form

V -> w

where V is a nonterminal symbol and w is a string consisting of terminals and/or nonterminals. The term "context-free" comes from the fact that the nonterminal V can always be replaced by w, no matter in what context it occurs. A formal language is context-free if there is a context-free grammar which generates it.

Context-free grammars are important because they are powerful enough to describe the syntax of programming languages; in fact, almost all programming languages are defined via context-free grammars. On the other hand, context-free grammars are simple enough to allow the construction of efficient parsing algorithms which for a given string determine whether and how it can be generated from the grammar. See LR parser and LL parser for examples.

Examples

Example 1

A simple context-free grammar is

S -> aSb | ε

where | is used to separate different options for the same nonterminal and ε stands for the empty string. This grammar generates the language {a^n b^n : n ≥ 0}, which is not regular.
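A membership test for this language can mirror the grammar directly. The following is a minimal sketch (the function name is my own): each loop iteration undoes one application of S -> aSb, and the derivation ends with S -> ε exactly when the empty string remains.

```python
def in_anbn(s):
    # Peel one 'a' from the front and one 'b' from the back per step,
    # mirroring one application of S -> aSb; reaching the empty string
    # corresponds to the final step S -> ε.
    while s:
        if s[0] == 'a' and s[-1] == 'b':
            s = s[1:-1]
        else:
            return False
    return True
```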

Example 2

Here is a context-free grammar for syntactically correct infix algebraic expressions in the variables x, y and z:

S -> T + S | T - S | T
T -> T * T | T / T | ( S ) | x | y | z

This grammar can for example generate the string "( x + y ) * x - z * y / ( x + x )".
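A recognizer for this grammar can be sketched as follows (all names are my own). Note that the rules for T are left-recursive as given; the sketch rewrites them as iteration, a standard transformation that preserves the recognized language. Tokens are assumed to be separated by spaces, as in the example string above.

```python
def parse_S(toks, i):
    # S -> T + S | T - S | T
    i = parse_T(toks, i)
    if i < len(toks) and toks[i] in '+-':
        return parse_S(toks, i + 1)
    return i

def parse_T(toks, i):
    # T -> T * T | T / T | ( S ) | x | y | z
    # (the left recursion is handled as iteration over factors)
    i = parse_factor(toks, i)
    while i < len(toks) and toks[i] in '*/':
        i = parse_factor(toks, i + 1)
    return i

def parse_factor(toks, i):
    if i < len(toks) and toks[i] == '(':
        i = parse_S(toks, i + 1)
        if i < len(toks) and toks[i] == ')':
            return i + 1
        raise SyntaxError("expected )")
    if i < len(toks) and toks[i] in ('x', 'y', 'z'):
        return i + 1
    raise SyntaxError("expected operand")

def recognizes(s):
    toks = s.split()
    try:
        return parse_S(toks, 0) == len(toks)
    except SyntaxError:
        return False
```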

Example 3

A context-free grammar for the language consisting of all strings over {a,b} which contain a different number of a's than b's is

S -> U | V
U -> TaU | TaT
V -> TbV | TbT
T -> aTbT | bTaT | ε

Here, T can generate all strings with the same number of a's as b's, U generates all strings with more a's than b's, and V generates all strings with fewer a's than b's.
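This claim can be checked by brute force for short strings. The following sketch (all names are my own) expands the grammar breadth-first, always rewriting the left-most nonterminal, and prunes sentential forms whose terminal content already exceeds a length bound; the terminal strings obtained are then compared against the intended language.

```python
from itertools import product

GRAMMAR = {
    'S': ['U', 'V'],
    'U': ['TaU', 'TaT'],
    'V': ['TbV', 'TbT'],
    'T': ['aTbT', 'bTaT', ''],
}

def generated(max_len):
    seen, frontier, terminal = set(), {'S'}, set()
    while frontier:
        nxt = set()
        for form in frontier:
            i = next((k for k, c in enumerate(form) if c in GRAMMAR), None)
            if i is None:            # no nonterminal left: a terminal string
                terminal.add(form)
                continue
            for rhs in GRAMMAR[form[i]]:
                new = form[:i] + rhs + form[i + 1:]
                # terminals only accumulate during a derivation, so a form
                # already carrying too many terminals can be discarded
                if sum(c not in GRAMMAR for c in new) <= max_len and new not in seen:
                    seen.add(new)
                    nxt.add(new)
        frontier = nxt
    return terminal

# all strings over {a,b} of length 1..5 with unequal counts of a's and b's
unequal = {''.join(w) for n in range(1, 6)
           for w in product('ab', repeat=n)
           if w.count('a') != w.count('b')}
```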

Example 4

Another example of a context-free grammar generates all the well-parenthesized expressions:

S -> ( ) | ( S ) | S S
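Membership in this language has a well-known counter characterization: a string of parentheses is well-parenthesized exactly when the running nesting depth never drops below zero and ends at zero. A minimal sketch (the function name is my own; note that the grammar as given does not generate the empty string):

```python
def well_parenthesized(s):
    # Assumes s contains only '(' and ')'.
    depth = 0
    for c in s:
        depth += 1 if c == '(' else -1
        if depth < 0:           # a ')' with no matching '(' before it
            return False
    return depth == 0 and len(s) > 0
```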

Derivations and Syntax trees

There are basically two ways to describe how in a certain grammar a string can be derived from the start symbol. The simplest way is to list the consecutive strings of symbols, beginning with the start symbol and ending with the string, and the rules that have been applied. If we introduce a strategy such as "always replace the left-most nonterminal first" then for context-free grammars the list of applied grammar rules is by itself sufficient. This is called the left-most derivation of a string. For example, if we take the following grammar:

(1) S -> S + S
(2) S -> 1

and the string "1 + 1 + 1" then the left-most derivation of this string is the list [ 1, 1, 2, 2, 2 ]. Analogously the rightmost derivation is defined as the list that we get if we always replace the rightmost nonterminal first. In this case this would be the list [ 1, 2, 1, 2, 2 ].
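Such a rule list can be replayed mechanically to recover the successive sentential forms. A small sketch (the grammar encoding is my own):

```python
RULES = {1: ('S', 'S + S'), 2: ('S', '1')}

def replay_leftmost(rule_list):
    # Apply each numbered rule to the left-most occurrence of its
    # nonterminal, collecting the successive sentential forms.
    form = 'S'
    forms = [form]
    for r in rule_list:
        lhs, rhs = RULES[r]
        i = form.index(lhs)          # left-most occurrence of the nonterminal
        form = form[:i] + rhs + form[i + 1:]
        forms.append(form)
    return forms
```

Replaying [ 1, 1, 2, 2, 2 ] yields "1 + 1 + 1", as does the other rule list [ 1, 2, 1, 2, 2 ] when read as a left-most derivation.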

The distinction between left-most derivation and right-most derivation is important because in most parsers the transformation of the input is defined by giving a piece of code for every grammar rule that is executed whenever the rule is applied. It is therefore important to know whether the parser determines a left-most or a right-most derivation, because this determines the order in which the pieces of code will be executed. See LL parsers and LR parsers for examples.

A derivation also imposes in some sense a hierarchical structure on the string that is derived. For example, the structure of the string "1 + 1 + 1" would, according to the left-most derivation, be:

{ { { 1 }S + { 1 }S }S + { 1 }S }S

where { ... }S indicates a substring recognized as belonging to S. This hierarchy can also be seen as a tree:

               S
               | 
       S------'+'--S
       |           |
   S--'+'--S      '1'
   |       |
  '1'     '1'

This tree is called a syntax tree of the string. In this case the presented left-most and right-most derivations define the same syntax tree; however, there is another (left-most) derivation of the same string:

S -> (1) S + S -> (2) 1 + S -> (1) 1 + S + S -> (2) 1 + 1 + S -> (2) 1 + 1 + 1

and this defines the following syntax tree:

           S 
           |
       S--'+'------S
       |           |
      '1'      S--'+'--S
               |       |
              '1'     '1'

If certain strings in the language of the grammar have more than one syntax tree, then the grammar is said to be an ambiguous grammar.

Normal forms

Every context-free grammar which does not generate the empty string can be transformed into an equivalent one in Chomsky Normal Form or Greibach normal form. "Equivalent" here means that the two grammars generate the same language.

Because of the especially simple form of production rules in Chomsky Normal Form grammars, this normal form has both theoretical and practical implications. For instance, one can use the Chomsky Normal Form to construct for every context-free language a polynomial algorithm which decides whether a given string is in the language or not (the CYK algorithm).
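The CYK algorithm admits a compact sketch. The grammar below is an assumed example in Chomsky Normal Form for {a^n b^n : n ≥ 1} (S -> AB | AC, C -> SB, A -> a, B -> b); the table entry for a substring collects every nonterminal that derives it.

```python
def cyk(word, binary, terminal, start='S'):
    # binary maps a pair (X, Y) to the heads of rules "head -> X Y";
    # terminal maps a terminal symbol to the heads of rules "head -> symbol".
    n = len(word)
    if n == 0:
        return False
    # table[span - 1][i] = nonterminals deriving word[i : i + span]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, c in enumerate(word):
        table[0][i] = set(terminal.get(c, set()))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            for k in range(1, span):         # split point within the span
                for X in table[k - 1][i]:
                    for Y in table[span - k - 1][i + k]:
                        table[span - 1][i] |= binary.get((X, Y), set())
    return start in table[n - 1][0]

BINARY = {('A', 'B'): {'S'}, ('A', 'C'): {'S'}, ('S', 'B'): {'C'}}
TERMINAL = {'a': {'A'}, 'b': {'B'}}
```

The three nested loops over span, start position, and split point give the cubic running time in the length of the string.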

Decision problems

It is not possible to construct a general algorithm which takes as input two context-free grammars and decides whether the two grammars generate the same language; nor is it decidable whether their languages have a single string in common. It is however possible to decide whether a given context-free grammar generates a non-empty language or not, and it is also possible to decide algorithmically whether a given context-free grammar generates an infinite language or not.
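The non-emptiness test admits a simple fixed-point sketch: iteratively mark a nonterminal as "generating" once some rule rewrites it into symbols that are all terminals or already-marked nonterminals; the language is non-empty exactly when the start symbol gets marked. (The encoding of grammars here is my own.)

```python
def generates_nonempty(rules, start='S'):
    # rules: list of (lhs, rhs) pairs with rhs a tuple of symbols;
    # symbols that never appear as a lhs are treated as terminals.
    nonterminals = {lhs for lhs, _ in rules}
    generating = set()
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            if lhs not in generating and all(
                    s not in nonterminals or s in generating for s in rhs):
                generating.add(lhs)
                changed = True
    return start in generating
```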

Properties of context-free languages

The union and concatenation of two context-free languages are context-free; the intersection need not be. The reverse of a context-free language is context-free, but the complement need not be. Every regular language is context-free because it can be described by a regular grammar. There exist context-sensitive languages which are not context-free. To prove that a given language is not context-free, one employs the pumping lemma for context-free languages. See the Chomsky hierarchy for the position of context-free languages in the hierarchy of all formal languages.