12/08/2011

A Christmas Poem

(C) Professor Fish, aka Ralf Lämmel


Silent night! Ominous night!
All are asleep; alone keeps watch
only the Bachelor student,
who never sleeps,
who only runs.
So many exams, oh woe!
So many exams, oh woe!


Silent night! Ominous night!
This one, too, will be spent awake.
Best of students, oh how wickedly
the aversion to Bologna's regime
laughs out of your face.
So many regulations, oh woe!
So many regulations, oh woe!


Silent night! Ominous night!
What have they done to university studies?
It is not Bologna alone;
other trends fall in line here as well.
Thinking of Germany in the night
has robbed the professor of his sleep, too.


Loud land! Ominous land!
O Marx, give me parents
who challenge their children instead of tormenting teachers.
O Lenin, give me first-year students
who want to learn instead of gaming away on video.
Everything used to be better, oh woe!
Everything used to be better, oh woe!


Silent night! Ominous night!
If it is to be Christmas now,
then death to the ducks and let the wine mull.
The wish list has already arrived.
Merkel & Co. make it clear.
Merkel & Co. make it clear.


12/07/2011

A riddle regarding type safety

Of course, I have explained to the students in my language theory class what type safety means (remember: progress + preservation), and we have studied this notion time and again for some of the TAPL languages. However, I don't feel that the notion has really sunk in universally. Some students consider all this semantics and calculus stuff obscure, and there is a certain critical mass of notation beyond which concepts are no longer understood (by some, if not many, of the Bachelor-level students in my class).

Last night, I came up with a nice "semantics riddle" for type safety:

What would be a super-trivial language with a type system and an SOS semantics such that type safety is violated?


I gave the students 2 minutes to think about the question.

I emphasized several times that the language should be super-trivial.

I also asked them for a general strategy for mis-designing a language to be type-unsafe.

One idea that popped up was to support some notion of cast.

We agreed that this would not be trivial, certainly not super-trivial.


Here is my reference solution:

Syntax of expressions

e ::= v | z

(Hence, the expression forms are x, y, and z; the values v are defined next.)

Values (normal forms)

v ::= x | y

Types

t ::= a | b

(a and b are the possible types.)

Small-step relation

z -> x

(Yes, we only have one axiom.)

Typing rules

x : a

y : b

z : b

(Yes, we have only axioms here.)

Demonstration of lack of type safety

The term z is the culprit.

z : b, but z -> x and x : a. Hence preservation fails: the step from z to x does not preserve the type.
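
To make the riddle concrete, here is a tiny Haskell encoding of the reference solution (a sketch of mine; the names Expr, Ty, step, typeOf are made up for this illustration):

data Expr = X | Y | Z deriving (Eq, Show)
data Ty = A | B deriving (Eq, Show)

-- Small-step relation: the single axiom z -> x.
step :: Expr -> Maybe Expr
step Z = Just X
step _ = Nothing -- x and y are values (normal forms)

-- Typing axioms: x : a, y : b, z : b.
typeOf :: Expr -> Ty
typeOf X = A
typeOf Y = B
typeOf Z = B

-- Preservation check: does a step preserve the type?
preserves :: Expr -> Bool
preserves e = case step e of
  Just e' -> typeOf e' == typeOf e
  Nothing -> True

-- > preserves Z
-- False  (z : b steps to x : a; progress holds, preservation does not)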


Regards,
Ralf


5/30/2011

A more serious online partial evaluator

Admittedly, the illustrative partial evaluator of the previous blog post was quite limited. Consider, for example, the following attempts at partially evaluating different applications of the exponentiation function; the last attempt diverges:


> peval (lib, (Apply "exp" [Const 2, Const 3]))
Const 8

> peval (lib, (Apply "exp" [Var "x", Const 3]))
Binary Times (Var "x") (Binary Times (Var "x") (Binary Times (Var "x")(Const 1)))

> peval (lib, (Apply "exp" [Const 2, Var "n"]))
IfZero (Var "n") (Const 1) (Binary Times (Const 2) (IfZero (Binary Plus (Var "n") (Const (-1))) (Const 1) (Binary Times (Const 2) (IfZero (Binary Plus (Binary Plus (Var "n") (Const (-1))) (Const (-1))) (Const 1) (Binary Times (Const 2) (IfZero (Binary Plus (Binary Plus (Binary Plus (Var "n") (Const (-1))) (Const (-1))) (Const (-1))) (Const 1) (Binary Times (Const 2) (IfZero (Binary Plus (Binary Plus (Binary Plus (Binary Plus (Var "n") (Const (-1))) (Const (-1))) (Const (-1))) (Const (-1))) (Const 1) (Binary Times (Const 2) (IfZero ...


In the first attempt, we partially evaluate a ground term, which is as if we were using the regular interpreter. In the second attempt, we leave the base parametric, but we fix the exponent. In the third attempt, it is the other way around: we fix the base, but we leave the exponent parametric. The divergence in the third attempt is clearly undesirable. Ideally, partial evaluation of a non-ground term t should terminate whenever there is a ground instance t' of t such that regular evaluation terminates for t'. Otherwise, we could never be sure whether we can safely decompose a computation into partial evaluation followed by evaluation of the residual program.

So let us develop a more advanced partial evaluator:


peval :: Prog -> (Expr, FEnv)


The result type has changed. In addition to a transformed main expression, we also expect to obtain specialized function definitions as they are encountered by partial evaluation. That is, function definitions are specialized by statically known argument values. A demo helps:


> peval (lib, (Apply "exp" [Var "x", Const 3]))
( Apply "exp_[(n,3)]" [Var "x"],
[("exp_[(n,3)]", (["x"], Apply "times" [Var "x",Apply "exp_[(n,2)]" [Var "x"]])),
("exp_[(n,2)]", (["x"], Apply "times" [Var "x",Apply "exp_[(n,1)]" [Var "x"]])),
("exp_[(n,1)]", (["x"], Apply "times" [Var "x",Apply "exp_[(n,0)]" [Var "x"]])),
("exp_[(n,0)]", (["x"], Const 1))])


Thus, specialized function definitions were inferred for all the inductively encountered values of the exponent. Subject to a simple inlining optimization (which is not shown here for brevity but included in the open-source repository), we obtain the familiar expression for "x" to the power 3:


Apply "times" [Var "x",Apply "times" [Var "x",Apply "times" [Var "x",Const 1]]]


The partial evaluator commences as follows:


import Control.Monad.State -- for State, runState, get, put
import Data.Maybe (fromJust) -- for the lookups below

peval :: Prog -> (Expr, FEnv)
peval (fe,m) = runState (peval' m []) []
  where
    peval' :: Expr -> VEnv -> State FEnv Expr


That is, the expression-level partial evaluation function starts from the empty variable environment and it uses the state monad to aggregate specialized function definitions, starting also from the empty list. The type of variable environments is the same as in the original interpreter:


type VEnv = [(String,Int)]


The cases for all constructs but function application can be carried over from the simpler partial evaluator, except that we need to convert them to monadic style:


peval' :: Expr -> VEnv -> State FEnv Expr
peval' (Const i) ve = return (Const i)
peval' (Var x) ve
  = return (case lookup x ve of
              Just v -> Const v
              Nothing -> Var x)
peval' (Binary op e1 e2) ve
  = do
      e1' <- peval' e1 ve
      e2' <- peval' e2 ve
      return (case (e1', e2') of
                (Const v1, Const v2) -> Const (op2f op v1 v2)
                _ -> Binary op e1' e2')
peval' (IfZero e1 e2 e3) ve
  = do
      e1' <- peval' e1 ve
      case e1' of
        (Const v1) -> if v1 == 0 then r2 else r3
        _ -> r2 >>= \e2' -> r3 >>= \e3' -> return (IfZero e1' e2' e3')
  where
    r2 = peval' e2 ve
    r3 = peval' e3 ve


The deviating, complicated, and interesting case is the one for function application. We prepare this development with an informal specification. (Thanks again to William Cook for teaching me about this part.) The specification relies on two important terms used in partial evaluation: when partial evaluation encounters a function application with a value (say, a constant, as opposed to an expression) at a given argument position, then this position (or the associated variable) is called static; otherwise, it is called dynamic. The specification follows.

1. The applied function is looked up.
2. The arguments of the function application are partially evaluated.
3. The arguments are partitioned into static and dynamic ones.
4. The specialized function name consists of the original function name qualified by the static arguments.
5. The body of the specialized function is obtained by partially evaluating the original body in the variable environment of the static variables.
6. An application of the specialized function is constructed from the dynamic arguments. This is the result of partial evaluation.

The following function definition clarifies the step of partitioning the partially evaluated arguments (see step 3 above). One list of expressions is partitioned into two lists of static versus dynamic arguments, qualified by the formal argument names. Static arguments are Const expressions; dynamic arguments are all other expressions. Thus:


partition :: [String] -> [Expr] -> ([(String,Int)], [(String,Expr)])
partition [] [] = ([],[])
partition (n:ns) (Const i:es) = ((n,i):ss,ds)
  where (ss,ds) = partition ns es
partition (n:ns) (e:es) = (ss,(n,e):ds)
  where (ss,ds) = partition ns es


The following equation for function applications implements the aforementioned steps 1 to 6, but additional details arise. In order to deal with recursion, it is important that the specialized function is already added to the state before its body is obtained. To this end, an undefined body is preliminarily associated with the new name and replaced later; see get, put, get, put. Further, we need to distinguish function applications that are fully static from those that involve dynamic arguments, because fully static function applications should not be specialized; instead, they should be evaluated.

Here are the gory details:


peval' (Apply n es) ve
  = do
      -- Look up function
      let (ns,e) = fromJust (lookup n fe)

      -- Partially evaluate arguments
      es' <- mapM (flip peval' ve) es

      -- Partition arguments into static and dynamic ones
      let (ss,ds) = partition ns es'

      case (null ss, null ds) of

        -- A constant application
        (True, True) -> peval' e []

        -- A fully static application
        (False, True) -> peval' e ss

        -- A fully dynamic application
        (True, False) -> return (Apply n es')

        -- A mixed static application
        (False, False) -> do

          -- The name that encodes the static values
          let n' = n ++ "_" ++ myshowl ss

          -- Construct application of specialized function
          let e' = Apply n' (map snd ds)

          -- See whether the specialization exists already
          fe <- get
          case lookup n' fe of
            Just _ -> return e'
            Nothing -> do

              -- Memo before possible recursion
              put (fe ++ [(n',undefined)])

              -- Partial evaluation of function body
              e'' <- peval' e ss

              -- Record definition of specialized function
              fe' <- get
              put (update (const (map fst ds,e'')) n' fe')

              -- Return application of specialized function
              return e'
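
The helpers myshowl and update are used above without being defined in the post; they come with the repository. For completeness, here are plausible definitions (a sketch of mine; the repository versions may differ in detail). myshowl renders a static-argument list the way the demo shows, e.g., [("n",3)] as "[(n,3)]"; update rewrites the value stored under a given key:

import Data.List (intercalate)

myshowl :: [(String,Int)] -> String
myshowl ss = "[" ++ intercalate "," (map one ss) ++ "]"
  where one (n,i) = "(" ++ n ++ "," ++ show i ++ ")"

update :: (v -> v) -> String -> [(String,v)] -> [(String,v)]
update f k ((k',v):kvs)
  | k == k'   = (k', f v) : kvs
  | otherwise = (k', v) : update f k kvs
update _ _ [] = []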


So much for partial evaluation.

Regards,
PF

PS: The source code of this blog post is available through the repository of the Software Language Processing Suite; see here.

5/27/2011

An illustrative online partial evaluator

Let's understand the basic mechanics of a (too?) simple online partial evaluator. (I am grateful to William Cook for encouraging the following Haskell exercise, and for sharing with me some stimulating slides and notes on the subject.) I recommend John Hatcliff's excellent and detailed course material on "Foundations of Partial Evaluation and Program Specialization", which is available online. For those in a hurry, try Wikipedia. Anyway, partial evaluation, macro systems, and multi-stage programming are a lot of fun. Here is also a paper (draft) by William Cook that I can warmly recommend.

Consider the following syntax of a simple, pure, first-order, functional programming language:


type Prog = (FEnv,Expr)
type FEnv = [FDef]
type FDef = (String,([String],Expr))

data Expr
  = Const Int
  | Var String
  | Binary Op Expr Expr
  | IfZero Expr Expr Expr
  | Apply String [Expr]
  deriving (Show)

data Op = Plus | Times
  deriving (Show)


Thus, a program consists of a main expression and bindings for functions (i.e., a "function environment" FEnv is a list of "function definitions" FDef, each of term form (s,(ss,e)) with s the function name, ss the formal parameters, and e the body). Evaluation of an expression is supposed to return an integer (or to diverge, or to fail in reply to a dynamic type error). The functions are supposed to be first-order: functions are neither passed as arguments nor returned as results.

Here are some function definitions---including those for factorial and exponentiation:


lib :: FEnv
lib
  = [ ("neg",   (["x"],      Binary Times (Var "x") (Const (-1)))),
      ("add",   (["x", "y"], Binary Plus (Var "x") (Var "y"))),
      ("minus", (["x", "y"], Binary Plus (Var "x") (Apply "neg" [Var "y"]))),
      ("times", (["x", "y"], Binary Times (Var "x") (Var "y"))),
      ("inc",   (["x"],      Binary Plus (Var "x") (Const 1))),
      ("dec",   (["x"],      Binary Plus (Var "x") (Const (-1)))),

      ("fac",
        ( ["x"]
        , IfZero (Var "x")
            (Const 1)
            (Apply "times" [Var "x", Apply "fac" [Apply "dec" [Var "x"]]]))),

      ("exp",
        ( ["x","n"]
        , IfZero (Var "n")
            (Const 1)
            (Apply "times" [Var "x", Apply "exp" [Var "x", Apply "dec" [Var "n"]]])))]


Let's first develop a straightforward interpreter.

We expect interpretation to behave as follows:


> eval (lib, Apply "fac" [Const 5])
120
> eval (lib, Apply "exp" [Const 2, Const 3])
8


Here is the complete interpreter:


import Data.Maybe (fromJust) -- for variable and function lookup

type VEnv = [(String,Int)]

eval :: Prog -> Int
eval (fe,m) = eval' m []
  where
    eval' :: Expr -> VEnv -> Int
    eval' (Const i) ve = i
    eval' (Var x) ve = fromJust (lookup x ve)
    eval' (Binary op e1 e2) ve = f v1 v2
      where
        f = op2f op
        v1 = eval' e1 ve
        v2 = eval' e2 ve
    eval' (IfZero e1 e2 e3) ve = if v1 == 0 then v2 else v3
      where
        v1 = eval' e1 ve
        v2 = eval' e2 ve
        v3 = eval' e3 ve
    eval' (Apply n es) ve = eval' e ve'
      where
        (ns,e) = fromJust (lookup n fe)
        vs = map (\e -> eval' e ve) es
        ve' = zip ns vs

op2f :: Op -> (Int -> Int -> Int)
op2f Plus = (+)
op2f Times = (*)


It must be emphasized that this interpreter is in no way special, unusual, or targeted at partial evaluation. One gets this interpreter naturally when using denotational semantics (i.e., compositional functional semantics) as the guiding principle. Now the key "challenge" of our effort here is to modify the interpreter so that it effectively serves as a partial evaluator. That is, the partial evaluator should produce the same result as the interpreter when faced with a ground main expression, but it should compute a residual expression otherwise. Let's think of the residual expression as the "rest of the program" that needs to be evaluated once the free variables are bound to values. For instance:


> peval (lib, Apply "exp" [Var "x", Const 3])
Binary Times (Var "x") (Binary Times (Var "x") (Binary Times (Var "x") (Const 1)))


Thus, the function "exp" is applied to a specific exponent 3, but the base remains a variable "x", and the result of partial evaluation essentially represents the code to compute "x" to the power 3: "x*x*x*1".

Our design of a partial evaluator starts from the insight that the type of the top-level function must be changed so that expressions are returned instead of integers. (A more general design would also return any number of bindings for specialized functions that were encountered during partial evaluation.) Please note that integers are trivially embedded into expressions through the constant form of expressions. Thus:


-- Interpreter
eval :: Prog -> Int
eval (fe,m) = eval' m []
  where
    ...

-- Partial evaluator
peval :: Prog -> Expr
peval (fe,m) = peval' m []
  where
    ...


In the chosen style of partial evaluator design, we also propagate the type change into the definition of variable environments. (This leads to a simple development, but it implies some issues, as we will discuss towards the end of the blog post.) Thus, variables may be bound to expressions, including (free) variables:


-- Interpreter
type VEnv = [(String,Int)]

-- Partial evaluator
type VEnv = [(String,Expr)]


The cases for constant expressions differ trivially only as follows:


-- Interpreter
eval' :: Expr -> VEnv -> Int
eval' (Const i) ve = i

-- Partial evaluator
peval' :: Expr -> VEnv -> Expr
peval' (Const i) ve = Const i


Variables are to be treated in a special manner. The interpreter can only deal with ground main expressions, and---subject to other reasonable assumptions about well-typedness of programs---variables would always be bound throughout interpretation. In contrast, the partial evaluator faithfully deals with free variables: a free variable is partially evaluated to itself.


-- Interpreter
eval' (Var x) ve = fromJust (lookup x ve)

-- Partial evaluator
peval' (Var x) ve
  = case lookup x ve of
      Just e -> e
      Nothing -> Var x


Compound expression forms are also affected non-trivially but systematically. The general idea is that an expression form can be evaluated if enough of its operands are, in turn, partially evaluated to values; otherwise, the expression form is re-generated in the result. Consider the following cases for binary expressions:


-- Interpreter
eval' (Binary op e1 e2) ve = f v1 v2
  where
    f = op2f op
    v1 = eval' e1 ve
    v2 = eval' e2 ve

-- Partial evaluator
peval' (Binary op e1 e2) ve
  = case (e1', e2') of
      (Const v1, Const v2) -> Const (f v1 v2)
      _ -> Binary op e1' e2'
  where
    f = op2f op
    e1' = peval' e1 ve
    e2' = peval' e2 ve


Thus, in the interpreter, subexpressions are recursively evaluated, and the corresponding operator is applied to the intermediate results. Subexpressions are also recursively evaluated in the partial evaluator, but this time the intermediate results are checked: the operator is applied only if both results are values (constants). Otherwise, a binary expression is generated that incorporates the partially evaluated operands as opposed to the original operand expressions.

The same idea is applied to the IfZero construct:


-- Interpreter
eval' (IfZero e1 e2 e3) ve = if v1 == 0 then v2 else v3
  where
    v1 = eval' e1 ve
    v2 = eval' e2 ve
    v3 = eval' e3 ve

-- Partial evaluator
peval' (IfZero e1 e2 e3) ve
  = case e1' of
      (Const v1) -> if v1 == 0 then e2' else e3'
      _ -> IfZero e1' e2' e3'
  where
    e1' = peval' e1 ve
    e2' = peval' e2 ve
    e3' = peval' e3 ve


IfZero has the pleasant property that partially evaluating the condition to a value is enough to eliminate the conditional form.

Perhaps surprisingly, the cases for function application can be identical (modulo renaming):


-- Interpreter
eval' (Apply n es) ve = eval' e ve'
  where
    (ns,e) = fromJust (lookup n fe)
    vs = map (\e -> eval' e ve) es
    ve' = zip ns vs

-- Partial evaluator
peval' (Apply n es) ve = peval' e ve'
  where
    (ns,e) = fromJust (lookup n fe)
    es' = map (\e -> peval' e ve) es
    ve' = zip ns es'


In both cases, we look up the function from the function environment, we bind (partially) evaluated actual parameters to the formal parameter variables, and we proceed with the (partial) evaluation of the function's body relative to the local environment.

Exercise: Is there any issue of variable capture with the partial evaluator at hand? Can you demonstrate a problem? How would you generally rule out variable capture? Further, can you think of situations where the partial evaluator clones too much code or where it even loops, even though a non-cloning and non-looping partial evaluator would be desirable and feasible, perhaps subject to some form of memoization?
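
As a hint for the looping part: the divergence demonstrated at the top of the next blog post (exp with a dynamic exponent) can also be triggered with factorial. Since the IfZero condition never partially evaluates to a constant, both branches are partially evaluated, and the recursive call is unfolded forever:

> peval (lib, Apply "fac" [Var "x"])
-- does not terminate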

The source code of this blog post is available in the repository of the Software Language Processing Suite; see here. There you will also find the not-so-simple partial evaluator that is discussed in the next blog post.

Regards,
PF

5/14/2011

Empirical studies of software languages

Dear language researchers and engineers,

If you are interested in empirical studies of software languages, what journals and conferences come to mind as potential publication venues for such studies? In other words, what quality journals and conferences do you think would pop up if you googled (or binged) intelligently for empirical studies of programming languages, modeling languages, domain-specific languages, and other software languages you can think of?

Please send your ideas by email to rlaemmel@acm.org by 19 May.

The results of this inquiry will be published while preserving anonymity of all participants.

Thanks!
Professor Fish

PS1: The reason for asking is that we are engaged in a meta-empirical effort, continuing our SLE 2010 paper "Empirical language analysis in software linguistics" (Jean-Marie Favre, Dragan Gasevic, Ralf Lämmel, and Ekaterina Pek). We are going to use content analysis, where we aim to automate data mining to the extent possible, but substantial manual effort remains. Therefore, scalability is limited, and we would like to narrow down the search space for relevant publications by using a short list of journals and conferences while, at the same time, remaining transparent about what we use. So please, in the interest of science, send your journal and conference acronyms.

PS2: Please understand that I do not list any journals and conferences that we already have in mind. I do not want to bias you in any way. This is also the reason for disabling comments on this post.

5/08/2011

Language processing for dummies

I recently enjoyed reading "Language Implementation Patterns: Create Your Own Domain-Specific and General Programming Languages" by Terence Parr (the ANTLR guy), as available from The Pragmatic Bookshelf. The book uses a design-pattern style to present techniques for language processing. Eventually the book gets to leverage ANTLR for many problems, but it also explains language processing in terms of patterns that require regular programming effort without the help of a program generator. This is a clever way of teaching some basics of grammars, parsing, tree walking, and other concepts that can all too easily be explained (elsewhere) in an overly complicated manner, or in a manner requiring substantial background.

In my (very) advanced Bachelor's course Programming techniques and technologies, I face students who don't know much yet about formal languages and certainly nothing about compiler construction; they are, though, aware of context-free grammars (EBNFs) for the purpose of syntax definition. The course covers many programming techniques and technologies, and I am willing to dedicate two lectures to language processing. Hence, I designed patterns that are really simple and that allow me to discuss a number of language processors for 101companies in a systematic manner.

"Manual" patterns
  • The Chopper Pattern: Chop input into pieces for classification and iterator-based processing.
  • The Lexer Pattern: Transform character stream into token stream for iterator-based processing. (See the sketch after this list.)
  • The Copy/Replace Pattern: Implement text transformation by copying tokens from input to output while replacing tokens of interest.
  • The Acceptor Pattern: Implement a context-free grammar with procedures for parsing input and signaling acceptance or rejection.
  • The Parser Pattern: Advance the Acceptor Pattern such that semantic actions are to be executed along with parsing input.
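
As a taster, here is a minimal Haskell sketch of the Lexer Pattern (my own illustration, not one of the 101companies implementations): the character stream is transformed into a token stream that downstream phases can consume token by token.

import Data.Char (isAlpha, isDigit, isSpace)

data Token = TName String | TNum Int | TSym Char
  deriving (Eq, Show)

-- Turn a character stream into a token stream.
lexer :: String -> [Token]
lexer [] = []
lexer (c:cs)
  | isSpace c = lexer cs -- skip whitespace
  | isAlpha c = let (as, rest) = span isAlpha (c:cs) in TName as : lexer rest
  | isDigit c = let (ds, rest) = span isDigit (c:cs) in TNum (read ds) : lexer rest
  | otherwise = TSym c : lexer cs -- any other character counts as a symbol

-- > lexer "total = 42;"
-- [TName "total",TSym '=',TNum 42,TSym ';']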

"Generative" patterns
  • The Lexer Generation Pattern: Use a lexer/parser generator to derive a character-to-token stream conversion from a grammar generatively.
  • The Acceptor Generation Pattern: Use a parser generator to derive an acceptor from a grammar generatively.
  • The Parser Generation Pattern: Use a parser generator to derive a parser from a grammar with associated semantic actions generatively.
  • The Text-to-object Pattern: Instantiate the Parser (Generation) Pattern such that the semantic actions construct objects that resemble the syntactical structure of the input.
  • The Object-to-text Pattern: Implement an operation on an object model that writes out the object graph in the underlying textual notation.

Associated 101companies implementations

Slide decks (.pdf): [1]. [2]

Lecture videos (.mov): [1]. [2]

Regards,
PF

5/05/2011

A tiny attribute grammar exercise

(C) 2011, Ralf Laemmel

Let's map Knuth's attribute grammar example to Haskell. This has been done many times. In fact, there is a long track record of excellent research on the relation between attribute grammars and functional programming; for instance, see the work by Doaitse Swierstra and team over many years. Our development here is meant to illustrate just the basic mapping in as simple and self-contained a manner as possible.

One should notice that the chosen example (due to Knuth) requires laziness for the straightforward encoding that is leveraged below. Hence, C programmers are encouraged to try to transcribe this Haskell program to their favorite language.

The following context-free grammar is assumed:


digit : '0'
digit : '1'
digits : digit
digits : digits digit
number : digits
number : digits '.' digits


That is, we deal with binary numbers---both integer and fractional numbers.

We use the following algebraic datatypes to model the grammar:


data Digit = Zero | One
data Digits = Single Digit | Several Digits Digit
data Number = Int Digits | Frac Digits Digits


Let us now associate attributes with the nonterminals:


* digit:
    - inh. p -- position of digit
    - syn. v -- value of digit
* digits:
    - inh. p -- position of digits
    - syn. v -- value of digits
    - syn. l -- length of digits
* number:
    - syn. v -- value of number


In Haskell, we can introduce designated type aliases to capture the names of the attributes, and the association of nonterminals with attributes is modeled by the signatures of the functions that are expected to model the semantic equations. (We could also use record types, but Haskell's record system is not quite convenient in this context.) Here, the assumption is that each function carries one argument for the syntactical part, another argument for (the tuple of) the inherited attributes, and a result that corresponds to (the tuple of) the synthesized attributes. Thus:


-- Type synonyms for attributes
type Val = Float
type Pos = Int
type Len = Int

-- Functions for semantic equations
digit :: Digit -> Pos -> Val
digits :: Digits -> Pos -> (Val,Len)
number :: Number -> Val


Here are the semantic equations for the example:


digit : '0'
v.digit = 0

digit : '1'
v.digit = 2^^p.digit

digits : digit
v.digits = v.digit
l.digits = 1
p.digit = p.digits

digits : digits' digit
v.digits = v.digits' + v.digit
p.digit = p.digits
p.digits' = p.digits + 1
l.digits = l.digits' + 1

number : digits
v.number = v.digits
p.digits = 0

number : digits '.' digits'
v.number = v.digits + v.digits'
p.digits = 0
p.digits' = -l.digits'


We implement these semantic equations as equations defining the functions from syntax times inherited attributes to synthesized attributes.


digit Zero _ = 0
digit One p = 2^^p

digits (Single d) p = (digit d p, 1)
digits (Several ds d) p = (v,l)
  where
    v = v' + v''
    l = l' + 1
    (v',l') = digits ds (p+1)
    v'' = digit d p

number (Int ds) = fst (digits ds 0)
number (Frac ds1 ds2) = v1 + v2
  where
    (v1,l1) = digits ds1 0
    (v2,l2) = digits ds2 (-l2)


Let us evaluate 10.01. Note the circular definition (v2,l2) = digits ds2 (-l2) above: the length l2 of the fractional digits is fed as the position into the very call that computes it. This is the spot where laziness is essential.


main = do print (number n)
  where
    n = Frac ds ds'
    ds = Several (Single One) Zero
    ds' = Several (Single Zero) One

> main
2.25
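
For playing with further inputs, a small helper can build the abstract syntax from a textual binary number (my addition, not part of the original development):

parseNumber :: String -> Number
parseNumber s =
  case break (=='.') s of
    (i, [])    -> Int (parseDigits i)
    (i, '.':f) -> Frac (parseDigits i) (parseDigits f)
    _          -> error "malformed number"

parseDigits :: String -> Digits
parseDigits []  = error "empty digit sequence"
parseDigits [d] = Single (parseDigit d)
parseDigits ds  = Several (parseDigits (init ds)) (parseDigit (last ds))

parseDigit :: Char -> Digit
parseDigit '0' = Zero
parseDigit '1' = One
parseDigit c   = error ("not a binary digit: " ++ [c])

-- > number (parseNumber "10.01")
-- 2.25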


Regards,
Prof. Fisch

4/28/2011

Test-driven grammar-class understanding with ANTLR

Suppose you would like to understand grammar classes such as LL(1) and LL(*). Then you might appreciate a test suite like the following, or you might actually try to discover such a test suite yourself. I used the following suite in a lab of my "Software Language Processing" class. It is a very convenient means of illustrating things. Students can copy and paste these snippets and investigate further. It helps in understanding formal notions as well as actual, operational parser behavior.



The first ANTLR input specifies a trivial grammar with a fixed lookahead of 1.

grammar LL1;
options { k = 1; }
s : 'a' 'x' | 'b' 'x' ;


Let's apply ANTLR:

java -classpath ".:antlr-3.2.jar" org.antlr.Tool LL1.g


ANTLR likes that grammar just fine. (There are no complaints.) Hence, we may infer that the grammar is indeed LL(1).

The next ANTLR input combines two alternatives that are clearly not LL(1). However, we still use a lookahead of 1.

grammar NotLL1;
options { k = 1; }
s : 'a' 'x' | 'a' 'y';


Let's apply ANTLR again:

java -classpath ".:antlr-3.2.jar" org.antlr.Tool NotLL1.g
warning(200): NotLL1.g:3:3: Decision can match input such as "'a'" using multiple alternatives: 1, 2
As a result, alternative(s) 2 were disabled for that input
error(201): NotLL1.g:3:3: The following alternatives can never be matched: 2


That is, ANTLR complains that both alternatives are to be selected with the same lookahead, and hence, ANTLR announces effectively that it will (pragmatically) favor the first alternative. As a result, the second alternative can never be matched operationally.

The next ANTLR input is basically the same as before except that we use a lookahead of 2 now.

grammar LL2;
options { k = 2; }
s : 'a' 'x' | 'a' 'y';


The grammar is indeed LL(2). ANTLR likes the grammar just fine.

The next ANTLR input goes back to lookahead 1 but it enables backtracking.

grammar Backtracking;
options { backtrack = true; k = 1; }
s : 'a' 'x' | 'a' 'y';


With backtracking turned on, ANTLR gracefully accepts the grammar even with the insufficient lookahead. It is easy to guess that the operational behavior will combine the use of lookahead for the optimization of decisions with the use of backtracking for completeness. That is, whenever input cannot be matched on the grounds of lookahead-based decisions, additional choices are explored through backtracking.

The next ANTLR input explores the ANTLR semantics of backtracking. That is, we retain the same grammar as before, except that we inject a semantic action for output into the first alternative, just to see whether semantic actions are executed prematurely, even if an alternative is not completed and a different alternative is eventually chosen by backtracking.

grammar Delay;
options { backtrack = true; k = 1; }
s : 'a' { System.out.println(42); } 'b' 'x' | 'a' 'b' 'y';


When we apply the generated parser for the grammar to the input "aby", then no output will be printed. That is, the parser explores the alternative with the semantic action first, but it delays execution of semantic actions. Such delayed actions are a safe choice when backtracking is involved.

The next ANTLR input starts two alternatives with a kind of prefix that rules out finite lookahead for deterministic decisions.

grammar Not42;
options { k = 42; }
s : 'a'* 'x' | 'a'* 'y';


In particular, the grammar is not LL(42). This is how ANTLR reports the problem:

java -classpath ".:antlr-3.2.jar" org.antlr.Tool Not42.g
warning(200): Not42.g:3:3: Decision can match input such as "'a' 'a' 'a' 'a' 'a' 'a' 'a' 'a' 'a' 'a' 'a' 'a' 'a' 'a' 'a' 'a' 'a' 'a' 'a' 'a' 'a' 'a' 'a' 'a' 'a' 'a' 'a' 'a' 'a' 'a' 'a' 'a' 'a' 'a' 'a' 'a' 'a' 'a' 'a' 'a' 'a' 'a'" using multiple alternatives: 1, 2
As a result, alternative(s) 2 were disabled for that input


Hence, a parser based on lookahead of 42 would not be able to make the right decision for longer inputs.

The next ANTLR input simply removes the LL(k) option. Thereby, ANTLR's infinite lookahead applies.

grammar LLstar;
s : 'a'* 'x' | 'a'* 'y';


Indeed, ANTLR likes the grammar just fine. The grammar is hence LL(*).

The next ANTLR input starts two alternatives with a kind of prefix that rules out even infinite (DFA-based) lookahead for deterministic decisions. A recursive nonterminal is used which clearly rules out DFAs for lookahead.

grammar NotLLstar;
s : p 'x' | p 'y';
p : 'a' p 'b' | ;


Indeed, ANTLR complains.

java -classpath ".:antlr-3.2.jar" org.antlr.Tool NotLLstar.g
error(211): NotLLstar.g:2:3: [fatal] rule s has non-LL(*) decision due to recursive rule invocations reachable from alts 1,2. Resolve by left-factoring or using syntactic predicates or using backtrack=true option.
make: [NotLLstar.gen] Error 1 (ignored)


ANTLR basically suggests a few options which are all worth investigating: factoring, syntactic predicates, and backtracking.

Let's try factoring first. It does not take a rocket grammar engineer to guess that we get back to LL(1):

grammar Factoring;
options { k = 1; }
s : p ('x' | 'y');
p : 'a' p 'b' | ;


Indeed, ANTLR likes the grammar just fine. We quickly transitioned from non-LL(*) to LL(1).

Now let's try syntactic predicates. That is, we attach essentially grammar expressions as guards to the alternatives:

grammar SynPreds;
s : (p 'x') => p 'x'
| (p 'y') => p 'y';
p : 'a' p 'b' | ;


ANTLR is also OK with that grammar. However, we should not expect too much. Essentially, we are using an operational semantics now such that the first alternative is tried first, subject to its syntactic predicate being satisfied; otherwise we continue with the second alternative, which is guarded as well. No formal property of the predicates is checked; all reasoning is based on order. From a practical point of view, the predicates at hand are somewhat terrible because they actually run the complete alternative in the guard. (We simulate backtracking, in a sense.) We should try to find a shorter prefix for the test, but this is impossible in the example at hand. However, even guards that test a lot of the actual alternative may be efficient if we could additionally leverage memoization, but we do not attempt this here.

It is reasonable to turn on backtracking again. In this case, the grammar itself does not need to be transformed:

grammar Desperate;
options { backtrack = true; }
s : p 'x' | p 'y';
p : 'a' p 'b' | ;


We can parse just fine with that grammar. Demo omitted. Of course, efficiency issues are to be expected whenever backtracking is involved, but, again, we could try to leverage memoization in some cases, or we should indeed try to factor---as it is clearly trivial and appropriate in the specific case at hand.
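
The memoization hints above can actually be acted upon: ANTLR offers a memoize option that can be combined with backtracking. The following variant is a sketch for further experimentation; we did not run it in the lab:

grammar Memoized;
options { backtrack = true; memoize = true; }
s : p 'x' | p 'y';
p : 'a' p 'b' | ;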

Happy grammar engineering!

Regards,
Ralf

1/20/2011

A tiny bit of denotational semantics

All languages and assignments in this blog post are (C) 2011 Ralf Lämmel.

This is actually an assignment for my paradigms/semantics class.

Consider the following super-trivial interpreter written in Haskell and in the direct style of denotational semantics:



data Exp = Const Int
         | Add Exp Exp

eval :: Exp -> Int
eval (Const i) = i
eval (Add x y) = eval x + eval y


It's time to test:


> eval (Add (Const 20) (Const 22))
42


This incredible language handles integer constants and addition. Now assume that we are adding an "exit" form of expression. The intended semantics of a term "Exit e" is to define the final result of expression evaluation as the value of e. Hence, if an exit expression occurs inside an addition, then the addition is effectively cancelled. Thus:


data Exp = Const Int
         | Add Exp Exp
         | Exit Exp


For instance:


  • Add (Const 20) (Const 22) should evaluate to 42.

  • Add (Exit (Const 88)) (Const 42) should evaluate to 88.



A particularly clumsy way of writing an interpreter for the extended language is to stubbornly stick to the direct style of denotational semantics. That is, we may use, for example, the result type to encode whether we are returning from a normal evaluation or whether we are supposed to exit. In such a case, we have to systematically check intermediate results in order to propagate exits. In our simple language, we only need to propagate in the equation for addition. Thus:


-- Result type for clumsy direct style:
-- The sum serves for tagging.
-- Left tags normal result values.
-- Right tags exit values.
type R = Either Int Int

-- Standard combinator for sum processing
(\/) :: (a -> c) -> (b -> c) -> Either a b -> c
(f \/ g) (Left x) = f x
(f \/ g) (Right y) = g y

-- Tag removal
toInt :: Either Int Int -> Int
toInt = id \/ id

-- Clumsy semantic function
eval :: Exp -> R
eval (Const i) = con i
eval (Add x y) = add (eval x) (eval y)
eval (Exit x) = exit (eval x)

-- Algebra for denotations
con :: Int -> R
con = Left

add :: R -> R -> R
add x y = ((\x' -> ((Left . (x'+)) \/ Right) y) \/ Right) x

exit :: R -> R
exit = Right . toInt


The clumsy interpreter can indeed be tested as follows:


> toInt (eval (Add (Const 20) (Const 22)))
42
> toInt (eval (Add (Exit (Const 88)) (Const 42)))
88


Here are two trivial assignments that help you understand a) the basics of continuations and b) the relationship between analysis and denotational semantics. 1st assignment: Design an alternative semantics for the extended language that exploits continuation style instead of direct style. 2nd assignment: Revise the direct-style semantics to derive a program analysis that determines the purity of an expression; an expression is pure if it does not contain any reference to the Exit form of expression. Thus, we would like to be able to run tests as follows:


> eval (Add (Const 20) (Const 22)) id
42
> eval (Add (Exit (Const 88)) (Const 42)) id
88
> pure (Add (Const 20) (Const 22))
True
> pure (Add (Exit (Const 88)) (Const 42))
False


(We pass the identity continuation to the applications of eval.)
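
For checking your solutions after a serious attempt, here is one possible shape of both answers (a sketch of mine, not necessarily the intended model solution):

import Prelude hiding (pure) -- avoid a clash with Prelude's pure in recent GHCs

-- 1st assignment: continuation style; Exit discards the continuation.
eval :: Exp -> (Int -> Int) -> Int
eval (Const i) k = k i
eval (Add x y) k = eval x (\v1 -> eval y (\v2 -> k (v1 + v2)))
eval (Exit x)  _ = eval x id -- the value of x is the final result

-- 2nd assignment: purity analysis by structural recursion.
pure :: Exp -> Bool
pure (Const _) = True
pure (Add x y) = pure x && pure y
pure (Exit _)  = False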

Regards,
PF