Scoped global variables in Prolog

It just so happens that I needed global variables in some Prolog code.

In fact, I needed more carefully scoped global variables. SWI-Prolog's global variables are really global. (Well, they are thread-scoped, for what it's worth.) This is not good if you need many global variables, possibly in different parts of an application.

An uninspiring approach would be to fabricate global variable names so that they are scoped by some name prefix. It was more fun to achieve scoping by actually using one truly global variable to provide many scoped variables. Here is a demo:

?- logvars:get(myscope, a, X).                            

?- logvars:get(myscope, a, Y).

?- logvars:get(myscope, a, X), logvars:get(myscope, a, Y).
X = Y .

?- logvars:get(myscope, a, X), logvars:get(myscope, b, Y).
true .

Here is the code:



% (C) 2016 Ralf Laemmel
:- module(logvars, []).

% get(+S, +N, -V): Retrieve the value V of the global variable named N
% and "scoped" by the global variable S. The variable N is
% "automatically" created with a fresh logical variable V as its
% initial value.

get(S, N, V) :-
    ( nb_current(S, _) -> true ; nb_setval(S, _) ),
    nb_getval(S, (Ns, Vs)),
    varnth0(Pos, Ns, N),
    nth0(Pos, Vs, V).   

% varnth0(-Pos, ?Ns, +N): Given a list Ns of atoms with a variable
% tail, find the name N in Ns or, if not present, unify the first
% variable position of Ns with N; return the position as Pos. This is
% a helper for get/3.

varnth0(0, Ns, N) :-
    var(Ns), !,
    Ns = [N|_].
varnth0(Pos1, [N1|Ns], N2) :-
    ( N1 == N2 ->
          Pos1 = 0
    ;     varnth0(Pos2, Ns, N2),
          Pos1 is Pos2 + 1 ).
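For readers who prefer to see the idea outside Prolog, here is a rough Python analogue (a sketch with names of my own choosing, not a translation of the code above): one truly global table provides many scoped variables, and a fresh placeholder, standing in for the fresh logical variable, is created on first access.

```python
# One truly global table; everything else is scoped within it.
_scopes = {}

def get(scope, name):
    """Return the value cell for `name` under `scope`, creating a
    fresh placeholder (akin to a fresh logical variable) on first
    access."""
    table = _scopes.setdefault(scope, {})
    if name not in table:
        table[name] = object()  # fresh, distinct placeholder
    return table[name]
```

As in the demo: get('myscope', 'a') twice yields the same cell, while get('myscope', 'b') or another scope yields a different one.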


Megamodels of Coupled Transformations

While having fun at UvA in A'dam teaching Haskell in a pre-master summer school, I also managed to walk across the street to see folks at CWI, where I will be giving a presentation on coupled transformations and megamodeling.

Title: Megamodels of Coupled Transformations

Abstract: Many software engineering contexts involve a collection of coupled artifacts, i.e., changing one artifact may challenge consistency between artifacts of the collection. A coupled software transformation (CX) is meant to transform one or more artifacts of such a collection while preserving consistency. There are many forms of coupling, depending on technological space, application domain, and solution approach. We axiomatize and illustrate important forms of coupling within the Prolog-based software language repository YAS (Yet Another SLR (Software Language Repository)), relying on the higher-level predicate-logic-based megamodeling language LAL for axiomatization and the lower-level megamodeling language Ueber for build management and testing.

Date and Time: 19 August, 11:00am



An updated update on the software language book

This is an update on an earlier update.

I happily look forward to more peers using the material; please get in touch.

There are going to be these chapters; the chapters in boldface are readily available:

  1. Preface (introduction to the book)
  2. The notion of software language
  3. A DSL primer
  4. Syntax of software languages
  5. Metaprogramming
  6. Semantics of software languages
  7. Types and software languages
  8. Implementation of software languages
  9. Megamodeling for software languages
  10. Postface (summary and outlook)

Here are more details on the individual chapters (excluding pre- and postface):

  • Chapter 2 "The notion of software language" provides a lot of context, pointers, and illustrations meant to be useful for any course on software languages---be it a course on programming language theory, software language engineering, domain-specific languages, or compiler construction.
  • Chapter 3 "A DSL primer" provides a fast and practical entry to the field of language design and implementation. It touches upon various subjects such as concrete and abstract syntax, parsing, code generation, textual and visual syntax, internal and external DSLs, and so on.  A simple finite-state machine language is used as the running example. Python, Java, ANTLR, and several other languages and technologies are used for implementation.
  • Chapter 4 "Syntax of software languages" develops the essence of concrete and abstract syntax based on context-free grammars for string languages, signatures for tree-based abstract syntax (tree languages), and metamodels for graph-based abstract syntax (graph languages in the metamodeling sense). The basic principles of parsing are discussed, though without drilling into options and challenges.
  • Chapter 5 "Metaprogramming" develops the basics of metaprogramming: representation of object programs, parsing, interpretation, rewriting, traversal, and quasiquotation. Primarily, Haskell is used for illustrative purposes. In some spots, Prolog is used to provide alternative models. In a few spots, JSON, XML, or additional programming languages are exercised. The metametalevel of concrete and abstract syntax is also covered.
  • Chapter 6 "Semantics of software languages" presents the formal semantics in an applied manner, thereby supporting a disciplined style of interpreter development. Both big-step and small-step style of operational semantics as well as denotational semantics including continuation-passing style are discussed. Examples are developed in textbook-style formal notation and Haskell is used for the illustrative realization of semantics as interpreters. In some spots, Prolog is used to provide alternative illustrations.
  • Chapter 7 "Types and software languages" provides an applied account on type systems. Type systems of a simple expression language, a functional language, and an imperative language are described. Some excerpts are implemented in Haskell (and in Prolog). Some lambda calculi are covered in an attempt to study diverse aspects of statically typed languages, e.g., polymorphism and structural versus nominal typing. Just as in the case of the chapter on semantics, the focus is on conveying basic competencies of language design, documentation, and implementation.
  • Chapter 8 "Implementation of software languages" presents several scenarios of language implementation: compilation, optimization (by means of partial evaluation), static analysis (by means of abstract interpretation), type inference, and yet others.
  • Chapter 9 "Megamodeling for software languages" engages in a macroscopic view on language processing and usage scenarios of languages in software engineering. Languages, language elements, language definitions, and language implementations are organized in models (megamodels) so that some general principles (patterns) of software-language usage are captured. Incidentally, megamodeling also helps us to organize all the entities in the software language book in terms of build management and regression testing.

The book is systematically linked to YAS, Yet Another SLR (Software Language Repository).

The available chapters can be readily used in courses as follows:

  • General background on software languages and some basics of foundations and engineering. This may be useful as part of an advanced Bachelor or a basic Master course on compiler construction, software language engineering, domain-specific languages, or domain-specific modeling. 
  • Material on the basics of programming language theory in an applied style of presentation. This may be useful as part of an advanced Bachelor course on programming language theory or programming paradigms.

Looking forward to hearing from you.



Status update on the Software Language Book

My software language book is coming together. :-)

Software Languages: An Introduction
Ralf Lämmel
To appear in Springer Verlag, 2016-17

Call for book usage

If you are doing lectures or if you are otherwise interested in the emerging book, please get in touch. I can share draft material with you, subject to agreement on some (simple?) copyright-related protocol. Of course, I would be happy to see this work being used and to receive feedback. If you would rather wait for a more polished manuscript, then get in touch anyway, and I am happy to keep you in the loop.

Use cases of the book

Let me just explain how I use the book; please let me know about your needs. 

  • I am readily using this book now as the exclusive resource for my Bachelor-level course on programming language theory, as it covers enough aspects of formal syntax, semantics, interpretation, and type systems. The style in these areas is quite applied as opposed to theoretical. Nevertheless, some advanced topics are offered in the relevant chapters, e.g., metametamodels, partial evaluation, and abstract interpretation.
  • I am also using this book to support my Master-level course on software language engineering. Especially the DSL primer and the broad coverage of the notion of software languages and the topics on software metalanguages and software language implementation help me to quickly drill into software language engineering. I should mention that, for the purposes of my research-oriented course, I don’t expect in-depth coverage of those subjects; rather I want to prepare students for research work.

Important characteristics of the book

I already mentioned that the book is written in an applied as opposed to a theoretical style. The book develops simple notations for abstract and concrete syntax which are used throughout the book. Different programming languages are used to implement functionality (semantics, type checking, transformation, translation, etc.). In the context of semantics and type systems, Haskell is preferred. In some illustrations of language implementation, Python and Java are leveraged. In a few places, Prolog is used for representing rewrite systems or deductive systems to provide executable specifications for problems of software language processing. 

There are many exercises throughout the book; they are classified at the levels "basic", "intermediate", and "advanced". Solutions for the exercises are not included in the book, but solutions for the basic and intermediate levels should eventually be made available online.

Timeline for finalization

Realistically, I will need all of summer 2016 to finish and mature all content. In August, I plan to provide a version for reviewing by Springer. Depending on the details of the Springer-driven process and my ability to implement change requests in a timely manner, the book could be in print in six months from now (mid-June 2016). I would be very sad (as if you or I cared) if it took another year. Until then, I am happy to communicate with regard to the possible deployment of the draft material in your courses.

Status of material

These chapters are stable:
  • The notion of software language
    • Software language examples
    • Classification of software languages
    • The lifecycle of software languages
    • Software languages in software engineering
    • Software languages in academia
  • A DSL primer
    • Language concepts
    • Internal DSL style
    • Textual syntax
    • Parsing text to objects
    • Language-agnostic models
    • Language constraints
    • Interpretation
    • Code generation
    • Visual syntax
    • Summary
  • Syntax of software languages
    • Concrete syntax
    • Abstract syntax
    • The metametalevel
    • Syntax mapping
  • Semantics of software languages
    • Pragmatic interpretation
    • Big-step operational semantics
    • Small-step operational semantics
    • Denotational semantics
    • The λ-calculus
    • Partial evaluation
    • Abstract interpretation
  • Types and software languages
    • Type systems
    • Typed calculi
    • Type-system implementation

The following chapters are not ready for your eyes; they are released one by one to those interested:
  • Preface (introduction to the book)
  • Postface (summary and outlook)
  • Software metalanguages
    • Attribute grammars
    • Rewrite systems
    • Quasi quotation
    • Templates
    • Intermediate representation
  • Software language implementation
    • Parsing
    • Pretty printing
    • Code generation
    • Test-data generation
    • Syntax-directed editing
  • Megamodeling and software languages
    • Entities of interest
    • Relationships of interest
    • Build management and testing
    • Logic-based axiomatization
    • Megamodels of coupled transformations

What’s not in the book?
  • Description of specific platforms for software language implementation such as Eclipse, EMF, TXL, Stratego/XT, and Rascal. The book aims at being technology-agnostic and somewhat timeless. Also, the assumption is that there exists strong material for all important platforms. The book comes with online material though, which covers several platforms. Diverse platforms are mentioned in the right places in the book.
  • Proper coverage of compiler construction topics such as flow analysis, intermediate or machine code-level optimization, and code generation. A cursory discussion of compiler construction topics is included though. There are clearly very good books on compiler construction on the market. Don't expect this book to be useful for a compiler construction course.
  • A deeper discussion of current research trends in software language engineering. One might thus miss non-trivial coverage of, for example, language workbenches, projectional editing, language embedding, debugging, language integration, and advanced metalanguages, e.g., for name binding or type systems. The reason for exclusion is again that the book aims at being somewhat timeless. Pointers to research works are included, though.



Responding to reviews of rejected conference papers

This post is concerned with this overall question:

How to make good use of reviews for a rejected conference paper?

The obvious answer is presumably something like this:

Extract TODOs from the reviews. Do your work. Resubmit.

In this post, I'd like to advocate an additional element:

Write a commentary on the reviews.

Why would you respond on reviews for a rejected conference paper?

Here are the reasons I can think of:
  • R1: You received a review that is clearly weak and you want to complain publicly. I recommend against this complaint model. It is unfriendly toward the conference, the chairs, and the reviewers. If one really needs to complain, then one should do so in a friendly manner by communicating directly with the conference chair.
  • R2: You spot factual errors in an otherwise serious review and you want to defend yourself publicly. There is one good reason for doing this: it gets the issue off your chest. There are two good reasons for not doing it. Firstly, chances are that your defense is perceived as an unfriendly complaint; see above. Secondly, why bother and who cares? For instance, sending your defense to the chairs would be weird and useless, I guess.
  • R3: You want to make good use of the reviews along revision and document this properly.

R3 makes sense to me. 

R3 is what this post is about.

We respond to reviews anyway when working on revisions of journal submissions because we have to. One does not make it through a major revision request for a journal paper unless one really makes an effort to properly address the reviewer requests.

Some conferences run a rebuttal model, but this is much different. Rebuttal is about making reviewers understand the paper; revision of a journal paper is more about doing enough of presentational improvements or bits of extra thinking and even research so that a revision is ultimately accepted. 

In the case of a rejected conference paper and its revision, I suggest writing the commentary in a style as if the original reviewers were to decide on the revision, even though this will not happen, of course. It remains to be decided on a case-by-case basis whether, how, and when the commentary should be made available to whom and for what purpose.

Not that I want my submissions to be rejected, but it happens because of competition and real issues in a paper or the underlying research. My ICMT 2016 submission was received in a friendly enough manner, but rightly rejected. The paper is now revised and the paper's website features the ICMT 2016 notification and my commentary on the reviews. In this particular case, I estimated that public access to the notification and my commentary would do more good than harm. At the very least, I can provide a showcase for what I am talking about in this blog post.

With the commentary approach, there are some problems that I can think of:
  • P1: Reviewers or conference chairs feel offended. Without being too paranoid, the reviewers or the chairs could receive the commentary as a criticism of their work. For instance, the chair may think that some review was not strong enough to be publicly exposed as a data point of the conference. I have two answers. Firstly, an author should make an effort to avoid explicit or subliminal criticism. (We know how to do this because of how we deal with journal reviews.) Secondly, dear reviewers and chairs, maybe the world would be a better place if more of the review process would be transparent?
  • P2: Prior reviews and your commentary could be misused by reviewers. There is some good reason for not exposing reviewers to other reviews of the same paper (or a prior version thereof), at least not until they have cast their votes, because they may just get biased or they may use these other views without performing a thorough analysis of their own. This is a valid concern. This problem may call for some rules as to i) which conferences are eligible for passing reviews and commentary to each other and ii) when and how commentary can be used by reviewers.
  • P3: Your commentary is perceived as putting pressure on reviewers of the revision. At this stage of our system, I don't propose that reviewers should be required in any way to consider the commentary on a previous version of a paper, because reviewing already takes too much time. All I am saying is that reviewers should be given the opportunity to access previous reviews and the author's commentary, at least at some stage of the review process. Reviewers are welcome to ignore the commentary. In fact, some reviewing models may be hard to reconcile with the notion of commentary. For instance, I don't know whether it would work for the double-blind model.

In summary, commentary on rejected conference submissions is a bit like unit testing. We should do it because it helps us to test our interpretation of the reviews in a systematic manner. Without such testing we are likely to i) complain non-constructively about the reviews; ii) ignore legitimate and critical issues pointed out by the reviews; iii) as a consequence, resubmit suboptimal revisions that burden busy program committees. So we do not really write this commentary for future reviewers; rather, we write the commentary for ourselves. However, we write it in such a style that it could be used for good by future reviewers.

Once the community gets a bit more used to this idea, we could deal with commentaries pretty much in the same way as with optional appendices at some conferences. One risk is that of bias when reviewers are exposed to previous reviews and author claims in the commentary. Another risk is that a badly implemented process for commentaries would just cause more work for both program committees and authors. Maybe I am thinking a bit too revolutionarily here, but I am looking forward to a system where we break out of the static nature of program committees and allow review results and author responses to be passed on from conference to conference. I am thinking of a more social process of reviewing and revision.



CS intro != Programming intro

Yes, "CS intro != Programming intro". Of course it is not; maybe it would never even occur to you that it could be!? Well, in some places, the CS 101 ("introduction to CS") course ends up being essentially that, at least for some initial part. In some places (Koblenz included; see my OOPM course, which is under attack in this post), the CS 101 course certainly ends up trying to also introduce the basics of programming (because the lecturer thought he/she should).

Here is why I think this is wrong and I apologize for coming forward so late:

  • Teaching the trivial level of programming ("getting you going with for-loops and such" :-)) just doesn't belong at the university. Maybe it never belonged there, but it certainly doesn't belong there anymore (for like 10-20 years) because it is just such a basic skill and there is so much guidance out there (YouTube etc.) that a contemporary student wanting to study CS must (or should) simply be beyond the point of needing such a trivial introduction.
  • At a "mediocre" university like mine (not exactly overwhelmed with brilliant students like this), we may get some otherwise possibly smart students who certainly lack the most basic programming skills. Trying to get them up to speed in a university-style course just frustrates everyone. It has certainly bored me near to death.
As a result, I am changing my CS 101 course (11 ECTS on paper, which is crazy in itself), effective immediately (i.e., per end of October), as follows:
  • I decouple the objective of teaching basics of programming from other aspects of the course by running two tracks in parallel for some time. I collect these other aspects of the course, which are not about basics of actual programming, as the "theory track". The "programming track" is just there for those who need it. I expect it to be skipped by some students. The theory track will not make any assumptions about previous programming skills.
  • In the theory track, I spend several weeks overall on algebraic specification with applications to abstract data types and what is maybe best called "algebraically oriented domain modeling". See my recent slide deck (.pdf), in that instance Haskell-focused, though. This is joint work with Magne Haveraaen, University of Bergen. Welcome to the university! Students need to understand algebraic signatures and logic formulae. I am going to be nice with the students :-) So I shall limit coverage to the pragmatic application of loose algebraic specifications; no term algebras, no quotient algebras, no Herbrand universes, no nothing. Structural induction is in, though! Sorry.
  • As the two tracks have progressed enough, synergy happens such that we implement specifications in Java or Python. Appropriate axioms from the algebraic specifications will lend themselves as assertions and as (parametrized) unit tests. We are also well equipped to discuss design by contract. We will also be prepared to appreciate the beauty (?) of UML class diagrams with OCL constraints. Algebraic specifications and logics are everywhere.
  • Logistically, I will be working mostly without slides. Rather, I will use the whiteboard and different editors. The 101wiki of the 101companies project will host code, explanations, and links to Wikipedia and other online resources. I will distribute some lecture notes, but no wall of text is going to replace my previous wall of slides. I want to get across (through focused lecture-note material) some basic notions; for the rest, I use online resources including commented specifications and programs as well as links to Wikipedia et al.
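The planned synergy between algebraic axioms and tests can be sketched as follows; this is a hypothetical Python example of my own (the immutable Stack ADT and its axioms are illustrative, not course material): each axiom becomes an assertion that is "parametrized" over sample data.

```python
class Stack:
    """A tiny immutable stack ADT."""
    def __init__(self, items=()):
        self._items = tuple(items)
    def push(self, x):
        return Stack(self._items + (x,))
    def pop(self):
        return Stack(self._items[:-1])
    def top(self):
        return self._items[-1]
    def __eq__(self, other):
        return self._items == other._items

# Axioms as (parametrized) unit tests:
def axiom_pop_push(s, x):
    assert s.push(x).pop() == s   # pop(push(s, x)) == s

def axiom_top_push(s, x):
    assert s.push(x).top() == x   # top(push(s, x)) == x

# "Parametrize" over sample data:
for s in [Stack(), Stack([1, 2])]:
    for x in [0, 42]:
        axiom_pop_push(s, x)
        axiom_top_push(s, x)
```

The same axioms would also serve as assertions or design-by-contract obligations in a Java implementation.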
This gets us through 2/3 of the course. The remaining 1/3 of the course will use a single, sequential timeline. I will think about it, as it gets closer, but here are some thoughts:
  • Coverage of EBNF as another modeling method. Great! There is this wonderful connection between algebraic signatures and context-free grammars. Maybe, I will do term algebras anyway and manage to bore everyone to death.
  • Very basics of complexity theory. This can be done as soon as students are fluent in basic algorithmic programming. The same is true for the very basics of program verification. Coverage of program verification (Hoare logic) will definitely benefit from dealing with logic formulae in the theory track earlier.
  • There is a bunch of non-very basic programming topics that I tend to cover such as pointers, encapsulation, interfaces, modularity, inheritance, generics, exceptions. Some order needs to be determined. I have managed in the past.
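The connection between algebraic signatures and context-free grammars mentioned above can be sketched like this (a Python illustration of my own; the expression grammar is an assumed example): each grammar alternative corresponds to a constructor of a signature, and terms over the signature correspond to parse trees.

```python
from dataclasses import dataclass

# Grammar:    expr ::= number | expr "+" expr
# Signature:  num : int -> expr ; add : expr x expr -> expr

@dataclass
class Num:
    value: int

@dataclass
class Add:
    left: object   # an expr
    right: object  # an expr

def eval_expr(e):
    """Evaluate a term over the signature (i.e., a parse tree)."""
    if isinstance(e, Num):
        return e.value
    return eval_expr(e.left) + eval_expr(e.right)
```

For instance, eval_expr(Add(Num(1), Num(2))) evaluates the term for "1 + 2" to 3.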

Luckily, in the second semester, I teach "introduction to functional programming".  It is, of course, wonderful (in the view of the lecturer anyway) to make the students see the connection between algebraic specification and functional programming.


A tiny proof checker in Prolog

The Prolog code for this post is available on GitHub.

I am crazy enough to teach Hoare logic (axiomatic semantics) in the first course for our CS students. For instance, consider the following program (in some simple imperative language), which is supposed to compute quotient q and remainder r according to Euclidean division for operands x and y:

q = 0;
r = x;
while (r >= y) {
   r = r - y;
   q = q + 1;
}

Given natural numbers x and y, the postcondition of the program is this:

x == q*y + r && q >= 0 && r>=0 && r < y

The precondition is this:

x >= 0 && y > 0

We are supposed to prove the "triple" consisting of precondition, program, and postcondition.
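As a quick sanity check, which is of course no substitute for the proof, one can run the program on sample inputs and test the declared postcondition; here is a sketch in Python:

```python
def euclid(x, y):
    """The example program, transcribed directly."""
    q = 0
    r = x
    while r >= y:
        r = r - y
        q = q + 1
    return q, r

# Test the postcondition over inputs satisfying the precondition
# x >= 0 && y > 0:
for x in range(0, 20):
    for y in range(1, 6):
        q, r = euclid(x, y)
        assert x == q * y + r and q >= 0 and r >= 0 and r < y
```

Such testing checks finitely many cases only; the point of the Hoare-logic proof is to cover all of them.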

Let's face it: verification according to Hoare logic is not easily understood by beginners. Many of my students aren't mathematically or logically fit when they enter the curriculum. So it is a pain in the neck for them to listen and for me to present. However, I think it is still valuable to expose the students to Hoare logic early on, in the hope that related topics such as logic and verification show up again later so that understanding arises through iteration.

So far I have avoided Hoare logic in my course on programming language theory, as I felt that operational and denotational semantics are more presentable in a programming-based fashion. That is, one can obviously write interpreters both in the operational and the denotational style, while the assumption is here that interpreters (say, programs) are easily digested by the average CS student as opposed to proofs and logics.

This time around, I am going to add axiomatic semantics to said course. To this end, I am presenting axiomatic semantics mainly through a simple proof checker that is developed in Prolog. (As reading material, I recommend Chapter 6 of Hanne Riis Nielson, Flemming Nielson: Semantics with Applications: A Formal Introduction, Wiley, 1992, because it is easy enough, excellent overall, and available online.) In this manner, the programmatic path is also exercised for axiomatic semantics. As a neat side effect, this also allows me to talk about concepts such as rewriting, normalization, and simplification, again in a programming-based fashion. (These concepts are close to my heart as a software language engineer and programming language researcher.)

Here is a quick look at the proof checker.

The rules of Hoare logic are (of course) represented as definite clauses. For example:

/* In the case of an assignment, its precondition is calculated from
 * its postcondition by substituting the LHS variable by the RHS
 * expression. */

proof(P, assign(X, E), Q) :-
    subst(Q, X, E, P).

/* In the case of a statement sequence seq(S1, S2), the proof requires
 * subproofs for S1 and S2 subject to an intermediate assertion that
 * serves as postcondition of S1 and precondition of S2. To this end,
 * a statement sequence is represented here as a term with functor
 * seq_/3 (as opposed to the functor seq/2 of the original abstract
 * syntax) so that the extra intermediate assertion can be expressed. */

proof(P, seq_(S1, R, S2), Q) :-
    proof(P, S1, R),
    proof(R, S2, Q).

/* In the case of a while-loop while(E, S), the precondition of the
 * loop is per definition the invariant I of the loop and the
 * postcondition must be equivalent to and(I, not(E)). A subproof is
 * due for the body S of the loop such that the invariant follows as
 * postcondition of S for the precondition and(I, E), i.e., the
 * invariant and the loop's condition hold at the beginning of the
 * execution of the body. */

proof(I, while(E, S), Q) :-
    sameAs(and(I, not(E)), Q),
    proof(and(I, E), S, I).

/* The precondition of a proof can be strengthened. While
 * simplification/normalization of assertions is performed
 * automatically, strengthening must be explicitly requested. (This is
 * well in line with the usual extra rule of Hoare logic for
 * strengthening the precondition.) In the following notation, we
 * assume pseudo syntax to express the precondition before
 * strengthening, whereas the precondition of the triple is the
 * strengthened one. */

proof(P, pre(R, S), Q) :-
    implies(P, R),
    proof(R, S, Q).

Some helper predicates:

  • subst/4 is substitution of variables by expressions within assertions.
  • implies/2 is logical implication.
  • sameAs/2 is equivalence of assertions modulo normalization.
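The implementation of subst/4 is not shown here; the following Python sketch (with an assumed tuple encoding of terms, mirroring the Prolog functors) conveys the idea: substitute the variable X by the expression E everywhere in the assertion.

```python
def subst(q, x, e):
    """Replace every occurrence of ('var', x) in term q by e."""
    if not isinstance(q, tuple):
        return q            # atoms such as functor or variable names
    if q == ('var', x):
        return e
    return (q[0],) + tuple(subst(a, x, e) for a in q[1:])
```

For the assignment r = r - y and postcondition geq(var(r), number(0)), substitution yields the precondition geq(sub(var(r), var(y)), number(0)).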
Some rewrite rules of normalization are included for illustration:

% Reflexivity
rewrite(eq(X, X), true).
rewrite(geq(X, X), true).

% Prefer right-associativity over left-associativity
rewrite(and(and(E1, E2), E3), and(E1, and(E2, E3))).
rewrite(or(or(E1, E2), E3), or(E1, or(E2, E3))).

% Unit laws
rewrite(and(true, E), E).
rewrite(and(E, true), E).
rewrite(or(false, E), E).
rewrite(or(E, false), E).
rewrite(add(number(0), X), X).
rewrite(add(X, number(0)), X).
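The driver normalize/2 is likewise not shown; here is a rough Python sketch of how such a normalizer might work (the tuple term encoding and rule selection are my assumptions): apply the rewrite rules bottom-up and repeat until a fixpoint is reached.

```python
def rewrite(t):
    """One rewrite step at the root; None if no rule applies."""
    if t[0] == 'and' and t[1] == ('true',):   # unit law
        return t[2]
    if t[0] == 'and' and t[2] == ('true',):   # unit law
        return t[1]
    if t[0] == 'eq' and t[1] == t[2]:         # reflexivity
        return ('true',)
    return None

def normalize(t):
    """Normalize bottom-up until no rule applies anywhere."""
    if not isinstance(t, tuple):
        return t            # atoms are already normal
    t = (t[0],) + tuple(normalize(a) for a in t[1:])
    step = rewrite(t)
    return normalize(step) if step is not None else t
```

For example, and(eq(x, x), p) first rewrites the inner equation to true and then applies the unit law, leaving just p.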

Here is a simple (incomplete) axiomatization of implication:

% Implication complemented by normalization
implies(E1, E2) :-
    normalize(E1, E3),
    normalize(E2, E4),
    implies_(E3, E4).

% Search rules for implication
implies_(E, E).
implies_(_, true).
implies_(and(E1, E2), and(E3, E4)) :-
    implies_(E1, E3),
    implies_(E2, E4).
implies_(and(E1, _), E2) :-
    implies_(E1, E2).
implies_(and(_, E1), E2) :-
    implies_(E1, E2).
implies_(greater(X, number(N1)), greater(X, number(N2))) :-
    N1 > N2.

Here is a simple test predicate for proving correctness of an if-statement:

% Proof for a simple if-statement that computes the max of a and b
prove_if :-
    proof(
      true,
      if(greater(var(a), var(b)),
         pre(true, assign(r, var(a))),
         pre(true, assign(r, var(b)))),
      or(eq(var(r), var(a)), eq(var(r), var(b)))).

In abstract syntax, Euclidean division looks like this:

    seq(assign(q, number(0)),
        seq(assign(r, var(x)),
            while(geq(var(r), var(y)),
                  seq(assign(r, sub(var(r), var(y))),
                      assign(q, add(var(q), number(1)))))))

Here is the proof for Euclidean division; it contains the loop invariant and all other details of the proof.

% Proof for Euclidean division
prove_div :-
    proof(

      % "As declared" precondition of the program
      % x >= 0 && y > 0
      and(geq(var(x), number(0)), % As computed
          greater(var(y), number(0))), % Vacuously added for division by zero

      % Strengthen precondition
      pre(

        % Computed (weakest) precondition of the program
        geq(var(x), number(0)),

        % Beginning of program code
        seq_(

          % q = 0;
          assign(q, number(0)),

          % Intermediate assertion
          and(eq(var(x), add(mul(var(q), var(y)), var(x))),
              and(geq(var(q), number(0)),
                  geq(var(x), number(0)))),

          seq_(
            % r = x;
            assign(r, var(x)),
            % Intermediate assertion = invariant for while loop
            and(eq(var(x), add(mul(var(q), var(y)), var(r))),
                and(geq(var(q), number(0)),
                    geq(var(r), number(0)))),

            while(
              % Loop condition
              geq(var(r), var(y)),
              % Strengthen precondition
              pre(
                % Computed precondition of body
                and(eq(var(x), add(mul(add(var(q), number(1)), var(y)), sub(var(r), var(y)))),
                    and(geq(add(var(q), number(1)), number(0)),
                        geq(sub(var(r), var(y)), number(0)))),
                % Loop body
                seq_(
                  % r = r - y;
                  assign(r, sub(var(r), var(y))),

                  % Intermediate assertion
                  and(eq(var(x), add(mul(add(var(q), number(1)), var(y)), var(r))),
                      and(geq(add(var(q), number(1)), number(0)),
                          geq(var(r), number(0)))),

                  % q = q + 1;
                  assign(q, add(var(q), number(1))))))))),

      % "As declared" postcondition of the program
      % x == q*y + r && q >= 0 && r>=0 && r < y
      and(eq(var(x), add(mul(var(q), var(y)), var(r))),
          and(geq(var(q), number(0)),
              and(geq(var(r), number(0)),
                  not(geq(var(r), var(y))))))).

It just works:

?- prove_div.
true.