Logic Programming with LPL0 - An Introductory Guide

Thomas Sjöland
CSALAB / Computer Systems Architecture Laboratory
TTDS / Department of Telecommunications and Computer Systems
KTH / The Royal Institute of Technology
Stockholm
Sweden
February 1984

(reformatted 1999-09-12 with Netscape Composer)
TRITA-CS-8404
Logic Programming represents a powerful new metaphor for programming. Besides the standard procedural reading of a computer program, a logic program also has a declarative reading in first order predicate logic. This gives many programs a natural appearance that can be a strong help in the creation of error-free executable code. A Logic Programming language is also suitable as a specification language for system specification. Sometimes the logic specifications are themselves executable, which reduces the semantic gap between the specification and implementation languages. Problem areas where Logic Programming is particularly useful include, among others:
This document is a brief introduction to logic programming using a language in the PROLOG-family, LPL0, which has been implemented under the operating system UNIX at CSALAB.

Table Of Contents

  • Logic as a Programming Language -- The background of LPL0.
  • Representing a Program as Horn Clauses.
  • Representing Data in a Logic Program.
  • Logic Variables -- Binding.
  • Proving Goals -- Matching and Unification.
  • Controlling the Search -- the predicates '/' and 'false'.
  • Negation.
  • Functions.
  • Input and Output -- the predicate 'char_infile'.
  • Some Examples.

    Logic as a Programming Language -- The background of LPL0.

    The idea of using logic as a programming language has been present in the computer science community for a long time. Theorem provers were implemented early, but with only limited capacity regarding speed and storage. The use of the so-called resolution principle, as suggested by Robinson in the mid-sixties, was an important contribution. The idea of using a limited, but efficient, algorithm for proving theorems as the basis of a programming language, where proofs of propositions are equivalent to the construction of terms, led to the Marseille implementation of Prolog in 1972. Since then, several implementations of Prolog have been made with somewhat varying computational strategies. Particularly noteworthy is Dec-10 Prolog by David Warren et al., which includes a compiler that generates efficient machine code.

    Currently, Logic Programming is attracting the interest of many researchers in the field of computer science. Like functional languages it lends itself to a parallel execution with automatic distribution of small tasks over a network of processing elements.

    A language in the PROLOG-family, but excluding some of the non-logical features of PROLOG, called LPL0, has been implemented in the UNIX-environment at our lab by Dan Sahlin and Seif Haridi. This subset has much in common with PROLOG, but it also includes a functional interpretation of deterministic predicates and arithmetic expressions and some other features that make it different from other PROLOGs. The syntax of the language differs slightly from that of Dec-10 PROLOG, but the execution mechanism is basically the same. The system includes tail recursion optimization and a garbage collector.

    The system was originally intended as an implementation of the logic programming language LPL, described by Hansson, Haridi and Tärnlund in [ClTa], but several of the characteristics of this language were excluded during the implementation, mainly due to implementation difficulties. Another system, based on Natural Deduction and implemented as an interpreter in the language described here, LPL0, is named GEPR (or sometimes LPL) and is described in [HaSa2]. The implementation of the LPL0 language is described in [HaSa1].

    Representing a Program as Horn Clauses.

    Horn Clauses are a subset of the statements of Predicate Logic, named after the logician Alfred Horn. A Horn clause has two parts: the head and the body. The body may be empty, in which case the clause is called an "assertion". An assertion represents a simple fact about its parameters, while the body of a Horn clause represents conditions that all have to be true for the head statement to be valid.

    Example:

    p(17).
    p(x) <- x<8 & q(x).
    p(x) <- q(x) or p(x-1).

    In English the above example could be stated as follows:

    The property p holds for the constant 17.
    The property p holds for a term x if x<8 and q holds for x.
    The property p holds for x if q holds for x or p holds for x-1.
    When several clauses exist with the same name, they represent alternative facts and rules about this relation. The connective 'or' in the body of a statement also represents alternative facts about the relation, at least one of which should be true for the relation to hold. The connective '&' connects the goals in the body together, all of which should be true for that alternative of the relation to hold.
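    As a small sketch of this equivalence, the two forms can be compared directly, using the clause syntax introduced above (the predicate names are chosen for illustration only):

    p(x) <- q(x).
    p(x) <- r(x).

    /* is declaratively equivalent to */

    p(x) <- q(x) or r(x).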

    A program can be understood in either of two ways. First it can be seen as a set of Horn clauses (and functions) stating facts about data. This is the declarative reading of a logic program. Second, considering the particular execution mechanism used in the logic programming system at hand, it can be viewed as a program describing a particular execution. In order to control the program flow, certain non-logical (or extra-logical) predicates are used which do not express any facts about the terms we are talking about. This is the "Procedural Reading" of a logic program.

    The execution of a logic program can be considered as a constructive proof procedure, where the system attempts to prove a given goal statement in a given environment of computation. If a proof succeeds under some restrictions on the variables in the given goal, these restrictions are shown as "bindings" of the variables.
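    For instance, using the clauses for 'p' in the example above, a proof of the goal 'p(x)' can succeed directly by the first clause (the exact form in which the system presents bindings is not shown here):

    p(x)    /* provable with the binding x = 17, using the assertion p(17) */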

    Representing Data in a Logic Program.

    The data of a logic program is represented as datastructures. A datastructure is a named frame containing subcomponents which are themselves datastructures, constants or variables. In LPL0 every datastructure has a fixed number of components (fixed arity). A special case of datastructures is the constant, which can be considered a datastructure without subcomponents. Its only property is its name. A special type of constant is the integer number, which can occur in meaningful arithmetic expressions. From the point of view of logic, an integer is no different from any other constant. The arithmetic expressions can be viewed as syntactic sugar for relations on numbers. To distinguish a datastructure from a variable or a function, the syntactic convention has been adopted that the name of a datastructure always begins with a capital letter. Any string, though, may be used as a datastructure identifier if it is enclosed in single quotes.

    A list is a very convenient form of data representation, which is well exploited in the programming language LISP. Lists can also be used in LPL0, although they are not strictly necessary. One could even argue that datastructures can be represented by a list with a constant, denoting the name of the functor, in the first position of the list. The dotted pair of LISP, e.g. '(A.B)', is in LPL0 denoted '[A|B]'. The notation '(A B C)' in LISP for the list of three elements A, B and C is written [A,B,C] in LPL0. The same list can be written with cons cells:

    In LISP: (A.(B.(C.NIL)))
    In LPL0: [A|[B|[C|[]]]]

    Here are some examples of constants:
     

    X Y123 Hello This_is_a_very_long_constantname_indeed
    'x' 'y123' 'etc' 123 -4711

    Here are some examples of datastructures:
    (a constant is a datastructure with arity=0)
     

    Hello Tree(l,r) Data(Data(d,e),Data(D,E))
    XXX(YYY(z)) '+' 'hello' -4711
    List(A,List(B,List(C,List(D,NIL))))
    Tree(left,node,right)

    Here are some examples of lists:

    The empty list: []
    List with one element: [A]
    List with two elements: [A,B]
    List with head and tail: [h|t] or [h,..t]
    List of lists: [[a,b],[c,d],[e,f,g|t]|t1]

    Logic Variables -- Binding.

    The variables of an ordinary imperative language (C, PASCAL, FORTRAN) are abstractions of the computer memory and the imperative program describes successive manipulations that are to be undertaken on these memory cells in order to transform the input data into the output. The order of these manipulations is therefore in many cases crucial in imperative programs.

    A variable in a logic program is a term which can come to denote other terms during the execution of a proof. Once a variable has been bound to some structure, or to another variable, it is not possible to release this binding during the current branch of the proof. A successful proof leads to some particular binding pattern on the included variables. These bindings represent the minimum restrictions made on the values of the variables to make the proof valid. The order in which the bindings have been made does not matter for the correctness of the proof. (It might however affect termination of the program).

    A logic variable may be in either of two states during the execution of a logic program. It starts in the unbound state and during the computation of a proof it may become bound. A bound variable cannot become unbound during the program execution except in the case when a branch of a proof fails and another branch is tried ("backtracking").
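    A small sketch of how bindings arise (the predicate name 'eq' is chosen here for illustration): a single-clause predicate succeeds exactly when its two arguments can be unified, and the bindings produced are the minimal restrictions needed for the proof.

    eq(x,x).

    /* proving eq(Pair(a,B), Pair(A,b)) succeeds with the
       bindings a = A and b = B -- no other restrictions are made */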

    The syntactic convention adopted in LPL0 for representing variables is that variables begin with a lower case letter. The same convention is used for predicates and functions, but the syntactic rules keep the understanding of the program unambiguous. Here are some examples of variables:

    x y123 hello this_is_a_very_long_variablename_indeed

    Proving Goals -- Matching and Unification.

    When attempting to prove an atomic goal, 'p(args)', the LPL0-interpreter finds the clauses in the database with the name 'p' and tries to prove one of them. This is accomplished by first matching the arguments 'args' in the goal against the terms in the head of the clause. This matching operation tries to 'unify' the two matched terms, i.e. to check if they can be made to represent the same values. The process is called unification. A unification may succeed or fail depending on the data involved. If it succeeds, some bindings of the variables might be produced, representing the minimal restrictions necessary for this branch of the proof to succeed.

    If the clause has a body, the goals of the body must also be proven. If the unification or the proof of the body fails, the alternative clauses for the predicate 'p' are tried. If no alternatives remain, the proof of the goal 'p' fails on the next higher level. When an assertion, i.e. a clause for 'p' without a body, is proved, the proof of the current goal is finished and the resulting bindings can be presented to the next higher level, which for the topmost goal is the user.

    If no proof can be found, the computation of this branch of the proof fails. If alternative branches remain that are not yet tried, the system tries to prove them. This mechanism is called backtracking, and since the proof proceeds to the bottom goal in each branch before trying any alternative, and since the goals of the body of a clause are tried in a left-to-right order, the execution mode of LPL0 (and most other PROLOGs) is called depth-first, left-to-right with backtracking.
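    The strategy can be illustrated with a small sketched database (hypothetical predicates; the trace is given as comments):

    p(1).
    p(2).
    q(2).
    r(x) <- p(x) & q(x).

    /* Proving r(x): the goal p(x) first binds x=1, but q(1) fails.
       Backtracking unbinds x, the alternative clause p(2) binds x=2,
       and q(2) succeeds, so r(x) succeeds with the binding x=2. */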

    The proof mode used in PROLOG is normally the most efficient proof mechanism regarding space consumption, since most of the memory allocated for the proof of a branch can be reclaimed after each branch. It is also simple to manage for the programmer when he uses non-logical predicates like 'write' or 'read' in the program clauses. Breadth-first interpreters, i.e. proof procedures that try all alternatives concurrently (or variants thereof), are not so common, since their memory requirements often grow exponentially during a proof. A breadth-first interpreter has, though, the advantage of providing the proof-theoretical property of 'fairness', i.e. all true statements which follow from the clauses of a program are provable (as long as the amount of available memory is sufficient). A looping clause like 'p(x) <- p(x)', which might prevent the standard interpreter from finding a solution simply by not terminating, causes no problem (except for using resources) in a fairness-preserving interpreter. Since such a clause may inhibit the computation completely if depth-first semantics is used, it must be considered a programming error in LPL0, even though it may have a reasonable declarative meaning.

    Sorting Of Clauses Due To Instantiation Pattern.

    In LPL0 a special compiler mechanism can be used to produce different versions of the code for a relation, depending on the static instantiation pattern of the parameters at the places where the relation is called: for each unique instantiation pattern the compiler produces a special version of the code, in which the goals in the body of the statement have been rearranged so that the goals with the "most instantiated" arguments are tried before those with less instantiated arguments. The syntactic convention used for this feature is a percent sign preceding the terms in the head of the relation. Sometimes this feature can be used to guarantee termination of programs that would otherwise loop indefinitely.

    An example:

    father(%x,%y) <- male(x) & parent(x,y).
    The body of the clause is sorted differently in the different versions of the code, depending on the instantiation pattern of the variables. Four cases should be handled, and the statement is transformed into different, but declaratively equivalent, versions:

    Instantiation percentage:

    male parent
    0%   0%     1) father(x,y) <- male(x) & parent(x,y).
    100% 0%     2) father(x,y) <- male(x) & parent(x,y).
    0%  50%     3) father(x,y) <- parent(x,y) & male(x).
    100% 100%   4) father(x,y) <- male(x) & parent(x,y).

    Of course, in this case the analysis resulted in only two distinct cases, and therefore only two versions of the code are produced.

    Controlling the Search -- the predicates '/' and 'false'.

    The execution of a Horn Clause program can be viewed as a search tree traversal where the search strategy is depth-first left-to-right with backtracking. Sometimes when a solution to a subproblem has been found, no other solutions to it or to earlier proved subgoals of the current goal need to be considered.

    By using the non-logical primitive predicate '/', normally named 'cut' or 'slash', you achieve the effect of cutting away alternative branches of subgoals in the clause that is currently being proved. The alternatives on a 'higher' level, i.e. to the clause of which the current goal is a part, are kept, though. This can decrease the amount of unnecessary computation.

    N.B. '/' must be used with care since some correct solutions may be inhibited by it.
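    A small sketch of the use of '/' (the predicate name 'member_once' is hypothetical): the first clause commits to the first matching element, so on backtracking no further solutions are produced.

    member_once(x,[x|_]) <- /.
    member_once(x,[_|t]) <- member_once(x,t).

    /* member_once(x,[A,B,A]) succeeds once with x=A; the '/' cuts
       away the alternatives that would yield x=B and x=A again.
       Note that with x unbound this also cuts away correct solutions. */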

    The primitive predicate 'false' is used in order to prevent a branch from succeeding, thereby forcing backtracking to occur. This may be useful for instance in connection with I/O.

    Negation.

    The implementation of negation in the PROLOG-brand of logic languages is based on the "closed world assumption". In ordinary English this principle reads: that which is not stated explicitly is not the case.

    The facts of the database in a Horn Clause program are considered to be positive statements about the "world". The negated statement 'not p(x)' should succeed if the proof of the statement 'p(x)' fails and it should fail if the proof of the statement 'p(x)' succeeds. The way this is implemented is through two clauses:

    not_p(x) <- p(x) & / & false.
    not_p(x).
    This works perfectly as long as the variable x is fully instantiated when the call to 'not_p(x)' is being performed. But consider the following program:
    p() <- not_animal(x) & x=House.
    animal(Cow).
    not_animal(x) <- animal(x) & / & false.
    not_animal(x).

    The declarative understanding of this program is intuitively clear:

    A house is not an animal so the program should succeed.

    Unfortunately it fails because the call 'not_animal(x)' fails since there is something which can be bound to the as yet unbound variable 'x', namely 'Cow', which certainly is an animal. That the clause is really stating something about a House is not clear to the interpreter at this stage and the '/' takes away the possibility for the program to succeed in this case!

    By reordering the goals in the body of the top goal 'p()' we reach a program that behaves correctly and succeeds:

    p() <- x=House & not_animal(x).
    A slight modification to the original example gives this program:
    p() <- not_not_animal(x) & x=House.
    animal(Cow).
    not_animal(x) <- animal(x) & / & false.
    not_animal(x).
    not_not_animal(x) <- not_animal(x) & / & false.
    not_not_animal(x).
    The way this program is stated it will succeed in proving that a 'House' is NOT a non-animal, therefore it proves that a 'House' is an animal!

    Now this may indeed suggest that something is very wrong with the standard execution mechanism for PROLOG with respect to negation. But a closer inspection shows that we were a little too quick in our statement about the declarative meaning of the program above. Declaratively speaking, the program states that the property not_animal holds for any term and also that the property not_not_animal holds for any term. The '/' used in the first of the cases is there to rule out the alternatives that were not intended, but this scheme does not work when the terms are not instantiated upon calling the predicate. So this implementation of negation in PROLOG does not correctly reflect the meaning of negation in logic unless the terms are fully instantiated (ground). It has sometimes been argued that the suggestive shorthand 'not p(x)' should not be used in PROLOG, but that the user should instead be expected to implement "negation" using the primitives '/' and 'false'.

    Unfortunately this type of behaviour of a program where non-logical predicates are used can be difficult to spot in a more realistic program and therefore one must be cautious when using negated clauses. Problems of this kind are overcome in PROLOGs where a more advanced execution mechanism is used, e.g. MU-PROLOG [Na].

    Functions

    In LPL0 you can use functions in the cases when your program represents a deterministic computation. This makes many programs clearer.

    A function can be seen as a special relation where there is one unique value in the output domain for each input tuple. This means, that once a function has succeeded in supplying a value, no alternative branches need to be tried. This leads to a more efficient usage of the memory in run time, if functions are used instead of relations where possible.

    In LPL0 a function may not be run "backwards", i.e. the input parameters must be instantiated when calling a function. The compiler sorts the declared clauses for the functions and the goals in the bodies of the functions with respect to static knowledge about the instantiation pattern. Therefore one must not assume any particular order of the execution when using functions, but instead state the definitions of the functions in a way that is declaratively true and trust the compiler with the work of ordering the clauses. An exception to this principle makes it possible for a user to explicitly state that one alternative should be tried last as a 'catch-all' case by using the 'otherwise' statement.

    If '/' is used explicitly in a function it is not moved by the compiler. The compiler produces warnings wherever it can detect a possible ambiguity in the definition of a function.
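    As a sketch, a list-length function can be written in the function definition form given by the appendix grammar (<func> '=' <term>); the name 'length' is chosen here for illustration:

    length([]) = 0.
    length([_|t]) = length(t)+1.

    /* the term length([A,B,C]) evaluates to 3 */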

    Input and Output -- the predicate 'char_infile'.

    While I/O represents no particular theoretical problem to an imperative language, it certainly is one to a language in the PROLOG-family. The normal implementation of I/O in PROLOG uses non-logical predicates like 'get' and 'put' to read a character from the input device and write a character on the output device, respectively. By using predicates like 'seek' and 'tell', files in the file system can be handled. This is quite natural if you regard the Horn clause program without respect to the declarative meaning of the clauses involved. But if you want to see the program as clauses stating truths about the world, these predicates with side-effects on the outside world are dubious!

    It is possible to overcome most of the difficulties with input by implementing input as streams. This is accomplished in LPL0 using the predicate 'char_infile' to associate a list with an input file. After this predicate has succeeded, the list contains the ascii-codes of the characters of the file and can be unified with other lists in order to check the content. In this way any input stream can be incorporated in a logic language in a way that is acceptable logically speaking. For compatibility reasons LPL0 also contains the predicates 'read' and 'get', which work as "true" predicates with side-effects in the ordinary way.
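    A sketch of how 'char_infile' might be used (the arity and argument form of 'char_infile' are assumptions here, as they are not specified above; the predicates 'copy' and 'echo' are hypothetical): a program that copies its input to the output using 'put'.

    copy() <- char_infile(l) & echo(l).

    echo([]).
    echo([c|t]) <- put(c) & echo(t).

    /* 'l' is bound to the list of ascii-codes of the input;
       'echo' writes each code back with 'put' */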

    Output has to be implemented as predicates with side-effects. In LPL0 there are 'write(x)', which writes 'x' on standard output, and 'put(i)', which writes the character with ascii-code 'i' on standard output. If the output predicates only occur last in the top goal, it is guaranteed that no output is generated from any failing branch.

    Some Examples

    In this section some examples of LPL0-programs are discussed in some detail from the programmer's point of view. Hopefully these examples can clarify some of the fog obscuring the domain of Logic Programming. For more examples of logic programs the reader is advised to read for instance the book "Programming in PROLOG" by Clocksin and Mellish. Even though LPL0 is not PROLOG, and some of the programs described in the PROLOG literature will not work in LPL0, most of the differences are of a syntactic nature and most PROLOG programs will be executable in LPL0 as well.

    Ex 1: A Logic Program Representing Family Relations.

    /* Family Relations. */
    prog() <- father(x,y) & write('The father of ') &
              write(y) & write(' is ') & write(x).
    father(x,y) <- male(x) & parent(x,y).
    male(Sture). male(Ture). female(Anna). female(Gun). female(Lisa).
    grandparent(x,y) <- parent(x,z) & parent(z,y).
    grandfather(x,y) <- male(x) & grandparent(x,y).
    grandmother(x,y) <- female(x) & grandparent(x,y).
    grandson(x,y) <- male(x) & grandparent(y,x).
    granddaughter(x,y) <- female(x) & grandparent(y,x).
    brother(x,y) <- male(x) & parent(z,x) & parent(z,y) & x/=y.
    sister(x,y) <- female(x) & parent(z,x) & parent(z,y) & x/=y.
    cousin(x,y) <- grandparent(z,x) & grandparent(z,y) & x/=y.
    parent(Ture,Anna). parent(Ture,Gun).
    parent(Sture,Ture). parent(Sture,Anna).
    parent(Anna,Lisa).
    /* End of program for family relations. */

    This program, though small for reasons of example, is a typical example of a simple database program. The declarative reading of the program is straightforward:

    The first clause is the statement that we want to prove. It is true if the statement father(x,y) is true for some values of variables x and y. (The predicate 'write' is always true in the declarative reading). The rest of the clauses are the programming environment.

    The procedural reading of this program also gives some information.

    A translation into English could be:

    For clause 1: In order to prove prog, try to prove that the relation father holds for some pair of terms (x,y) and write out the specified text ("prove the predicate write") and the values bound to the variables (x,y) after the proof.

    For clause 2: In order to prove that the relation father holds for some given pair of terms (x,y), try to prove the relation male for the term x and the relation parent for (x,y). For clauses 3 etc: It is true that male holds for the constant term Sture, etc.

    Ex 2: The relation member of a list.

    member(a,[a|_]).
    member(a,[_|t]) <- member(a,t).

    This relation expresses facts about membership in a list. The meaning of the first clause is that an element, a, is a member of any list in which a is the first element. The symbol '_' stands for the 'void' variable, which can denote any term.

    The second clause expresses the fact that the element a is a member of the list [_|t] if a is a member of the rest of the list denoted by t.

    A proof of the relation (predicate) member will succeed if the query matches a goal fulfilling the conditions stated in either of these two clauses. The second clause in the definition of member is recursive.
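    Some sketched uses of member, with the resulting bindings given as comments:

    /* member(B,[A,B,C])  -- succeeds.
       member(x,[A,B,C])  -- succeeds with x=A; on backtracking
                             also with x=B and with x=C.
       member(D,[A,B,C])  -- fails: no clause applies. */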

    Ex 3: Appending lists.

    append([],l,l).
    append([h|t],l,[h|x]) <- append(t,l,x).
    This definition states what it means for one list to be the result of appending two other lists. When executed, the predicate can append two lists.

    I.e. after attempting to prove the statement append([A,B],[C,D,E],x) the variable x will be bound to the list [A,B,C,D,E].

    But moreover it can also be used for decomposing lists!

    If one attempts to prove for instance: append(x,[G,E,F],[A,B,C,G,E,F]) the resulting binding to the variable x will be the list [A,B,C].
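    On backtracking, such a goal in fact enumerates every way of splitting the list. A sketch:

    /* append(x,y,[A,B]) has three proofs, found one at a time
       by backtracking:
         x=[]      y=[A,B]
         x=[A]     y=[B]
         x=[A,B]   y=[]     */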

    Ex 4: A parser for s-expressions.

    An s-expression is a general representation form for data used in LISP and in some PROLOG dialects, especially those embedded in a LISP environment.

    A Backus-Naur grammar describing legal s-expressions:

    <s-expr> ::= <s-atom> | '(' <s-exprs> [ '.' <s-expr> ] ')'
    <s-exprs> ::= <s-expr> [<s-exprs>]
    <s-atom> ::= [<blanks>] <non_blanks> [<blanks>]
    <non_blanks> ::= <non_blank> [<non_blanks>]
    <blanks> ::= <blank> [<blanks>]

    A non-blank is an ascii-character with a value greater than 32 (decimal). A blank is an ascii-character with a value less than or equal to 32.

    The input to a program is represented as a list of characters (ascii-values).

    The predicate sExpr(l1,l2,s) succeeds if the beginning of the list l1 is a list of characters representing an s-expression. As a result of the computation the structure s contains a datastructure (a list) representing the s-expression. The rest of the input text is stored in the list l2.  

    p() <- sExpr("(A.B)",out,s) & write(out) & write(s).
    sExpr(in,out,s) <-
        (blanks(in,i0) or in=i0)
        & (sAtom(i0,out,s)
        or
        lpar(i0,i1)
        & sExprs(i1,i2,s,last)
        & (dot(i2,i3)
        & sExpr(i3,i4,last)
        or
        i4=i2
        & last=[])
        & rpar(i4,out)).
    sExprs(in,out,[h|t],last) <-
        sExpr(in,i1,h)
        & (sExprs(i1,out,t,last)
        or out=i1 & t=last).
    sAtom(in,out,a) <- nonBlanks(in,out,a).
    nonBlanks(in,out,[h|t]) <-
        nonBlank(in,i1,h)
        & (nonBlanks(i1,out,t)
        or
        i1=out
        & t=[]).
    blanks(in,out) <-
        blank(in,i1)
        & (blanks(i1,out) or i1=out).
    blank(in,out) <- in=[c|out] & c<=32.
    nonBlank(in,out,c) <- in=[c|out] & c>=48.
    lpar(in,out) <- in=[#(|out].
    rpar(in,out) <- (blanks(in,i1) or in=i1) & i1=[#)|out].
    dot(in,out) <- (blanks(in,i1) or in=i1) & i1=[#.|out].

    Some comments could be made about this program. The first thing one can notice is the close similarity between the grammar representation of the syntax to be parsed and the logic program that actually expresses these facts and is able to perform the task. The usage of the logical connectives '&' and 'or' has not been explained, but these connectives have their expected meaning. It should be noted that in the general case the connective 'or' represents a division of the proof into two independent parts, whereas the connective '&' connects two subproofs which should both hold for the statement to be true. In the case of LPL0 (and PROLOG), where Horn Clauses are the only allowed relational statements, these two connectives ('&' and 'or') are the only ones that can occur in the body of a clause.

    Ex 5: append as a function.

    If the append program is to be used strictly for appending lists, it can be expressed as a function instead of as a relation:

    append([],l)=l.
    append([h|t],l)=[h|append(t,l)].
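    A sketched use of the function form, with the function call appearing as a term (following the extended <term> grammar in the appendix; the predicate 'p' is hypothetical):

    p() <- x=append([A,B],[C,D]) & write(x).

    /* x is bound to the value of the function call,
       i.e. the list [A,B,C,D] */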

    Ex 6: Unifying Sets represented as lists.

    Sets cannot be represented directly as objects in LPL0 because the structured types of the language, lists and dataconstructors, have a defined order among their components. Sets can, however, be represented by the existing datatypes in a convenient way. Here I will use a tagged list to represent a set. E.g. Set([A,B,C]) represents the set {A B C}, and so does Set([B,A,C]) or Set([C,A,B]).

    It may be interesting to unify two sets to see if the two sets are equal.

    /* This predicate can be used to compactify the list representing the set. */
    uniquelist([],[]).
    uniquelist([h|t],l) <-
        member(h,t)
        & uniquelist(t,l)
        or
        l=[h|l1]
        & uniquelist(t,l1).
    /* In order to unify sets represented by lists this predicate applies.
        By using the help-predicate 'allmember' it can unify nested sets.
        To distinguish between sets and ordinary lists the sets are 'tagged'
        with the dataconstructor 'Set(_)'. */
    unifySets(Set(s1),Set(s2)) <-
        allmember(s1,s2) &
        allmember(s2,s1).
    allmember(s1,s2) <-
        s1=[]
        or
        s1=[h|t]
        & (member(h,s2)
        & allmember(t,s2)
        or
        h=Set(_)
        & member(Set(l2),s2)
        & unifySets(h,Set(l2))).
    member(x,[x|_]).
    member(x,[_|t]) <- member(x,t).

    Discussion

    Implementing set-unification is somewhat problematic in that the unification of two sets often succeeds in several ways. This gives an unnecessary overhead, in that exactly the same solution is found in several different ways. Think of ways to prevent this from happening and try to formulate the code for such a unification algorithm.

    Appendix: A Grammar For Horn Clause Statements.

    <clause> ::= <head> [ '<-' <body> ]
    <head> ::= <goal>
    <body> ::= <goal> [ '&' <body> ]
    <goal> ::= <name> '(' <termlist> ')'
                | <cut>
                | <false>
    <termlist> ::= <term> [ ',' <termlist> ]
    <term> ::= <variable>
                | <dataconstructor> [ '(' termlist ')' ]
                | <list>
                | <integer>
                | <expression>
    <list> ::= '[' <term> [ <moreterms> ] [ '|' <term> ] ']'
    <moreterms> ::= ',' <term> [ <moreterms> ]
    <expression> ::= <term> <operator> <term>
                | <operator> <term>
    <operator> ::= '=' | '<' | '<=' | '>=' | '>' | '+' | '-' | '*' | '/'
    <cut> ::= '/'
    <false> ::= 'false'

    If functions are included the definition of <term> should be extended with:
     

    <term>   ::= <variable>
                | <dataconstructor> [ '(' termlist ')' ]
                | <constant>
                | <integer>
                | <expression>
                | <func>
    <func>    ::= <name> '(' <termlist> ')'
    <funcdef> ::= <func> '=' <term>

    References

    [Na]         An Introduction to MU-Prolog.
                    Lee Naish
                    Technical Report 82/2 (Revised July 1983)
                    Department of Computer Science
                    University of Melbourne

    [HaSa1]   An Abstract Machine for LPL0.
                    Seif Haridi & Dan Sahlin
                    CSALAB TTDS/KTH Stockholm 1982

    [HaSa2]   Evaluation of Logic Programs Based On Natural Deduction. (Corrected Draft)
                    Seif Haridi & Dan Sahlin
                    CSALAB TTDS/KTH Stockholm June 1983

    [Wa]         Logic Programming and Compiler Writing.
                    David Warren
                    (in Software Practice and Experience Vol 10, 97-125 1980)

    [ClTa]       Properties of a Logic Programming Language.
                    Åke Hansson, Seif Haridi*, Sten-Åke Tärnlund (UPMAIL)
                    *TTDS/CSALAB
                    (in "Logic Programming", edited by Clark & Tärnlund, Academic Press 1982)