Selected papers authored or co-authored by M.H. van Emden

  1. Title:
    Contributions to the compositional semantics of first-order predicate logic
    Authors: Philip Kelly and M.H. van Emden

    Abstract:
    Henkin, Monk and Tarski gave a compositional semantics for first-order predicate logic. We extend this work by including function symbols in the language and by giving the denotation of the atomic formula as a composition of the denotations of its predicate symbol and of its tuple of arguments. In addition we give the denotation of a term as a composition of the denotations of its function symbol and of its tuple of arguments.

    Date: December 2015
    pdf

  2. Title:
    The lambda mechanism in lambda calculus and in other calculi
    Author: M.H. van Emden

    Abstract:
    A comparison of Landin's form of lambda calculus with Church's shows that, independently of the lambda calculus, there exists a mechanism for converting functions with arguments indexed by variables to the usual kind of function where the arguments are indexed numerically. We call this the "lambda mechanism" and show how it can be used in other calculi. In first-order predicate logic it can be used to define new functions and new predicates in terms of existing ones. In a purely imperative programming language it can be used to provide an Algol-like procedure facility.

    Date: March 2015
    pdf
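
    The conversion described in the abstract can be hinted at with a minimal Python sketch (ours, not the paper's; the name `lam` and the dictionary encoding of environments are our assumptions): a function whose arguments are indexed by variable names is turned into the usual kind of function whose arguments are indexed numerically.

```python
# Illustrative sketch of the "lambda mechanism" (our encoding, not the
# paper's formalism): `lam` binds the listed variable names in order,
# converting an environment-indexed function into a positional one.

def lam(var_names, env_f):
    """Turn a function of a name-indexed environment into a positional one."""
    def positional(*args):
        env = dict(zip(var_names, args))  # index arguments by variable name
        return env_f(env)
    return positional

# A function whose arguments are looked up by variable name:
area = lambda env: env["width"] * env["height"]

# The lambda mechanism yields the familiar positional form:
area_pos = lam(["width", "height"], area)
print(area_pos(3, 4))  # 12
```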

  3. Title:
    Logic programming beyond Prolog
    Author: M.H. van Emden

    Abstract:
    A program in pure Prolog is an executable specification. For example, merge sort in Prolog is a logical formula, yet shows creditable performance on long lists. But such executable specifications are a compromise: the logic is distorted by algorithmic considerations, yet is only indirectly executable via an abstract machine. This paper introduces relational programming, a method that solves the difficulty with Prolog programming by a separation of concerns. It requires writing three texts: (1) the axioms, a logical formula that specifies the problem and is not compromised by algorithmic considerations, (2) the theorem, a logical formula that expresses the algorithm yet follows from the axioms, and (3) the code, a transcription of the theorem to C++. Correctness of the code relies on the logical justification of the theorem by the axioms and on a faithful transcription of the theorem to C++. Sorting is an example where relational programming has the advantage of a higher degree of abstractness: the data to be sorted can be any C++ data type that satisfies the axioms of linear order, while the Prolog version is limited to linked lists. Another advantage of relational programs is that they can be shown to have a model-theoretic and a fixpoint semantics that are equivalent to each other and analogous to those of pure Prolog programs.

    Date: December 2014
    pdf
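
    The sorting example in the abstract can be suggested by a small sketch (ours, in Python rather than the paper's C++, and not the paper's relational method): a merge sort that depends only on a user-supplied linear-order relation, so the data may be of any type satisfying the axioms of linear order.

```python
# Merge sort parameterized solely by a linear-order relation `less`
# (our illustration of sorting abstracted over the order, not the
# paper's axioms/theorem/code development).

def merge_sort(xs, less):
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left = merge_sort(xs[:mid], less)
    right = merge_sort(xs[mid:], less)
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if less(right[j], left[i]):   # take from the right only if strictly less
            out.append(right[j]); j += 1
        else:                         # ties go left, keeping the sort stable
            out.append(left[i]); i += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out

# Any linear order will do, e.g. strings ordered by length:
print(merge_sort(["pear", "fig", "apple"], lambda a, b: len(a) < len(b)))
# ['fig', 'pear', 'apple']
```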

  4. Title:
    Matrix code
    Author: M.H. van Emden

    Abstract:
    Matrix Code gives imperative programming a mathematical semantics and heuristic power comparable in quality to functional and logic programming. A program in Matrix Code is developed incrementally from a specification in pre/post-condition form. The computations of a code matrix are characterized by powers of the matrix when it is interpreted as a transformation in a space of vectors of logical conditions. Correctness of a code matrix is expressed in terms of a fixpoint of the transformation. The abstract machine for Matrix Code is the dual-state machine, which we present as a variant of the classical finite-state machine.

    Date: February 2013
    pdf

  5. Entry in bibtex format:
    @techreport{DCS345IR,
      author={M.H. van Emden},
      title={Discovering algorithms with Matrix Code},
      institution={Department of Computer Science,
                   University of Victoria},
      number={Report DCS-345-IR},
      note = {{\tt http://arxiv.org/pdf/1203.2296v2.pdf}, May 2012.}
    }
    
    Abstract:
    In first-year programming courses it is often difficult to show students how an algorithm can be discovered. In this paper we present a program format that supports the development from specification to code in small and obvious steps; that is, a discovery process. The format, called Matrix Code, can be interpreted as a proof according to the Floyd-Hoare program verification method. The process consists of expressing the specification of a function body as an initial code matrix and then growing the matrix by adding rows and columns until the completed matrix is translated in a routine fashion to compilable code. As worked example we develop a Java program that generates the table of the first N prime numbers.
    pdf
  6. Entry in bibtex format:
    @techreport{DCS343IR,
      author={P. Kelly and M.H. van Emden},
      title={Relational semantics
             for databases and predicate calculus},
      institution={Department of Computer Science,
                   University of Victoria},
      number={Report DCS-343-IR},
      note = {{\tt http://arxiv.org/pdf/1202.0474v3.pdf}, February 2012.}
    }
    
    Abstract:
    The relational data model requires a theory of relations in which tuples are not only many-sorted, but can also have indexes that are not necessarily numerical. In this paper we develop such a theory and define operations on relations that are adequate for database use. The operations are similar to those of Codd's relational algebra, but differ in being based on a mathematically adequate theory of relations. The semantics of predicate calculus, being oriented toward the concept of satisfiability, is not suitable for relational databases. We develop an alternative semantics that assigns relations as meaning to formulas with free variables. This semantics makes the classical predicate calculus suitable as a query language for relational databases.
    pdf
  7. Entry in bibtex format:
    @techreport{RR746,
      author={A. Nait Abdallah and M.H. van Emden},
      title={Constraint propagation as information maximization},
      institution={Department of Computer Science,
                   University of Western Ontario},
      number={Research Report 746},
      note = {{\tt http://arxiv.org/pdf/1201.5426v1.pdf}, January 2012.}
    }
    
    Abstract:
    Dana Scott used the partial order among partial functions for his mathematical model of recursively defined functions. He interpreted the partial order as one of information content. In this paper we elaborate on Scott's suggestion of regarding computation as a process of information maximization by applying it to the solution of constraint satisfaction problems. Here the method of constraint propagation can be interpreted as decreasing uncertainty about the solution -- that is, as gain in information about the solution. As illustrative example we choose numerical constraint satisfaction problems to be solved by interval constraints. To facilitate this approach to constraint solving we formulate constraint satisfaction problems as formulas in predicate logic. This necessitates extending the usual semantics for predicate logic so that meaning is assigned not only to sentences but also to formulas with free variables.
    pdf
  8. Entry in bibtex format:
    @article{vnmdn14,
      author={M.H. van Emden},
      title={Matrix Code},
      journal={Science of Computer Programming},
      pages={3--21},
      year = {2014}
    }
    
    Abstract:
    Matrix Code gives imperative programming a mathematical semantics and heuristic power comparable in quality to functional and logic programming. A program in Matrix Code is developed incrementally from a specification in pre/post-condition form. The computations of a code matrix are characterized by powers of the matrix when it is interpreted as a transformation in a space of vectors of logical conditions. Correctness of a code matrix is expressed in terms of a fixpoint of the transformation. The abstract machine for Matrix Code is the dual-state machine, which we present as a variant of the classical finite-state machine.
    pdf
  9. Entry in bibtex format:
    @article{vnmdnvllno10,
      author={M.H. van Emden and A. Vellino},
      title={From Chinese Room to Human Window},
      journal={ICGA Journal},
      pages={127--139},
      year = {2010}
    }
    
    Abstract:
    The debate in philosophy and cognitive science about the Chinese Room Argument has focused on whether it shows that machines can have minds. We present a quantitative argument which shows that Searle’s thought experiment is not relevant to Turing’s Test for intelligence. Instead, we consider a narrower form of Turing’s Test, one that is restricted to the playing of a chess endgame, in which the equivalent of Searle’s argument does apply. An analysis of time/space trade-offs in the playing of chess endgames shows that Michie’s concept of Human Window offers a hint of what a machine’s mental representations might need to be like to be considered equivalent to human cognition.
    pdf
  10. Entry in bibtex format:
    @techreport{vnmdICLP,
      author={M.H. van Emden},
      title={Integrating Interval Constraints into Logic Programming},
      institution={Department of Computer Science, University of Victoria},
      number={DCS-133-IR},
      note = {Paper arXiv:1002.1422 in Computing Research Repository (CoRR),
      January 2010.}
    }
    
    Abstract:
    The CLP scheme uses Horn clauses and SLD resolution to generate multiple constraint satisfaction problems (CSPs). The possible CSPs include rational trees (giving Prolog) and numerical algorithms for solving linear equations and linear programs (giving CLP(R)). In this paper we develop a form of CSP for interval constraints. In this way one obtains a logic semantics for the efficient floating-point hardware that is available on most computers. The need for the method arises because in the practice of scheduling and engineering design it is not enough to solve a single CSP. Ideally one should be able to consider thousands of CSPs and efficiently solve them or show them to be unsolvable. This is what CLP/NCSP, the new subscheme of CLP described in this paper, is designed to do.
    pdf
  11. Entry in bibtex format:
    @techreport{edmemd08,
      author={W.W. Edmonson and M.H. van Emden},
      title={Interval Semantics for Standard Floating-Point Arithmetic},
      institution={Department of Computer Science, University of Victoria},
      number={DCS-323-IR},
      note = {Computing Research Repository
              (http://arxiv.org/abs/0810.4196), 23 October 2008}
    }
    
    Abstract:
    If the non-zero finite floating-point numbers are interpreted as point intervals, then the effect of rounding can be interpreted as computing one of the bounds of the result according to interval arithmetic. We give an interval interpretation for the signed zeros and infinities, so that the undefined operations 0*inf, inf - inf, inf/inf, and 0/0 become defined. In this way no operation remains that gives rise to an error condition. Mathematically questionable features of the floating-point standard become well-defined sets of reals. Interval semantics provides a basis for the verification of numerical algorithms. We derive the results of the newly defined operations and consider the implications for hardware implementation.
    pdf
  12. Entry in bibtex format:
    @techreport{vemoa06,
      author={M.H. van Emden and B. Moa},
      title={The Fundamental Theorems of Interval Analysis},
      institution={Department of Computer Science, University of Victoria},
      number={DCS-316-IR},
      month = {December},
      year = {2006},
      note = {Computing Research Repository (http://arxiv.org/corr/home)}
    }
    
    Abstract:
    Expressions are not functions. Confusing the two concepts or failing to define the function that is computed by an expression weakens the rigour of interval arithmetic. We give such a definition and continue with the required re-statements and proofs of the fundamental theorems of interval arithmetic and interval analysis.
    pdf
  13. Entry in bibtex format:
    @techreport{vnmdSTPCS,
      author={M.H. van Emden},
      title={Set-Theoretic Preliminaries for Computer Scientists},
      institution={Department of Computer Science, University of Victoria},
      number={DCS-304-IR},
      note = {Paper cs.DM/0607039 in Computing Research Repository (CoRR),
      July 2006.
      }
    }
    
    Abstract:
    The basics of set theory are usually copied, directly or indirectly, by computer scientists from introductions to mathematical texts. Often mathematicians are content with special cases when the general case is of no mathematical interest. But sometimes what is of no mathematical interest is of great practical interest in computer science. For example, non-binary relations in mathematics tend to have numerical indexes and tend to be unsorted. In the theory and practice of relational databases both these simplifications are unwarranted. In response to this situation we present here an alternative to the ``set-theoretic preliminaries'' usually found in computer science texts. This paper separates binary relations from the kind of relations that are needed in relational databases. Its treatment of functions supports both computer science in general and the kind of relations needed in databases. As a sample application this paper shows how the mathematical theory of relations naturally leads to the relational data model and how the operations on relations are by themselves already a powerful vehicle for queries.

    Downloadable from CoRR, the Computing Research Repository.
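
    The kind of relation the abstract argues for can be sketched in a few lines of Python (our illustration, not the paper's formal theory): tuples are indexed by attribute names rather than numeric positions, and a natural join falls out directly.

```python
# Tuples indexed by attribute names (here: dicts) instead of numeric
# positions, with a natural join written on that representation.
# This is our toy illustration, not the paper's definitions.

def natural_join(r, s):
    out = []
    for t in r:
        for u in s:
            shared = set(t) & set(u)          # attributes common to both tuples
            if all(t[a] == u[a] for a in shared):
                out.append({**t, **u})        # merge agreeing tuples
    return out

employees = [{"name": "ann", "dept": "cs"}, {"name": "bob", "dept": "math"}]
offices   = [{"dept": "cs", "room": 101}, {"dept": "math", "room": 202}]
print(natural_join(employees, offices))
```

    The join by itself already answers a query ("which room is each employee in?") without any numeric column bookkeeping.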

  14. Entry in bibtex format:
    @techreport{vnmdn06b,
      author={M.H. van Emden and B. Moa},
      title={Computational Euclid},
      institution={Department of Computer Science, University of Victoria},
      number={DCS-315-IR},
      month = {June},
      year = {2006}
    }
    
    Abstract:
    We analyse the axioms of Euclidean geometry according to standard object-oriented software development methodology. We find a perfect match: the main undefined concepts of the axioms translate to object classes. The result is a suite of C++ classes that efficiently supports the construction of complex geometric configurations. Although all computations are performed in floating-point arithmetic, they correctly implement, as semi-decision algorithms, the tests for equality of points, a point being on a line or in a plane, a line being in a plane, and parallelism of lines, of a line and a plane, and of planes. This is in accordance with the fundamental limitations of computability, which require that only negative outcomes be given with certainty, while positive outcomes only imply the possibility that these conditions hold.

    Downloadable from CoRR, the Computing Research Repository.

  15. Entry in bibtex format:
    @techreport{vnmdnsmsn06,
      author={M.H. van Emden and S.C. Somosan},
      title={Object-Oriented Frameworks as Basis 
             for Modularity in Program-Language Design},
      institution={Department of Computer Science, University of Victoria},
      number={DCS-310-IR},
      month = {March},
      year = {2006}
    }
    
    Abstract:
    For the right application, the use of programming paradigms such as functional or logic programming can enormously increase productivity in software development. But powerful paradigms may come with exotic programming languages, while the management of software development dictates language standardization.

    This dilemma can be resolved by using component technology at the system design level. Here the units of deployment are object-oriented frameworks. It is conventional to analyze an application by object-oriented modeling. In the new approach, the analysis identifies the programming paradigm that is ideal for the application; development starts with object-oriented modeling of the paradigm. In our approach, a paradigm translates to an object-oriented framework so that it is no longer necessary to embody a programming paradigm in a language dedicated to it.

    Downloadable from CoRR, the Computing Research Repository.

  16. Entry in bibtex format:
    @inproceedings{vnmdn06,
        author = {M.H. van Emden},
        title={Compositional Semantics for the Procedural Interpretation of Logic},
        booktitle={Proc. Intern. Conf. on Logic Programming},
        editor={S. Etalle and M. Truszczy\'nski},
        publisher={Springer Verlag},
        number =  {LNCS 4079},
        pages = {315--329},
        year = {2006}
    }
    
    
    Abstract:
    Semantics of logic programs has been given by proof theory, model theory and by fixpoint of the immediate-consequence operator. If clausal logic is a programming language, then it should also have a compositional semantics. Compositional semantics for programming languages follows the abstract syntax of programs, composing the meaning of a unit by a mathematical operation on the meanings of its constituent units. The procedural interpretation of logic has only yielded an incomplete abstract syntax for logic programs. We complete it and use the result as basis of a compositional semantics. We present for comparison Tarski's algebraization of first-order predicate logic, which is in substance the compositional semantics for his choice of syntax. We characterize our semantics by equivalence with the immediate-consequence operator.

    A slightly earlier research report version is downloadable from CoRR, the Computing Research Repository.

  17. Entry in bibtex format:
    @article{mhve06rev,
      author={M.H. van Emden},
      title={Review of Apt's ``Principles of Constraint Programming'' and of
             Dechter's ``Constraint Processing''},
      journal={SIAM Review},
      volume={48},
      number={2},
      pages={400--404},
      year = {2006}
    }
    
    Abstract:
    Two books on constraint programming are reviewed in the SIAM Review, a journal of the Society for Industrial and Applied Mathematics. As constraint programming grew up in, and lives in, the Artificial Intelligence community, the review is largely an account of the background and history of constraint programming, emphasizing its roots in engineering and its connections with Operations Research.

    pdf

  18. Entry in bibtex format:
    @inproceedings{vnmdn03,
      author={M.H. van Emden},
      title={Using the Duality Principle to improve lower bounds for the global
             minimum in nonconvex optimization},
      booktitle={Second COCOS workshop on intervals and optimization},
      year = 2003
    }
    
    pdf
  19. Entry in bibtex format:
    @incollection{vnmoa03a,
      author = {M.H. van Emden and B. Moa},
      title={Termination criteria in the {M}oore-{S}kelboe Algorithm
             for Global Optimization by Interval Arithmetic},
      booktitle={Frontiers in Global Optimization},
      editor={C.A. Floudas and P.M. Pardalos},
      publisher={Kluwer Academic Publishers},
      year = {2003}
    }
    
    Abstract:
    We investigate optimization with an objective function that has an unknown and possibly large number of local minima. Determining what the global minimum is we call the fathoming problem; where it occurs we call the localization problem. Another problem is that in practice often not only the global minimum is required, but also possibly existing points that achieve near-optimality, yet are far from the point at which the global minimum occurs. To support such a requirement, we define the delta-minimizer: the set of points at which the objective function is within delta of the global minimum. We present a modification of the Moore-Skelboe algorithm that returns two sets of boxes. One set contains a delta-minimizer for a delta d1; the other is contained within a delta-minimizer for a delta d2. In this way one can detect whether low values are concentrated around the global minimum or whether there is a large area with objective function values that are close to the global minimum. We include a proof of correctness of the algorithm.

    ps ... pdf
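
    For orientation, here is a bare-bones sketch of the classical Moore-Skelboe branch-and-bound scheme (ours, much simplified; it is not the paper's two-set modification, and the objective and its interval extension are our assumptions): keep a queue of boxes ordered by the lower bound of the interval extension of the objective, and repeatedly split the most promising box.

```python
# Minimal Moore-Skelboe-flavored interval branch-and-bound (our sketch).
import heapq

def isq(lo, hi):
    """Interval extension of x -> x*x."""
    c = [lo * lo, hi * hi]
    return (0.0 if lo <= 0.0 <= hi else min(c), max(c))

def F(lo, hi):
    """Interval extension of the sample objective f(x) = (x - 1)^2."""
    return isq(lo - 1.0, hi - 1.0)

def moore_skelboe(lo, hi, tol=1e-6):
    # Priority queue keyed by the lower bound of F over each box.
    queue = [(F(lo, hi)[0], lo, hi)]
    while True:
        flo, a, b = heapq.heappop(queue)   # box with smallest lower bound
        if b - a < tol:
            return a, b                    # narrow box enclosing a global minimizer
        m = (a + b) / 2.0                  # bisect and re-enqueue both halves
        heapq.heappush(queue, (F(a, m)[0], a, m))
        heapq.heappush(queue, (F(m, b)[0], m, b))

a, b = moore_skelboe(-2.0, 3.0)
print(a, b)  # a narrow box around x = 1
```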

  20. Entry in bibtex format:
    @inproceedings{vnmdn02,
      author={M.H. van Emden},
      title={Combining Numerical Analysis and Constraint Processing by Means
    of Controlled Propagation and Redundant Constraints},
      booktitle={First COCOS workshop on intervals and optimization},
      note={Text downloadable from CoRR},
      year = 2002
    }
    
    Abstract:
    In principle, interval constraints provide tight enclosures for the solutions of several types of numerical problem. These include constrained global optimization and the solution of nonlinear systems of equalities or inequalities. The interval constraints method finds these enclosures by a combination of propagation and search. The challenge is to extend the ``in principle'' to problems of practical interest. In this paper we describe the concept of controlled propagation and use it in conjunction with redundant constraints to combine numerical analysis algorithms with constraint processing. The resulting combination retains the enclosure property of constraint processing in spite of rounding errors. We apply this technique in an algorithm for solving linear algebraic equations that initially simulates interval Gaussian elimination and then proceeds to refine the result with propagation and splitting. Application of our approach to nonlinear equations yields an algorithm with a similar relation to Newton's method.

    pdf

  21. Entry in bibtex format:
    @article{hckvnmdn01,
      author={T. Hickey and Q. Ju and M.H. van Emden},
      title={Interval Arithmetic: from Principles to Implementation},
      journal={Journal of the ACM},
      year = 2001
    }
    
    Abstract:
    We start with a mathematical definition of a real interval as a closed, connected set of reals. Interval arithmetic operations (addition, subtraction, multiplication and division) are likewise defined mathematically and we provide algorithms for computing these operations assuming exact real arithmetic. Next, we define interval arithmetic operations on intervals with IEEE 754 floating point endpoints to be sound and optimal approximations of the real interval operations and we show that the IEEE standard's specification of operations involving the signed infinities, signed zeros, and the exact/inexact flag are such as to make a sound and optimal implementation more efficient. From the resulting theorems we derive data that are sufficiently detailed to convert directly to a program for efficiently implementing the interval operations. Finally we extend these results to the case of general intervals, which are defined as connected sets of reals that are not necessarily closed.
    ps pdf
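
    The soundness requirement on floating-point interval operations can be given a toy flavor in Python 3.9+ (our sketch only, far simpler than the paper's optimal operations; `math.nextafter` is used here as a crude stand-in for directed rounding): compute each bound in round-to-nearest, then step it one ulp outward so the machine interval is guaranteed to contain the exact real result.

```python
# Sound (but not optimal) interval addition on IEEE 754 doubles: widen each
# bound by one ulp with math.nextafter. Our illustration of the enclosure
# idea, not the paper's implementation.
import math

def iadd(x, y):
    lo = math.nextafter(x[0] + y[0], -math.inf)  # round the lower bound down
    hi = math.nextafter(x[1] + y[1], math.inf)   # round the upper bound up
    return (lo, hi)

a = (0.1, 0.1)   # point interval for the double nearest to 1/10
b = (0.2, 0.2)   # point interval for the double nearest to 2/10
lo, hi = iadd(a, b)
print(lo, hi)    # an enclosure of the exact sum of those two doubles
```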
  22. Entry in bibtex format:
    @article{vnmdn99a,
      title={Algorithmic Power from Declarative Use of
    				Redundant Constraints}, 
      author={M.H. van Emden},
      journal={Constraints},
      year={1999},
      pages={363--381}
    }
    
    Abstract:
    Interval constraints can be used to solve problems in numerical analysis. In this paper we show that one can improve the performance of such an interval constraint program by the declarative use of constraints that are redundant in the sense of not being needed to define the problem. The first example shows that computation of an unstable recurrence relation can be improved. The second example concerns a solver of nonlinear equations. It shows that, by adding as redundant constraints instances of Taylor's theorem, one can obtain convergence that appears to be quadratic.
    ps ... pdf
  23. Entry in bibtex format:
    @incollection{vnmdnShkrtwn,
        author={M.H. van Emden},
        title={The Logic Programming Paradigm in Numerical Computation},
        booktitle={The Logic Programming Paradigm},
        editor={Krzysztof R. Apt and Victor W. Marek
             and Miroslaw Truszczynski and David S. Warren},
        publisher={Springer-Verlag},
        pages={257--276},
        year={1999}
    }
    
    Abstract:
    Although CLP(R) is a promising application of the logic programming paradigm to numerical computation, it has not addressed what has long been known as ``the pitfalls of [numerical] computation''. These show that rounding errors induce a severe correctness problem wherever floating-point computation is used. Independently of logic programming, constraint processing has been applied to problems in terms of real-valued variables. By using the techniques of interval arithmetic, constraint processing can be regarded as a computer-generated proof that a certain real-valued solution lies in a narrow interval. In this paper we propose a method for interfacing this technique with CLP(R). This is done via a real-valued analogy of Apt's proof-theoretic framework for constraint processing.
    ps ... pdf
  24. Entry in bibtex format:
    @article{hcqvn99,
      title={Interval Constraint Plotting for Interactive Visual Exploration
                          of Implicitly Defined Relations}, 
      author={Timothy J. Hickey and Zhe Qiu and Maarten H. van Emden},
      journal={Reliable Computing},
      pages={81--92},
      volume={6},
      year={2000}
    }
    
    Abstract:
    Conventional plotting programs adopt techniques such as adaptive sampling to approximate, but not to guarantee, correctness and completeness in graphing functions. Moreover, implicitly defined mathematical relations can impose an even greater challenge as they either cannot be plotted directly, or otherwise are likely to be misrepresented. In this paper, we address these problems by investigating interval constraint plotting as an alternative approach that plots a hull of the specified curve. We present some empirical evidence that this hull property can be achieved by an O(n) algorithm. Practical experience shows that the hull obtained is the narrowest possible whenever the precision of the underlying floating-point arithmetic is adequate. We describe IASolver, a Java applet that serves as a test bed for this idea.
    ps ... pdf
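
    The hull idea can be suggested by a minimal box-subdivision sketch in Python (ours, not IASolver's algorithm; the curve, depth, and interval tests are our assumptions): keep only boxes whose interval evaluation cannot exclude a point of x^2 + y^2 = 1, split them, and the surviving small boxes cover the curve.

```python
# Toy interval constraint "plotter" for the unit circle (our sketch).

def isq(lo, hi):
    """Interval extension of x -> x*x."""
    c = [lo * lo, hi * hi]
    return (0.0 if lo <= 0.0 <= hi else min(c), max(c))

def may_contain(box):
    """Sound test: False only if the box certainly misses the circle."""
    (xl, xh), (yl, yh) = box
    slo_x, shi_x = isq(xl, xh)
    slo_y, shi_y = isq(yl, yh)
    return slo_x + slo_y <= 1.0 <= shi_x + shi_y

def hull(box, depth):
    """Recursively split boxes that may meet the curve; keep the leaves."""
    if not may_contain(box):
        return []
    if depth == 0:
        return [box]
    (xl, xh), (yl, yh) = box
    xm, ym = (xl + xh) / 2, (yl + yh) / 2
    quads = [((xl, xm), (yl, ym)), ((xm, xh), (yl, ym)),
             ((xl, xm), (ym, yh)), ((xm, xh), (ym, yh))]
    return [b for q in quads for b in hull(q, depth - 1)]

boxes = hull(((-2.0, 2.0), (-2.0, 2.0)), 6)
print(len(boxes))  # the retained small boxes form a cover of the circle
```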
  25. Entry in bibtex format:
    @article{vnmd97,
      title={Value Constraints in the {CLP} {S}cheme},
      author={M.H. van Emden},
      year={1997},
      journal={Constraints},
      volume={2},
      pages={163--183}
    }
    
    Abstract:
    We define value constraints, a method for incorporating constraint propagation into logic programming. It is a subscheme of the CLP scheme and is applicable wherever one has an efficient method for representing sets of possible values. As examples we present: small finite sets, sets of ground instances of a term, and intervals of reals with floating-point numbers as bounds. Value constraints are defined by distinguishing two storage management strategies in the CLP scheme. In value constraints the infer step of the CLP scheme is implemented by Waltz filtering. We give a semantics for value constraints in terms of set algebra that gives algebraic characterizations of local and global consistency. The existing extremal fixpoint characterization of chaotic iteration is shown to be applicable to prove convergence of Waltz filtering.
    ps
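
    The Waltz filtering mentioned in the abstract can be sketched as a tiny fixpoint loop in Python (our illustration, using the small-finite-set representation the abstract mentions; the constraint encoding is our assumption): each constraint narrows the domains of its variables, and iteration stops when no domain changes.

```python
# Waltz-filtering-style chaotic iteration to a fixpoint (our sketch).

def waltz(domains, constraints):
    changed = True
    while changed:
        changed = False
        for narrow in constraints:       # apply each narrowing operator
            new = narrow(domains)
            if new != domains:           # a domain shrank: keep iterating
                domains, changed = new, True
    return domains

# Constraints over x and y: x + y == 5 and x < y.
def sum_is_5(d):
    x = {v for v in d["x"] if any(v + w == 5 for w in d["y"])}
    y = {w for w in d["y"] if any(v + w == 5 for v in d["x"])}
    return {"x": x, "y": y}

def x_less_y(d):
    x = {v for v in d["x"] if any(v < w for w in d["y"])}
    y = {w for w in d["y"] if any(v < w for v in d["x"])}
    return {"x": x, "y": y}

d = waltz({"x": {1, 2, 3, 4}, "y": {1, 2, 3, 4}}, [sum_is_5, x_less_y])
print(d)  # locally consistent domains; note they may still exceed the solutions
```

    The result illustrates local versus global consistency: the narrowed domains still contain values that appear in no solution, which is exactly the distinction the paper's set-algebra semantics characterizes.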
  26. Entry in bibtex format:
    @inproceedings{vnmd97pacrim,
      title={Object-oriented programming as the end of history in
    programming languages},
      author={M.H. van Emden},
      year={1997},
      booktitle={1997 IEEE Pacific Rim Conference on Communications, Computers, and Signal Processing},
      volume={2},
      pages={981--984},
      publisher = {IEEE}
    }
    
    Abstract:
    In the past, the invention of a new programming paradigm led to new programming languages. We show that C++ is a perfectly adequate dataflow programming language, given some suitable definitions. Reasons are mentioned for believing that this is also the case for logic and functional programming. We conclude that object-oriented programming may remove the need for languages motivated by specific programming paradigms.
    pdf
  27. Entry in bibtex format:
    @inproceedings{mcheng95,
      author={M.H.M. Cheng and D. Stott Parker and M.H. van Emden},
      title = {A Method for Implementing Equational Theories As Logic Programs},
      booktitle = {Proceedings of the Twelfth International Conference on 
                   Logic Programming},
      year={1995},
      pages={497--511},
      editor={Leon Sterling},
      publisher={MIT Press}
    }
    
    Abstract:
    Equational theories underlie many fields of computing, including functional programming, symbolic algebra, theorem proving, term rewriting and constraint solving. In this paper we show a method for implementing many equational theories with a limited class of logic programs. We define regular equational theories, a useful class of theories, and illustrate with a number of examples how our method can be used in obtaining efficient implementations for them. The significance of our method is that:
      * It is simple and easy to apply.
      * Although executable, it supports separation of concerns between specification and implementation.
      * Its class of logic programs executes with impressive efficiency using Prolog.
      * It permits interesting compilation and optimization techniques that can improve execution efficiency still further.
      * It offers perspectives on term rewriting and functional programming evaluation strategies, how they can be compiled, and how they can be integrated with logic programming effectively.
    ps
  28. Entry in bibtex format:
    @inproceedings{ldrSwnklsVE95,
      author={W.J. Older and G.M. Swinkels and M.H. van Emden},
      title = {Getting to the real problem: experience with BNR
    Prolog in OR},
      booktitle = {Proceedings of the Third Conference on 
                   Practical Applications of Prolog},
      year={1995}
    }
    
    Abstract:
    Although job-shop scheduling is a much studied problem in OR, it is based on an unrealistic restriction, which is needed to make the problem computationally more tractable. In this paper we drop the restriction. As a result we encounter a type of cardinality constraint for which we have to develop a new method: translation to a search among alternative sets of inequalities between reals. Our solution method depends on logic programming: we run a specification and rely on the underlying interval constraint-solving machine of BNR Prolog to reduce the search space to a feasible size. In this way, by making the programming task trivial, it is possible to tackle the real problem rather than a related one for which code happens to be already written.
    pdf
  29. Entry in bibtex format:
    @inproceedings{emden92a,
      author = {M.H. van Emden},
      title={Mental ergonomics as basis for new-generation
          computer systems},
      booktitle = {Proceedings of the International Conference
        on Fifth-Generation Computer Systems 1992},
        publisher = {Ohmsha},
        year = 1992,
        pages = {1149--1156}
    }
    
    Abstract:
    Reliance on Artificial Intelligence suggests that Fifth-Generation Computer Systems were intended as a substitute for thought. The more feasible and useful objective of the computer as an aid to thought suggests mental ergonomics, rather than Artificial Intelligence, as the basis for new-generation computer systems. This objective, together with considerations of software technology, suggests logic programming as a unifying principle for a computer aid to thought.
    pdf
  30. Entry in bibtex format:
    @article{emden92b,
      author={M.H. van Emden},
      title={Structured Inspections of Code},
      journal={Software Testing, Verification, and Reliability},
      volume={2},
      year={1992},
      pages={133--153}
    }
    
    Abstract:
    Cleanroom programming and code inspections independently provide evidence that it is more efficient to postpone the testing of code to a later stage than is usually done. This paper argues that an additional gain in quality and efficiency of development can be obtained by structuring inspections by means of an inspection protocol. The written part of such a protocol is prepared by the programmer before the inspection. It is modeled on Floyd's method for the verification of flowcharts. However, the protocol differs from Floyd's method in being applicable in practice. Structured inspections gain this advantage by not attempting to be a proof; they are no more than an articulation of existing forms of inspection. With the usual method of structured programming it may be difficult to prepare the inspection protocol. On the other hand, Assertion-Driven Programming (of which an example is included in this paper) not only facilitates protocol preparation, but also the coding itself.
    ps
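The idea of assertions at the cut points of a flowchart can be made concrete as executable checks. The following Python sketch is only an illustration of that style, with an invented quotient/remainder example; it is not the example used in the paper.

```python
# Floyd-style annotations rendered as executable assertions: each cut point
# of the flowchart carries a condition that an inspection protocol would
# record in writing. The example below is invented for illustration.

def divmod_checked(a, b):
    assert a >= 0 and b > 0                 # precondition
    q, r = 0, a
    while r >= b:
        assert a == q * b + r and 0 <= r    # loop invariant
        q, r = q + 1, r - b
    assert a == q * b + r and 0 <= r < b    # postcondition
    return q, r

print(divmod_checked(17, 5))   # (3, 2)
```

Writing the assertions before the loop body, as in assertion-driven programming, is what makes the inspection protocol a by-product of coding rather than an extra document.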
  31. M.H. van Emden, "Rhetoric versus modernism in computing", Journal of Logic and Computation (1992) 2(5): 551--555.
    pdf
  32. Entry in bibtex format:
    @inproceedings{cvr90,
        author = {M.H.M. Cheng and M.H. van Emden and B.E. Richards},
        title = {On {W}arren's Method for Functional Programming in Logic},
        booktitle = {Logic Programming: Proceedings of the Seventh International
                     Conference},
        address = {Jerusalem},
        editor = {David H.D. Warren and Peter Szeredi},
        publisher = {MIT Press},
        year = 1990,
        pages = {546--560}
    }
    
    Abstract:
    Although Warren's method for the evaluation in Prolog of expressions with higher-order functions appears to have been neglected, it is of great value. Warren's paper needs to be supplemented in two respects. He showed examples of a translation from lambda-expressions to clauses, but did not present a general method. Here we present a general translation program and prove it correct with respect to the axioms of equality and those of the lambda-calculus. Warren's paper only argues in general terms that a structure-sharing Prolog implementation can be expected to efficiently evaluate the result of his translation. We show a comparison of timings between Lisp and a structure-copying implementation of Prolog. The result suggests that Warren's method is about as efficient as the Lisp method for the evaluation of Lambda expressions involving higher-order functions.
    pdf
  34. Entry in bibtex format:
    @inproceedings{condAnsw88,
      author={M.H. van Emden},
      title = {Conditional Answers for Polymorphic Type Inference},
      booktitle = {Logic Programming: Proceedings of the International
        Conference and Symposium, volume I},
      editor = {R.A. Kowalski and K.A. Bowen},
      year={1988},
      publisher = {MIT Press},
      pages = {590--603}
    }
    
    Abstract:
    J.H. Morris showed that polymorphic type inference can be done by unification. In this paper we show that the unification problem can be automatically generated by resolution and factoring acting on a theory of type inheritance in the form of Horn clauses. The format is a variant of SLD-resolution as used in logic programming. In this variant, programs have an empty least Herbrand model, so that all derivations are ``failed'' in the conventional sense. Yet \emph{conditional answers} provide as much and as securely justified information as do the successful answers exclusively used in conventional programs.

    pdf
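The unification problem at the heart of Morris-style type inference can be sketched compactly. The Python below is a minimal illustration, not the paper's method: type variables are strings, type constructors are tuples, and the occurs check is omitted for brevity.

```python
# Syntactic unification on simple type terms, the engine behind
# unification-based polymorphic type inference. Type variables are
# strings; constructed types are tuples whose first element names the
# constructor, e.g. ("->", t1, t2). The occurs check is omitted.

def walk(t, s):
    """Follow variable bindings in substitution s."""
    while isinstance(t, str) and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    """Extend substitution s to unify terms a and b, or return None."""
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if isinstance(a, str):                    # unbound type variable
        return {**s, a: b}
    if isinstance(b, str):
        return {**s, b: a}
    if a[0] == b[0] and len(a) == len(b):     # same constructor, same arity
        for x, y in zip(a[1:], b[1:]):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None                               # constructor clash

# Unifying a -> a with int -> b binds both 'a' and 'b' to ('int',):
print(unify(("->", "a", "a"), ("->", ("int",), "b"), {}))
```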

  35. Entry in bibtex format:
    @inproceedings{cvl88,
      author={M.H.M. Cheng and M.H. van Emden and J.H.M. Lee},
      title = {Tables as a User Interface for Logic Programs},
      booktitle = {Proceedings of the International Conference on Fifth
                   Generation Computer Systems 1988},
      year={1988},
      month = {November--December},
      address = {Tokyo, Japan},
      publisher = {Ohmsha, Ltd},
      pages = {784--791}
    }
    
    Abstract:
    Spreadsheets have introduced two advantages not typically available in user interfaces to logic programs: the exploratory use of a computer and a two-dimensional interface. In this paper we show that not only spreadsheets, but also tables (in the sense of relational databases) have these valuable features. We compare spreadsheets and tables, giving possibly the first clear distinction between the two, and suggest a common generalization. We show that tables, as a user interface for logic programs, can be derived from a dataflow model of queries (which we call TuplePipes), which also provides the buffering needed when Prolog is interfaced with a relational database. We report on Tupilog, a prototype implementation of logic programming allowing four query modes, one of which is TuplePipes.
    pdf
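One way to picture a dataflow model of queries in the spirit of TuplePipes is as a pipeline of generators, each stage consuming tuples from upstream and emitting tuples downstream. The relation and query below are invented for this sketch; they are not taken from the paper.

```python
# A dataflow picture of query evaluation: each pipeline stage is a
# generator that consumes tuples from the stage upstream and emits
# tuples downstream, giving the buffering behaviour of a tuple pipe.
# The relation and the query are invented for illustration.

employees = [("ann", "sales"), ("bob", "r&d"), ("cid", "sales")]

def scan(relation):
    yield from relation            # source stage: emit every tuple

def select(tuples, pred):
    for t in tuples:               # filter stage: pass matching tuples
        if pred(t):
            yield t

def project(tuples, col):
    for t in tuples:               # projection stage: keep one column
        yield t[col]

# Pipe the stages together: names of everyone in sales.
pipe = project(select(scan(employees), lambda t: t[1] == "sales"), 0)
print(list(pipe))                  # ['ann', 'cid']
```

Because each stage is lazy, tuples flow through one at a time, which is the buffering discipline a Prolog-to-database interface needs.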
  36. Entry in bibtex format:
    @article{vnm86,
        author={M.H. van Emden},
        title={Quantitative Deduction and Its Fixpoint Theory},
        journal={Journal of Logic Programming},
        year={1986},
        volume={4},
        pages={37--53}
    }
    
    Abstract:
    Logic programming provides a model for rule-based reasoning in expert systems. The advantage of this formal model is that it makes available many results from the semantics and proof theory of first-order predicate logic. A disadvantage is that in expert systems one often wants to use, instead of the usual two truth values, an entire continuum of ``uncertainties'' in between. That is, instead of the usual ``qualitative'' deduction, a form of ``quantitative'' deduction is required. We present an approach to generalizing the Tarskian semantics of Horn clause rules to justify a form of quantitative deduction. Each clause receives a numerical attenuation factor. Herbrand interpretations, which are subsets of the Herbrand base, are generalized to subsets which are fuzzy in the sense of Zadeh. We show that as a result the fixpoint method in the semantics of Horn clause rules can be developed in much the same way for the quantitative case. As for proof theory, the interesting phenomenon is that a proof should be viewed as a two-person game. The value of the game turns out to be the truth value of the atomic formula to be proved, evaluated in the minimal fixpoint of the rule set. The analog of the PROLOG interpreter for quantitative deduction becomes a search of the game tree (= proof tree) using the alpha-beta heuristic well known in game theory.
    pdf
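The quantitative fixpoint construction can be sketched on ground clauses: an interpretation assigns each atom a value in [0,1], a clause with attenuation factor f contributes f times the minimum of its body values, and an atom takes the maximum over the clauses concluding it. The Python below is a minimal sketch under those assumptions; the rules and factors are invented for the example.

```python
# Quantitative analogue of the consequence operator: interpretations are
# fuzzy subsets of the Herbrand base (atom -> value in [0,1]); each ground
# clause (head, body, f) carries an attenuation factor f; a clause
# contributes f * min(body values), and an atom's value is the max over
# the clauses that conclude it.

def tp_quant(program, interp):
    new = {}
    for head, body, f in program:
        v = f * min((interp.get(b, 0.0) for b in body), default=1.0)
        new[head] = max(new.get(head, 0.0), v)
    return new

def fixpoint(program, rounds=100):
    """Iterate from the empty interpretation until the values stabilize."""
    interp = {}
    for _ in range(rounds):
        nxt = tp_quant(program, interp)
        if nxt == interp:
            break
        interp = nxt
    return interp

# Illustrative rules (atoms and factors invented for the example):
program = [
    ("wet", [], 0.9),            # fact asserted with certainty 0.9
    ("slippery", ["wet"], 0.8),  # slippery if wet, attenuated by 0.8
]
print(fixpoint(program))
```

At the fixpoint, `slippery` receives 0.8 × 0.9 = 0.72, the attenuated value propagated through the rule.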
  37. Entry in bibtex format:
    @article{vkw76,
        author={M.H. van Emden and R.A. Kowalski},
        title={The Semantics of Predicate Logic as a Programming Language},
        journal={Journal of the ACM},
        year={1976},
        volume={23},
        number={4},
        pages={733--742}
    }
    
    Abstract:
    Sentences in first-order predicate logic can be usefully interpreted as programs. In this paper the operational and fixpoint semantics of predicate logic programs are defined, and the connections with the proof theory and model theory of logic are investigated. It is concluded that operational semantics is a part of proof theory and that fixpoint semantics is a special case of model-theoretic semantics.
    pdf
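The fixpoint semantics mentioned in the abstract can be illustrated on a ground Horn-clause program: the immediate-consequence operator maps an interpretation I to the set of clause heads whose bodies hold in I, and iterating it from the empty interpretation reaches the least Herbrand model. A minimal Python sketch (the program and predicate names are invented for illustration):

```python
# Least fixpoint of the immediate-consequence operator T_P for a ground
# Horn-clause program. Each clause is (head, body): head is an atom,
# body a list of atoms; facts have an empty body.

def tp(program, interp):
    """One application of T_P: heads of clauses whose body holds in interp."""
    return {head for head, body in program if all(b in interp for b in body)}

def least_model(program):
    """Iterate T_P from the empty interpretation until a fixpoint is reached."""
    interp = set()
    while True:
        new = tp(program, interp)
        if new == interp:
            return interp
        interp = new

# A small illustrative program: transitive paths over a two-edge graph.
program = [
    ("edge(a,b)", []),
    ("edge(b,c)", []),
    ("path(a,b)", ["edge(a,b)"]),
    ("path(b,c)", ["edge(b,c)"]),
    ("path(a,c)", ["path(a,b)", "path(b,c)"]),
]

print(sorted(least_model(program)))
```

The set returned is exactly the least Herbrand model, which is where the paper's identification of fixpoint semantics with model-theoretic semantics becomes concrete.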
  38. Entry in bibtex format:
    @techreport{vnmdn74,
      author={M.H. van Emden},
      title={First-order predicate logic as a high-level program 
             language},
      institution={School of Artificial Intelligence,
                   University of Edinburgh},
      number={MIP-R-106},
      year={1974}
    }
    
    
    Abstract:
    This paper presents an argument in support of the thesis that first-order predicate logic would be a useful next step in the development towards higher-level program languages. The argument is conducted by giving a description of Kowalski's system of logic which is sufficiently detailed to investigate its computational behaviour on the two examples discussed: a version of the "quicksort" algorithm and a top-down parser for context-free languages.
    pdf
  39. "An analysis of complexity" by M.H. van Emden.
    Mathematical Centre Tracts #35, 1971.
    Abstract:
    Complexity of a system is defined in terms of interactions expressed as Shannon's information-theoretic entropy. As a result the system can be hierarchically decomposed into levels of subsystems on the basis of entropy. Applications to data-based classification and to solving systems of linear equations are presented.
    This monograph was accepted as a doctoral thesis by the Faculty of Mathematics and Natural Sciences of the University of Amsterdam. It supersedes the author's paper "Hierarchical decomposition of complexity", Machine Intelligence 5, B. Meltzer and D. Michie (eds.), Edinburgh University Press, 1970.
    pdf
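The entropy-based notion of interaction can be sketched numerically: the interaction between two parts of a system can be measured as H(X) + H(Y) - H(X,Y), the mutual information of their joint distribution. A minimal Python illustration, with an invented joint distribution:

```python
# Shannon entropy and a simple interaction measure between two parts of
# a system, in the spirit of defining complexity via information-theoretic
# interaction. The joint distribution is invented for illustration.

from math import log2

def entropy(dist):
    """H(X) = -sum p log2 p over a probability distribution."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def interaction(joint):
    """Mutual information I(X;Y) = H(X) + H(Y) - H(X,Y) of two parts."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0) + p
        py[y] = py.get(y, 0) + p
    return entropy(px) + entropy(py) - entropy(joint)

# Two perfectly correlated binary subsystems interact maximally (1 bit):
joint = {(0, 0): 0.5, (1, 1): 0.5}
print(interaction(joint))
```

Independent subsystems yield zero interaction, which is what makes the measure usable as a criterion for decomposing a system into weakly interacting parts.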
  40. home page.