Combination of Decision Procedures
Topic: Nelson-Oppen combinations, latest advances including non-disjoint signatures, deterministic and non-deterministic algorithms, fusion methods.
Title: The combination problem in unsorted first-order logic
We will start with a brief introduction to classical first-order logic and an overview of decision problems in this logic, and their combinations. We will discuss how decision problems for first-order theories can be defined as validity problems for certain classes of formulas, and survey a number of them. Then we will see that combinations of validity problems are in essence just interpolation problems, and that combining decision procedures amounts to computing interpolants, or proving their non-existence, in finite time.
We will concentrate on combining decision problems for universal formulas and discuss in detail the foremost combination method for such problems, originally due to Nelson and Oppen. We will see that it is possible to give a straightforward proof of correctness of the method by using a very simple and general model-theoretic result about interpolation problems.
This general result can also be used to relax the two main requirements of the Nelson-Oppen method: the "stable infiniteness" of the component theories and the disjointness of their signatures. We will discuss some possible extensions of the method, and their limitations, to theories that may not be stably infinite or that may share function or predicate symbols.
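To make the cooperation concrete, here is a minimal sketch of the deterministic Nelson-Oppen loop (toy union-find "solvers" of my own devising, not the talk's formulation): after purification, each component solver reports the equalities between shared variables entailed by its constraints, these are broadcast to the other solvers, and the loop stops at a fixpoint (satisfiable) or an inconsistency (unsatisfiable).

```python
class ToySolver:
    """Stub theory solver: equality reasoning only, via union-find."""

    def __init__(self, eqs, diseqs=()):
        self.eqs = set(eqs)
        self.diseqs = set(diseqs)

    def _find(self):
        parent = {}

        def find(x):
            parent.setdefault(x, x)
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        for a, b in self.eqs:
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[ra] = rb
        return find

    def consistent(self):
        find = self._find()
        return all(find(a) != find(b) for a, b in self.diseqs)

    def entailed_shared_eqs(self, shared):
        """Equalities between shared variables entailed by this solver."""
        find = self._find()
        return {(a, b) for i, a in enumerate(shared)
                for b in shared[i + 1:] if find(a) == find(b)}

def nelson_oppen(solvers, shared):
    """Broadcast entailed shared equalities until fixpoint or conflict.
    The "sat" answer is sound only under the method's side conditions
    (convex, stably infinite, signature-disjoint component theories)."""
    while True:
        if any(not s.consistent() for s in solvers):
            return "unsat"
        new = set()
        for s in solvers:
            new |= s.entailed_shared_eqs(shared)
        progress = False
        for s in solvers:
            fresh = new - s.eqs
            if fresh:
                s.eqs |= fresh
                progress = True
        if not progress:
            return "sat"
```

For instance, if one solver asserts x = y while the other asserts y = z and x ≠ z, the first solver's entailed equality x = y is propagated to the second, which then detects the conflict.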
Title: The combination problem in sorted first-order logics
The results presented in the first talk are framed in the context of unsorted first-order logic. However, in many of the application domains in which combination problems arise, using a sorted logic is more appropriate as these domains are naturally typed. This talk will deal with the issue of lifting known combination results to sorted versions of first-order logic.
We will first briefly introduce many-sorted logic and then extend it to a fairly general order-sorted logic (i.e., a sorted logic with subsorts). Then we will argue that, while the main issues for combination problems in sorted logics are the same as in the unsorted case, certain combination results are in a sense easier to obtain because of the restrictions imposed by the sorted language. We will do this by presenting a version of the Nelson-Oppen method for the given sorted logic and discussing its correctness and possible extensions.
Topic: CVC Lite + HOL light; Non-clausal decision heuristic; Partial functions.
Title: Abstract Congruence Closure, Binomial Ideals, and Algebraically Closed Fields
We will obtain algorithms for congruence closure using the generic Nelson-Oppen combination framework. We will then extend this algorithm to handle associative and commutative function symbols. The construction generalizes to the Gröbner basis algorithm for deciding the uniform word problem over algebraically closed fields.
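As a warm-up for the talk's theme, here is a naive ground congruence closure (a sketch, not the abstract congruence closure framework of the talk): terms are nested tuples, asserted equalities are merged with union-find, and congruence (equal arguments imply equal applications) is propagated to a fixpoint.

```python
def subterms(t):
    """All subterms of a term; terms are nested tuples ('f', arg1, ...)
    with plain strings as constants."""
    yield t
    if isinstance(t, tuple):
        for a in t[1:]:
            yield from subterms(a)

def congruence_closure(eqs):
    """Return a predicate deciding s = t under the given ground equalities.
    Query terms must occur as subterms of the input equalities."""
    terms = set()
    for a, b in eqs:
        terms |= set(subterms(a)) | set(subterms(b))
    parent = {t: t for t in terms}

    def find(t):
        while parent[t] != t:
            parent[t] = parent[parent[t]]
            t = parent[t]
        return t

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for a, b in eqs:
        union(a, b)
    apps = [t for t in terms if isinstance(t, tuple)]
    changed = True
    while changed:  # naive fixpoint iteration; fine for a sketch
        changed = False
        for i, s in enumerate(apps):
            for t in apps[i + 1:]:
                if (s[0] == t[0] and len(s) == len(t)
                        and find(s) != find(t)
                        and all(find(x) == find(y)
                                for x, y in zip(s[1:], t[1:]))):
                    union(s, t)
                    changed = True
    return lambda a, b: find(a) == find(b)
```

The classic example: from f(f(f(a))) = a and f(f(f(f(f(a))))) = a, congruence closure derives f(a) = a.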
Title: CVC Lite: An Efficient Theorem Prover Based on Combination of Decision Procedures
Validity checkers, which are theorem provers based on a combination of efficient decision procedures, have become quite popular in the formal verification community in recent years.
CVC Lite is an implementation of a validity checker for first-order logic with several interpreted theories, such as equality, uninterpreted functions, arithmetic, and the theory of arrays. It is a next-generation tool that builds on our previous experience with SVC (Stanford Validity Checker) and CVC (Cooperating Validity Checker). The tool is designed to be very efficient, easy to use as a C++ library and as a stand-alone executable, and convenient to extend with new features, including new decision procedures.
In this talk, I will go over the underlying theory of CVC Lite, its high-level architecture, and some important features and design decisions.
Topic: ICS tool
Title: UCLID: Deciding Combinations of Theories via Eager Translation to SAT.
UCLID incorporates a decision procedure for a combination of the theories of uninterpreted functions and equality, integer linear arithmetic, and constrained lambda expressions (used, e.g., to model arrays). This procedure is unique among current SAT-based decision procedures in two ways. First, it is based on an "eager" translation to SAT, i.e., a translation of the input formula to an equisatisfiable Boolean formula in a single step. Second, one of its encoding techniques is based on exploiting the "small-model" property of the theories handled.
In this talk, we will describe several aspects of UCLID.
We will also make theoretical and empirical comparisons with other decision procedures for the same logic.
UCLID website: http://www.cs.cmu.edu/~uclid
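The small-model idea behind the second encoding can be illustrated on the pure equality fragment (a hedged sketch, not UCLID's actual encoding): a conjunction of (dis)equalities over n variables is satisfiable iff it has a model with at most n values, so each variable needs only about log2(n) propositional bits, and the resulting Boolean formula can be handed to a SAT solver in one eager step. Here brute-force enumeration stands in for the SAT solver:

```python
import itertools

def small_model_sat(n_vars, literals):
    """literals: (i, j, True) for x_i = x_j, (i, j, False) for x_i != x_j.
    By the small-model property of equality logic, domains of size n_vars
    suffice, so each variable is encoded in ceil(log2(n_vars)) bits."""
    bits = max(1, (n_vars - 1).bit_length())
    for assign in itertools.product(range(2 ** bits), repeat=n_vars):
        if all((assign[i] == assign[j]) == pol for i, j, pol in literals):
            return True
    return False
```

For example, x0 = x1 ∧ x1 = x2 ∧ x0 ≠ x2 is unsatisfiable, and the search only ever considers 2-bit encodings of the three variables.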
Title: Verifun: A Theorem Prover Using Lazy Proof Explication
Theorem provers based on "cooperating decision procedures" have been used in program verification since they were first proposed by Nelson and Oppen in 1979. One such prover, `Simplify', was developed by Detlefs, Nelson and Saxe at the DEC/Compaq Systems Research Center (SRC) and used in a variety of program checking tools, including ESC/Modula-3, ESC/Java, Calvin, and Microsoft's SLAM tool. Experience with using Simplify over a number of years, however, pointed to two shortcomings in its design. First, the backtracking search used in Simplify for propositional inference had been far surpassed by modern SAT-solving techniques based on learning. Second, from each inconsistency detected by the theories, the backtracking search gained only one bit of information, viz., the infeasibility of the current assignment. This often led Simplify to discover the "same" inconsistencies repeatedly during a proof.
The `Verifun' theorem prover was developed at Compaq/HP SRC in order to overcome the shortcomings of Simplify. In contrast to Simplify, Verifun employed a new style of interaction in which the propositional search was performed by an independent SAT solver, and the theories explicated proofs at each encountered inconsistency, in order to prune multiple infeasible assignments at a time. Because of its intended use in program verification, Verifun was designed for formulae involving propositional logic, equality with uninterpreted function symbols, linear arithmetic and quantification. In this talk, I will describe the architecture of Verifun, discuss some key design choices, and relate some experiences with its performance.
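This interaction style can be sketched as follows (a toy model of my own: naive enumeration stands in for the learning SAT solver, a union-find equality check stands in for the theories, and the explicated lemma here blocks only the one failed assignment, whereas Verifun extracts much smaller lemmas that prune many assignments at once):

```python
import itertools

def theory_consistent(assignment, atoms):
    """atoms[i] = (a, b); assignment[i] says whether a = b is asserted."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for (a, b), v in zip(atoms, assignment):
        if v:
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[ra] = rb
    return all(find(a) != find(b)
               for (a, b), v in zip(atoms, assignment) if not v)

def lazy_smt(atoms, cnf):
    """cnf: clauses of DIMACS-style literals; k > 0 means atom k-1 is true."""
    def sat(bits, clauses):
        return all(any((lit > 0) == bits[abs(lit) - 1] for lit in cl)
                   for cl in clauses)

    lemmas = []
    for bits in itertools.product([True, False], repeat=len(atoms)):
        if sat(bits, cnf + lemmas):
            if theory_consistent(bits, atoms):
                return "sat"
            # Explicate a lemma refuting this theory-inconsistent assignment.
            lemmas.append([-(i + 1) if v else i + 1
                           for i, v in enumerate(bits)])
    return "unsat"
```

With atoms x = y, y = z, x = z, the formula (x = y) ∧ (y = z) ∧ ¬(x = z) is propositionally satisfiable, but every propositional model it has is refuted by an explicated lemma.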
This work was done jointly at SRC with Cormac Flanagan and Jim Saxe, with help from summer intern Simon Ou.
Title: Combining Canonizers and Solvers
If a first-order theory has an efficient canonizer (which reduces terms to normal forms) and an efficient solver (which produces general solutions to systems of equations), then it has an efficient decision procedure for open formulas that can also cooperate efficiently in the Nelson-Oppen framework. This talk deals with the question of obtaining a canonizer and a solver for the disjoint combination of two (or more) theories that each have canonizers and solvers. The main result is that canonizers often do combine, but solvers in general do not.
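For intuition, here is a toy canonizer and solver for linear rational arithmetic (my own representation, purely illustrative): a linear term is a map from variable names to coefficients, with "1" for the constant; the canonizer drops zero coefficients and sorts, so equal terms get identical normal forms, and the solver isolates the first variable of an equation.

```python
from fractions import Fraction

def canon(term):
    """Normal form of a linear term {var: coeff, "1": const}: drop zero
    coefficients and sort, so equal terms compare identical."""
    return tuple(sorted((v, Fraction(c)) for v, c in term.items() if c != 0))

def solve(lhs, rhs):
    """Solve lhs = rhs for one variable: returns (var, canonical term)
    with var eliminated, or None if the equation contains no variable."""
    diff = dict(lhs)
    for v, c in rhs.items():
        diff[v] = diff.get(v, 0) - c
    diff = {v: c for v, c in diff.items() if c != 0}
    pivots = sorted(v for v in diff if v != "1")
    if not pivots:
        return None  # trivial or inconsistent equation
    x = pivots[0]
    cx = diff.pop(x)
    return x, canon({v: Fraction(-c) / cx for v, c in diff.items()})
```

For example, solving 2x + y = 6 yields x = 3 − y/2, and canon({"x": 1, "y": 0}) equals canon({"x": 1}).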
Implementing efficient algorithms for combining decision procedures has been a challenge, and their correctness has often been precarious. The Nelson-Oppen combination algorithm is well understood at the basic level, but correctness becomes an issue as soon as we attempt to describe it at a lower level that explicates important optimization features, or to incorporate Shostak's algorithm into it. This talk will present a flexible Nelson-Oppen framework as an inference system whose transitions are fine-grained enough to model most of the mechanisms currently used in combining decision procedures. The system allows transparent correctness proofs and leads directly to a modular implementation.
Topic: Modularity and Refinement of Inference Systems for Equality
Title: Combination results for many-sorted theories with overlapping signatures
In these talks, we will present a combination result for many-sorted first-order theories whose signatures may share common symbols (i.e., overlapping or non-disjoint signatures), extending the recent results by Ghilardi for the unsorted case. The focus of the talks will be on three aspects of the result.
In the first talk, we will motivate the problem and present the key insights behind the notions (e.g., T0-compatibility) and theorems needed by this approach to establish consistency, completeness, and decidability. We will also discuss the connection between T0-compatibility and the stable infiniteness of a theory.
In the second talk, we will present the details of the proof constructions for the Union Consistency Theorem and the Union Completeness Theorem. Time permitting, the CVC Lite (CVCL) completeness result and decidability theorems will be discussed as well.
Topic: TSAT++ theorem prover
Separation Logic (SL) is a decidable logic combining Boolean logic with a fragment of linear arithmetic, able to compactly capture the behavior of a large class of infinite-state systems. It is not surprising, then, that a number of interesting problems from AI (planning, scheduling, temporal reasoning) and Formal Methods (safety of hardware, bounded model checking of real-time systems) have recently been recast as decision problems for SL formulae. Hence the need for efficient decision procedures for this logic. In this talk I present TSAT++, an efficient SAT-based decision procedure for Separation Logic implementing the lazy approach. I will discuss TSAT++'s architecture as well as the optimization techniques implemented in it, both new and borrowed from the AI and Formal Methods literature. Finally, I will show some comparative evaluation against state-of-the-art systems on a wide range of benchmarks.
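The theory side of such a lazy procedure is conceptually simple: a conjunction of separation constraints x − y ≤ c is satisfiable iff its constraint graph has no negative cycle, which a Bellman-Ford style relaxation detects. A sketch (an assumed representation, not TSAT++'s implementation):

```python
def separation_consistent(constraints):
    """constraints: list of (x, y, c) meaning x - y <= c. Treat each as an
    edge y -> x of weight c and relax from a virtual source (all distances
    start at 0); the conjunction is satisfiable iff relaxation reaches a
    fixpoint, i.e., the graph has no negative cycle."""
    nodes = {v for x, y, _ in constraints for v in (x, y)}
    dist = dict.fromkeys(nodes, 0)
    for _ in range(len(nodes) + 1):
        changed = False
        for x, y, c in constraints:
            if dist[y] + c < dist[x]:
                dist[x] = dist[y] + c
                changed = True
        if not changed:
            return True   # fixpoint: dist is a satisfying assignment
    return False          # still relaxing after |V|+1 rounds: negative cycle
```

For example, x − y ≤ 1 together with y − x ≤ −2 forms a cycle of weight −1 and is unsatisfiable, while x − y ≤ 1 with y − x ≤ 0 is satisfiable.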
Topic: Randomized join algorithms for linear arithmetic and uninterpreted functions.
Join algorithms can be seen as a generalization of decision procedures. They are used in program analysis to merge the sets of facts at a join point. A join algorithm for a theory seems to be considerably harder to design than a decision procedure. Randomization can be used as a tool to obtain efficient and simpler join algorithms. In this talk, I will describe randomized join algorithms for the theory of linear arithmetic and the theory of uninterpreted functions. I will also motivate and present the challenges behind the problem of making join algorithms context-sensitive, and the problem of combining join algorithms.
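To give a flavor of the randomized approach for linear arithmetic (a loose sketch in the spirit of random interpretation; the representation and names are mine, not the talk's): an affine equality holds at a join point iff it holds on both incoming branches, and it is preserved, with high probability, by random affine combinations w·s1 + (1−w)·s2 of sample states from the two branches, so candidate facts can be screened by cheap evaluation instead of symbolic reasoning.

```python
import random

def holds(fact, state):
    """fact: {var: coeff, "1": const} asserting the affine combination is 0."""
    return sum(c * state[v] for v, c in fact.items() if v != "1") \
        + fact.get("1", 0) == 0

def affine_join_screen(samples1, samples2, candidates, trials=20):
    """Keep the candidate facts that hold on random affine combinations of
    integer sample states drawn from the two branches. A fact that holds on
    only one branch fails some trial with high probability."""
    kept = []
    for fact in candidates:
        ok = True
        for _ in range(trials):
            s1, s2 = random.choice(samples1), random.choice(samples2)
            w = random.randint(-100, 100)
            combo = {v: w * s1[v] + (1 - w) * s2[v] for v in s1}
            if not holds(fact, combo):
                ok = False
                break
        if ok:
            kept.append(fact)
    return kept
```

For instance, with branch samples {x: 2, y: 3} and {x: 0, y: 1}, the fact y − x − 1 = 0 survives every random combination, while x − 2 = 0 (true only on the first branch) is almost surely eliminated.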