In an earlier paper we showed how constraint satisfaction techniques can be used to prove and solve formulae in the first-order predicate language over the reals (i.e., real first-order constraints). That approach was restricted to inputs containing inequality symbols such as <=, but no equality symbols (=) or disequality symbols (!=). In this paper we lay the basis for extending the approach to inputs that contain (dis)equalities. This considerably widens the applicability of constraint programming in areas such as control engineering, and builds a strong bridge to areas such as quantifier elimination, automated theorem proving, and computer algebra.
Hybrid methods that combine constraint programming with mathematical programming make essential use of continuous relaxations for global constraints. We state a relaxation for the cumulative constraint. In particular we identify a family of facet-defining inequalities and a much larger family of valid inequalities.
GSAT is a local search algorithm for propositional satisfiability (SAT) that has proven highly effective on certain classes of large SAT problems. GSAT starts from a randomly generated truth assignment and tries to reduce the number of violated clauses by iteratively flipping variables' truth values. GSAT's effectiveness stems from the speed of a single flip, which allows a large space of candidate solutions to be explored; it does not examine any inter-relationship between the clauses of the problem it attacks. The class of SAT problems with binary clauses (2SAT) is highly tractable (solvable in linear time), and some SAT problems, such as graph colouring, contain a high proportion of 2SAT information. In this paper we show how GSAT can be altered to take the 2SAT clauses into account, so that it never investigates truth assignments that violate a binary clause. This reduces the search space considerably. We give experimental results illustrating the benefit of our new approach on hard 3SAT problems with a substantial 2SAT component.
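The flip step described above can be rendered as a short sketch. This is our own minimal illustration, not the authors' implementation; in particular, the `two_clauses` filter is a simplified stand-in for the paper's idea of never visiting an assignment that violates a binary clause, and all names and interfaces are ours.

```python
import random

def gsat_step(clauses, assign, two_clauses=(), rng=random):
    """One GSAT move: flip the variable that most reduces the number of
    violated clauses. Flips that would violate a clause in `two_clauses`
    are excluded (simplified rendering of the 2SAT-aware variant).
    Literals are nonzero ints; `assign` is 1-indexed, so assign[v] is
    the truth value of variable v."""
    def n_unsat(a, cs):
        return sum(not any(a[abs(l)] == (l > 0) for l in c) for c in cs)
    best, candidates = None, []
    for v in range(1, len(assign)):
        trial = assign.copy()
        trial[v] = not trial[v]
        if n_unsat(trial, two_clauses) > 0:
            continue  # would violate a binary clause: never consider it
        score = n_unsat(trial, clauses)
        if best is None or score < best:
            best, candidates = score, [v]
        elif score == best:
            candidates.append(v)
    if not candidates:        # every flip breaks some binary clause
        return assign
    flipped = assign.copy()
    v = rng.choice(candidates)
    flipped[v] = not flipped[v]
    return flipped
```

One such move costs a pass over all variables and clauses; real GSAT implementations cache clause scores so a flip is much cheaper, which is exactly the speed the abstract refers to.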
Model-based applications in engineering, such as diagnosis, configuration, or interactive decision-support systems, require embedded constraint solvers with challenging capabilities. Not only consistency checking and solving, but also the computation of (minimal) conflicts and explanations are required. Moreover, realistic models of engineered systems often require very expressive constraint languages, mixing continuous and discrete variable domains, linear and non-linear equations, inequations, and even procedural constraints. A positive feature of models of typical engineered systems is, however, that their corresponding constraint problems have a bounded, and often relatively small, density (induced width). We present our relational constraint solver RCS, which has been specifically designed to address these requirements. RCS is based on variable elimination, exploiting the low-density property. To analyze a set of constraints, RCS builds a so-called aggregation tree by joining the input constraints and eliminating certain variables after every single join. The aggregation tree is then used to compute solutions, as well as explanations and conflicts. We also report preliminary experimental results obtained with a prototype implementation of this framework.
Scheduling problems considered in the literature are often static: activities are known in advance and constraints are fixed. However, every schedule is subject to unexpected events. When such an event occurs, a new solution is needed, preferably quickly and as close as possible to the current solution. In this paper, we present an exact approach for solving dynamic Resource-Constrained Project Scheduling Problems (RCPSP). The approach combines explanation-based constraint programming with operations research techniques.
Experimental results are given on the scaling of the Pure Random Walk version (PRWSAT) of WalkSAT. PRWSAT is very simple because of its absence of heuristics: not only is the clause selected at random, but also the literal within that clause. The main result is that, despite this simplicity, PRWSAT shows non-trivial behavior on Random 3-SAT. There appears to be a threshold at a clause-to-variable ratio of about 2.65. Below the threshold, problems are solved in a linear and tightly distributed number of flips; above it, the scaling appears to be non-polynomial. The simplicity and the non-trivial threshold make PRWSAT a good candidate for theoretical analysis.
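The pure random walk described above admits a very short sketch (ours, not the authors' code; the interface and names are hypothetical):

```python
import random

def prwsat(clauses, n_vars, max_flips=100_000, rng=random):
    """Pure random walk: pick an unsatisfied clause uniformly at random,
    then flip the variable of a literal chosen uniformly at random from
    that clause. Literals are DIMACS-style nonzero ints; the returned
    assignment is 1-indexed (index 0 is unused)."""
    assign = [None] + [rng.random() < 0.5 for _ in range(n_vars)]
    def satisfied(clause):
        return any(assign[abs(l)] == (l > 0) for l in clause)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c)]
        if not unsat:
            return assign          # all clauses satisfied
        lit = rng.choice(rng.choice(unsat))
        assign[abs(lit)] = not assign[abs(lit)]
    return None                    # gave up after max_flips
```

Recomputing the unsatisfied-clause list on every flip keeps the sketch short; a serious implementation would maintain it incrementally, since the speed per flip is what makes such walks competitive.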
In this work we address the question of whether and how parallel local search, which applies more than one local move simultaneously, exhibits the criticality and parallelism phenomenon when performed on structured instances. We investigate the behavior of a parallel version of GSAT as a function of the number T of parallel flips on structured SAT instances. First, we show experimentally that for structured instances, too, there exists an optimal degree of parallelism at which the algorithm reaches its best performance. Second, by analyzing the node-degree distribution of the graphs associated with the SAT instances, we observe that an asymmetric, irregular distribution strongly affects the algorithm's performance with respect to T. Finally, we provide a method that, given an instance, sets T to its optimal value, so as to effectively apply multi-flip moves to boost local search.
The aim of this paper is to extend the connection between the constraint network and propositional logic fields. This work is based on the literal encoding of the satisfiability problem (SAT) as a binary constraint satisfaction problem (CSP). Under this encoding, arc consistency is equivalent to unit resolution. We generalize this result by relating the path consistency concept to inference rules in propositional logic. Van Beek and Dechter identified a property of binary constraints called "row convexity" and showed its usefulness in deciding when path consistency suffices to guarantee that a solution can be found without backtracking. We relate the row convexity of CSPs to the pure literal rule, and then use this result to propose a measure characterizing satisfiable and unsatisfiable 3-SAT instances. The correlation observed in our computational results allows us to validate this measure.
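The literal encoding mentioned above can be sketched as follows (our own minimal rendering, with names of our choosing): each clause becomes a CSP variable whose domain is the clause's literals, and a binary constraint between two clause-variables forbids assigning them complementary literals.

```python
from itertools import combinations

def literal_encoding(clauses):
    """Literal encoding of SAT as a binary CSP. Clauses are tuples of
    DIMACS-style literals. Returns per-clause domains and, for each pair
    of clause-variables, the set of permitted (literal, literal) pairs:
    all pairs except those assigning complementary literals."""
    domains = {i: list(c) for i, c in enumerate(clauses)}
    constraints = {}
    for i, j in combinations(range(len(clauses)), 2):
        constraints[(i, j)] = {(a, b)
                               for a in domains[i] for b in domains[j]
                               if a != -b}
    return domains, constraints
```

A solution of this CSP picks one literal per clause such that no two picks clash, i.e. a consistent way to satisfy every clause; enforcing arc consistency on these constraints mirrors unit resolution, as stated in the abstract.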
Substitutability and interchangeability in constraint satisfaction problems (CSPs) have been used as a basis for search heuristics, solution adaptation, and abstraction techniques. In this paper, we consider how the same concepts can be extended to soft constraint satisfaction problems (SCSPs). We introduce two notions, a threshold alpha and a degradation delta, for substitutability and interchangeability (alpha-substitutability/interchangeability and delta-substitutability/interchangeability, respectively), and show that they satisfy theorems analogous to those already known for hard constraints. In alpha-interchangeability, values are interchangeable in any solution that is better than the threshold alpha, allowing us to disregard differences among solutions that are not sufficiently good anyway. In delta-interchangeability, values are interchangeable if exchanging them cannot degrade the solution by more than a factor of delta. We give efficient algorithms to compute delta/alpha-interchangeable sets of values for a large class of SCSPs.
We present a generic framework to model and solve real-world constraint problems with incomplete or inconsistent data. At present, such problems are often simplified to tractable deterministic models in order to find a solution. Doing so, however, can lead us to solve the wrong problem, or to introduce unsatisfiability because of the approximations made. The certainty closure framework addresses data uncertainty by specifying what can be said with certainty, given what is known about the data. The aim is to provide a formal framework that covers three aspects: (1) the modelling of uncertain data, (2) the definition of a closure of an uncertain constraint problem, and (3) the guarantee that the derived closure is correct, i.e. that it holds under every satisfiable realisation of the data. In this paper we address these points by defining the certainty closure and showing how it can be derived by transformation to an equivalent certain problem. We demonstrate that the framework is applicable to various computational domains, and show its use on an uncertain network optimisation problem.
For NP-hard constraint satisfaction problems, the existence of a feasible solution cannot be decided efficiently. Applying tree search often results in exploring parts of the search space that contain no feasible solutions at all. Redundant constraints can help to detect inconsistencies of partial assignments higher up in the search tree. Using the social golfer problem as an example, we show how complex redundant constraints can be propagated incompletely using local search heuristics.
Historically, discrete minimization problems in constraint logic programming were modeled with an isolated bounding constraint on the objective to be minimized. To overcome this frequently inefficient way of searching for improving solutions, the notion of optimization constraints was introduced. Optimization constraints can be viewed as global constraints that link the objective with other constraints of the problem at hand. We present an arc consistency algorithm for the minimum-weight all-different constraint, an optimization constraint that combines a linear objective with an all-different constraint.
A module system is desirable for constructing large CSP models out of pieces. Such a module system, which is long overdue, should have a clear and clean semantics so as to facilitate declarative reasoning. We introduce six operators for manipulating and transforming CSP models, namely intersection, union, channeling, induction, negation, and complementation. For each operator, we give its syntactic construction rule, define its set-theoretic meaning, and examine its algebraic properties, all illustrated with examples where appropriate. Our work can shed light on the design of future CSP-based module systems.
Arc-consistency algorithms are widely used to prune the search space of Constraint Satisfaction Problems (CSPs). They use support checks to discover the properties of CSPs, with arc-heuristics to select the constraint and domain-heuristics to select the values for the next support check. We demonstrate that domain-heuristics can significantly improve the average time complexity of existing arc-consistency algorithms. We combine Alan Mackworth's AC-3 with John Gaschnig's DEE and equip the resulting hybrid with a double-support domain-heuristic, thereby creating an arc-consistency algorithm called AC-3d, whose average time complexity competes with AC-7's while improving on AC-7's space complexity. AC-3d is easy to implement and requires the same data structures as AC-3. We present experimental results to justify our average time-complexity claim.
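As a point of reference, a plain AC-3-style revision over an explicitly given constraint can be sketched like this (our illustration; AC-3d's contribution lies in how the support checks are ordered, which this plain version deliberately does not show):

```python
def revise(dom_x, dom_y, allowed):
    """Remove from dom_x every value with no support in dom_y under the
    constraint `allowed`, given as a set of permitted (x, y) pairs.
    Each membership test below is one support check; AC-3's overall cost
    is dominated by such checks. The double-support heuristic of AC-3d
    would order them to prefer pairs where neither value yet has a known
    support, so one successful check certifies a support for both."""
    return [a for a in dom_x if any((a, b) in allowed for b in dom_y)]
```

Embedded in the usual AC-3 queue loop (re-enqueue arcs whose variable lost a value), this revision yields arc consistency; only the check ordering, not the fixpoint, changes under AC-3d.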