CDM: Algorithms for Propositional Logic

1. Boolean Decision Problems
2. SAT Solvers
3. Refutation and Resolution
4. Binary Decision Diagrams

Klaus Sutner
Carnegie Mellon University
Fall 2013
Decision Problems

Recall the standard decision problems associated with propositional formulae.

Problem:  Satisfiability
Instance: A Boolean formula ϕ.
Question: Is ϕ satisfiable?

Problem:  Tautology
Instance: A Boolean formula ϕ.
Question: Is ϕ a tautology?

Problem:  Equivalence
Instance: Two Boolean formulae ϕ and ψ.
Question: Are ϕ and ψ equivalent?

All One

The reason it is worthwhile to distinguish between these three problems is computational complexity: given a certain representation, a particular problem can be tackled with such-and-such resources.

Other than that, they are very closely related:

ϕ satisfiable   iff  ¬ϕ not a tautology
ϕ satisfiable   iff  ϕ, ⊥ not equivalent
ϕ, ψ equivalent iff  (ϕ ⇔ ψ) a tautology

Of course, negation or the introduction of the connective “⇔” can ruin the representation of a formula.
Boolean Functions

As we have seen, any Boolean formula ϕ(x1, x2, ..., xn) naturally defines an n-ary Boolean function ϕ̂ : 2^n → 2.

In terms of Boolean functions, the three problems translate into

f ≠ 0    (satisfiability)
f = 1    (tautology)
f = g    (equivalence)

Of course, this version obscures the representation of the functions and therefore muddles complexity issues.

Representation

There are two important and drastically different ways to represent a Boolean function:

- A Boolean formula ϕ(x1, x2, ..., xn), in particular a formula in a particular normal form such as NNF, CNF, DNF.
- A custom designed data structure, a so-called binary decision diagram.

As we will see, the representation is critical for complexity issues. For the time being, we will deal with representations inspired by the formula perspective.
Preprocessing

It seems reasonable to preprocess the input a bit: we can try to bring the input into some particularly useful syntactic form, a form that can then be exploited by the algorithm.

There are two issues:

- The transformation must produce an equivalent formula (but see below about possible additions to the set of variables).
- The transformation should be fast.

Here are some standard methods to transform propositional formulae. We will focus on formulae using ¬, ∧, ∨, a harmless assumption since we can easily eliminate all implications and biconditionals.

Negation Normal Form (NNF)

Definition
A formula is in Negation Normal Form (NNF) if it only contains negations applied directly to variables. A literal is a variable or a negated variable.

Theorem
For every formula, there is an equivalent formula in NNF.

The algorithm pushes all negation signs inward, until they occur only next to a variable. Rules for this transformation:

¬(ϕ ∧ ψ) ⇒ ¬ϕ ∨ ¬ψ
¬(ϕ ∨ ψ) ⇒ ¬ϕ ∧ ¬ψ
¬¬ϕ ⇒ ϕ
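The three rules can be packed into one recursive pass. Here is a minimal sketch in Python (not from the slides), assuming formulas are encoded as nested tuples ('var', x), ('not', f), ('and', f, g), ('or', f, g):

```python
# A minimal sketch of the NNF rewrite rules. The negate flag records whether
# an odd number of pending negations sits above the current subformula.
def nnf(f, negate=False):
    """Return an NNF of f (or of ¬f when negate=True)."""
    op = f[0]
    if op == 'var':
        return ('not', f) if negate else f
    if op == 'not':                      # ¬¬ϕ ⇒ ϕ: flip the pending negation
        return nnf(f[1], not negate)
    # De Morgan: a pending negation flips ∧/∨ and moves onto the subformulas.
    if negate:
        op = 'and' if op == 'or' else 'or'
    return (op, nnf(f[1], negate), nnf(f[2], negate))
```

The negate flag is what makes the pass linear: each node is visited once, matching the claim below that the NNF has size linear in the input.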
As Parse Trees

Push negations down to the leaves.

[Parse trees: a ¬ node sitting above ∧(ϕ, ψ) or ∨(ϕ, ψ) is replaced by ∨ or ∧, respectively, over the negated subtrees ¬ϕ and ¬ψ.]

Note that this requires pattern matching: we have to find the handles in the given expression.

Rewrite Systems

This is an example of a rewrite system: replace the LHS by the RHS of a substitution rule, over and over, until a fixed point is reached.

¬(p ∧ (q ∨ ¬r) ∧ s ∧ ¬t)
¬p ∨ ¬(q ∨ ¬r) ∨ ¬s ∨ t
¬p ∨ (¬q ∧ r) ∨ ¬s ∨ t

Also note that the answer is not unique; here are some other NNFs for the same formula:

¬p ∨ ¬s ∨ t ∨ (¬q ∧ r)
(¬p ∨ ¬s ∨ t ∨ ¬q) ∧ (¬p ∨ ¬s ∨ t ∨ r)

Conversion to NNF is a search problem, not a function problem.

Min- and Max-Terms

So we are left with a formula built from literals using connectives ∧ and ∨. The most elementary such formulae have special names.

Definition
A minterm is a conjunction of literals. A maxterm is a disjunction of literals.

Note that a truth table is essentially a listing of all possible 2^n full minterms over some fixed variables x1, x2, ..., xn combined with the corresponding truth values of the formula ϕ(x1, ..., xn).

By forming a disjunction of the minterms for which the formula is true we get a (rather clumsy) normal form representation of the formula. The reason we are referring to the normal form as clumsy is that it contains many redundancies in general. Still, the idea is very important and warrants a definition.
Disjunctive Normal Form

Definition
A formula is in Disjunctive Normal Form (DNF) if it is a disjunction of minterms (conjunctions of literals).

In other words, a DNF formula is a “sum of products” and looks like so:

(x11 ∧ x12 ∧ ... ∧ x1n1) ∨ (x21 ∧ ... ∧ x2n2) ∨ ... ∨ (xm1 ∧ ... ∧ xmnm)

where each xij is a literal. In short: ⋁_i ⋀_j x_ij.

If you think of the formula as a circuit, DNF means that there are two layers: an OR gate on top, AND gates below. Note that this only works if we assume unbounded fan-in and disregard negation.
Conversion to DNF

Theorem
For every formula, there is an equivalent formula in DNF.

Step 1: First bring the formula into NNF.
Step 2: Then use the rewrite rules

ϕ ∧ (ψ1 ∨ ψ2) ⇒ (ϕ ∧ ψ1) ∨ (ϕ ∧ ψ2)
(ψ1 ∨ ψ2) ∧ ϕ ⇒ (ψ1 ∧ ϕ) ∨ (ψ2 ∧ ϕ)

✷

Expression Trees

Here is the corresponding operation in terms of the expression tree.

[Expression trees: a node ∧(ϕ, ∨(ψ1, ψ2)) is replaced by ∨(∧(ϕ, ψ1), ∧(ϕ, ψ2)).]

Note that we have created a second copy of ϕ (though in an actual algorithm we can avoid duplication by sharing subexpressions; nonetheless the ultimate output will be large).

Exercise
Prove that these rules really produce a DNF. What is the complexity of this algorithm?
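The two distribution rules can be sketched as a recursive procedure that returns a list of minterms (each a list of literals). This is an illustrative sketch, assuming the nested-tuple encoding ('var', x), ('not', ('var', x)), ('and', f, g), ('or', f, g) and an input already in NNF:

```python
# Sketch of Step 2: convert an NNF formula into a list of minterms.
def dnf(f):
    if f[0] in ('var', 'not'):
        return [[f]]                      # a literal is a one-literal minterm
    if f[0] == 'or':                      # disjunction: union of minterm lists
        return dnf(f[1]) + dnf(f[2])
    # conjunction: distribute ∧ over ∨ by pairing every minterm on the left
    # with every minterm on the right -- the source of the blow-up
    return [m1 + m2 for m1 in dnf(f[1]) for m2 in dnf(f[2])]
```

On a formula like (x1 ∨ y1) ∧ (x2 ∨ y2) ∧ ... ∧ (xn ∨ yn) the Cartesian product in the last line produces 2^n minterms, which is one answer to the exercise above.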
Example

First conversion to NNF:

¬(p ∨ (q ∧ ¬r) ∨ s ∨ ¬t)
¬p ∧ ¬(q ∧ ¬r) ∧ ¬s ∧ t
¬p ∧ (¬q ∨ r) ∧ ¬s ∧ t

Reorder, and then distribute ∧ over ∨:

¬p ∧ ¬s ∧ t ∧ (¬q ∨ r)
(¬p ∧ ¬s ∧ t ∧ ¬q) ∨ (¬p ∧ ¬s ∧ t ∧ r)

Complexity

Computationally, there is one crucial difference between conversion to NNF and conversion to DNF:

- The size of the formula in NNF is linear in the size of the input.
- The size of the formula in DNF is not polynomially bounded by the size of the input.

Thus, even if we have a perfect linear time implementation of the rewrite process, we still wind up with an exponential algorithm.

Exercise
Construct an example where conversion to DNF causes exponential blow-up.
DNF and Truth Tables

One reason DNF is natural is that one can easily read off a canonical DNF for a formula if we have a truth table for it.

For n variables, the first n columns determine 2^n full minterms (containing each variable either straight or negated). For example, the row 10011 corresponds to the minterm x1 ∧ ¬x2 ∧ ¬x3 ∧ x4 ∧ x5.

Select those rows where the formula is true, and collect all the corresponding minterms into a big disjunction. Done!

Note the resulting formula has O(2^n) conjunctions of n literals each.
Example

Unfortunately, this approach may not produce optimal results: for p ∨ q we get

p  q  p ∨ q
0  0    0
0  1    1
1  0    1
1  1    1

So brute force application of this method yields 3 full minterms:

(p ∧ ¬q) ∨ (¬p ∧ q) ∨ (p ∧ q)

Clearly, we need some simplification process. More about this later in the discussion of resolution.
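Reading off the canonical DNF can be sketched in a few lines, taking the Boolean function as a Python predicate; the string formatting (& , |, ! for negation) is an assumption of this sketch:

```python
from itertools import product

# Sketch: canonical DNF of a Boolean function given as a predicate,
# one full minterm per satisfying row of the truth table.
def table_to_dnf(f, names):
    minterms = []
    for row in product((0, 1), repeat=len(names)):    # one row per assignment
        if f(*row):                                   # keep rows where f is true
            minterms.append(' & '.join(x if b else '!' + x
                                       for x, b in zip(names, row)))
    return ' | '.join('(%s)' % m for m in minterms)
```

For p ∨ q this produces exactly the three full minterms of the example, in row order.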
Remember?

This is essentially the same method we used to get the expressions for sum and carry in the 2-bit adder.

In a Boolean algebra, one talks about sums of products of literals instead of DNF. Hence, any Boolean expression can be written in sum-of-products form.

Is there also a product-of-sums representation? Sure ... Unlike with ordinary arithmetic, in Boolean algebra there is complete symmetry between meet and join.

Conjunctive Normal Form (CNF)

Definition
A formula is in Conjunctive Normal Form (CNF) if it is a conjunction of maxterms (disjunctions of literals).

The maxterms are often referred to as clauses in this context. So, a formula in CNF looks like ⋀_i ⋁_j x_ij.

Theorem
For every formula, there is an equivalent formula in CNF.

Again start with NNF, but now use the rules

ϕ ∨ (ψ1 ∧ ψ2) ⇒ (ϕ ∨ ψ1) ∧ (ϕ ∨ ψ2)
(ψ1 ∧ ψ2) ∨ ϕ ⇒ (ψ1 ∨ ϕ) ∧ (ψ2 ∨ ϕ)
Blow-Up

The formula

ϕ = (p10 ∧ p11) ∨ (p20 ∧ p21) ∨ ... ∨ (pn0 ∧ pn1)

is in DNF, but conversion to CNF using our rewrite rules produces the exponentially larger formula

ϕ ≡ ⋀_{f:[n]→2} ⋁_{i∈[n]} p_{i,f(i)}

Exercise
Show that there is no small CNF for ϕ: the 2^n disjunctions of length n must all appear.

Are CNF-based (or DNF-based) algorithms then useless in practice?

Tseitin’s Trick

No, but we have to be careful with the conversion process to control blow-up. Instead of preserving the underlying set of propositional variables, we extend it by a new variable qψ for each subformula ψ of φ.

For a propositional variable p we let qp = p; for the whole formula φ we introduce the unit clause {qφ}. Otherwise we introduce clauses as follows:

q¬ψ :  {qψ, q¬ψ}, {¬qψ, ¬q¬ψ}
qψ∨ϕ : {¬qψ, qψ∨ϕ}, {¬qϕ, qψ∨ϕ}, {¬qψ∨ϕ, qψ, qϕ}
qψ∧ϕ : {qψ, ¬qψ∧ϕ}, {qϕ, ¬qψ∧ϕ}, {¬qψ, ¬qϕ, qψ∧ϕ}

The intended meaning of qψ is pinned down by these clauses, e.g.

qψ∨ϕ ≡ qψ ∨ qϕ
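The clause schemata for ¬, ∨, ∧ can be sketched directly; the encoding of literals as signed integers (as in the DIMACS convention) and the numbering of subformulas are assumptions of this sketch:

```python
# Sketch of the Tseitin translation for formulas encoded as nested tuples
# ('var', x), ('not', f), ('and', f, g), ('or', f, g). Each distinct
# subformula gets a fresh variable; clauses are sets of signed integers.
def tseitin(f):
    ids, clauses = {}, []

    def q(g):                                      # variable number for g
        if g not in ids:
            ids[g] = len(ids) + 1
        return ids[g]

    def walk(g):
        v = q(g)
        if g[0] == 'var':
            return v
        if g[0] == 'not':
            w = walk(g[1])
            clauses.extend([{w, v}, {-w, -v}])     # v ↔ ¬w
        else:
            a, b = walk(g[1]), walk(g[2])
            if g[0] == 'or':                       # v ↔ a ∨ b
                clauses.extend([{-a, v}, {-b, v}, {-v, a, b}])
            else:                                  # v ↔ a ∧ b
                clauses.extend([{a, -v}, {b, -v}, {-a, -b, v}])
        return v

    clauses.append({walk(f)})                      # assert the whole formula
    return clauses
```

Note that the output has one constant-size clause group per subformula, which is the linear size bound claimed in the theorem below.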
Example

Consider again the formula

ϕ = (p10 ∧ p11) ∨ (p20 ∧ p21) ∨ ... ∨ (pn0 ∧ pn1)

Set Bk = (pk0 ∧ pk1) and Ak = Bk ∨ Bk+1 ∨ ... ∨ Bn for k = 1, ..., n. Thus ϕ = A1 and all the subformulae other than variables are of the form Ak or Bk.

The clauses in the Tseitin form of ϕ are as follows (we ignore the variables; note Ak = Bk ∨ Ak+1 for k < n, and An = Bn):

qAk : {¬qBk, qAk}, {¬qAk+1, qAk}, {¬qAk, qBk, qAk+1}
qBk : {pk0, ¬qBk}, {pk1, ¬qBk}, {¬pk0, ¬pk1, qBk}

Exercise
Make sure you understand in the example how any satisfying assignment to ϕ extends to a satisfying assignment of the Tseitin CNF, and conversely.

Preserving Satisfiability

Theorem
The set C of clauses in Tseitin CNF is satisfiable if, and only if, φ is satisfiable. Moreover, C can be constructed in time linear in the size of φ.

Proof.
⇒: Suppose that σ ⊨ C. An easy induction shows that for any subformula ψ we have [[ψ]]σ = [[qψ]]σ. Hence [[φ]]σ = [[qφ]]σ = 1 since {qφ} is a clause in C.
⇐: Assume that σ ⊨ φ. Define a new valuation τ by τ(qψ) = [[ψ]]σ for all subformulae ψ. It is easy to check that τ ⊨ C.
✷
Knights in Shiny Armor

Exercise
Construct a formula Φn that is satisfiable if, and only if, the n × n chessboard has the property that a knight can reach all squares by a sequence of admissible moves. What would your formula look like in CNF and DNF?

Exercise
Construct a formula Φn that is satisfiable if, and only if, the n × n chessboard admits a knight’s tour: a sequence of admissible moves that touches each square exactly once. Again, what would your formula look like in CNF and DNF?

Some Questions

Exercise
How hard is it to convert a formula to CNF?

Exercise
Show how to convert directly between DNF and CNF.

Exercise
Show: In CNF, if a clause contains x and ¬x, then we can remove the whole clause and obtain an equivalent formula.

Exercise
Suppose a formula is in CNF. How hard is it to check if the formula is a tautology?

Exercise
Suppose a formula is in DNF. How hard is it to check if the formula is a tautology?

Exercise
How about checking whether a formula in DNF (or CNF) is a contradiction?
2. SAT Solvers

Satisfiability Testing

Truth tables allow one to check any property (tautology, contradiction, satisfiability). But: the table has exponential size and so is useless in practice, where one often deals with formulae with thousands of variables.

Recall that Satisfiability testing is enough in the sense that

ϕ is a tautology iff ¬ϕ is not satisfiable.
ϕ is a contradiction iff ¬ϕ is a tautology iff ϕ is not satisfiable.

Again, these equivalences coexist uneasily with normal forms. For example, if ϕ is in CNF then ¬ϕ can be easily converted into DNF, but CNF is far off. So for algorithms that depend on a specific form of the input there may be a problem if conversion is slow.
Normal Forms and Meaning
It should be noted that CNF and DNF are not particularly useful for a human
being when it comes to understanding the meaning of a formula (NNF is not
quite as bad). But that’s not their purpose: they provide a handle for
specialized algorithms to test validity and satisfiability. We’ll focus on the
latter.
First note that one can perform various cleanup operations without affecting
satisfiability in CNF.
We can delete any clause that contains a literal and its negation.
We can delete any clause that contains another clause (as a subset).
The last step is justified by the equivalence
ϕ ∧ (ϕ ∨ ψ) ≡ ϕ
Example: CNF Tautology Testing
Here is a very small example. We verify that Peirce’s Law
((A → B) → A) → A
is a tautology. Rewriting the implications we get
¬(¬(¬A ∨ B) ∨ A) ∨ A
which turns into
(¬A ∨ B ∨ A) ∧ (¬A ∨ A)
By the first simplification rule we are done.
SAT Algorithms

There is an old, but surprisingly powerful satisfiability testing algorithm due to Davis and Putnam, originally published in 1960. Modern versions of the algorithm (some of them commercial and proprietary) are still widely used today.

The Davis/Putnam Paper
It is worth noting that the original paper goes by the title
A Computing Procedure for Quantification Theory
Thus, the real target is predicate logic (first order logic) rather than
propositional logic. They use a refutation method based on Herbrand universes.
The method produces a sequence of larger and larger propositional formulae
obtained from the negation of the given formula, that each must be tested for
satisfiability. If a non-satisfiable formula appears the algorithm terminates (in
which case the original formula is proven valid), otherwise it continues
indefinitely. Each round employs what is now the classical Davis/Putnam
method.
As the authors point out, their method yielded a result in a 30 minute
hand-computation where another algorithm running on an IBM 704 failed after
21 minutes. The variant presented below was first implemented by Davis, Logemann and Loveland in 1962 on an IBM 704.
The Main Idea

Suppose the formula ϕ is given in CNF. We are trying to solve the decision problem Satisfiability. The basic idea of the propositional part of the algorithm is beautifully simple:

DPLL assumes that the input formula is in CNF and performs certain simple cleanup operations – until they apply no longer. Then it bites the bullet: it picks a variable and explicitly tries to set it to “true” and “false”, respectively. Recurse.

The wary algorithm designer will immediately suspect exponential behavior, but as it turns out in many practical cases the algorithm performs very well.

Davis/Putnam Algorithm

In this context, the disjunctions in CNF are often called clauses. Since the order of terms in a clause does not matter, one usually writes them as sets of literals. So a whole formula in CNF might be written as

{x, y, ¬u}, {x, ¬y, u}, {¬x, u}

A clause is a unit clause iff it contains just one literal. Unit clauses are easy to deal with: the only way to satisfy a single literal x is by setting σ(x) = 1.

Note that an empty clause corresponds to ⊥: there are no literals that one could try to set to a truth value that would render the whole clause true.

Unit Clause Elimination (UCE)

Note that once we decide σ(x) = 1, we can perform

- Unit Subsumption: delete all clauses containing x, and
- Unit Resolution: remove ¬x from all remaining clauses.

This process is called unit clause elimination. The crucial point is: a CNF formula ϕ containing unit clause {x} is satisfiable iff there is an assignment σ setting x to true, and satisfying ϕ′ obtained from ϕ by UCE.

Pure Literal Elimination (PLE)

A pure literal in a CNF formula is a literal that occurs only directly, but not negated. So the formula may contain a variable x but not ¬x or, conversely, only ¬x but not x.

Clearly, if the formula contains x but not ¬x we can simply set σ(x) = 1 and remove all the clauses containing the variable. Likewise, if the formula contains ¬x but not x we can set σ(x) = 0 and remove all clauses containing the negated variable.

This may sound utterly trivial, but note that in order to do PLE efficiently we should probably keep a counter for the total number of occurrences of both x and ¬x for each variable.
More on PLE

Here is a closer look at PLE. Let Φ be in CNF, x a variable. Define

Φ+_x : the clauses of Φ that contain x positively,
Φ−_x : the clauses of Φ that contain x negatively, and
Φ∗_x : the clauses of Φ that are free of x.

So we have the partition

Φ = Φ+_x ∪ Φ−_x ∪ Φ∗_x

This partition gives rise to a trie data structure.

PLE Lemma

Proposition
If Φ+_x or Φ−_x is empty then Φ is equisatisfiable with Φ∗_x.

In other words, one can replace Φ by Φ∗_x: Φ is satisfiable iff Φ∗_x is satisfiable. Since Φ∗_x is smaller than Φ (unless x does not appear at all) this step simplifies the problem of deciding satisfiability.

Of course, we get stuck when all variables have positive and negative occurrences.
The DPLL Algorithm

1. Perform unit clause elimination until no unit clauses are left.
2. Perform pure literal elimination, call the result ψ.
3. If an empty clause has appeared, return false.
4. If all clauses have been eliminated, return true.
5. Splitting: otherwise, cleverly pick one of the remaining literals, x. Recursively test both ψ, {x} and ψ, {¬x} for satisfiability. Return true if at least one of the branches returns true, false otherwise.

So this is dangerously close to brute-force search. The algorithm still succeeds beautifully in the real world since it systematically exploits all possibilities to prune irrelevant parts of the search tree.

Example

After three unit clause elimination steps (no pure literal elimination) and one split on d we get the answer “satisfiable”:

{a,b,c} {a,!b} {a,!c} {c,b} {!a,d,e} {!b}
{a,c} {a,!c} {c} {!a,d,e}
{a} {!a,d,e}
{d,e}

We could also have used pure literal elimination (on d):

{a,b,c} {a,!b} {a,!c} {c,b} {!a,d,e} {!b}
{a,b,c} {a,!b} {a,!c} {c,b} {!b}
{a,c} {a,!c} {c}
{a}
-

Finding an Assignment

Note that this algorithm implicitly also solves the search problem: we only need to keep track of the assignments made to literals. In the example, the corresponding assignment is

σ(b) = 0, σ(c) = σ(a) = σ(d) = 1

The choice for e does not matter. Note that we also could have chosen σ(e) = 1 and ignored d.

Exercise
Implement a version of the algorithm that returns a satisfying truth assignment if it exists. How about all satisfying truth assignments?
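The five steps above can be combined into one compact recursive sketch; clauses are again frozensets of signed integers, and the literal choice in the split is deliberately naive (a real solver would use heuristics):

```python
# A compact DPLL sketch: returns True iff the clause set is satisfiable.
def dpll(clauses):
    # Step 1: unit clause elimination until no unit clauses are left.
    while True:
        unit = next((c for c in clauses if len(c) == 1), None)
        if unit is None:
            break
        (x,) = unit
        clauses = [c - {-x} for c in clauses if x not in c]
    # Step 2: pure literal elimination.
    lits = {l for c in clauses for l in c}
    pure = {l for l in lits if -l not in lits}
    clauses = [c for c in clauses if not (c & pure)]
    if any(len(c) == 0 for c in clauses):   # Step 3: empty clause, dead end
        return False
    if not clauses:                          # Step 4: all clauses eliminated
        return True
    x = next(iter(clauses[0]))               # Step 5: split on some literal
    return (dpll(clauses + [frozenset({x})]) or
            dpll(clauses + [frozenset({-x})]))
```

With the example above (a = 1, ..., e = 5), the sketch reports satisfiable, and on {x}, {¬x} it reports unsatisfiable.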
Correctness

Claim
The Davis/Putnam algorithm is correct: it returns true if, and only if, the input formula is satisfiable.

Proof.
Suppose ϕ is in CNF and has a unit clause {x}. Then ϕ is satisfiable iff there is a satisfying truth assignment σ such that σ(x) = 1. But then σ(C) = 1 for any clause containing x, so Unit Subsumption does not affect satisfiability. Also, σ(C) = σ(C′) for any clause containing ¬x, where C′ denotes the result of removing the literal ¬x. Hence Unit Resolution does not affect satisfiability either.

Suppose z is a pure literal. If σ satisfies ϕ then σ′ also satisfies ϕ where

σ′(u) = 1 if u = z, σ(u) otherwise.

Correctness, contd.

Let x be any literal in ϕ. Then by Shannon expansion

ϕ ≡ (x ∧ ϕ[1/x]) ∨ (¬x ∧ ϕ[0/x])

But splitting checks exactly the two formulae on the right for satisfiability; hence ϕ is satisfiable if, and only if, at least one of the two branches returns true.

Termination is obvious.
✷

Note that in Splitting there usually are many choices for x. This provides an opportunity to use clever heuristics to speed things up. One plausible strategy is to pick the most frequent literal. Why?

Davis/Putnam In Practice

Bad News: DPLL may take exponential time!

In practice, though, Davis/Putnam is usually quite fast. It is not entirely understood why formulae that appear in real-world problems tend to produce only polynomial running time when tackled by Davis/Putnam.

Take the notion of “real world” here with a grain of salt. For example, in algebra DPLL has been used to solve problems in the theory of so-called quasigroups (cancellative groupoids). In a typical case there are n^3 Boolean variables and about n^4 to n^6 clauses; n might be 10 or 20.

The point is that instances seem to have to be maliciously constructed to make DPLL perform poorly.

Example: Exactly One
Neither UCE nor PLE applies here, so the first step is a split.

{{!a,!b},{!a,!c},{!a,!d},{!a,!e},{!b,!c},{!b,!d},{!b,!e},{!c,!d},{!c,!e},{!d,!e},{a,b,c,d,e}}
{{!a},{!a,!b},{!a,!c},{!a,!d},{!a,!e},{!b,!c},{!b,!d},{!b,!e},{!c,!d},{!c,!e},{!d,!e},{a,b,c,d,e}}
{{!b},{!b,!c},{!b,!d},{!b,!e},{!c,!d},{!c,!e},{!d,!e},{b,c,d,e}}
{{!c},{!c,!d},{!c,!e},{!d,!e},{c,d,e}}
{{d},{d,e},{!d,!e}}
True

Of course, this formula is trivially satisfiable, but note how the algorithm quickly homes in on one possible assignment.

The Real World

If you want to see some cutting edge problems that can be solved by SAT algorithms (or can’t quite be solved at present) take a look at

http://www.satcompetition.org
http://www.satlive.org

Try to implement DPLL yourself; you will see that it’s brutally hard to get up to the level of performance of the programs that win these competitions.
3. Refutation and Resolution

Refutation

In summary, DPLL is a practical and powerful method to tackle fairly large instances of Satisfiability (many thousands of variables). The main idea in DPLL is to organize the search for a satisfying truth-assignment in a way that often circumvents the potentially exponential blow-up.

Here is a crazy idea: How about the opposite approach? How about systematically trying to show that there cannot be a satisfying truth-assignment?

As with DPLL, the procedure should usually be fast, but on occasion may blow up exponentially.
Resolvents

Suppose x is a variable that appears in clause C and appears negated in clause C′:

C  = {x, y1, ..., yk}
C′ = {¬x, z1, ..., zl}

Then we can introduce a new clause, a resolvent of C and C′:

D = {y1, ..., yk, z1, ..., zl}

Proposition
C ∧ C′ is equivalent to C ∧ C′ ∧ D.

We write Res(C, C′) = D, but note that there may be several resolvents.

Example

The CNF formula

ϕ = {{x, y, z}, {¬x, y}, {y, ¬z}, {¬y}}

admits the following ways to compute resolvents:

Res({x, y, z}, {¬x, y}) = {y, z}
Res({y, z}, {¬y}) = {z}
Res({y, ¬z}, {z}) = {y}
Res({y}, {¬y}) = ∅

This iterative computation of resolvents is called resolution. The last resolvent corresponds to the empty clause, indicating that the original formula is not satisfiable.
(Bad) Notation

It is a sacred principle that in the context of resolution methods one writes the empty clause thus:

✷

Yup, that’s a little box, like the end-of-proof symbol or the necessity operator in modal logic. What were they smoking.

As we will see shortly, ✷ is a resolvent of Φ if, and only if, the formula is a contradiction.

Resolution

More precisely, given a collection of clauses Φ, let Res(Φ) be the collection of all resolvents of clauses in Φ, plus Φ itself. Set

Res⋆(Φ) = ⋃_n Res^n(Φ)

so Res⋆(Φ) is the least fixed point of the resolvent operator applied to Φ.

We will show that

Lemma
Φ is a contradiction if, and only if, ✷ ∈ Res⋆(Φ).
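The lemma immediately suggests a brute-force refutation test: saturate under resolution and watch for ✷. A sketch, with clauses as frozensets of signed integers (negative means negated):

```python
from itertools import combinations

# Sketch of the Res* fixed point; the empty frozenset plays the role of ✷.
def resolvents(c1, c2):
    """All resolvents of two clauses, one per clashing literal."""
    return [(c1 - {x}) | (c2 - {-x}) for x in c1 if -x in c2]

def refutable(clauses):
    """Return True iff ✷ ∈ Res*(clauses)."""
    known = set(clauses)
    while frozenset() not in known:
        new = {r for c1, c2 in combinations(known, 2)
                 for r in resolvents(c1, c2)} - known
        if not new:                # least fixed point reached without ✷
            return False
        known |= new
    return True
```

Since only finitely many clauses exist over a finite set of variables, the loop always terminates; the point of the simple algorithm below is to keep the set of retained resolvents small.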
DAG Perspective

One often speaks of a resolution proof for the un-satisfiability of Φ as a directed, acyclic graph G whose nodes are clauses. The following degree conditions hold:

- The clauses of Φ have indegree 0.
- Each other node has indegree 2 and corresponds to a resolvent of the two predecessors.
- There is one node with outdegree 0, corresponding to the empty clause.

The graph is in general not a tree, just a DAG, since nodes may have outdegree larger than 1 (a single clause can be used together with several others to produce resolvents). Other than that, you can think of G as a tree with the clauses of Φ at the leaves, and ✷ at the root.

Exercise
Just to familiarize yourself with resolution, show the following. Suppose we have a resolution proof for some contradiction Φ. For any truth-assignment σ there is a uniquely determined path from a clause of Φ to the root ✷ such that for any clause C along that path we have σ(C) = 0.

Proof? Where to start?
More Resolution

Lemma
For any truth-assignment σ, σ(C) = σ(C′) = 1 implies σ(Res(C, C′)) = 1.

Proof.
If σ(yi) = 1 for some i we are done, so suppose σ(yi) = 0 for all i. Since σ satisfies C we must have σ(x) = 1. But then σ(¬x) = 0 and thus σ(zi) = 1 for some i. Hence σ satisfies Res(C, C′).
✷

It follows by induction that if σ satisfies Φ it satisfies all resolvents of Φ. Hence resolution is correct: only contradictions will produce ✷.

Correctness

There are two issues we have to address:

- Correctness: any formula with resolvent ✷ is a contradiction.
- Completeness: any contradiction has ✷ as a resolvent.

Note that for a practical algorithm the last condition is actually a bit too weak: there are many possible ways to construct a resolution proof; since we do not know ahead of time which method will succeed we need some kind of robustness: it should not matter too much which clauses we resolve first.

On the upside, note that we do not necessarily have to compute all of Res⋆(Φ): if ✷ pops up we can immediately terminate.
Completeness

Theorem
Resolution is complete.

Proof.
By induction on the number n of variables. We have to show that if Φ is a contradiction then ✷ ∈ Res⋆(Φ).

n = 1: Then Φ = {{x}, {¬x}}. In one resolution step we obtain ✷. Done.

Proof contd.

Assume n > 1 and let x be a variable. Let Φ0 and Φ1 be obtained by performing unit clause elimination for {¬x} and {x}, respectively. Note that both Φ0 and Φ1 must be contradictions. Hence by IH ✷ ∈ Res⋆(Φi).

Now the crucial step: by repeating the “same” resolution proof with Φ rather than Φi, i = 0, 1, we get ✷ ∈ Res⋆(Φ) if this proof does not use any of the mutilated clauses.

Otherwise, if mutilated clauses are used in both cases, we must have

{¬x} ∈ Res⋆(Φ) from Φ1, and
{x} ∈ Res⋆(Φ) from Φ0.

Hence ✷ ∈ Res⋆(Φ).
✷
A Simple Algorithm

It is clear that we would like to keep the number of resolvents introduced in the resolution process small. Let’s say that clause ψ subsumes clause ϕ if ψ ⊆ ϕ: ψ is at least as hard to satisfy as ϕ.

We keep a collection of “used” clauses U, which is originally empty. The algorithm ends when C is empty.

Pick a clause ψ in C and move it to U.
Add all resolvents of ψ and U to C except that:
• Tautology elimination: delete all tautologies.
• Forward subsumption: delete all resolvents that are subsumed by a clause.
• Backward subsumption: delete all clauses that are subsumed by a resolvent.

Exercise
Show that this algorithm (for any choice of ψ in the first step) is also correct and complete.

Efficiency

So how large is a resolution proof, even one that uses the subsumption mechanism? Can we find a particular problem that is particularly difficult for resolution?

Recall that there is a Boolean formula EOk(x1, ..., xk) of size Θ(k^2) such that σ satisfies EOk(x1, ..., xk) iff σ makes exactly one of the variables x1, ..., xk true:

EOk(x1, ..., xk) = (x1 ∨ x2 ∨ ... ∨ xk) ∧ ⋀_{1≤i<j≤k} ¬(xi ∧ xj)

Note that the formula is essentially in CNF.
Das Dirichletsche Schubfachprinzip

Lemma (Pigeonhole Principle)
There is no injective function from [n + 1] to [n].

This sounds utterly trivial, but the Pigeonhole Principle is a standard combinatorial principle that is used in countless places. The classical proof is by induction on n.

Alternatively, we could translate the PHP into a Boolean formula (for any particular value of n, not in the general, parametrized version).

Idea: Variable xij is true iff pigeon i sits in hole j (or, in less ornithological language, f(i) = j).

Pigeonhole Principle

We need a formula that expresses PHP in terms of these variables. We have variables xij where 1 ≤ i ≤ m and 1 ≤ j ≤ n. Let

Φmn = ⋀_{i≤m} EOn(xi1, xi2, ..., xin) ∧ ⋀_{j≤n} ⋀_{i<i′} ¬(xij ∧ xi′j)

where the first conjunct says that every pigeon sits in exactly one hole and the second that no hole holds two pigeons. Then Φmn is satisfiable iff m ≤ n.

In particular we ought to be able to use resolution to show that Φn+1,n is a contradiction.
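For experimentation, the clauses of Φmn can be generated mechanically. The exact clause selection (rows exactly-one, columns at-most-one) and the integer encoding of xij are assumptions of this sketch:

```python
from itertools import combinations, product

# Sketch: pigeonhole clauses Φ_{m,n}; variable x_ij (0-indexed) is the
# integer i*n + j + 1, negative literals mean negation.
def php_clauses(m, n):
    x = lambda i, j: i * n + j + 1
    clauses = [[x(i, j) for j in range(n)] for i in range(m)]       # >= 1 hole
    clauses += [[-x(i, j), -x(i, k)]                                # <= 1 hole
                for i in range(m) for j, k in combinations(range(n), 2)]
    clauses += [[-x(i, j), -x(k, j)]                                # injective
                for j in range(n) for i, k in combinations(range(m), 2)]
    return clauses
```

For small parameters one can confirm by brute force that Φ3,2 is a contradiction while Φ2,2 is satisfiable; the theorem below says resolution proofs of Φn+1,n nonetheless must be exponentially long.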
Exponential Lower Bound

By completeness there must be a resolution proof showing that Φn+1,n is a contradiction. But:

Theorem
Every resolution proof for the contradiction Φn+1,n has exponential length.

The proof is quite hairy.

Easy Cases

One might wonder if there is perhaps a special class of formulae where a resolution type approach is always fast.

We can think of a clause

{¬x1, ¬x2, ..., ¬xr, y1, y2, ..., ys}

as an implication:

x1 ∧ x2 ∧ ... ∧ xr → y1 ∨ y2 ∨ ... ∨ ys

When s = 1 these implications are particularly simple.
Horn Formulae

Definition
A formula is a Horn formula if it is in CNF and every clause contains at most one un-negated variable.

Example:

ϕ = {¬x, ¬y, z}, {¬y, ¬z}, {x}

or equivalently

ϕ = x ∧ y → z, y ∧ z → ⊥, x

Horn Clauses

In other words, a Horn formula has only Horn clauses, and a Horn clause is essentially an implication of the form

C = x1 ∧ x2 ∧ ... ∧ xn → y

where we allow y = ⊥. We also allow single un-negated variables (if you like: ⊤ → x).

Note that if we have unit clauses {xi}, then resolving C against all of them yields the resolvent {y}. This gives rise to the following algorithm.
Marking Algorithm

Testing Horn formulae for Satisfiability:

- Mark all variables x in unit clauses {x}.
- If there is a clause x1 ∧ x2 ∧ ... ∧ xn → y such that all the xi are marked, mark y. Repeat till a fixed point is reached.
- If ⊥ is ever marked, return No. Otherwise, return Yes.

You can also think of this as a graph exploration algorithm: node y is marked only if all predecessor nodes xi are already marked. (Careful though, y can be the RHS of several clauses.)

Note that the Marking Algorithm is linear in the size of Φ (in any reasonable implementation).

Truth Assignment

We can read off a satisfying truth-assignment if the formula is satisfiable:

σ(x) = 1 if x is marked, 0 otherwise.

Then σ(Φ) = 1. Moreover, τ(Φ) = 1 implies that

∀ x (τ(x) ≤ σ(x))

so σ is the “smallest” satisfying truth-assignment.
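The marking algorithm can be sketched as follows. The (body, head) clause encoding is an assumption, and this naive version re-scans all clauses per round, so it is quadratic rather than the linear bound mentioned above:

```python
# Sketch of the marking algorithm: a Horn clause is a pair (body, head)
# with body a set of variables and head a variable, or None standing for ⊥.
def horn_sat(clauses):
    marked = set()                      # marked variables; σ(x)=1 iff x marked
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            if body <= marked:          # all premises marked
                if head is None:        # ⊥ became derivable: unsatisfiable
                    return False
                if head not in marked:  # mark y, then re-run to fixed point
                    marked.add(head)
                    changed = True
    return True
```

A linear-time version would keep, for each clause, a counter of still-unmarked body variables and an index from variables to the clauses they occur in.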
4. Binary Decision Diagrams

FSMs and Boolean Functions

Suppose we have some n-ary Boolean function f(x). We can represent f by the collection of all the Boolean vectors a ∈ 2^n such that f(a) = 1.

Here is a trick: think of this collection as a binary language, a subset of 2^n. This language is trivially finite, so there is a finite state machine that accepts it. In fact, the partial minimal DFA is really just a DAG: all words in the language correspond to a path from the initial state to the unique final state.

UnEqual

You have already seen an example: the UnEqual language

Lk = { uv ∈ 2^{2k} | u, v ∈ 2^k, u ≠ v }

It turns out that the state complexity of Lk is 3 · 2^k − k + 2 = Θ(2^k). This is uncomfortably large since, as a Boolean function, this language is quite simple:

UE(u, v) = ⋁_i (ui ⊕ vi)

This formula clearly has size Θ(k).
The Fix

The reason our DFA is large but the formula is small is that the machine has to contend with the fact that ui is far away from vi (separated by k − 1 bits on the input tape). In the formula, there is no sense of distance.

We can address this problem by changing the language to

L′k = { x ∈ 2^{2k} | ∃ i (x2i ≠ x2i+1) }

Here we assume 0-indexing. In other words, x is the shuffle of u and v, corresponding to a Boolean function

UE′(x) = ⋁_i (x2i ⊕ x2i+1)

Much Better

The state complexity of L′k is much smaller than for Lk: 5k.

[Figure: a basic component of the minimal DFA for L′k.]

Optimization

As a DFA, the machines above are fine. But we are really interested in representing Boolean functions and don’t necessarily need to read the whole input: UE′(0, 1, y) = 1 no matter what y is. So, the evaluation of the Boolean function should stop right after the first two bits and return the output. In the diagram, we could have a direct forward transition to the accept state.

Likewise, if some prefix of the arguments determines the value false, f(u, v) = 0, we should directly jump to a special “failure” state.

So we wind up with a DAG: there are two special exit nodes corresponding to ff and tt. Transitions with source at level i correspond to reading variable xi, where 0 ≤ i < n.
If-Then-Else
75
Here is a slightly more algebraic way to get at these diagrams. Let
ite(x, y1, y0) = (x ∧ y1) ∨ (¬x ∧ y0)
Together with the Boolean constants, if-then-else provides yet another basis:
¬x = ite(x, 0, 1)
x ∧ y = ite(x, y, 0)
x ∨ y = ite(x, 1, y)
x ⇒ y = ite(x, y, 1)
x ⊕ y = ite(x, ite(y, 0, 1), y)

If-Then-Else Normal Form
76
It follows that we can define if-then-else normal form (INF): the only allowed
operations are if-then-else and constants.
More precisely, we can construct INF by using Shannon expansion:
ϕ(x, y) = ite(x, ϕ(1, y), ϕ(0, y))
Note that in the resulting expression, tests are performed only on variables (not
compound expressions). We are only interested in INF with this additional
property.
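Shannon expansion translates directly into code. The following Python sketch (the name shannon_inf and the tuple encoding are invented for illustration) builds an INF expression tree, with tests only on variables, from a Boolean function given as a Python predicate:

```python
def shannon_inf(f, variables):
    # Build an INF expression tree ('ite', x, hi, lo) for the Boolean
    # function f by repeated Shannon expansion:
    #   phi(x, y) = ite(x, phi(1, y), phi(0, y))
    def expand(env, rest):
        if not rest:
            return f(**env)               # no variables left: a constant
        x, *more = rest
        hi = expand({**env, x: 1}, more)  # phi with x = 1
        lo = expand({**env, x: 0}, more)  # phi with x = 0
        return ('ite', x, hi, lo)
    return expand({}, list(variables))

# x AND y, expanded on x first, then y (no sharing or shortcuts yet):
assert shannon_inf(lambda x, y: x & y, ['x', 'y']) == \
    ('ite', 'x', ('ite', 'y', 1, 0), ('ite', 'y', 0, 0))
```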
Example
77
The INF of ϕ = (x1 ⇐⇒ y1) ∧ (x2 ⇐⇒ y2) is
ϕ = ite(x1, t1, t0)
t0 = ite(y1, 0, t00)
t1 = ite(y1, t11, 0)
t00 = ite(x2, t001, t000)
t11 = ite(x2, t111, t110)
t000 = ite(y2, 0, 1)
t001 = ite(y2, 1, 0)
t110 = ite(y2, 0, 1)
t111 = ite(y2, 1, 0)
Strictly speaking, we should substitute all the expressions into the first line, a
very bad idea.

Example
78
Much more productive is to share common subexpressions (such as t000 and
t110):
ϕ = ite(x1, t1, t0)
t0 = ite(y1, 0, t00)
t1 = ite(y1, t00, 0)
t00 = ite(x2, t001, t000)
t000 = ite(y2, 0, 1)
t001 = ite(y2, 1, 0)
We can now interpret these expressions as nodes in a particular DAG.
Binary Decision Diagrams
79
Fix a set Var = {x1, x2, . . . , xn} of n Boolean variables.
Definition
A binary decision diagram (BDD) (over Var) is a rooted, directed acyclic graph
with two terminal nodes (out-degree 0) and interior nodes of out-degree 2.
The interior nodes are labeled in Var.
We write var(u) for the labels. The successors of an interior node u are
traditionally referred to as lo(u) and hi(u).
We can think of the terminal nodes as being labeled by constants 0 and 1,
indicating values false and true.

Ordered BDDs
80
Fix an ordering x1 < x2 < . . . < xn on Var. For simplicity assume that
xi < 0, 1.
Definition
A BDD is ordered (OBDD) if the label sequence along any path is ordered.
Thus
var(u) < var(lo(u)), var(hi(u))
In the corresponding INF the variables are always ordered in the sense that
ite(x, ite(y, t1, t2), ite(z, t3, t4)) implies x < y, z

Reduced Ordered BDDs
81
Definition
An OBDD is reduced (ROBDD) if it satisfies
Uniqueness: for all nodes u, v:
var(u) = var(v), lo(u) = lo(v), hi(u) = hi(v) implies u = v
Non-Redundancy: for all nodes u:
lo(u) ≠ hi(u)
The uniqueness condition corresponds to shared subexpressions: we could
merge u and v.
Non-redundancy corresponds to taking shortcuts: we can skip ahead to the
next test.

A Representation
82
By a straightforward induction we can associate any BDD with root u with a
Boolean function fu :
f0 = 0
f1 = 1
fu = ite(var(u), fhi(u), flo(u))
If the BDDs under consideration are also ordered and reduced we get a useful
representation.
Theorem
For every Boolean function f : B^n → B there is exactly one ROBDD u (with
respect to the fixed variable ordering) such that f = fu .
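The two conditions suggest a standard way to obtain ROBDDs: a node constructor that refuses to create redundant or duplicate nodes. Here is a minimal Python sketch (class and method names are invented for illustration; ordering checks are omitted, and real BDD packages maintain such a "unique table" internally):

```python
class ROBDD:
    """Minimal node store enforcing the two ROBDD conditions.
    Terminals are 0 and 1; interior nodes are integers >= 2."""
    def __init__(self):
        self.table = {}    # (var, lo, hi) -> node   (Uniqueness)
        self.nodes = {}    # node -> (var, lo, hi)

    def mk(self, var, lo, hi):
        if lo == hi:                   # Non-Redundancy: skip the test
            return lo
        key = (var, lo, hi)
        if key not in self.table:      # Uniqueness: share existing node
            u = len(self.nodes) + 2
            self.table[key] = u
            self.nodes[u] = key
        return self.table[key]

b = ROBDD()
u = b.mk('x', 0, 1)            # the function f(x) = x
assert b.mk('x', 0, 1) == u    # shared, not duplicated
assert b.mk('y', u, u) == u    # redundant test eliminated
```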
Reduction
83
Suppose we have a BDD for a function and would like to transform it into the
(unique) ROBDD.
We can use a bottom-up traversal of the DAG to merge or eliminate nodes that
violate Uniqueness or Non-Redundancy.
This traversal requires essentially only local information and can be handled in
(expected) linear time using a hash table.
Incidentally, an interesting option is to make sure that all BDDs that appear
during a computation are already reduced: one can try to fold the reduction
procedure into the other operations.

Decision Problems and BDDs
84
Suppose we have constructed the ROBDD for a Boolean function f .
Then it is trivial to check if f is a tautology: the DAG must be trivial, just the
terminal node 1.
Likewise, it is trivial to check if f is satisfiable. In fact, it is straightforward to
count the number of satisfying truth assignments (all paths from the root to
terminal node 1, weighted by the variables each path skips).
And if we have another function g and we want to test equivalence, the test is
trivial: f and g must have the same DAG.
Of course, this is cheating a bit: we need to worry about the computational
effort required to construct the ROBDDs in the first place. But, once the DAG
is done, everything is easy.
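The path-counting idea can be sketched as follows. The tuple encoding is invented for illustration; the key point is that a path skipping variables stands for several assignments, one factor of 2 per skipped variable:

```python
def count_sat(node, n, level=0):
    # node is a terminal 0/1 or a triple (i, lo, hi), where i is the
    # 0-based index of the variable tested; n is the number of variables.
    if node in (0, 1):
        # remaining n - level variables are unconstrained
        return node * 2 ** (n - level)
    i, lo, hi = node
    skipped = 2 ** (i - level)      # variables jumped over above this node
    return skipped * (count_sat(lo, n, i + 1) + count_sat(hi, n, i + 1))

# f(x0, x1) = x0 OR x1: test x0; on 0 go test x1, on 1 accept
f_or = (0, (1, 0, 1), 1)
assert count_sat(f_or, 2) == 3      # satisfied by 01, 10, 11
```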
Operations on BDDs
85
Reduce Turn an OBDD into an ROBDD.
Apply Given two ROBDDs u and v and a Boolean operation ⋄,
determine the ROBDD for fu ⋄ fv .
Restrict Given an ROBDD u, a variable x and a constant a, determine
the ROBDD for fu [a/x].
The apply operation can be handled by a recursive, top-down algorithm since
ite(x, s, t) ⋄ ite(x, s′, t′) = ite(x, s ⋄ s′, t ⋄ t′)
Running time is O(|u| |v|).
Restrict can be handled by making the necessary modifications, followed by a
reduction.

More Operations
86
If we keep our BDDs reduced, satisfiability is easy to check: the ROBDD must
be different from 0.
In fact, we can count satisfying truth assignments in O(|u|).
Of course, there is no way to avoid exponential worst-case cost when building a
BDD representing some Boolean function from scratch: after all, there are
2^{2^n} functions to be represented.
In particular, ROBDDs for (binary) multiplication can be shown to require
exponential size.
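The recursive apply algorithm can be sketched in Python; memoizing on node pairs is what gives the O(|u| |v|) bound. This sketch (invented tuple encoding, not a full implementation) keeps nodes as plain tuples and omits the unique table a real package would use:

```python
import operator
from functools import lru_cache

@lru_cache(maxsize=None)
def apply(op, u, v):
    # u, v are terminals 0/1 or triples (i, lo, hi); op operates on {0, 1}.
    # Uses  ite(x, s, t) <> ite(x, s', t') = ite(x, s <> s', t <> t')
    # with memoization on node pairs: O(|u| |v|) subproblems.
    if u in (0, 1) and v in (0, 1):
        return op(u, v)
    iu = u[0] if u not in (0, 1) else float('inf')
    iv = v[0] if v not in (0, 1) else float('inf')
    i = min(iu, iv)                  # the smaller variable is tested first
    ulo, uhi = (u[1], u[2]) if iu == i else (u, u)
    vlo, vhi = (v[1], v[2]) if iv == i else (v, v)
    lo, hi = apply(op, ulo, vlo), apply(op, uhi, vhi)
    return lo if lo == hi else (i, lo, hi)   # local non-redundancy check

# x0 OR x1 from the single-variable BDDs for x0 and x1:
x0, x1 = (0, 0, 1), (1, 0, 1)
assert apply(operator.or_, x0, x1) == (0, (1, 0, 1), 1)
```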
Quantifiers
87
In principle, we can even handle quantified Boolean formulae by expanding the
quantifiers:
∃ x ϕ(x) ≡ ϕ[0/x] ∨ ϕ[1/x]
∀ x ϕ(x) ≡ ϕ[0/x] ∧ ϕ[1/x]
So this comes down to restriction, a (quadratic) apply operation, followed by
reduction.
Alas, since SAT is just the problem of validity of an existentially quantified
Boolean formula, this is not going to be fast in general.

Variable Ordering
88
In general, the size of a BDD depends drastically on the chosen ordering of
variables (see the UnEqual example above).
It would be most useful to be able to construct a variable ordering that works
well with a given function. In practice, variable orderings are computed by
heuristic algorithms and can sometimes be improved with local optimization
and even simulated annealing algorithms.
Alas, it is NP-hard to determine whether there is an ordering that produces a
BDD of size at most some given bound.
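The quantifier expansion rules can be sketched directly on Boolean functions represented as ordinary Python predicates (all names here are invented toy stand-ins; a real BDD package would implement ∃/∀ via restrict and apply on the ROBDD):

```python
def restrict(phi, x, a):
    # phi[a/x]: fix variable x to the constant a
    return lambda env: phi({**env, x: a})

def exists(x, phi):
    # E x phi  ==  phi[0/x] v phi[1/x]
    return lambda env: restrict(phi, x, 0)(env) or restrict(phi, x, 1)(env)

def forall(x, phi):
    # A x phi  ==  phi[0/x] ^ phi[1/x]
    return lambda env: restrict(phi, x, 0)(env) and restrict(phi, x, 1)(env)

phi = lambda env: env['x'] & env['y']
assert exists('x', phi)({'y': 1}) == 1   # some x makes x AND y true
assert forall('x', phi)({'y': 1}) == 0   # but not every x does
```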
Summary
89
Satisfiability of Boolean formulae is a very expressive problem: lots of
other combinatorial (decision) problems can be rephrased as a Satisfiability
problem.
No polynomial time algorithm is known to tackle Satisfiability in general.
There are good reasons to believe that no such algorithm exists.
The Davis/Putnam algorithm often handles Satisfiability in polynomial
time, even on very large instances.
There are commercial versions of the algorithm using clever strategies and
data structures.
Resolution is a refutation-based method for satisfiability testing.
Binary decision diagrams often make it possible to manipulate very
complex Boolean functions.