Heuristic Search

Lecture 12: Branch and bound

Blai Bonet
Universidad Simón Bolívar, Caracas, Venezuela

© 2015 Blai Bonet


Goals for the lecture

• Branch and bound algorithm
• Parallelisation and super-linear speed ups
• Branch and cut algorithm


Branch and bound

BnB performs a depth-first traversal of the search tree using linear memory

Further, it maintains global variables to store the current best solution and its cost

BnB prunes nodes whose cost + heuristic estimate is bigger than or equal to the current best cost because they can't lead to better solutions

Assumptions:
– Heuristic function is admissible
– At a certain depth, all nodes are terminal (either goals or dead ends), like in TSP
Branch and bound: pseudocode

global unsigned best-bound = ∞
global Node best-solution = null

% Branch and bound
Node branch-and-bound():
    Node root = make-root-node(init())
    depth-first-branch-and-bound(root)
    return best-solution

% Depth-first visit for branch and bound
void depth-first-branch-and-bound(Node n):
    % base cases
    f = n.g + h(n.state)
    if f >= best-bound return
    if n.state.is-goal()
        best-bound = n.g
        best-solution = n
        return

    % depth-first recursion
    foreach <s,a> in n.state.successors()
        depth-first-branch-and-bound(n.make-node(s,a))


Properties of branch and bound

– Complete: it is a complete algorithm (assuming safeness)
– Optimality: yes, if the heuristic is admissible
– Time complexity: O(b^d)
– Space complexity: O(bd)

Time and space complexities are calculated on the canonical search tree with branching factor b and height d
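To make the pseudocode above concrete, here is a minimal runnable Python sketch of depth-first branch and bound. The problem interface (the successors, h and is_goal functions and the explicit path bookkeeping) is an assumption made for the example, not something prescribed by the slides.

import math

def branch_and_bound(init_state, successors, h, is_goal):
    # successors(s) yields (next_state, step_cost) pairs, h(s) is an
    # admissible heuristic, is_goal(s) tests for goal states -- these
    # names are illustrative assumptions, not part of the slides.
    best_bound = math.inf
    best_solution = None

    def dfs(state, g, path):
        nonlocal best_bound, best_solution
        f = g + h(state)
        if f >= best_bound:              # prune: cannot improve the incumbent
            return
        if is_goal(state):
            best_bound = g               # new upper bound on the optimal cost
            best_solution = path
            return
        for succ, cost in successors(state):
            dfs(succ, g + cost, path + [succ])

    dfs(init_state, 0, [init_state])
    return best_solution, best_bound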
Performance of branch and bound

Two important events in any run of branch and bound:
– The optimal solution is found (elapsed time to find the optimal solution)
– The algorithm terminates (elapsed time from the moment it finds the optimal solution)

The second time is known as the time spent to prove the optimality of the solution

In the best case, the second time is zero, but generally it is much bigger than the first time


Parallelisation and speed up

Branch and bound is amenable to parallelisation:
– Maintain the global variables in shared memory
– Parallelise the traversal of the branches in any way you want

If t_seq and t_par refer to the time spent by the sequential and parallel versions of branch and bound, the speed up achieved is

    speedup = t_seq / t_par

I.e., the speed up measures how much faster the parallel algorithm is

Theoretically, speedup ≤ n when parallelisation is over n processors, i.e. the speed up is at most linear in the number of cores
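As a small, purely hypothetical numerical example: if the sequential version takes t_seq = 120 seconds and the parallel version on n = 4 cores takes t_par = 40 seconds, then speedup = 120/40 = 3, consistent with the bound speedup ≤ 4.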
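A minimal sketch of this recipe, assuming the same hypothetical problem interface as the sequential sketch above: the incumbent (best bound and best solution) lives in shared memory protected by a lock, and each worker thread runs depth-first branch and bound on one subtree below the root.

import math
import threading
from concurrent.futures import ThreadPoolExecutor

def parallel_bnb(init_state, successors, h, is_goal, workers=4):
    best_bound = math.inf
    best_solution = None
    lock = threading.Lock()                  # protects the shared incumbent

    def dfs(state, g, path):
        nonlocal best_bound, best_solution
        if g + h(state) >= best_bound:       # prune against the shared bound
            return
        if is_goal(state):
            with lock:
                if g < best_bound:           # re-check under the lock
                    best_bound = g
                    best_solution = path
            return
        for succ, cost in successors(state):
            dfs(succ, g + cost, path + [succ])

    # one task per subtree below the root; any other split of the work is fine
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for succ, cost in successors(init_state):
            pool.submit(dfs, succ, cost, [init_state, succ])
    return best_solution, best_bound

Reading the bound without the lock only risks missing a very recent improvement, which costs extra work but never correctness; note also that in CPython the GIL limits the real speed up of this sketch, so it mainly illustrates the structure of the shared incumbent.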
Observed super-linear speed ups

However, parallelised BnB often shows super-linear speed up

How come? Is the theory wrong?

Parallelised BnB explores the tree more uniformly and finds good solutions quicker, which translates into more pruning

The theory isn't wrong: it says that the speed up must be measured against the best sequential algorithm

Conclusion: sequential BnB can be improved by simulating a parallel machine within a single core
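As an illustration of this conclusion (not an algorithm from the slides), here is a single-core sketch, again assuming the same hypothetical problem interface: keep one explicit DFS stack per subtree below the root and expand the stacks in round-robin order, so a good solution found in any subtree immediately tightens the bound used to prune all of them.

import math

def interleaved_bnb(init_state, successors, h, is_goal):
    best_bound, best_solution = math.inf, None
    # one explicit DFS stack of (state, g, path) entries per subtree below the root
    stacks = [[(s, c, [init_state, s])] for s, c in successors(init_state)]
    while any(stacks):
        for stack in stacks:                     # round-robin over the subtrees
            if not stack:
                continue
            state, g, path = stack.pop()
            if g + h(state) >= best_bound:       # prune against the current bound
                continue
            if is_goal(state):
                best_bound, best_solution = g, path
                continue
            for succ, cost in successors(state):
                stack.append((succ, g + cost, path + [succ]))
    return best_solution, best_bound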
Branch and cut

Pruning in BnB can be understood as follows:
– At node n, BnB computes a lower bound f(n) = g(n) + h(n) on the cost of all solutions below n
– This lower bound is compared with the current upper bound on the cost of the optimal solution
– If the lower bound f(n) is bigger than the upper bound, node n is pruned, as no solution below n can improve on the best current solution

The more general idea involves the computation of lower bounds and the maintenance of upper bounds

Branch and cut algorithms adopt this view; BnB is a special case
Summary

• BnB maintains an upper bound on the optimal solution cost

• The search tree is explored in depth-first fashion, pruning branches that can't lead to improvements on the current upper bound

• Parallelisation and multi-threading may result in improved performance