Lecture 9. Conclusion.

Algorithms (IX)
Guoqiang Li
School of Software, Shanghai Jiao Tong University
Q: What have we learned in Algorithms?
Algorithm Design
• Basic methodologies:
• Divide and Conquer (Chapter 2)
• Master Theorem
• Recursion (Chapter 0 & 1)
• Advanced topics:
• Dynamic Programming (Chapter 6)
• Filling in Squares.
• Complexity.
• Greedy Algorithm (Chapter 5)
• Linear Programming (Chapter 7)
• Simplex.
• System Design.
• Duality.
• Randomized Algorithm
• Approximation Algorithm
Algorithm Analysis
• Basic Knowledge
• Big-O Notation (Chapter 0)
• Recursion and Loop (Chapter 1)
• Geometric Series Analysis (Chapter 2)
• Advanced Methodology:
• Probability Analysis (Chapter 1)
• Amortized Analysis (Chapter 5)
• Competitive Analysis (Chapter 9)
Standard Algorithms
• Arithmetic Algorithm
• Sorting
• Searching
• Strongly Connected Components
• Finding shortest paths in graphs
• Minimum spanning trees in graphs
• Matchings in bipartite graphs
• Maximum flows in networks
Advanced Data Structure
• Trees, Graphs
• Kripke Structure, Automata
• Priority Queue (Chapter 4)
• Disjoint Set (Chapter 5)
What Do I Consider Important in Algorithms?
I. Proof and Witness
Proof and Witness
• Mathematical Induction.
• Proof of iff, =, ⇔
• Deduction.
Examples
log n! = Θ(n log n)
A directed graph has a cycle if and only if its depth-first search reveals
a back edge.
An undirected graph is bipartite if and only if it contains no cycles of
odd length.
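As a reminder of how the first claim is argued (a standard derivation, not spelled out on the slides): each of the n factors of n! is at most n, and the largest n/2 of them are each at least n/2, so

log n! = Σ_{i=1}^{n} log i ≤ n log n = O(n log n)
log n! ≥ Σ_{i=⌈n/2⌉}^{n} log i ≥ (n/2) log(n/2) = Ω(n log n)

Together these give log n! = Θ(n log n).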
II. Reductions
Bipartite Matching
[Figure: a bipartite graph with the boys Al, Bob, Chet, Dan on one side and the girls Alice, Beatrice, Carol, Danielle on the other, with edges joining compatible pairs.]
Bipartite Matching
[Figure: the same graph with a source s attached to every boy and a sink t attached to every girl, turning the matching problem into a maximum-flow problem.]
Reductions
• We want to solve Problem P.
• We already have an algorithm that solves Problem Q.
• If any subroutine for Q can also be used to solve P, we say P
reduces to Q.
• Often, P is solvable by a single call to Q’s subroutine, which
means any instance x of P can be transformed into an instance y
of Q such that P(x) can be deduced from Q(y).
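As a concrete instance, here is a minimal sketch (my own illustration; the preference lists below are hypothetical, not taken from the figure) of the reduction above: build a flow network with unit capacities, s → each boy, each girl → t, and boy → girl for each compatible pair, then run a BFS-based augmenting-path (Edmonds–Karp) max-flow routine.

from collections import deque

def max_flow(cap, s, t):
    # Edmonds-Karp: repeatedly augment along a shortest s-t path (unit capacities here).
    n = len(cap)
    flow = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if cap[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:          # no augmenting path left
            return flow
        v = t
        while v != s:                # push one unit of flow along the path found
            u = parent[v]
            cap[u][v] -= 1
            cap[v][u] += 1
            v = u
        flow += 1

# Reduction: a matching instance becomes a flow instance.
boys = ["Al", "Bob", "Chet", "Dan"]
girls = ["Alice", "Beatrice", "Carol", "Danielle"]
likes = {"Al": ["Alice", "Carol"], "Bob": ["Beatrice"],          # hypothetical preferences
         "Chet": ["Alice", "Danielle"], "Dan": ["Beatrice", "Carol"]}

n = 2 + len(boys) + len(girls)       # nodes: s, the boys, the girls, t
s, t = 0, n - 1
cap = [[0] * n for _ in range(n)]
for i, b in enumerate(boys):
    cap[s][1 + i] = 1                                        # s -> boy
    for g in likes[b]:
        cap[1 + i][1 + len(boys) + girls.index(g)] = 1       # boy -> girl
for j in range(len(girls)):
    cap[1 + len(boys) + j][t] = 1                            # girl -> t

print("maximum matching size:", max_flow(cap, s, t))         # prints 4 for this instance

The size of the maximum flow equals the size of the maximum matching, which is exactly the P-reduces-to-Q pattern described above.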
III. Divide-and-Conquer Vs. Dynamic Programming
Master Theorem
If T(n) = a T(⌈n/b⌉) + O(n^d) for some constants a > 0, b > 1, and d ≥ 0, then

T(n) = O(n^d)             if d > log_b a
T(n) = O(n^d log n)       if d = log_b a
T(n) = O(n^(log_b a))     if d < log_b a
The Proof of the Theorem
Proof:
• Assume that n is a power of b.
• The size of the subproblems decreases by a factor of b with each level of recursion, and therefore reaches the base case after log_b n levels, which is the height of the recursion tree.
• Its branching factor is a, so the k-th level of the tree is made up of a^k subproblems, each of size n/b^k:

a^k × O((n/b^k)^d) = O(n^d) × (a/b^d)^k

• As k goes from 0 to log_b n, these numbers form a geometric series with ratio a/b^d, which comes down to three cases.
The Proof of the Theorem
• The ratio is less than 1.
Then the series is decreasing, and its sum is just given by its first term, O(n^d).
• The ratio is greater than 1.
The series is increasing, and its sum is given by its last term, O(n^(log_b a)).
• The ratio is exactly 1.
In this case all O(log n) terms of the series are equal to O(n^d).
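A tiny sketch that mechanizes this case analysis (the function name and interface are mine, not from the text):

import math

def master_theorem(a, b, d):
    # Classify T(n) = a*T(n/b) + O(n^d) according to the three cases above.
    c = math.log(a, b)                  # log_b a
    if d > c:
        return f"O(n^{d})"
    if abs(d - c) < 1e-12:
        return f"O(n^{d} log n)"
    return f"O(n^{round(c, 3)})"        # d < log_b a

print(master_theorem(2, 2, 1))   # mergesort: O(n^1 log n)
print(master_theorem(4, 2, 1))   # O(n^2.0)
print(master_theorem(3, 4, 0))   # O(n^0.792), the recurrence of the Chapter 2 example below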
The Problem
• When a spell checker encounters a possible misspelling, it looks
in its dictionary for other words that are close by.
• Q: What is the appropriate notion of closeness in this case?
• A natural measure of the distance between two strings is the
extent to which they can be aligned, or matched up.
• Technically, an alignment is simply a way of writing the strings
one above the other.
Two possible alignments of SNOWY and SUNNY:

S − N O W Y        − S N O W − Y
S U N N − Y        S U N − − N Y
Cost: 3            Cost: 5
The Problem
• The cost of an alignment is the number of columns in which the
letters differ.
• And the edit distance between two strings is the cost of their best
possible alignment.
• Edit distance is so named because it can also be thought of as the minimum number of edits (insertions, deletions, and substitutions of characters) needed to transform the first string into the second.
A Dynamic Programming Solution
• When solving a problem by dynamic programming, the most crucial question is, What are the subproblems?
• Our goal is to find the edit distance between two strings x[1, . . . , m] and y[1, . . . , n].
• For every i, j with 1 ≤ i ≤ m and 1 ≤ j ≤ n, let
• E(i, j): the edit distance between some prefix of the first string, x[1, . . . , i], and some prefix of the second, y[1, . . . , j].
• E(i, j) = min{1 + E(i − 1, j), 1 + E(i, j − 1), diff(i, j) + E(i − 1, j − 1)}, where diff(i, j) is defined to be 0 if x[i] = y[j] and 1 otherwise.
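A minimal sketch of filling in the table of subproblems E(i, j) bottom-up (my own code, directly following the recurrence above; Python indexes strings from 0, hence the i − 1 / j − 1 offsets):

def edit_distance(x, y):
    # E[i][j] = edit distance between x[1..i] and y[1..j].
    m, n = len(x), len(y)
    E = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        E[i][0] = i                     # delete all of x[1..i]
    for j in range(n + 1):
        E[0][j] = j                     # insert all of y[1..j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            diff = 0 if x[i - 1] == y[j - 1] else 1
            E[i][j] = min(1 + E[i - 1][j],          # delete x[i]
                          1 + E[i][j - 1],          # insert y[j]
                          diff + E[i - 1][j - 1])   # match or substitute
    return E[m][n]

print(edit_distance("SNOWY", "SUNNY"))              # 3, as in the alignment above
print(edit_distance("EXPONENTIAL", "POLYNOMIAL"))   # 6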
An Example
Edit distance between EXPONENTIAL and POLYNOMIAL,
subproblem E(4, 3) corresponds to the prefixes EXPO and POL. The
rightmost column of their best alignment must be one of the following:
O          −          O
−    or    L    or    L

Thus, E(4, 3) = min{1 + E(3, 3), 1 + E(4, 2), 1 + E(3, 2)}.
[Figure: (a) the subproblem E(i, j), whose value depends on the entries (i − 1, j), (i, j − 1), and (i − 1, j − 1), with the goal entry at (m, n); (b) the full table of edit-distance subproblems for EXPONENTIAL vs. POLYNOMIAL:]

        E   X   P   O   N   E   N   T   I   A   L
    0   1   2   3   4   5   6   7   8   9  10  11
P   1   1   2   2   3   4   5   6   7   8   9  10
O   2   2   2   3   2   3   4   5   6   7   8   9
L   3   3   3   3   3   3   4   5   6   7   8   8
Y   4   4   4   4   4   4   4   5   6   7   8   9
N   5   5   5   5   5   4   5   4   5   6   7   8
O   6   6   6   6   5   5   5   5   5   6   7   8
M   7   7   7   7   6   6   6   6   6   6   7   8
I   8   8   8   8   7   7   7   7   7   6   7   8
A   9   9   9   9   8   8   8   8   8   7   6   7
L  10  10  10  10   9   9   9   9   9   8   7   6

The answer, E(11, 10) = 6, is the bottom-right entry.
Chapter by Chapter
Chapter 0. Prologue
Big-O Notation
• Multiplicative constants can be omitted: 14n^2 becomes n^2.
• n^a dominates n^b if a > b; for instance, n^2 dominates n.
• a^n dominates b^n if a > b; for instance, 3^n dominates 2^n.
• Any exponential dominates any polynomial: 3^n dominates n^5.
• Any polynomial dominates any logarithm: n dominates (log n)^3. This also means, for example, that n^2 dominates n log n.
An Example
0.1. In each of the following situations, indicate whether f = O(g), or f = Ω(g), or both (in which case f = Θ(g)).

      f(n)                  g(n)
(a)   n − 100               n − 200
(b)   n^(1/2)               n^(2/3)
(c)   100n + log n          n + (log n)^2
(d)   n log n               10n log 10n
(e)   log 2n                log 3n
(f)   10 log n              log(n^2)
(g)   n^1.01                n log^2 n
(h)   n^2 / log n           n (log n)^2
(i)   n^0.1                 (log n)^10
(j)   (log n)^(log n)       n / log n
(k)   √n                    (log n)^3
(l)   n^(1/2)               5^(log_2 n)
(m)   n · 2^n               3^n
(n)   2^n                   2^(n+1)
(o)   n!                    2^n
(p)   (log n)^(log n)       2^((log_2 n)^2)
(q)   Σ_{i=1}^{n} i^k       n^(k+1)
Chapter 1. Algorithms with Numbers
Recursion Analysis
MODEXP(x, y, N):
    Input: two n-bit integers x and N, and an integer exponent y
    if y = 0 then return 1
    z = MODEXP(x, ⌊y/2⌋, N)
    if y is even then
        return z^2 mod N
    else
        return x · z^2 mod N
    end

Another formulation:

x^y mod N = (x^⌊y/2⌋)^2 mod N         if y is even
x^y mod N = x · (x^⌊y/2⌋)^2 mod N     if y is odd
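A runnable version of the same recursion (my own translation; Python's built-in pow(x, y, N) computes the same thing):

def modexp(x, y, N):
    # Compute x^y mod N by repeated squaring, as in MODEXP above.
    if y == 0:
        return 1
    z = modexp(x, y // 2, N)        # recurse on floor(y/2)
    if y % 2 == 0:
        return (z * z) % N
    return (x * z * z) % N

print(modexp(2, 125, 29), pow(2, 125, 29))   # the two results agree

Since y is halved at every call, there are O(n) recursive calls for an n-bit exponent.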
Arithmetic Algorithm
What is the GCD of 42 and 96? Can you prove it?
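For the record, Euclid's rule gcd(a, b) = gcd(b, a mod b) settles it: gcd(96, 42) = gcd(42, 12) = gcd(12, 6) = gcd(6, 0) = 6. A one-line check (my own snippet):

def gcd(a, b):
    # Euclid's algorithm: gcd(a, b) = gcd(b, a mod b).
    while b:
        a, b = b, a % b
    return a

print(gcd(42, 96))   # 6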
Chapter 2. Divide-and-Conquer
An Example
Foo(x):
    if x > 1 then
        Printf("Hello World")
        Foo(x/4)
        Foo(x/4)
        Foo(x/4)
    end
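The number of lines this prints satisfies T(x) = 3T(x/4) + O(1), so by the Master Theorem (a = 3, b = 4, d = 0) it is Θ(x^(log_4 3)) ≈ Θ(x^0.79). A quick empirical check (my own code):

import math

def foo_prints(x):
    # Count the "Hello World" lines Foo(x) would print.
    if x <= 1:
        return 0
    return 1 + 3 * foo_prints(x / 4)

for x in [4**5, 4**8, 4**10]:
    ratio = foo_prints(x) / x ** math.log(3, 4)
    print(x, foo_prints(x), round(ratio, 3))   # the ratio settles near a constant (about 0.5)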
Chapter 3. Decompositions of Graphs
Our Tools and Weapons
• DFS (Explore)
• DAG, Linearization
• SCC
Exercise 1
3.16. Suppose a CS curriculum consists of n courses, all of them mandatory. The prerequisite graph G
has a node for each course, and an edge from course v to course w if and only if v is a prerequisite
for w. Find an algorithm that works directly with this graph representation, and computes the
minimum number of semesters necessary to complete the curriculum (assume that a student
can take any number of courses in one semester). The running time of your algorithm should be
linear.
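One standard way to attack 3.16 (a sketch of mine, not an official solution): the answer is the number of courses on a longest path in the prerequisite DAG, which can be computed in linear time over a topological order.

from collections import defaultdict, deque

def min_semesters(n, prereq_edges):
    # Longest path (counted in courses) of the prerequisite DAG = minimum number of semesters.
    adj = defaultdict(list)
    indeg = [0] * n
    for v, w in prereq_edges:          # course v must be taken before course w
        adj[v].append(w)
        indeg[w] += 1
    sem = [1] * n                      # earliest semester in which each course can be taken
    q = deque(v for v in range(n) if indeg[v] == 0)
    while q:
        v = q.popleft()
        for w in adj[v]:
            sem[w] = max(sem[w], sem[v] + 1)
            indeg[w] -= 1
            if indeg[w] == 0:
                q.append(w)
    return max(sem)

print(min_semesters(4, [(0, 1), (1, 2), (0, 3)]))   # 3: the chain 0 -> 1 -> 2 forces three semesters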
Exercise 2
3.22. Give an efficient algorithm which takes as input a directed graph G = (V, E), and determines
whether or not there is a vertex s ∈ V from which all other vertices are reachable.
Exercise 3
A bipartite graph is a graph G = (V , E) whose vertices can be
partitioned into two sets (V = V1 ∪ V2 and V1 ∩ V2 = ∅) such that
there are no edges between vertices in the same set (for instance, if
u, v ∈ V1 , then there is no edge between u and v).
(a) Give a linear-time algorithm to determine whether an undirected
graph is bipartite.
(b) Prove that an undirected graph is bipartite if and only if it
contains no cycles of odd length.
(c) At most how many colors are needed to color in an undirected
graph with exactly one odd length cycle?
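For part (a), one linear-time sketch (mine, not an official solution) is BFS 2-coloring: give each newly reached vertex the color opposite to its parent's and report failure whenever an edge joins two same-colored vertices, which by part (b) happens exactly when there is an odd cycle.

from collections import deque

def is_bipartite(n, edges):
    # BFS 2-coloring; returns True iff the undirected graph on vertices 0..n-1 is bipartite.
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    color = [-1] * n
    for s in range(n):                 # the graph may be disconnected
        if color[s] != -1:
            continue
        color[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if color[v] == -1:
                    color[v] = 1 - color[u]
                    q.append(v)
                elif color[v] == color[u]:
                    return False       # an odd cycle has been found
    return True

print(is_bipartite(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))   # True  (a 4-cycle)
print(is_bipartite(3, [(0, 1), (1, 2), (2, 0)]))           # False (a 3-cycle)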
Exercise 4
Prove that given a connected undirected graph G = (V , E), there
exists a node v ∈ V , such that G is still connected after deleting v.
Exercise 5
Given a directed graph, work out an efficient algorithm to find all cross edges during a DFS.
Chapter 4. Path in Graphs
Our Tools and Weapons
• BFS
• Shortest Path
• Shortest Path in DAG
Exercise 1
4.14. You are given a strongly connected directed graph G = (V, E) with positive edge weights along
with a particular node v0 ∈ V . Give an efficient algorithm for finding shortest paths between all
pairs of nodes, with the one restriction that these paths must all pass through v0 .
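One standard approach to 4.14 (a sketch of mine, not an official solution): every allowed path decomposes as (u ⇝ v0) followed by (v0 ⇝ v), so a single Dijkstra run from v0 on the reversed graph plus one on the original graph gives dist(u, v) = dist_to_v0[u] + dist_from_v0[v] for all pairs.

import heapq

def dijkstra(adj, src):
    # adj[u] = list of (v, weight); returns shortest distances from src.
    dist = {u: float("inf") for u in adj}
    dist[src] = 0
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

def all_pairs_through(adj, v0):
    # dist(u, v) over paths forced through v0 = dist(u, v0) + dist(v0, v).
    radj = {u: [] for u in adj}            # reversed graph
    for u in adj:
        for v, w in adj[u]:
            radj[v].append((u, w))
    to_v0 = dijkstra(radj, v0)             # shortest u -> v0 distances
    from_v0 = dijkstra(adj, v0)            # shortest v0 -> v distances
    return {(u, v): to_v0[u] + from_v0[v] for u in adj for v in adj}

adj = {0: [(1, 2)], 1: [(2, 1)], 2: [(0, 4), (3, 7)], 3: [(0, 1)]}   # hypothetical weights
print(all_pairs_through(adj, 2)[(0, 3)])   # 10, i.e. 0 -> 1 -> 2 -> 3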
Exercise 2
Give an efficient algorithm that takes as input a directed acyclic graph
G = (V , E), and two vertices s, t ∈ V , and outputs the number of
different directed paths from s to t in G.
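One way to do this exercise (a sketch of mine): since G is acyclic, count(v) = the number of v-to-t paths satisfies count(t) = 1 and count(v) = Σ over edges (v, w) of count(w), which memoization evaluates in linear time.

from functools import lru_cache

def count_paths(adj, s, t):
    # Number of distinct directed s-t paths in a DAG, by memoized DP.
    @lru_cache(maxsize=None)
    def count(v):
        if v == t:
            return 1
        return sum(count(w) for w in adj.get(v, []))
    return count(s)

adj = {0: [1, 2], 1: [2, 3], 2: [3]}       # a small DAG
print(count_paths(adj, 0, 3))              # 3 paths: 0-1-3, 0-1-2-3, 0-2-3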
Exercise 3
Give an efficient algorithm to solve:
Input: an undirected graph G = (V, E) in which every edge e is assigned a weight ℓ_e > 0, and a specific edge e ∈ E.
Output: the length of the shortest cycle that contains e.
Exercise 4
Given an undirected graph with n nodes in which the degree of each node is ≥ 1, there must exist a cycle, and the number of nodes with degree ≥ 3 is less than 2 · ⌈log₂ n⌉.
Chapter 5. Greedy Algorithms
Our Tools and Weapons
• MST
• Cut Theorem
Chapter 6. Dynamic Programming
Exercise 1
6.2. You are going on a long trip. You start on the road at mile post 0. Along the way there are n
hotels, at mile posts a1 < a2 < · · · < an , where each ai is measured from the starting point. The
only places you are allowed to stop are at these hotels, but you can choose which of the hotels
you stop at. You must stop at the final hotel (at distance an ), which is your destination.
You’d ideally like to travel 200 miles a day, but this may not be possible (depending on the spacing
of the hotels). If you travel x miles during a day, the penalty for that day is (200 − x)2 . You want
to plan your trip so as to minimize the total penalty—that is, the sum, over all travel days, of the
daily penalties.
Give an efficient algorithm that determines the optimal sequence of hotels at which to stop.
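One possible DP sketch for 6.2 (mine, not an official solution): let cost(j) be the minimum total penalty of a trip that ends the day at hotel j; then cost(j) = min over previous stops i of cost(i) + (200 − (a_j − a_i))², giving an O(n²) algorithm.

def min_penalty(a):
    # a[j] = milepost of hotel j, sorted increasingly; the trip starts at milepost 0
    # and must end at the last hotel. Returns the minimum total penalty.
    posts = [0] + list(a)
    n = len(posts)
    cost = [float("inf")] * n
    cost[0] = 0
    for j in range(1, n):
        for i in range(j):                 # i = the stop made the day before reaching hotel j
            cost[j] = min(cost[j], cost[i] + (200 - (posts[j] - posts[i])) ** 2)
    return cost[n - 1]

print(min_penalty([190, 420, 550, 660]))   # 2600: stop at mileposts 190, 420, and 660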
Exercise 2
Given two strings x = x1 x2 . . . xn and y = y1 y2 . . . ym , we wish to find
the length of their longest common subsequence. Show how to do this
in time O(mn).
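A minimal O(mn) table for the longest common subsequence, in the same spirit as the edit-distance recurrence (my own sketch):

def lcs_length(x, y):
    # L[i][j] = length of the longest common subsequence of x[1..i] and y[1..j].
    n, m = len(x), len(y)
    L = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if x[i - 1] == y[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1       # extend a common subsequence by x[i]
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    return L[n][m]

print(lcs_length("EXPONENTIAL", "POLYNOMIAL"))   # 6, e.g. P, O, N, I, A, L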
Chapter 7. Linear Programming
An Example
max x + 2y − 1
2x + y ≤ 3
x + 3y ≤ 5
x, y ≥ 0
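For the record, a quick check by hand (my own computation, not part of the slides): the feasible region is a quadrilateral with vertices (0, 0), (3/2, 0), (0, 5/3), and the intersection of the two constraint lines, and the optimum of a linear objective is attained at a vertex.

2x + y = 3 and x + 3y = 5  ⇒  (x, y) = (4/5, 7/5)
x + 2y − 1 = 4/5 + 14/5 − 1 = 13/5

The other vertices give −1, 1/2, and 7/3 respectively, so the maximum is 13/5, attained at (x, y) = (4/5, 7/5).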