The Sample Average Approximation Method Applied to

The Sample Average Approximation Method Applied to
Stochastic Routing Problems: A Computational Study
Bram Verweij∗, Shabbir Ahmed†‡, Anton J. Kleywegt∗,
George Nemhauser∗‡, and Alexander Shapiro∗
Georgia Institute of Technology
School of Industrial and Systems Engineering
Atlanta, GA 30332-0205
November 7, 2002
Abstract
The sample average approximation (SAA) method is an approach for solving stochastic
optimization problems by using Monte Carlo simulation. In this technique the expected objective
function of the stochastic problem is approximated by a sample average estimate derived from
a random sample. The resulting sample average approximating problem is then solved by
deterministic optimization techniques. The process is repeated with different samples to obtain
candidate solutions along with statistical estimates of their optimality gaps.
We present a detailed computational study of the application of the SAA method to solve
three classes of stochastic routing problems. These stochastic problems involve an extremely
large number of scenarios and first-stage integer variables. For each of the three problem classes,
we use decomposition and branch-and-cut to solve the approximating problem within the SAA
scheme. Our computational results indicate that the proposed method is successful in solving
problems with up to 21694 scenarios to within an estimated 1.0% of optimality. Furthermore, a
surprising observation is that the number of optimality cuts required to solve the approximating
problem to optimality does not significantly increase with the size of the sample. Therefore, the
observed computation times needed to find optimal solutions to the approximating problems
grow only linearly with the sample size. As a result, we are able to find provably near-optimal
solutions to these difficult stochastic programs using only a moderate amount of computation
time.
1
Introduction
This paper addresses two-stage Stochastic Routing Problems (SRPs), where the first stage consists
of selecting a route, i.e., a path or a tour, through a graph subject to arc failures or delays, and
the second stage involves some recourse, such as a penalty or a re-routing decision. The overall
objective is to minimize the sum of the first-stage routing cost and the expected recourse cost. A
generic formulation of this class of problems is
z ∗ = min cT x + EP [Q(x, ξ(ω))],
x∈X
∗
Supported by NSF grant DMI-0085723.
Supported by NSF grant DMI-0099726.
‡
Supported by NSF grant DMI-0121495.
†
1
(1)
where
Q(x, ξ(ω)) = min{q(ω)T y | Dy ≥ h(ω) − T (ω)x},
y≥0
(2)
x denotes the first-stage routing decision, X denotes the first-stage feasible set involving route
defining constraints, i.e., X = R ∩ {0, 1}d for some polyhedron R of dimension d, ω ∈ Ω denotes a
scenario that is unknown when the first-stage decision x has to be made, but that is known when
the second-stage recourse decision y is made, Ω is the set of all scenarios, and c denotes the routing
cost. We assume that the probability distribution P on Ω is known in the first stage. The quantity
Q(x, ξ(ω)) represents the optimal value of the second-stage recourse problem corresponding to the
first-stage route x and the parameters ξ(ω) = (q(ω), h(ω), T (ω)). Note that the first-stage variables
x are binary in this problem. Hence, SRP belongs to the class of two-stage stochastic programs
with integer first-stage variables.
The difficulty in solving SRP is two-fold. First, the evaluation of E[Q(x, ξ(ω))] for a given value
of x requires the solution of numerous second-stage optimization problems. If Ω contains a finite
number of scenarios {ω1 , ω2 , . . . , ω|Ω| }, with associated probabilities pk , k = 1, 2, . . . , |Ω|, then the
expectation can be evaluated as the finite sum
E[Q(x, ξ(ω))] =
|Ω|
pk Q(x, ξ(ωk )).
(3)
k=1
However, the number of scenarios grows exponentially fast with the dimension of the data. For
example, consider a graph with 100 arcs, each of which has a parameter with three possible values.
This leads to |Ω| = 3100 scenarios for the parameter vector of the graph, making a direct application
of (3) impractical. Second, E[Q(x, ξ(ω))], regarded as a function defined on Rd (and not just on
the discrete points X = R ∩ {0, 1}d ), is a polyhedral function of x. Consequently, SRP involves
minimizing a piecewise linear objective over an integer feasible region and can be very difficult,
even for problems with a modest number of scenarios.
Early solution approaches for stochastic routing problems were based on heuristics [35] or dynamic programming [1, 2]. These methods were tailored to very specific problem structures and
are not applicable in general. Exact mathematical programming methods for this class of problems have mostly been restricted to cases with a small number of scenarios, to allow for the exact
evaluation of the second-stage expected value function E[Q(x, ξ(ω))]. Exploiting this property, Laporte et al. [18, 19, 20, 21] solved various stochastic routing problems using the integer L-shaped
algorithm of Laporte and Louveaux [17]. Wallace [38, 39, 40] has studied the exact evaluation of
the second-stage expected value function for problems in which the first stage feasible set is convex
and the second stage involves routing decisions.
For stochastic programs with a prohibitively large number of scenarios, a number of sampling
based approaches have been proposed, for example to estimate function values, gradients, optimality cuts, or bounds for the second-stage expected value function. We classify such sampling based
approaches into two main groups: interior sampling and exterior sampling methods. In interior
sampling methods, the samples are modified during the optimization procedure. Samples may be
modified by adding to previously generated samples, by taking subsets of previously generated samples, or by generating completely new samples. For example, methods that modify samples within
the L-shaped algorithm for stochastic linear programming have been suggested by Van Slyke and
Wets [36], Higle and Sen [12] (stochastic decomposition) and Infanger [14] (importance sampling).
For discrete stochastic problems, an interior sampling branch-and-bound method was developed by
Norkin et al. [24, 25]. In exterior sampling methods, a sample ω 1 , ω 2 , . . . , ω N of N sample scenarios
2
is generated from Ω according to probability distribution P (we use the superscript convention for
sample values), and then a deterministic optimization problem specified by the generated sample is
solved. This procedure may be repeated by generating several samples and solving several related
optimization problems. For example, the Sample Average Approximation (SAA) method is an
exterior sampling method in which the expected value function E[Q(x, ξ(ω))] is approximated by
n
the sample average function N
n=1 Q(x, ξ(ω ))/N . The Sample Average Approximation problem
zN = min cT x +
x∈X
N
1 Q(x, ξ(ω n )),
N
(4)
n=1
corresponding to the original two-stage stochastic program (1) (also called the “true” problem) is
then solved using a deterministic optimization algorithm. The optimal value zN and an optimal
solution x
ˆ to the SAA problem provide estimates of their true counterparts in the stochastic program (1). Over the years the SAA method has been used in various ways under different names,
see e.g. Rubinstein and Shapiro [30], and Geyer and Thompson [8]. SAA methods for stochastic
linear programs have been proposed, among others, by Plambeck et al. [28], Shapiro and Homemde-Mello [32, 33], and Mak, Morton, and Wood [22]. Kleywegt, Shapiro, and Homem-de-Mello [16]
analyzed the behavior of the SAA method when applied to stochastic discrete optimization problems. However, computational experience with this method is limited.
In this paper, we present a detailed computational study of the application of the SAA method
combined with a decomposition based branch-and-cut framework to solve three classes of stochastic
routing problems (SRPs). The first SRP is a shortest path problem in which arcs have deterministic
costs and random travel times, and we have to pay a penalty for exceeding a given arrival deadline.
In the second SRP, we are again looking for a shortest path, but now arcs in the graph can fail and
the recourse action has to restore the feasibility of the first-stage path at minimum cost. In the
third SRP, the recourse is the same as in the first SRP, but now the first-stage solution should be
a tour instead of a path. The first and third SRPs have continuous recourse decisions, whereas the
second SRP has discrete recourse decisions and a recourse problem with the integrality property,
so that for all three SRPs it is sufficient to solve continuous recourse problems.
Integrating the branch-and-cut method within the SAA method allows us to solve problems
with huge sample spaces Ω. We have been succesful in solving problems with up to 21694 scenarios
to within an estimated 1.0% of optimality. Furthermore, a surprising observation is that the number
of optimality cuts required to solve the SAA problem to optimality does not significantly increase
with the sample size. We suggest an explanation of this phenomenon.
The remainder of this paper is organized as follows. Section 2 gives a general description of
our solution methodology for two-stage stochastic routing problems. In Section 3 we introduce the
specific problem classes and discuss their computational complexity. We proceed in Section 4 by
describing how the method of Section 2 applies to the specific problem classes of Section 3. Our
computational results along with a discussion of the empirically observed stability of the number
of optimality cuts are given in Section 5. Finally, our conclusions are presented in Section 6.
2
Methodology
Our solution methodology for problem (1) consists of the integration of the SAA method, to deal
with the extremely large sets Ω, and a decomposition based branch-and-cut algorithm, to deal with
the integer variables in the SAA problems. We assume that problem (1) has relatively complete
recourse (a two-stage stochastic program is said to have relatively complete recourse if for each
3
feasible first-stage solution x ∈ X and each scenario ω ∈ Ω, there exists a second-stage solution
y ≥ 0 such that Dy ≥ h(ω) − T (ω)x).
2.1
The Sample Average Approximation Method
As mentioned earlier, the SAA method proceeds by solving the SAA problem (4) repeatedly. By
generating M independent samples, each of size N , and solving the associated SAA problems,
ˆ1 , x
ˆ2 , . . . , x
ˆM are obtained. Let
objective values zN1 , zN2 , . . . , zNM and candidate solutions x
z¯N =
M
1 m
zN
M
(5)
m=1
denote the average of the optimal objective function values of the M SAA problems. It is wellknown that E[¯
zN ] ≤ z ∗ [25, 22]. Therefore, z¯N provides a statistical estimate for a lower bound on
the optimal value of the true problem.
ˆ + E[Q(ˆ
x, ξ(ω))] is an upper bound
For any feasible point x
ˆ ∈ X, clearly the objective value cT x
∗
for z . This upper bound can be estimated by
N
1 x) = c x
ˆ+ Q(ˆ
x, ξ(ω n )),
zˆN (ˆ
N
T
(6)
n=1
where {ω 1 , ω 2 , . . . , ω N } is a sample of size N . Typically N is chosen to be quite large, N > N ,
and the sample of size N is independent of the sample, if any, used to generate x
ˆ. Then we have
T
x) is an unbiased estimator of c x
ˆ + E[Q(ˆ
x, ξ(ω))], and hence, for any feasible solution x
ˆ,
that zˆN (ˆ
we have that E[ˆ
zN (ˆ
x)] ≥ z ∗ . The variances of the estimators z¯N and zˆN (ˆ
x) can be estimated by
M
m
2
1
zN − z¯N
(M − 1)M
(7)
N 2
1
T
n
c
=
x
ˆ
+
Q(ˆ
x
,
ξ(ω
))
−
z
ˆ
(ˆ
x
)
,
N
(N − 1)N (8)
σ
ˆz2¯
=
N
m=1
and
σ
ˆz2ˆ (ˆx)
N
n=1
respectively.
Note that the above procedure produces up to M different candidate solutions. It is natural
ˆ1 , x
ˆ2 , . . . , x
ˆM of the M SAA problems which has the
to take x
ˆ∗ as one of the optimal solutions x
smallest estimated objective value, that is,
x) : x
ˆ ∈ {ˆ
x1 , x
ˆ2 , . . . , x
ˆM } .
(9)
x
ˆ∗ ∈ arg min zˆN (ˆ
One can evaluate the quality of the solution x
ˆ∗ by computing the optimality gap estimate
x∗ ) − z¯N ,
zˆN (ˆ
(10)
x∗ ) is recomputed after performing the minimization in (9) with an independent sample
where zˆN (ˆ
to obtain an unbiased estimate. The estimated variance of this gap estimator is
σ
ˆz2ˆ
N
(ˆ
x∗ )−¯
zN
= σ
ˆz2ˆ
N
(ˆ
x∗ )
+σ
ˆz2¯ .
N
The above procedure for statistical evaluation of a candidate solution was suggested in Norkin
et al. [25] and developed in Mak et al. [22]. Convergence properties of the SAA method were studied
in Kleywegt et al. [16].
4
2.2
Solving SAA problems using Branch-and-Cut
The SAA method requires the solution of the SAA problem (4). This problem can also be written
as
min cT x +
x∈X
1
QN (x),
N
(11)
n
where QN (x) = N
n=1 Q(x, ξ(ω )), and Q is given by (2).
It is well-known that the functions Q(x, ξ(ω n )), and hence QN (x), are piecewise linear and convex in x. Thus, because of relatively complete recourse, QN (x) = max{aTi x − ai0 | i ∈ {1, 2, . . . , p}}
for all x ∈ X, for some {(ai0 , ai )}, i = 1, 2, . . . , p, and some positive integer p. Hence problem (11)
can be reformulated as a mixed-integer linear program with p constraints representing affine supports of QN , as follows.
min cT x + θ/N
subject to θ ≥
aTi x
(12a)
− ai0
for i ∈ {1, 2, . . . , p}
x∈X
(12b)
(12c)
For a given first-stage solution x, the optimal value of θ is equal to QN (x). The coefficients ai0 , ai
of the optimality cuts (12b) are given by the values and extreme subgradients of QN , which in
turn are given by the extreme point optimal dual solutions of the second-stage problems (2). The
number p of constraints in (12b) typically is very large. Instead of including all such constraints, we
generate only a subset of these constraints within a branch-and-cut framework. For a given firststage solution x, the second-stage problem of each sample scenario can be solved independently
of the others, providing a computationally convenient decomposition, and their dual solutions can
be used to construct optimality cuts. This is the well-known Benders or L-shaped decomposition
method [36].
The standard branch-and-cut procedure is an LP-based branch-and-bound algorithm for mixedinteger linear programs, in which a separation algorithm is used at each node of the branch-andbound tree. The separation algorithm takes as input a solution of an LP relaxation of the integer
program, and looks for inequalities that are satisfied by all integer feasible points, but that are
violated by the solution of the LP relaxation. These violated valid inequalities (cuts) are then
added to the LP relaxation. For more details on branch-and-cut algorithms we refer to Nemhauser
and Wolsey [23], Wolsey [43], and Verweij [37].
Our branch-and-cut procedure for (12) starts with some initial relaxation that does not include
any of the optimality cuts (12b), and adds violated valid inequalities as the algorithm proceeds.
This includes violated valid inequalities for X, and the optimality cuts (12b), which are found by
solving the second-stage subproblems. We only generate optimality cuts at solutions that satisfy
all first-stage constraints, including integrality. Our approach to generating optimality cuts is
similar to the one used in the earlier exact L-shaped methods (see e.g. Wets [42]), except that we
use the sample average function instead of (3), and embedding it in a branch-and-cut framework
to deal with integer variables in the first stage.
3
Problem Classes
Here we introduce the problem classes on which we perform our computational study. Sections 3.1, 3.2, and 3.3 deal with the shortest path problem with random travel times, the shortest
path problem with arc failures, and the traveling salesman problem with random travel times,
5
respectively. The computational complexity of these problems is discussed in Section 3.4. In Section 3.5 we analyze the mean value problems corresponding to the specific problem classes at hand.
This is of interest because it justifies the use of stochastic models.
Each of the problems considered next involves a graph G = (V, A) or G = (V, E). For each
+
−
(i) (δG
(i)) denotes the set of arcs in G leaving (entering) i, and δG (i) denotes the
node i ∈ V , δG
set of edges in G incident to i. For any S ⊆ V , γG (S) denotes theset of arcs or edges in G with
both endpoints in S. For any x ∈ R|A| and A ⊆ A, x(A ) denotes a∈A xa .
3.1
Shortest Path with Random Travel Times
In the Shortest Path problem with Random travel Times (SPRT), the input is a directed graph
G = (V, A), a source node s ∈ V and a sink node t ∈ V , arc costs c ∈ R|A| , a probability distribution
|A|
P of the vector of random travel times ξ(ω) ∈ R+ , and a deadline κ ∈ R. The problem is to find
an s − t path that minimizes the cost of the arcs it traverses plus the expected violation of the
deadline.
The variables x ∈ {0, 1}|A| represent the path, where xa = 1 if arc a is in the path and xa = 0
otherwise. The SPRT can be stated as the following two-stage stochastic integer program:
min
x∈{0,1}|A|
cT x + EP [Q(x, ξ(ω))]
(13a)


for i = s,
1,
+
−
(i)) − x(δG
(i)) = −1, for i = t,
subject to x(δG


0,
for i ∈ V \ {s, t},
x(γG (S)) ≤ |S| − 1,
(13b)
for all S ⊂ V with |S| ≥ 2,
(13c)
where
Q(x, ξ(ω)) = max ξ(ω)T x − κ, 0 .
The same recourse function was used by Laporte et al. [19] in the context of a vehicle routing
problem. Note that, if G does not contain negative-cost cycles, then the cycle
elimination constraints
(13c)
can
be
omitted.
That
is,
if
for
all
cycles
C
in
G
and
all
ω,
a∈C ca ≥ 0 and
a∈C ξa (ω) ≥ 0; then there is an optimal solution of (13a)–(13b) that is a path.
3.2
The Shortest Path Problem with Random Arc Failures
Given a directed graph G = (V, A), the set of reverse arcs A−1 is defined by A−1 = {a−1 | a ∈ A},
where for (i, j) ∈ A, the arc (i, j)−1 = (j, i) has head i and tail j. Note that although (i, j)
and (j, i)−1 have the same head and tail, they are different (parallel) arcs, with (i, j) ∈ A and
(j, i)−1 ∈ A−1 . In the Shortest Path problem with random Arc Failures (SPAF), the input is a
directed graph G = (V, A), a source node s ∈ V and a sink node t ∈ V , arc costs c1 ∈ R|A| , and
a probability distribution P of ξ(ω) = (c2 (ω), u(ω)) ∈ R2|A| × {0, 1}|A| . Here, c2 (ω) is a vector of
random arc and reverse arc costs, and u(ω) is a vector of random arc capacities, where ua (ω) = 0
denotes that arc a is not available and ua (ω) = 1 denotes that it is. Let G = (V, A ∪ A−1 ).
In the first stage, an s − t path P has to be chosen in G. After a first-stage path has been
chosen, the values of the random variables ξ(ω) = (c2 (ω), u(ω)) are observed, and then a secondstage decision is made. If an arc on the first-stage path fails, then the path becomes infeasible, and
the first-stage flow on that arc has to be canceled. Flow on other arcs in the first-stage path may
be canceled too. First-stage flow on an arc (i, j) ∈ A is canceled in the second stage by flow on its
6
reverse arc (i, j)−1 . Such cancelation of a unit flow incurs a cost of c2(i,j)−1 (ω). (If c2(i,j)−1 (ω) < 0,
then a refund is received if the first-stage flow on arc (i, j) ∈ A is canceled.) The second-stage
decision consists of choosing flows in G in such a way that the combination of the uncanceled flow
in the first-stage path P and the second-stage flow on arcs in A forms an s−t path in G. Specifically,
let x ∈ {0, 1}|A| represent the first-stage path, where xa = 1 if arc a is in the path and xa = 0
−1
−1
otherwise, and let y ∈ {0, 1}|A∪A | respresent the second-stage flow. For any y ∈ {0, 1}|A∪A | , let
y + , y − ∈ {0, 1}|A| be defined by ya+ = ya and ya− = ya−1 for all a ∈ A. Then let ∆y ∈ {−1, 0, 1}|A|
be defined by ∆y = y + − y − , that is, ∆ya = ya − ya−1 for all a ∈ A. For (x, y) to be feasible, it
is necessary that both x and x + ∆y be incidence vectors of s − t paths in G. The objective of
the SPAF is to minimize the sum of the cost of the first-stage path and the expected cost of the
second-stage flow needed to restore feasibility.
Before we give a formulation of the SPAF, we characterize the structure of second-stage decisions.
Proposition 3.1. The feasible second-stage decisions of the SPAF are circulations in G .
Proof. Because x and x + ∆y are both incidence vectors of s − t paths in G, it follows that
+
−
+
−
(i)) − (x + ∆y)(δG
(i)) = x(δG
(i)) − x(δG
(i))
(x + ∆y)(δG
+
−
⇒ ∆y(δG
(i)) = ∆y(δG
(i))
+
−
(i)) = (y + − y − )(δG
(i))
⇒ (y + − y − )(δG
+
−
⇒ y(δG
= y(δG
(i))
(i))
for all nodes i ∈ V .
However, all circulations in G do not form paths when combined with a first-stage path P .
Proposition 3.2 will establish conditions under which all circulations in G can be allowed in the
second stage.
The SPAF can be stated as the following two-stage stochastic integer program:
min
x∈{0,1}|A|
(c1 )T x + E[Q(x, ξ(ω))]
(14a)


for i = s,
1,
+
−
subject to x(δG (i)) − x(δG (i)) = −1, for i = t,


0,
for i ∈ V \ {s, t},
x(γG (S)) ≤ |S| − 1,
(14b)
for all S ⊂ V with |S| ≥ 2,
(14c)
(14d)
where
Q(x, ξ(ω)) = min (c2 (ω))T y
subject to
−
(15a)
−
y(δG
(i))
for all i ∈ V
(15b)
(x + ∆y) (γG (S)) ≤ |S| − 1,
for all S ⊂ V with |S| ≥ 2,
(15c)
ya ∈ {la (x, u(ω)), ra (x, u(ω))} ,
for all a ∈ A ∪ A−1 ,
(15d)
+
y(δG
(i))
= 0,
and la (x, u(ω)), ra (x, u(ω)) are given by
la (x, u(ω)) = 0,
and
ra (x, u(ω)) = ua (ω)(1 − xa ),
7
for all a ∈ A
and
la−1 (x, u(ω)) = xa (1 − ua (ω)),
and
ra−1 (x, u(ω)) = xa ,
for all a−1 ∈ A−1 .
Upper bound ra (x, u(ω)) = ua (ω)(1 − xa ) for an arc a ∈ A means that the second-stage decision y
can have ya = 1 only if xa = 0, that is, an arc that was chosen in the first stage cannot be chosen
in the second stage as well. If it is acceptable to choose an arc in the first stage, and then cancel
the arc in the second stage and choose it again, then the upper bound is ra (x, u(ω)) = ua (ω). This
detail does not affect the rest of the development.
Proposition 3.2 shows that, under a particular condition, the cycle elimination constraints (15c)
can be omitted from the second-stage problem (15).
Proposition 3.2. Consider the second-stage problem (15) associated with a feasible first-stage solution x and a scenario ω. Suppose that for all cycles C in
G with ua (ω)= 1 for all a ∈ C, and
for all partitions C = C0 ∪ C1 , C0 ∩ C1 = ∅, it holds that a∈C1 c2a (ω) − a∈C0 c2a−1 (ω) ≥ 0 (it is
cheaper to cancel the flow on the arcs in C0 than it is to incur the costs for the arcs in C1 ). Then,
the cycle elimination constraints (15c) can be omitted from the second-stage problem (15) without
loss of optimality.
Proof. Consider any feasible solution y of the second-stage relaxation obtained by omitting the
cycle elimination constraints (15c) from the second-stage problem (15). (Such a y always exists
because of the assumption of relatively complete recourse.) Solution y still produces a circulation
because of (15b). However, x+∆y may not produce a path, but a path combined with a circulation.
It remains to be shown that such a circulation can be removed without loss of optimality.
Let C ⊂ A be any cycle in the circulation produced by x + ∆y. It follows from the feasibility
of y that ua (ω) = 1 for all a ∈ C. Let C0 = {a ∈ C | ya = 0} and C1 = {a ∈ C | ya = 1}. For each
a ∈ A, let ya+ = 0 and ya+−1 = 1 if a ∈ C0 , ya+ = 0 and ya+−1 = ya−1 if a ∈ C1 , and ya+ = ya and
ya+−1 = ya−1 if a ∈ C. It is easy to check that y + eliminates C from the circulation, and thus satisfies
circulation constraints (15b). Note that ra−1 (x, u(ω)) = xa = 1 (and ya−1 = 0) for all a ∈ C0 , and
capacity
and integrality constraints (15d).
la (x, u(ω)) = 0 for all a ∈ C1 , and thus y + also
satisfies
+
2
The difference in cost between y and y is a∈C1 ca (ω) − a∈C0 c2a−1 (ω) ≥ 0.
Repeating the procedure a finite number of times, all cycles can be eliminated from the circulation, resulting in a second-stage solution y + with no more cost than y, and such that x + ∆y +
produces an s − t path.
Corollary 3.3 follows from the observation that, if the cycle elimination constraints (15c) are
omitted from the second-stage problem (15), then the resulting relaxation is a network flow problem,
which has the integrality property.
Corollary 3.3. Suppose the conditions of Proposition 3.2 hold. Then the capacity and integrality
constraints (15d) can be replaced with
la (x, u(ω)) ≤ ya ≤ ra (x, u(ω))
(16)
for all a ∈ A ∪ A−1 , without loss of optimality. Also, Q(x, ξ(ω)) is piecewise linear and convex in
x ∈ R|A| for all ω.
In the computational work, the conditions of Proposition 3.2 were satisfied.
Proposition 3.4 shows that, under some conditions, the cycle elimination constraints (14c) can
also be omitted from the first-stage problem.
8
Proposition 3.4. Suppose that for all a ∈ A and all ω,
(i) c1a + c2a−1 (ω) ≥ 0 (it is not profitable to choose an arc in the first stage and cancel the arc in
the second stage), and
(ii) c2a (ω) ≤ c1a (it is not profitable to choose an arc in the first stage in anticipation of needing
the arc in the second stage).
Then, the cycle elimination constraints (14c) can be omitted from the first-stage problem (14)
without loss of optimality.
Proof. Consider any feasible solution x of the first-stage relaxation obtained by omitting the cycle
elimination constraints (14c) from the first-stage problem (14). Solution x may produce a path
combined with a circulation, because of (14b). It remains to be shown that such a circulation can
be removed without loss of optimality.
Let C ⊂ A be any cycle in the circulation produced by x. Let x+
a = 0 for each a ∈ C, and let
+
xa = xa for each a ∈ C. Note that x+ satisfies (14b). Consider any scenario ω, and any feasible
solution y of the resulting second-stage problem (15).
Let C + = {a ∈ C | ya−1 = 0} and C − = {a ∈ C | ya−1 = 1}. For each a ∈ A, let ya+ = 1 and
+
ya−1 = 0 if a ∈ C + , ya+ = ya and ya+−1 = 0 if a ∈ C − , and ya+ = ya and ya+−1 = ya−1 if a ∈ C. It is easy
+
+
to check that (x+ , y + ) satisfies circulation constraints (15b). Also, x+
a + ya − ya−1 ≤ xa + ya − ya−1
for all a ∈ A, and thus (x+ + ∆y + )(γG (S)) ≤ (x + ∆y)(γG (S)) ≤ |S| − 1 for all S ⊂ V with
|S| ≥ 2, and hence (x+ , y + ) satisfies the second-stage cycle elimination constraints (15c). Note
that ra (x+ , u(ω)) = ua (ω) = 1 for all a ∈ C + , and la−1 (x+ , u(ω)) = 0 for all a ∈ C − , and thus
capacity and integrality
(15d). The difference in cost between
(x+ , y + ) also satisfies constraints
1
+
+
1
2
(x, y) and (x , y ) is a∈C + ca − ca (ω) + a∈C − ca + c2a−1 (ω) ≥ 0. Thus for any scenario ω,
there is a second-stage solution y + such that (x+ , y + ) satisfies the second-stage constraints and
(x+ , y + ) has no more cost than (x, y).
Repeating the procedure a finite number of times, all cycles can be eliminated from the circulation, resulting in a first-stage solution x+ with no more cost than x, and such that x produces an
s − t path.
3.3
The Traveling Salesman Problem with Random Travel Times
In the Traveling Salesman problem with Random travel Times (TSPRT), the input is an undirected
graph G = (V, E), edge costs c ∈ R|E| , a probability distribution P of the vector of random travel
times ξ(ω) ∈ R|E| , and a deadline κ ∈ R. The problem is to find a Hamiltonian cycle in G that
minimizes the cost of the edges in the cycle plus the expected violation of the deadline.
We use x ∈ {0, 1}|E| to represent the tour, where xe = 1 if edge e is in the tour and xe = 0
otherwise. The TSPRT can be formulated as the following two-stage stochastic integer program:
min cT x + E[Q(x, ξ(ω))]
(17a)
subject to x(δG (i)) = 2,
x(γG (S)) ≤ |S| − 1,
x ∈ {0, 1}|E| ,
where Q(x, ξ(ω)) = max ξ(ω)T x − κ, 0 .
for all i ∈ V ,
(17b)
for all S ⊂ V with |S| ≥ 2,
(17c)
(17d)
9
3.4
Computational Complexity
Here we discuss the computational complexity of the models introduced in Sections 3.1, 3.2, and 3.3.
We start by observing that in the case of the SPRT and the TSPRT, the stochastic part can be
made redundant by choosing a sufficiently large κ, in which case both models reduce to their
deterministic counterpart. This shows that both the TSPRT and its corresponding SAA problem
are N P-hard.
In the case of the SPRT, we show N P-hardness by reducing the weight constrained shortest path
problem, which is known to be N P-hard [6], to a deterministic version of the SPRT (a deterministic
shortest path problem with a deadline).
Proposition 3.5. The deterministic shortest path problem with a deadline is N P-hard.
Proof. Consider an instance of the weight constrained shortest path problem on a graph G = (V, A),
with source and sink nodes s, t ∈ V , arc costs c ∈ R|A| , arc weights w ∈ N|A| , and let κ ∈ N be the
maximum allowed total weight of a path. Costs c do not form negative-cost cycles in G.
c |V |). Each arc a ∈ A has travel time wa . Then,
Define c¯ = maxa∈A |ca |, and let c = c /(¯
(G, s, t, c, w, κ) defines an instance of the shortest path problem with a deadline. Note that costs c
do not form negative-cost cycles in G.
Any feasible path P for the weight constrained shortest path problem is feasible for the shortest
path problem with a deadline. Conversely, any path P with total travel time less than or equal to
the deadline is feasible for the weight constrained shortest path problem.
Note that for any simple path P we have that |P | < |V |, and thus c(P ) < 1 for all paths P .
Consider any path with total travel time greater than the deadline. By integrality of w and κ,
such a path incurs a penalty of at least one, so the objective value of such a path for the shortest
path problem with a deadline is greater than one. Note that c is proportional to c . Hence, if an
optimal solution P ∗ of the shortest path problem with a deadline has objective value strictly less
than one, then P ∗ is optimal for the weight constrained shortest path problem. Otherwise, if the
optimal value of the shortest path problem with a deadline is greater than one, then the weight
constrained shortest path problem is infeasible.
By choosing P{ξa (ω) = wa } = 1 for all a ∈ A, it follows that the SPRT is N P-hard.
3.5
Associated Mean Value Problems
Given a stochastic optimization problem, its associated mean value problem (MVP) is obtained by
replacing all random variables by their means (see e.g. Birge and Louveaux [4]). The mean value
problem associated with (1) is given by
min cT x + Q(x, EP [ξ(ω)])
x∈X
(18)
If the means EP [ξ(ω)] of the random variables ξ(ω) are known or are easy to calculate, then the mean
value problem is a deterministic problem, which can be solved using deterministic optimization
algorithms, and usually this is much easier than solving a stochastic optimization problem. In the
case of the SPRT and the TSPRT the MVP will yield (first and second stage) feasible solutions as
long as feasible solutions exist. In the case of the SPAF, the MVP does not yield a sensible model
due to the integrality constraints.
Here we show that in the case of the SPRT, the mean value problem can have an optimal
solution that is arbitrarily bad compared with an optimal solution of the stochastic problem. A
similar construction can be used to show that the mean value problem can have arbitrarily bad
solutions in the case of the TSPRT.
10
Proposition 3.6. There are instances of the SPRT in which the difference and the ratio between
the expected cost of an optimal solution of the MVP and the optimal cost of the stochastic problem
are arbitrarily large.
Proof. Consider the graph G = (V, A), where V = {s, t}, A = {a1 , a2 }, with a1 = a2 = (s, t). Thus
there are two distinct s − t paths in G, with x1 using arc a1 and x2 using arc a2 . Choose κ > 0
and b ≥ κ. Let ca1 = 1 and ca2 = 2. Let P[ξa1 (ω) = b] = κ/b, P[ξa1 (ω) = 0] = 1 − κ/b, and
P[ξa2 (ω) = κ] = 1. Note that
E[ξa1 (ω)] = E[ξa2 (ω)] = κ.
(19)
Also,
ca1 + Q(x1 , E[ξ(ω)]) = 1
ca2 + Q(x2 , E[ξ(ω)]) = 2
Thus x1 is the optimal solution of the MVP. However,
z(x1 ) = ca1 + E[Q(x1 , ξ(ω))] = 1 + (b − κ)κ/b
z(x2 ) = ca2 + E[Q(x2 , ξ(ω))] = 2.
The result follows by choosing κ sufficiently large and b κ2 .
We conclude this section by observing that there are instances of the SPRT and the TSPRT
for which optimal solutions of the MVP are optimal or almost optimal for the stochastic problem.
When the deadline is either very loose or very tight, the recourse function is linear in the random
variables near optimal solutions. By linearity of expectation, an optimal solution of the MVP is
then an optimal solution of the stochastic problem.
4
Algorithmic Details
Here we complete the description of our branch-and-cut algorithm for solving the SAA problems
for the three problem classes. Section 4.1 addresses initial relaxations and heuristics. Sections 4.2
and 4.3 discuss the separation of the optimality cuts.
4.1
Initial Relaxations and Heuristics
As mentioned before, we initialize our branch-and-cut framework by solving the LP-relaxation
of formulation (12), without the optimality cuts (12b). For the SPRT, and for the SPAF, this
means that we start with only the flow conservation constraints (13b), and (14b), respectively,
and the bounds 0 ≤ x ≤ 1. In the case of the TSPRT, this means we start with only the degree
constraints (17b) and the bounds 0 ≤ x ≤ 1, and we add subtour elimination constraints (17c)
whenever they are violated. For all three problem classes we generate optimality cuts if and only if
the LP relaxation at hand has an integer solution. For the SPRT and the TSPRT, the optimality
cuts and their generation are discussed in detail in Section 4.2. Section 4.3 does the same for the
SPAF.
The generation of subtour elimination constraints for the TSPRT reduces to finding a global
minimum cut in a directed graph derived from the support of a fractional solution [26, 27]. For
this we use Hao and Orlin’s implementation [10] of the Gomory-Hu algorithm [9].
11
Solving the MIPs for the TSPRT turned out to be quite a computational challenge. For this
reason, we decided to limit the number of nodes in the branch-and-bound tree to 10000. Note
m from Section 2.1 are then taken to be the best lower bound obtained
that the values of zN and zN
upon termination of the branch-and-cut algorithm. We also added a rounding heuristic to the
branch-and-cut algorithm, which is called occasionally starting from intermediate LP solutions.
The heuristic tries to find a tour that is cheap with respect to the expected recourse in the support
of the fractional solution, using edges that are not in the support only if they are needed to get a
feasible solution. This is accomplished using the Lin-Kerninghan algorithm (see e.g. Johnson and
McGeoch [15]).
4.2
Optimality Cut Generation for the SPRT and the TSPRT
Note that for the models with random travel times (the SPRT and the TSPRT), the second-stage
recourse problem for sample scenario ω n , with n ∈ {1, 2, . . . , N }, is given by
Qn (x) = max ξ(ω n )T x − κ, 0 .
It follows that
N
Qn (x) = max
n=1
(ξ(ω ) x − κ) | S ⊆ {1, . . . , N } ,
n T
n∈S
where S is a subset of N . Hence, in the case of the SPRT, and the TSPRT, the sample average
approximating problem is equivalent to the mixed-integer linear problem
min cT x + θ/N
(20a)
subject to Ax ≤ b
θ ≥
(ξ(ω n )T x − κ)
(20b)
for all S ⊆ {1, 2, . . . , N }
(20c)
n∈S
x binary,
(20d)
where the inequalities (20c) are the optimality cuts, and the system Ax ≤ b represents the constraints (13b), and (17b)–(17c), respectively. Note that (20c) represents a huge number of constraints. However, for any feasible first-stage solution in the course of our branch-and-cut iterations,
we only add the maximally violated optimality cut. The maximally violated optimality cut at a
feasible first-stage solution x corresponds to the subset S(x) = {n ∈ {1, 2, . . . , N } | ξ(ω n )T x > κ}.
4.3
Optimality Cut Generation for the SPAF
Assume that the conditions of Propositions 3.2 and 3.4 hold. Let Qn (x) = Q(x, ξ(ω n )) be the
optimal value of problem (15a), (15b), (16). By associating dual variables π n ∈ R|V | , and γ n , τ n ∈
−1
R|A∪A | , with the flow conservation, lower, and upper bound constraints, respectively, we obtain
the following LP dual of the recourse function
Qn (x) = max l(x, u(ω n ))T γ n + r(x, u(ω n ))T τ n
subject to
πin
πjn
n
−
−
πjn
πin
+
+
n
n
γ(i,j)
+ τ(i,j)
≤ c(i,j)
n
n
γ(i,j)
−1 + τ(i,j)−1 ≤ c(i,j)−1
γ ≥ 0, τ n ≤ 0.
(21a)
for all (i, j) ∈ A,
(21b)
for all (i, j)−1 ∈ A−1 ,
(21c)
(21d)
12
The optimal dual multipliers from (21) then provide the optimality cut
θ≥
N
l(x, u(ω n ))T γ n + r(x, u(ω n ))T τ n .
(22)
n=1
Recall that l and r are linear functions of x. We can
substituting the definitions of l and r, eliminating terms
−1
−1
term α : R|A∪A | × R|A∪A | → R|A| defined as,
τan−1 − τan ,
αa (γ n , τ n , ω n ) =
τan−1 + γan−1 ,
simplify the expression for the cut by
that evaluate to 0, and introducing the
if ua (ω n ) = 1,
if ua (ω n ) = 0.
(23)
The optimality cut (22) can then be expressed as
N
α(γ , τ ) x ≤ θ −
n
n T
N n=1 a∈An
n=1
τan .
(24)
The resulting SAA problem for the SPAF is
min cT x + θ/N
subject to (14b), and (24) for all (π n , γ n , τ n ) satisfying (21b)–(21d), with n ∈ {1, 2, . . . , N }
x ∈ {0, 1}|A| .
The generation of the optimality cuts for the SPAF is implemented by maintaining one linear
program of the form (15a), (15b), (16), which we will refer to as the recourse LP in the following
discussion. Suppose we are given an integer solution x to the first stage of the SPAF. To generate an
optimality cut, we iterate over the scenarios. Focus on any n ∈ {1, 2, . . . , N }. First, we regenerate
ω n from a stored seed value. Next, we impose the bounds l(x, u(ω n )) and r(x, u(ω n )) on the
recourse LP. Then, we use the dual simplex method to solve it, starting from the optimal dual
associated with the previous recourse LP. In this way, we exploit similarities between consecutive
recourse LPs. This could be further accelerated by techniques such as sifting [7] or bunching [41]
that identify scenarios for which a particular recourse basis is optimal. However, we have not used
such strategies in our implementation. We iteratively compute optimality cuts of the form (24) in
each iteration by adding the terms that involve the optimal dual solution to the current recourse
LP. Because the constraint matrix induced by the constraints (15b), (16) is totally unimodular (see
e.g. Schrijver [31]), the recourse LPs yield integer optimal solutions (Corollary 3.3).
Note that because of the possibility of regenerating the random part of the constraint matrix,
it is not necessary to have the entire constraint matrix in memory at any time during the execution
of the algorithm. This aspect is of great practical significance, as it allows us to use large values of
N . In this way we can solve SAA problems that have a huge number of non-zeros in the constraint
matrix.
5
Computational Results
The algorithmic framework of Section 4 has been implemented in C++ using the MIP solver of
CPLEX 7.0 [13]. The implementation takes advantage of the callback functionality provided by
CPLEX to integrate separation algorithms and heuristics within the branch-and-bound algorithm.
13
This section presents our computational experiments. Section 5.1 discusses the generation of the
problem instances, Section 5.2 provides the computational results, and Section 5.3 discusses the
empirically observed stability of the number of optimality cuts. We only report on a subset of our
experiments; tables with the full output can be found in Appendix A. The CPU times reported in
this section were measured on a 350MHz Sun Sparc workstation for the SPRT and the TSPRT and
a 450MHz Sun Sparc workstation for the SPAF.
5.1
Generation of Test Instances
An instance of the SPRT is defined by a directed graph G = (V, A), a source and sink node, a cost
vector c, a probability distribution over the travel times on the arcs in G, and a deadline. Our
approach is to generate instances from the TSP library [29] as follows.
A TSP library instance defines a set of p cities, {1, 2, . . . , p}, together with an inter-city distance
matrix D = [dij | i, j ∈ {1, 2, . . . , p}]. We associate a node vi in V with city i. Next, we iterate over
the nodes in G. At iteration i, we connect node vi to the δ closest nodes that it has not already
been connected to in an earlier iteration. Connecting node v to node w is done by adding the arcs
(v, w) and (w, v) to A. The cost of arc (vi , vj ) is taken to be c(vi ,vj ) = dij . To choose a source and
sink node, we first find the set of pairs (u, v) with u, v ∈ V that maximize the minimum number of
arcs over all u − v paths. From this set we choose one pair uniformly at random to be the source
and sink. In order to generate instances for which the associated MVP is unlikely to yield optimal
solutions, we use a probability distribution which discourages the use of short arcs. For a ∈ A we
let
p, if ca < c¯,
1 − p if ca < c¯,
and
P{ξa (ω) = ca } =
P{ξa (ω) = F ca } =
0, if ca ≥ c¯,
1,
if ca ≥ c¯,
where c¯ denotes the median of the arc lengths, for some parameters F > 1 and 0 < p < 1. Finally,
the deadline κ is determined by computing a shortest path in G with respect to c, and taking the
deadline to be the expected travel time on this path.
An instance of the SPAF is defined by a directed graph G, a source and sink node, arc costs
c1 and c2 (ω), and a probability distribution over ξ(ω) = (u(ω), c2 (ω)). The graph G, the source
and sink, and c1 are generated as above from TSP library instances. For all arcs a ∈ A we let
P{c2a (ω) = c1a } = 1, and for all arcs a−1 ∈ A−1 we let P{c2a (ω) = 0} = 1. The distribution we use
for the upper bounds u(ω) is similar in spirit to the one used for the SPRT, namely,
p, if ca < c¯,
1 − p if ca < c¯,
and
P{ua (ω) = 1} =
P{ua (ω) = 0} =
0, if ca ≥ c¯,
1,
if ca ≥ c¯,
with c¯ and p as above. Note that assumptions of Propositions 3.2 and 3.4 are satisfied. Relatively
complete recourse is ensured by inserting an arc (s, t) with very high cost.
An instance of the TSPRT is defined by an undirected graph G = (V, E), a cost vector c ∈ R|E| ,
a distribution over the travel times, and a deadline. The graph G is generated as above, except
that connecting nodes v and w is done by inserting an edge {v, w} in E. The cost of edge {vi , vj }
is taken to be c{vi ,vj } = dij . The probability distribution of the travel time on edge e ∈ E is defined
by
p, if ce < c¯,
1 − p if ce < c¯,
and
P{ξe (ω) = ce } =
P{ξe (ω) = F ce } =
0, if ce ≥ c¯,
1,
if ce ≥ c¯,
14
name
burma14
ulysses16
ulysses22
eil51
berlin52
st70
|V |
14
16
22
51
52
70
|A|
172
212
332
912
932
1292
|E|
86
106
166
456
466
646
name
eil76
pr76
gr96
rat99
kroA100
kroB100
|V |
76
76
96
99
100
100
|A|
1412
1412
1812
1872
1892
1892
|E|
706
706
906
936
946
946
name
kroC100
kroD100
kroE100
rd100
eil101
|V |
100
100
100
100
101
|A|
1892
1892
1892
1892
1912
|E|
946
946
946
946
956
Table 1: Dimensions of graphs generated from TSP library instances.
where c¯ denotes the median of the edge lengths, and F > 1 and 0 < p < 1 are parameters. The
deadline is taken to be the expected travel time on a shortest Hamiltonian cycle in G with respect
to c.
In our experiments, we have used the following parameter settings for the probability distribution: F = 10 for the SPRT, F = 20 for the TSPRT, and p = 0.1 for all problems. We used δ = 10
for generating graphs. We have used the following parameters for solving the SAA problems (see
Section 2.1): M = 10, and N = 105 . The dimensions of the graphs underling our experiments are
summarized in Table 1.
5.2
Computational Results
Figures 2, 5, and 8 give the average number of optimality cuts as a function of the sample size N for
the SPRT, the SPAF, and the TSPRT, respectively. Figures 3, 6, and 9 give the average number
of nodes in the branch-and-bound tree for the SPRT, the SPAF, and the TSPRT, respectively.
Figures 4, 7, and 10 give the average CPU time per SAA problem in seconds for the SPRT, the
SPAF, and the TSPRT, respectively. In these figures, each graph shows averages taken over M
SAA problems with sample sizes N ∈ {200, 300, . . . , 1000} for the set of problem instances derived
from the TSP library. The key is given in Figure 1. The standard deviations of the average number
of optimality cuts, the average number of nodes in the branch-and-bound tree, and the average
CPU time in seconds, as a function of the sample size N , are given in the appendix for the SPRT,
the SPAF, and the TSPRT. The appendix also gives the total CPU times for the execution of the
SAA method.
Table 2 gives solution values, optimality gaps, and their standard deviations for all of the
problems, as observed with N = 1000. Recall from Section 3.5 that the mean value problem
(MVP) is the deterministic optimization problem obtained by replacing all random variables by
their means. The optimal solution of each MVP is denoted by x
¯. Note, that some of the gaps in
Table 2 are negative. As both the lower bound and the upper bound used to compute the gaps are
random variables, and no gap is smaller than minus twice its estimated standard deviation, this is
not unexpected. Also note from Table 2 that the MVP rarely yields a competitive solution for the
generated instances.
In Figures 2, 5, and 8 we see that the average number of optimality cuts displays the same
behavior for all generated instances of the SPRT, the SPAF, and the TSPRT. For each of the
three problems under consideration, the average number of optimality cuts does not depend on the
sample size used to formulate the SAA problem. From Figures 3, 6, and to a lesser extent from
Figure 9 (for the TSPRT, all runs needed 10000 nodes in the branch-and-bound tree, except for
the runs on the three smallest instances), it is clear that the same observation holds for the average
number of nodes in the branch-and-bound tree. As a consequence, the running time grows only
15
’burma14’
’ulysses16’
’ulysses22’
’eil51’
’berlin52’
’st70’
’eil76’
’pr76’
’gr96’
’rat99’
’kroA100’
’kroB100’
’kroC100’
’kroD100’
’kroE100’
’rd100’
’eil101’
Figure 1: The key to Figures 2–10.
100
90
80
70
No. Cuts
60
50
40
30
20
10
0
200
300
400
500
600
No. Scenarios
700
800
Figure 2: Average number of optimality cuts for the SPRT.
16
900
1000
700
600
No. Nodes
500
400
300
200
100
0
200
300
400
500
600
No. Scenarios
700
800
900
Figure 3: Average number of nodes in branch-and-bound tree for the SPRT.
17
1000
600
500
CPU time (s)
400
300
200
100
0
200
300
400
500
600
No. Scenarios
700
800
Figure 4: Average CPU times for the SPRT.
18
900
1000
50
45
40
35
No. Cuts
30
25
20
15
10
5
0
200
300
400
500
600
No. Scenarios
700
800
Figure 5: Average number of optimality cuts for the SPAF.
19
900
1000
350
300
No. Nodes
250
200
150
100
50
0
200
300
400
500
600
No. Scenarios
700
800
900
Figure 6: Average number of nodes in branch-and-bound tree for the SPAF.
20
1000
1200
1000
CPU time (s)
800
600
400
200
0
200
300
400
500
600
No. Scenarios
700
800
Figure 7: Average CPU times for the SPAF.
21
900
1000
300
250
No. Cuts
200
150
100
50
0
200
300
400
500
600
No. Scenarios
700
800
Figure 8: Average number of optimality cuts for the TSPRT.
22
900
1000
10000
9000
8000
7000
No. Nodes
6000
5000
4000
3000
2000
1000
0
200
300
400
500
600
No. Scenarios
700
800
900
Figure 9: Average number of nodes in branch-and-bound tree for the TSPRT.
23
1000
5000
4500
4000
CPU time (s)
3500
3000
2500
2000
1500
1000
500
0
200
300
400
500
600
No. Scenarios
700
800
Figure 10: Average CPU times for the TSPRT.
24
900
1000
25
instance
burma14
ulysses16
ulysses22
eil51
berlin52
st70
eil76
pr76
gr96
rat99
kroA100
kroB100
kroC100
kroD100
kroE100
rd100
eil101
SAA
zˆ105 (ˆ
x∗ ) Gap
371.0 1.2
2510.0 0.0
1442.4 8.6
61.7 0.6
839.7 -1.9
114.9 0.3
49.2 0.1
18207.0 0.0
7601.0 0.0
205.8 0.4
3405.4 20.7
2468.2 -3.4
2941.0 0.6
2490.0 0.0
3708.3 4.3
1197.0 0.0
52.2 -0.1
σ
ˆGap
1.09
0.00
3.83
0.28
1.00
0.51
0.21
0.00
0.00
0.12
10.76
4.82
1.39
0.00
2.45
0.00
0.16
MVP
Gap
42.5
132.1
189.2
0.7
399.2
28.7
15.2
7007.3
1118.3
63.9
272.6
-2.3
195.6
108.4
310.2
196.5
7.8
σ
ˆGap
0.99
2.82
3.33
0.10
2.87
0.34
0.12
37.90
10.30
0.39
5.07
1.47
4.38
2.44
3.51
1.71
0.12
SPAF
SAA
zˆ105 (ˆ
x∗ ) Gap
320.8 -0.6
2382.2 0.6
1307.1 0.8
51.2 0.0
812.5 1.6
102.7 0.2
46.5 0.1
18463.5 -0.5
7422.6 0.9
207.0 0.0
3022.8 -2.0
2366.6 2.4
2649.3 0.5
2382.9 0.6
3532.4 3.7
1166.5 0.4
44.5 0.0
σ
ˆGap
0.43
1.09
0.83
0.07
0.73
0.19
0.05
8.60
3.98
0.14
2.10
1.45
3.27
1.70
2.03
0.45
0.05
zˆ105 (ˆ
x∗ )
1562.7
9506.2
10000.9
543.9
9481.0
834.5
661.0
124419.0
62837.4
1464.4
25775.1
26411.4
24660.3
25468.7
25775.5
9478.4
763.1
SAA
Gap
7.2
34.7
80.0
17.9
308.5
66.9
30.0
9315.0
2328.4
47.4
1233.8
1862.4
950.3
1350.0
1975.1
439.3
41.9
Table 2: Solution values and absolute gaps with lower bound.
zˆ105 (¯
x)
412.3
2642.1
1623.0
61.8
1240.7
143.3
64.3
25214.3
8719.3
269.3
3657.3
2469.4
3136.0
2598.4
4014.2
1393.5
60.1
SPRT
σ
ˆGap
3.18
24.30
28.68
1.80
21.87
2.48
1.55
188.09
56.58
2.50
65.33
50.90
40.60
40.89
40.51
18.04
2.45
zˆ105 (¯
x)
2129.6
11455.6
10577.5
548.3
10202.0
852.8
674.1
135868.0
68136.5
1474.0
26563.3
27773.6
26129.0
26802.9
27799.1
9856.2
758.3
TSPRT
MVP
Gap
574.0
1984.1
656.6
22.3
1029.6
85.2
43.1
20764.0
7627.5
57.0
2022.0
3224.6
2419.0
2684.2
3998.7
817.1
37.1
σ
ˆGap
4.12
17.47
11.47
0.68
12.33
1.01
0.74
139.73
63.70
1.39
26.24
27.54
26.84
27.55
28.59
9.84
0.74
linearly in the sample size. This linear growth is clearly visible in Figures 4 and 7. From Figure 10
we conclude that for the TSPRT the running time is dominated by the branch-and-bound algorithm
itself for N ≤ 1000, therefore the average time spent in the generation of the optimality cuts is still
a lower-order term in the average CPU time.
From Table 2 it is clear that the SAA method produces provably close to optimal solutions.
The reported solutions were within an estimated 1.0%, 0.3%, and 8.7% from the lower bounds for
the SPRT, the SPAF, and the TSPRT, respectively (for N = 1000). Also, the estimated standard
deviations of these estimated gaps are small; they are of the same order of magnitude as the
estimated gaps themselves for the SPRT and the SPAF, and for the TSPRT they are significantly
smaller. In the case of the TSPRT, a significant part of these estimated gaps can be attributed to
the fact that the MIPs are not solved to optimality; the instance that realized the forementioned
gap of 8.7% had an average (MIP) gap between the best integer solution and the best lower bound
of 8.17% over the M SAA problems. We believe that by taking advantage of sophisticated cuttingplane and branching techniques as applied by Applegate et al. [3], these SAA problems can be
solved to optimality for the instances we considered, thereby reducing the gap to that caused by
the quality of the stochastic bounds.
From the tables in the Appendix, it is clear that as N increases, the estimated values of the
reported solutions do not change much. For the SPRT and the TSPRT, the stochastic lower
bounds improve, for the SPAF there is not much change in the lower bounds either. In all cases
the estimated variance goes down as N increases.
5.3
Stability of the Number of Optimality Cuts
It was observed empirically that on average the number of optimality cuts generated when solving
a SAA problem is fairly stable and hardly changes with change in the sample size N . In this section
we suggest an explanation of this empirical phenomenon.
Recall that the set of feasible first stage solutions for the true problem and that of a corresponding SAA instance are identical, and is a finite set of integer vectors. Also note that, in the course
of the proposed branch-and-cut algorithm, an optimality cut is added only when a feasible integer
solution is found. Consequently, the number of optimality cuts added is equal to the number of feasible integer solutions explored by the algorithm. We conjecture that as N increases, the sequence
of integer solutions explored when the proposed algorithm is applied to the generated SAA problem
converges to a sequence of integer solutions which would be explored by the algorithm applied to
the true (expected value) problem. If there is degeneracy, that is, if at some integer solution the
subdifferential is not a singleton, then there may be multiple sequences of integer solutions corresponding to the true problem. Since the set of possible sequences of integer solutions is finite, it
follows that for N large enough, a sequence of integer solutions generated for a SAA problem will
coincide with one of the sequences for the true problem. Consequently, the number of optimality
cuts generated for a SAA instance will stabilize on average with increase of the sample size. We
provide a basis of this conjecture in the following.
Note that an optimality cut correspond to an extreme point of the subdifferential of the objective
function. As N increases, the subdifferential of the SAA objective function at a given solution
converges to the subdifferential of the true objective function at that solution (see Shapiro and
Homem-de-Mello [34, Proposition 2.2]). As a result, for N large enough, at a given integer feasible
solution, an optimality cut for the SAA problem will be close to one of the possible optimality
cuts for the true problem. A small difference in the optimality cuts between the SAA and true
problems means that the LP relaxations differ only slightly, and the feasible integer solutions do
not change. As a result the next integer solution found by the algorithm for the SAA problem will
26
likely be the same as the next integer solution that would be found for the true problem, as long as
corresponding extreme points of the subdifferentials for the SAA problem and the true problem are
chosen in case of degeneracy. Thus, for N large enough, the sequence of integer solutions explored
for the SAA problem corresponds to a sequence of integer solutions for the true problem.
6
Conclusions
We have applied the sample average approximation method to three classes of 2-stage stochastic
routing problems. Among the instances we considered were ones with a huge number of scenarios
and first-stage integer variables. Our experience is that the growth of the computational effort
needed to solve SAA problems is linear in the sample size used to estimate the true objective
function. The reason for this is that in the cases that we examined, neither the number of optimality
cuts, nor the sizes of branch-and-bound trees needed to solve SAA problems to optimality increase
when the sample size increases. This allows us to find solutions of provably high quality to the
stochastic programming problems under consideration.
References
[1] G. Andreatta. Shortest path models in stochastic networks. In G. Andreatta, F. Mason, and P. Serafini, editors, Stochastics in Combinatorial Optimization, pages 178–186. CISM Udine, World Scientific
Publishing, Singapore, 1987.
[2] G. Andreatta and L. Romeo. Stochastic shortest paths with recourse. Networks, 18:193–204, 1988.
[3] D. Applegate, R. Bixby, V. Chv´
atal, and W. Cook. On the solution of traveling salesman problems.
Documenta Mathematica, Extra Volume ICM 1998 III:645–656, 1998.
[4] J.R. Birge and F.V. Louveaux. Introduction to Stochastic Programming. Springer-Verlag, Berlin, 1997.
[5] Y. Ermoliev and R.J-B. Wets, editors. Numerical techniques for stochastic optimization. SpringerVerlag, Berlin, 1988.
[6] M.R. Garey and D.S. Johnson. Computers and Intractability, A Guide to the Theory of NPCompleteness. W.H. Freeman and Company, New York, 1979.
[7] S. Garstka and D. Rutenberg. Computation in discrete stochastic programmin with recourse. Operations
Research, 21:112–122, 1973.
[8] C.J. Geyer and E.A. Thompson. Constrained Monte Carlo maximum likelihood for dependent data,
(with discussion). Journal of the Royal Statistical Society Series B, 54:657–699, 1992.
[9] R.E. Gomory and T.C. Hu. Multi-terminal network flows. SIAM Journal on Applied Mathematics,
9:551–570, 1961.
[10] J. Hao and J.B. Orlin. A faster algorithm for finding the minimum cut in a graph. In Proceedings of
the 3rd Annual ACM-SIAM Symposium on Discrete Algorithms, pages 165–174, 1992.
[11] J.L. Higle and S. Sen. Stochastic Decomposition. Kluwer Academic Publishers, Dordrecht, Netherlands,
1996.
[12] J.L. Higle and S. Sen. Stochastic decomposition: An algorithm for two stage stochastic linear programs
with recourse. Mathematics of Operations Research, 16:650–669, 1991.
[13] ILOG, Inc., CPLEX Division, Incline Village, Nevada. CPLEX, a division of ILOG, 2001.
[14] G. Infanger. Planning Under Uncertainty: Solving Large Scale Stochastic Linear Programs. Boyd and
Fraser Publishing Co., Danvers, MA, 1994.
27
[15] D.S. Johnson and L.A. McGeoch. The traveling salesman problem: a case study. In E. Aarts and J.K.
Lenstra, editors, Local Search in Combinatorial Optimization, pages 215–310. John Wiley & Sons Ltd,
Chichester, 1997.
[16] A.J. Kleywegt, A. Shapiro, and T. Homem-de-Mello. The sample average approximation method for
stochastic discrete optimization. SIAM Journal on Optimization, 12:479–502, 2001.
[17] G. Laporte and F.V. Louveaux. The integer L-shaped method for stochastic integer programs with
complete recourse. Operations Research Letters, 13:133–142, 1993.
[18] G. Laporte, F.V. Louveaux, and H. Mercure. Models and exact solutions for a class of stochastic
location-routing problems. European Journal of Operational Research, 39:71–78, 1989.
[19] G. Laporte, F.V. Louveaux, and H. Mercure. The vehicle routing problem with stochastic travel times.
Transportation Science, 26:161–170, 1992.
[20] G. Laporte, F.V. Louveaux, and H. Mercure. A priori optimization of the probabilistic traveling salesman
problem. Operations Research, 42:543–549, 1994.
[21] G. Laporte, F.V. Louveaux, and L. van Hamme. Exact solution of a stochastic location problem by an
integer L-shaped algorithm. Transportation Science, 28:95–103, 1994.
[22] W.K. Mak, D.P. Morton, and R.K. Wood. Monte Carlo bounding techniques for determining solution
quality in stochastic programs. Operations Research Letters, 24:47–56, 1999.
[23] G.L. Nemhauser and L.A. Wolsey. Integer and Combinatorial Optimization. John Wiley and Sons, New
York, 1988.
[24] V.I. Norkin, Y.M. Ermoliev, and A. Ruszczy´
nski. On optimal allocation of indivisibles under uncertainty.
Operations Research, 46:381–395, 1998.
[25] V.I. Norkin, G.Ch. Pflug, and A. Ruszczy´
nski. A branch and bound method for stochastic global
optimization. Mathematical Programming, 83:425–450, 1998.
[26] M.W. Padberg and G. Rinaldi. Facet identification for the symmetric traveling salesman polytope.
Mathematical Programming, 47:219–257, 1990.
[27] M.W. Padberg and G. Rinaldi. A branch-and-cut algorithm for the resolution of large-scale symmetric
traveling salesman problems. SIAM Review, 33:60–100, 1991.
[28] E.L. Plambeck, B.R. Fu, S.M. Robinson, and R. Suri. Sample-path optimization of convex stochastic
performance functions. Mathematical Programming, 75:137–176, 1996.
[29] G. Reinelt. http://softlib.rice.edu/softlib/tsplib, 1995.
[30] R.Y. Rubinstein and A. Shapiro. Optimization of static simulation models by the score function method.
Mathematics and Computers in Simulation, 32:373–392, 1990.
[31] A. Schrijver. Theory of linear and integer programming. John Wiley & Sons Ltd, Chichester, 1986.
[32] A. Shapiro. Simulation-based optimization: Convergence analysis and statistical inference. Stochastic
Models, 12:425–454, 1996.
[33] A. Shapiro and T. Homem-de-Mello. A simulation-based approach to two-stage stochastic programming
with recourse. Mathematical Programming, 81:301–325, 1998.
[34] A. Shapiro and T. Homem-de-Mello. On rate of convergence of Monte Carlo approximations of stochastic
programs. SIAM Journal on Optimization, 11:70–86, 2001.
[35] A.M. Spaccamela, A.H.G. Rinnooy Kan, and L. Stougie. Hierarchical vehicle routing problems. Networks, 14:571–586, 1984.
[36] R.M. Van Slyke and R.J-B. Wets. L-shaped linear programs with applications to optimal control and
stochastic programming. SIAM Journal on Applied Mathematics, 17:638–663, 1969.
28
[37] A.M. Verweij. Selected Applications of Integer Programming: A Computational Study. PhD thesis,
Department of Computer Science, Utrecht University, Utrecht, 2000.
[38] S.W. Wallace. Solving stochastic programs with network recourse. Networks, 16:295–317, 1986.
[39] S.W. Wallace. Investing in arcs in a network to maximize the expected max flow. Networks, 17:87–103,
1987.
[40] S.W. Wallace. A two-stage stochastic facility-location problem with time-dependent supply. In Ermoliev
and Wets [5], pages 489–513.
[41] R.J-B. Wets. Solving stochastic programs with simple recourse. Stochastics, 10:219–242, 1983.
[42] R.J-B. Wets. Large scale linear programming techniques. In Ermoliev and Wets [5], pages 65–93.
[43] L.A. Wolsey. Integer Programming. John Wiley and Sons, New York, 1998.
29
Appendix
Tables 3, 4, and 5 give our computational results with the SAA method. Here, for each combination
ˆCPU , Cuts, and
of a base instance and a number of sample scenarios N , the Nodes, σ
ˆNodes , CPU, σ
σ
ˆCuts rows give the average number of nodes of the branch-and-bound tree, the standard deviation
of the average number of nodes of the branch-and-bound tree, the average CPU time in seconds,
the standard deviation of the average CPU time in seconds, the average number of optimality cuts,
and the standard deviation of the average number of optimality cuts, respectively, over the M = 10
SAA problems solved to find the best reported solution and the lower bound estimate. For each
combination of a base instance and a number of sample scenarios N , the Total CPU rows give the
total CPU time for solving M = 10 SAA problems, computing the large sample average objective
xm ) with sample size N = 105 for each of the M solutions x
ˆm produced, and choosing
value zˆN (ˆ
∗
m
ˆ , as in (6) and (9).
the solution x
ˆ with the best value of x
It can be observed that the total CPU times for the SPAF problems are significantly higher
than the times required to solve the corresponding M = 10 SAA problems. This is due to the
fact that the more complicated recourse structure in the SPAF problems require considerable effort
in estimating the objective values with a large sample size N = 105 for each of the M solutions
produced. This effort can be significantly reduced by choosing a smaller N at the expense of a
slight increase in the estimated confidence intervals. Furthermore, the I/O bottleneck for loading
M × N = 106 small second-stage recourse LPs into the solver can be reduced by bunching several
of these LPs into larger LPs, and using warm-start strategies during their solution.
30
burma14
ulysses16
ulysses22
eil51
berlin52
st70
Number of Sample Scenarios N
500
600
700
200
300
400
800
900
1000
363.0
2.60
372.0
0.44
365.0
1.03
372.7
0.45
367.4
2.40
371.2
0.44
368.7
1.55
371.9
0.44
369.5
1.34
371.5
0.44
369.3
1.04
371.3
0.44
371.8
1.67
371.2
0.44
369.6
1.57
371.3
0.44
369.8
1.00
371.0
0.44
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
19.0
1.50
0.7
0.03
10.1
0.38
18s
2508.3
1.66
2510.0
0.00
20.9
1.58
1.1
0.02
10.2
0.13
18s
2508.0
2.01
2510.0
0.00
21.4
1.33
1.5
0.05
10.6
0.22
21s
2510.0
0.00
2510.0
0.00
18.7
0.86
1.8
0.06
10.4
0.31
24s
2510.0
0.00
2510.0
0.00
18.3
1.43
2.1
0.07
10.4
0.22
29s
2510.0
0.00
2510.0
0.00
21.5
1.45
2.3
0.05
10.1
0.10
30s
2510.0
0.00
2510.0
0.00
22.0
1.61
2.9
0.16
10.8
0.36
36s
2510.0
0.00
2510.0
0.00
18.3
0.54
3.2
0.17
10.5
0.22
39s
2510.0
0.00
2510.0
0.00
19.1
1.10
3.4
0.13
10.4
0.27
41s
2510.0
0.00
2510.0
0.00
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
7.3
0.68
0.6
0.02
6.0
0.15
11s
1424.2
5.61
1440.1
1.57
8.8
0.65
0.9
0.02
6.0
0.15
14s
1440.8
4.35
1438.2
1.56
7.9
0.41
1.2
0.01
6.0
0.00
15s
1435.6
4.26
1440.5
1.57
6.7
0.47
1.5
0.03
6.1
0.10
19s
1431.1
9.70
1439.0
1.57
7.7
0.62
1.8
0.03
6.0
0.00
22s
1433.1
5.71
1439.3
1.57
7.7
0.56
2.1
0.03
6.0
0.00
24s
1433.1
6.35
1439.1
1.57
8.1
0.41
2.6
0.08
6.3
0.15
29s
1440.0
4.23
1438.6
1.56
8.0
0.54
2.8
0.04
6.1
0.10
31s
1444.9
4.38
1440.3
1.57
7.6
0.52
3.1
0.09
6.2
0.13
34s
1433.8
3.49
1442.4
1.58
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
14.2
0.76
1.3
0.06
8.8
0.36
22s
61.0
0.46
61.6
0.10
16.5
0.85
1.9
0.05
9.1
0.18
25s
61.1
0.26
61.6
0.10
16.6
1.38
2.5
0.07
9.0
0.21
32s
61.3
0.24
61.8
0.10
14.7
1.16
2.8
0.11
8.4
0.31
36s
61.4
0.36
61.8
0.10
14.0
1.11
3.4
0.09
8.5
0.17
42s
60.9
0.30
61.8
0.10
15.0
1.03
4.3
0.10
9.0
0.15
47s
61.0
0.33
61.7
0.10
14.4
0.78
4.9
0.03
9.0
0.00
53s
61.6
0.26
61.6
0.10
15.2
0.71
5.3
0.12
8.8
0.13
58s
61.4
0.19
61.6
0.10
14.1
0.85
6.0
0.15
8.9
0.18
1m6s
61.1
0.26
61.7
0.10
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
14.4
1.28
4.9
0.33
7.8
0.55
1m3s
836.2
1.42
840.0
0.48
14.6
1.27
7.2
0.34
8.1
0.35
1m23s
836.6
2.08
839.1
0.47
15.1
1.54
10.1
0.65
8.5
0.52
1m52s
836.8
1.78
839.5
0.48
17.1
1.39
13.4
0.56
9.1
0.31
2m27s
836.9
1.16
839.7
0.48
14.8
1.35
14.7
0.88
8.3
0.42
2m40s
839.0
1.98
839.0
0.48
14.2
1.26
17.2
0.86
8.4
0.43
3m3s
842.7
1.20
839.5
0.48
16.3
0.33
19.1
0.86
8.1
0.31
3m21s
837.0
1.52
839.4
0.48
15.3
1.49
21.4
1.27
8.1
0.41
3m47s
839.3
1.02
838.9
0.47
15.2
1.15
23.7
0.86
8.2
0.29
4m8s
841.5
0.88
839.7
0.48
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
106.1
9.22
18.8
0.68
36.3
1.16
3m19s
112.1
0.90
114.8
0.13
112.1
10.46
26.7
1.02
35.5
1.14
4m40s
113.5
0.72
115.0
0.13
119.7
6.26
36.5
2.22
36.7
1.92
6m17s
115.9
0.67
114.9
0.13
125.1
9.75
48.9
1.76
39.5
1.16
8m27s
115.4
0.53
114.7
0.13
123.5
10.06
57.4
2.32
38.7
1.44
9m45s
113.9
0.39
114.8
0.13
156.0
7.35
72.1
1.98
41.4
1.02
12m12s
113.4
0.42
114.7
0.13
112.8
9.69
72.1
2.37
37.0
1.14
12m9s
115.2
0.41
114.8
0.13
114.7
10.03
89.9
1.59
40.3
0.72
15m7s
115.2
0.36
114.8
0.13
128.2
10.77
104.9
2.49
42.2
0.92
17m36s
114.6
0.49
114.9
0.13
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
175.3
16.18
27.9
2.06
28.2
1.93
5m2s
205.0
16.80
44.3
3.63
30.6
2.53
7m35s
238.2
19.45
74.3
5.14
38.8
2.66
12m36s
235.3
23.19
84.2
5.83
35.8
2.37
14m15s
190.7
21.50
97.8
7.96
35.0
2.79
16m30s
176.6
11.00
100.3
7.02
31.1
2.03
16m54s
204.9
12.92
135.1
8.73
36.3
2.24
22m42s
207.7
21.64
143.5
13.65
34.2
3.22
24m6s
181.8
17.60
150.4
10.62
32.6
2.23
25m15s
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
105
105
105
105
105
105
Table 3: Computational Results with SAA (SPRT).
31
300
400
800
900
1000
49.1
0.38
49.3
0.05
49.0
0.26
49.2
0.05
49.5
0.27
49.2
0.05
49.4
0.14
49.2
0.05
49.0
0.12
49.3
0.05
49.1
0.12
49.3
0.05
49.2
0.23
49.2
0.05
49.4
0.07
49.2
0.05
49.1
0.20
49.2
0.05
eil76
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
64.5
6.09
22.8
2.00
21.6
1.43
4m0s
18207.0
0.00
18207.0
0.00
65.4
4.60
35.6
2.38
23.3
1.41
6m4s
18207.0
0.00
18207.0
0.00
61.4
2.40
44.7
1.43
22.1
0.72
7m35s
18207.0
0.00
18207.0
0.00
57.6
3.01
50.3
2.06
20.1
0.71
8m36s
18207.0
0.00
18207.0
0.00
65.8
4.84
70.9
5.05
23.2
1.46
11m56s
18207.0
0.00
18207.0
0.00
57.1
2.95
75.4
3.77
21.3
0.80
12m42s
18207.0
0.00
18207.0
0.00
53.3
4.12
81.6
4.69
20.4
1.01
13m43s
18207.0
0.00
18207.0
0.00
54.3
2.31
92.6
2.68
20.6
0.50
15m33s
18207.0
0.00
18207.0
0.00
56.6
3.74
115.4
7.18
22.8
1.23
19m22s
18207.0
0.00
18207.0
0.00
pr76
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
190.3
18.19
31.5
2.28
35.3
2.53
5m21s
7601.0
0.00
7601.0
0.00
155.7
19.29
43.3
4.21
34.1
3.07
7m19s
7601.0
0.00
7601.0
0.00
151.2
20.57
57.4
5.46
35.5
3.06
9m40s
7601.0
0.00
7601.0
0.00
151.3
21.97
68.6
4.11
34.0
1.79
11m32s
7601.0
0.00
7601.0
0.00
124.4
13.74
80.4
6.84
33.6
3.01
13m30s
7601.0
0.00
7601.0
0.00
159.5
16.63
100.7
8.18
36.0
2.48
16m53s
7601.0
0.00
7601.0
0.00
140.1
7.91
109.7
4.88
34.2
1.64
18m23s
7601.0
0.00
7601.0
0.00
132.9
12.25
114.9
8.13
32.6
2.34
19m16s
7601.0
0.00
7601.0
0.00
129.1
15.59
129.1
8.14
32.6
2.15
21m37s
7601.0
0.00
7601.0
0.00
gr96
Number of Sample Scenarios N
500
600
700
200
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
85.3
3.49
20.3
0.91
21.4
0.82
3m28s
205.3
0.22
205.8
0.06
82.3
4.33
28.8
1.41
21.8
0.89
4m53s
204.8
0.25
205.8
0.06
88.0
4.70
41.3
4.17
23.7
1.81
6m58s
205.3
0.23
205.7
0.06
85.2
6.54
48.5
2.46
22.3
1.03
8m9s
205.6
0.18
205.8
0.06
86.1
4.28
58.2
2.26
22.8
0.89
9m47s
205.6
0.16
205.7
0.06
86.3
3.51
74.9
2.82
24.7
0.76
12m34s
205.4
0.19
205.6
0.05
82.4
4.59
82.9
4.00
24.1
0.89
13m54s
205.3
0.17
205.9
0.06
77.4
2.43
90.5
4.33
23.4
0.79
15m9s
205.5
0.10
205.9
0.06
77.8
2.62
107.5
4.33
24.6
0.81
17m59s
205.4
0.10
205.8
0.06
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
582.7
38.97
113.1
6.17
86.6
4.34
19m20s
3355.6
17.54
3414.8
3.84
489.5
35.09
146.4
4.31
78.2
2.03
24m52s
3378.9
8.55
3416.2
3.85
594.2
65.74
200.5
10.92
84.8
4.18
33m46s
3374.7
9.74
3419.6
3.86
573.8
59.75
248.8
17.60
86.2
5.53
41m58s
3386.8
7.96
3413.6
3.84
608.5
50.56
323.3
16.02
93.7
4.36
54m24s
3395.3
6.80
3419.6
3.67
576.6
64.41
354.7
29.45
88.3
7.14
59m24s
3389.9
9.19
3418.2
3.86
581.6
47.35
388.2
22.55
84.9
4.37
1h4m
3390.9
7.32
3419.6
3.86
573.8
35.51
440.6
25.43
88.3
4.41
1h13m
3376.6
9.18
3414.9
3.84
631.3
40.55
522.4
15.07
93.0
2.51
1h27m
3384.7
10.07
3405.4
3.80
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
28.4
3.11
13.6
1.21
11.2
0.94
2m34s
2466.4
7.16
2467.9
1.47
28.2
5.58
19.9
1.02
11.5
0.54
3m39s
2466.4
7.34
2470.0
1.48
30.1
4.77
27.0
1.17
12.1
0.55
4m47s
2462.9
6.08
2469.2
1.47
28.2
3.79
34.8
1.04
12.1
0.38
6m6s
2470.1
6.29
2469.7
1.48
31.0
1.82
39.9
1.94
11.6
0.37
6m59s
2466.4
6.81
2470.7
1.48
37.8
3.99
48.9
2.71
12.7
0.54
8m28s
2460.6
5.71
2472.1
1.49
25.8
4.82
49.5
2.92
11.2
0.53
8m37s
2469.1
4.42
2470.8
1.48
26.4
2.89
60.5
3.66
11.9
0.53
10m27s
2475.0
5.38
2468.9
1.47
35.1
5.17
69.5
2.98
12.7
0.45
11m52s
2471.7
4.59
2468.2
1.47
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
3.3
0.60
5.0
0.55
3.8
0.20
55s
2920.0
11.40
2940.5
0.60
2.5
0.50
4.4
0.81
2.9
0.38
50s
2944.5
3.11
2941.1
0.60
2.6
0.40
5.0
0.35
2.8
0.20
56s
2946.7
3.46
2940.6
0.60
2.5
0.52
7.6
1.21
3.1
0.28
1m21s
2939.1
3.45
2940.7
0.60
1.6
0.16
7.3
0.48
2.8
0.20
1m19s
2939.6
2.63
2941.2
0.60
2.3
0.37
8.1
0.49
2.7
0.15
1m27s
2938.9
2.89
2939.8
0.59
2.9
0.43
10.5
0.57
3.1
0.18
1m51s
2938.0
0.98
2940.3
0.60
2.9
0.46
12.3
0.42
3.2
0.13
2m9s
2938.2
0.89
2939.9
0.60
3.3
0.65
18.4
3.02
3.5
0.22
3m9s
2940.4
1.25
2941.0
0.60
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
32.1
2.30
14.5
1.11
11.5
0.81
2m35s
38.2
3.07
25.7
1.32
13.9
0.71
4m26s
36.7
1.41
31.2
2.06
13.4
0.90
5m21s
34.1
2.79
43.1
1.27
14.9
0.48
7m20s
35.8
2.24
45.2
3.00
12.9
0.82
7m41s
32.9
2.00
57.2
4.04
14.0
0.98
9m41s
30.9
1.79
61.5
5.14
13.0
1.00
10m24s
31.7
2.02
68.9
3.77
13.1
0.74
11m37s
33.1
1.97
75.5
5.89
13.1
0.91
12m44s
kroC100
kroB100
kroA100
rat99
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
105
105
105
105
105
105
105
Table 3: Computational Results with SAA (SPRT). (Cnt’d)
32
kroD100
kroE100
rd100
eil101
Number of Sample Scenarios N
500
600
700
200
300
400
800
900
1000
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
2486.6
3.45
2490.0
0.00
2490.0
0.00
2490.0
0.00
2490.0
0.00
2490.0
0.00
2490.0
0.00
2490.0
0.00
2490.0
0.00
2490.0
0.00
2490.0
0.00
2490.0
0.00
2490.0
0.00
2490.0
0.00
2490.0
0.00
2490.0
0.00
2490.0
0.00
2490.0
0.00
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
3.0
0.58
2.9
0.14
2.4
0.16
36s
3718.3
9.24
3710.4
1.36
1.8
0.42
4.0
0.18
2.2
0.13
44s
3716.2
7.39
3709.5
1.35
1.6
0.50
5.1
0.18
2.2
0.13
55s
3705.2
6.05
3712.1
1.37
1.6
0.50
6.4
0.33
2.2
0.20
1m8s
3702.2
5.70
3709.8
1.36
2.5
0.78
8.1
0.47
2.4
0.22
1m25s
3714.6
5.64
3710.3
1.36
2.3
0.67
8.9
0.28
2.2
0.13
1m33s
3719.0
5.29
3710.2
1.36
1.8
0.42
10.8
0.59
2.5
0.22
1m52s
3710.8
6.16
3709.7
1.35
1.2
0.20
10.9
0.30
2.1
0.10
1m53s
3716.0
3.15
3710.3
1.35
1.1
0.10
11.9
0.10
2.0
0.00
2m3s
3704.0
2.05
3708.3
1.34
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
39.0
3.31
12.0
1.20
9.6
0.88
2m8s
1195.2
1.64
1197.0
0.00
38.0
4.62
18.1
1.45
9.9
0.78
3m9s
1196.5
0.47
1197.0
0.00
27.8
3.84
23.3
2.54
9.9
1.08
4m0s
1197.0
0.00
1197.0
0.00
27.5
2.85
28.3
3.20
9.4
1.00
4m50s
1196.8
0.23
1197.0
0.00
31.2
2.82
34.7
2.15
9.6
0.52
5m55s
1197.0
0.00
1197.0
0.00
26.1
2.42
38.8
1.71
9.1
0.38
6m36s
1197.0
0.00
1197.0
0.00
25.3
3.65
44.6
4.58
9.4
0.93
7m33s
1197.0
0.00
1197.0
0.00
29.1
2.40
51.2
4.95
9.5
0.83
8m39s
1197.0
0.00
1197.0
0.00
29.6
3.47
49.2
5.16
8.2
0.84
8m19s
1197.0
0.00
1197.0
0.00
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
30.4
3.19
10.6
0.41
10.2
0.36
1m59s
51.7
0.39
52.2
0.05
31.6
2.05
15.5
0.42
10.3
0.21
2m43s
52.1
0.23
52.2
0.05
31.6
2.06
19.8
0.41
10.0
0.21
3m22s
51.9
0.33
52.2
0.05
31.0
1.32
25.1
0.74
10.4
0.22
4m19s
52.6
0.16
52.2
0.05
33.3
1.96
28.7
0.82
10.0
0.26
4m51s
52.3
0.29
52.2
0.05
29.0
1.06
33.0
0.60
9.9
0.18
5m34s
52.1
0.16
52.2
0.06
35.3
1.16
39.7
0.60
10.3
0.15
6m42s
52.2
0.21
52.2
0.05
31.6
1.86
44.2
0.70
10.4
0.16
7m26s
52.1
0.20
52.2
0.05
30.6
1.61
48.9
0.98
10.3
0.15
8m13s
52.3
0.15
52.2
0.05
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
138.5
14.17
53.3
4.34
33.1
2.61
9m25s
105.0
7.80
73.0
6.31
31.3
2.43
12m24s
112.6
13.74
101.3
8.87
32.9
2.65
17m9s
117.8
8.61
135.3
6.37
35.1
1.64
22m46s
112.8
5.36
153.7
7.75
33.8
1.55
25m51s
107.0
10.03
172.7
14.89
32.2
2.67
29m1s
109.8
8.21
188.8
13.42
30.8
2.07
31m42s
119.6
7.08
246.6
10.28
35.0
1.41
41m20s
100.5
11.16
259.9
17.98
33.9
2.22
43m33s
105
105
105
105
Table 3: Computational Results with SAA (SPRT). (Cnt’d)
33
400
800
900
1000
321.6
0.69
320.9
0.13
322.2
0.64
320.9
0.13
321.0
0.65
320.9
0.13
321.5
0.63
320.8
0.13
320.4
0.85
320.7
0.13
320.4
0.61
320.9
0.13
321.0
0.37
320.8
0.13
320.3
0.41
321.0
0.13
321.4
0.41
320.8
0.13
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
2.9
1.07
2.6
0.32
4.0
0.56
28m25s
2376.8
1.77
2381.8
0.37
3.2
0.62
4.4
0.47
4.4
0.48
28m43s
2379.7
1.30
2382.1
0.37
2.8
0.79
4.9
0.72
3.7
0.37
28m58s
2380.5
1.59
2382.2
0.37
2.6
0.87
6.2
1.05
4.0
0.60
29m14s
2381.4
1.15
2381.3
0.37
2.5
0.71
6.6
0.58
3.5
0.27
29m29s
2383.0
1.45
2382.4
0.37
3.2
1.42
8.2
1.56
3.7
0.60
29m23s
2379.8
0.89
2382.1
0.37
1.1
0.04
7.1
0.54
2.7
0.15
29m43s
2381.7
1.33
2381.8
0.37
1.4
0.14
7.4
0.67
2.5
0.22
29m15s
2380.3
0.94
2381.8
0.37
2.9
0.78
11.0
2.10
3.4
0.54
30m2s
2381.6
1.03
2382.2
0.37
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
1.3
0.17
2.8
0.19
3.8
0.29
36m24s
1302.0
2.14
1306.2
0.29
1.2
0.12
4.3
0.20
3.9
0.18
35m47s
1302.3
1.02
1306.7
0.29
1.4
0.18
5.8
0.27
4.1
0.23
36m29s
1303.9
1.65
1306.6
0.29
1.7
0.37
7.0
0.40
3.6
0.27
36m19s
1302.5
0.81
1307.0
0.29
1.5
0.24
9.4
0.51
4.2
0.33
37m2s
1303.9
0.97
1306.3
0.29
1.0
0.00
10.0
0.34
3.8
0.20
37m7s
1305.9
1.08
1306.9
0.29
1.3
0.17
11.4
0.53
3.8
0.25
37m3s
1306.2
0.52
1306.5
0.29
1.0
0.00
12.2
0.55
3.8
0.33
36m46s
1305.1
0.74
1306.7
0.29
1.2
0.15
14.3
0.81
3.9
0.28
37m42s
1306.3
0.77
1307.1
0.29
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
2.5
0.56
4.7
0.25
4.3
0.21
54m28s
51.2
0.19
51.2
0.02
2.1
0.36
7.4
0.25
4.8
0.20
55m57s
51.1
0.10
51.2
0.02
4.2
0.99
11.7
0.96
5.1
0.38
57m49s
51.3
0.09
51.2
0.02
3.7
0.69
15.0
0.98
5.4
0.31
57m30s
51.2
0.09
51.2
0.02
4.1
0.82
18.4
1.33
5.6
0.45
56m58s
51.4
0.08
51.2
0.02
4.3
0.53
19.4
0.83
4.7
0.15
1h22m
51.2
0.06
51.2
0.02
3.2
0.55
19.8
0.55
4.2
0.20
56m53s
51.2
0.09
51.2
0.02
3.2
0.50
24.8
1.51
5.0
0.26
58m54s
51.3
0.06
51.2
0.02
3.5
0.38
27.6
0.97
5.1
0.10
59m18s
51.2
0.07
51.2
0.02
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
5.7
1.01
18.0
0.98
6.3
0.33
2h39m
810.8
1.00
812.8
0.18
6.5
0.78
26.8
1.06
6.5
0.31
2h39m
812.0
0.94
812.9
0.19
7.6
1.08
35.2
1.20
6.4
0.27
2h41m
811.0
0.80
812.3
0.18
7.5
0.67
45.1
1.02
6.6
0.16
2h43m
810.5
0.51
812.5
0.18
6.9
0.69
54.8
1.62
6.7
0.21
2h44m
811.9
0.59
812.4
0.18
7.7
0.82
64.0
2.35
6.7
0.26
2h47m
812.4
0.45
812.5
0.18
7.4
0.58
74.0
3.27
6.8
0.36
2h47m
811.2
0.54
812.6
0.18
6.4
0.63
79.2
2.69
6.4
0.27
2h47m
812.0
0.26
812.6
0.18
6.3
0.85
88.9
2.94
6.5
0.22
2h50m
811.0
0.70
812.5
0.18
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
4.7
1.01
21.3
1.68
7.1
0.55
2h39m
102.4
0.22
102.8
0.06
5.8
0.96
30.7
2.35
7.4
0.64
2h40m
101.9
0.31
102.6
0.06
4.4
1.07
38.8
2.24
6.6
0.45
2h44m
102.4
0.29
102.8
0.06
4.3
0.89
49.7
2.52
6.6
0.43
2h43m
102.4
0.22
102.6
0.06
5.5
1.04
61.9
4.16
6.9
0.41
2h46m
102.6
0.20
102.8
0.06
4.9
0.78
69.4
4.55
6.8
0.42
2h47m
102.5
0.17
102.8
0.06
5.0
0.63
78.5
3.00
6.7
0.40
2h47m
102.4
0.20
102.8
0.06
6.2
0.69
94.0
2.95
7.1
0.28
2h51m
102.7
0.12
102.7
0.06
4.9
0.65
96.6
6.39
6.6
0.54
2h52m
102.4
0.18
102.7
0.06
st70
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
18.1
3.94
63.8
7.85
13.8
1.55
4h11m
46.1
0.10
46.5
0.02
15.6
4.41
89.3
8.03
13.2
1.52
4h5m
46.4
0.07
46.5
0.02
15.0
1.72
109.3
9.06
12.3
0.94
4h5m
46.4
0.07
46.5
0.02
14.6
2.71
142.3
15.09
12.6
1.38
4h9m
46.4
0.10
46.5
0.02
14.9
1.70
183.5
14.74
14.6
1.23
4h29m
46.4
0.07
46.5
0.02
14.7
2.66
186.1
19.07
12.0
1.32
4h14m
46.4
0.05
46.5
0.02
17.7
1.85
235.6
5.99
13.7
0.50
4h31m
46.4
0.03
46.5
0.02
19.9
3.84
276.4
22.73
14.3
1.19
4h44m
46.4
0.05
46.5
0.02
19.6
3.31
296.1
18.18
13.6
1.06
4h38m
46.4
0.05
46.5
0.02
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
14.0
2.12
51.7
3.23
11.8
0.94
4h23m
16.0
1.34
84.7
4.97
12.5
0.76
4h26m
14.8
1.61
99.8
6.64
11.3
0.73
4h22m
20.9
2.50
141.4
8.12
12.1
0.74
4h35m
19.0
1.77
151.2
9.86
11.2
0.73
4h25m
16.6
2.44
181.4
11.06
11.3
0.56
4h43m
17.9
1.81
200.3
9.12
10.9
0.57
4h34m
15.9
1.63
233.4
14.10
11.4
0.65
6h46m
17.2
1.71
269.5
13.68
11.7
0.70
4h45m
berlin52
eil51
ulysses22
ulysses16
burma14
300
eil76
Number of Sample Scenarios N
500
600
700
200
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
105
105
105
105
105
105
105
Table 4: Computational Results with SAA (SPAF).
34
300
400
800
900
1000
18457.6
10.88
18468.4
2.54
18452.6
11.20
18465.7
2.53
18451.5
11.54
18470.1
2.56
18453.3
9.39
18467.2
2.54
18459.6
9.39
18462.8
2.52
18467.4
10.49
18469.4
2.55
18461.4
7.37
18472.5
2.56
18460.2
8.30
18466.1
2.53
18464.0
8.22
18463.5
2.53
pr76
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
52.3
8.06
75.7
9.17
16.6
2.20
4h7m
7400.2
2.24
7420.1
0.90
49.9
8.84
90.5
10.67
13.5
1.57
4h20m
7420.4
4.28
7422.0
0.91
57.1
11.13
140.2
17.07
15.2
2.58
4h17m
7426.7
2.72
7422.4
0.91
45.3
9.07
135.6
12.42
11.5
1.23
4h18m
7420.0
5.32
7421.9
0.90
82.1
15.33
241.0
31.99
18.1
2.79
4h33m
7419.5
2.93
7422.0
0.90
110.4
22.64
263.9
30.70
16.8
2.31
4h39m
7423.9
4.14
7422.0
0.90
19.8
3.38
154.9
18.16
8.8
0.80
4h21m
7422.5
3.41
7421.6
0.90
68.9
15.02
284.8
32.83
14.0
1.41
4h45m
7426.0
3.56
7421.4
0.90
118.4
41.51
403.6
58.34
18.5
2.50
4h59m
7421.6
3.88
7422.6
0.91
gr96
Number of Sample Scenarios N
500
600
700
200
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
11.9
2.01
41.2
3.16
7.2
1.04
5h5m
206.5
0.20
207.1
0.03
13.7
2.67
72.2
5.74
7.8
0.90
5h11m
206.5
0.17
207.1
0.03
12.8
1.91
92.7
6.31
7.5
1.20
5h14m
206.7
0.17
207.1
0.03
17.2
2.12
144.1
8.48
9.8
0.93
5h19m
206.8
0.13
207.2
0.03
17.1
3.13
130.3
8.77
7.6
1.49
5h19m
206.9
0.09
207.1
0.03
19.3
5.96
178.3
22.10
8.6
1.28
5h29m
206.9
0.08
207.2
0.03
11.7
1.71
189.2
12.69
8.0
1.36
5h28m
207.1
0.09
207.1
0.03
19.3
4.49
213.2
20.89
7.9
1.18
5h33m
207.0
0.09
207.1
0.03
17.3
3.59
230.0
20.17
7.6
0.88
5h37m
207.0
0.14
207.0
0.03
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
250.8
48.49
225.9
21.81
41.2
3.78
5h57m
3021.7
4.59
3023.5
0.73
209.9
28.83
300.8
20.13
36.2
2.35
6h6m
3024.5
4.14
3023.3
0.73
256.2
37.39
447.9
39.99
40.6
3.58
6h26m
3030.5
2.23
3023.8
0.73
222.6
27.23
551.7
48.48
39.1
3.44
6h47m
3022.9
2.91
3023.6
0.73
333.2
53.56
777.0
58.86
46.9
3.12
7h20m
3025.8
3.78
3022.4
0.73
320.7
42.22
843.0
52.29
43.4
2.81
7h36m
3021.9
3.16
3023.5
0.73
287.7
32.59
1003.6
69.79
45.6
3.13
8h1m
3024.1
4.11
3023.7
0.73
319.4
49.46
1104.2
114.70
44.1
4.29
8h19m
3027.1
2.30
3023.1
0.73
297.7
53.34
1109.2
98.97
40.5
3.23
8h26m
3024.8
1.97
3022.8
0.73
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
9.1
2.87
35.6
5.37
5.3
0.93
5h19m
2372.3
2.41
2366.4
0.50
18.3
3.79
83.3
10.14
8.1
1.55
5h27m
2364.7
3.29
2366.7
0.50
11.4
2.69
84.2
10.78
6.1
0.89
5h27m
2367.5
1.68
2366.3
0.50
8.3
2.18
89.2
8.74
5.0
1.17
5h27m
2366.4
2.36
2366.6
0.50
14.8
4.75
113.0
21.77
5.8
1.21
5h33m
2365.9
1.50
2366.6
0.50
14.9
4.62
142.7
18.15
5.9
0.85
5h34m
2366.4
1.94
2366.6
0.50
19.2
5.22
165.2
25.81
6.0
1.57
5h36m
2366.1
2.21
2366.8
0.50
13.7
4.70
168.3
32.64
5.4
1.13
5h41m
2368.6
1.53
2366.5
0.50
8.3
2.47
185.7
38.61
5.3
1.29
5h44m
2364.2
1.36
2366.6
0.50
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
2.3
0.24
20.3
0.54
3.1
0.10
5h19m
2650.3
4.78
2647.8
0.87
2.8
0.61
33.4
2.84
3.5
0.34
5h16m
2646.5
3.84
2649.6
0.87
3.8
0.58
44.8
2.42
3.5
0.22
5h17m
2650.2
4.96
2648.3
0.87
2.5
0.18
50.0
1.35
3.1
0.10
5h21m
2645.1
3.18
2649.9
0.87
2.4
0.29
66.2
3.63
3.4
0.22
5h23m
2643.2
2.83
2647.7
0.87
2.5
0.18
70.8
0.63
3.1
0.10
5h22m
2648.5
3.66
2648.3
0.87
2.1
0.11
79.4
2.17
3.1
0.10
5h23m
2650.3
2.26
2649.2
0.87
2.3
0.13
89.9
0.10
3.0
0.00
5h27m
2656.9
3.32
2648.0
0.87
2.7
0.26
109.2
4.20
3.4
0.16
5h30m
2648.9
3.15
2649.3
0.87
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
34.2
5.28
66.5
5.27
11.5
0.78
5h28m
2384.1
2.67
2382.9
0.44
35.6
4.82
97.9
7.73
11.4
0.88
5h34m
2379.8
2.33
2383.0
0.44
34.7
5.37
129.5
7.29
10.5
0.50
5h42m
2387.7
2.02
2382.7
0.44
26.1
1.60
157.1
6.60
10.8
0.39
5h42m
2383.7
1.70
2382.9
0.44
32.5
2.76
190.3
11.21
10.4
0.45
5h48m
2381.8
1.04
2382.8
0.44
30.8
3.14
210.5
14.26
9.8
0.55
5h58m
2381.0
1.51
2383.5
0.44
30.7
3.14
234.7
13.04
9.9
0.50
5h56m
2382.9
1.14
2382.3
0.43
28.4
2.98
277.9
14.31
10.2
0.51
6h1m
2382.4
1.65
2383.5
0.44
36.4
4.38
346.0
26.37
11.7
0.87
6h15m
2382.4
1.64
2382.9
0.43
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
1.1
0.11
22.7
1.34
3.4
0.22
5h13m
1.0
0.00
30.4
1.15
3.1
0.18
5h13m
1.0
0.00
43.8
1.73
3.4
0.16
5h19m
1.0
0.00
50.8
1.88
3.1
0.18
5h18m
1.0
0.00
58.6
1.77
3.0
0.15
5h24m
1.0
0.00
71.2
1.97
3.1
0.10
5h21m
1.0
0.00
78.1
2.32
3.0
0.15
5h25m
1.0
0.00
94.6
5.39
3.2
0.20
5h29m
1.3
0.13
105.1
5.53
3.1
0.10
5h27m
kroD100
kroC100
kroB100
kroA100
rat99
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
105
105
105
105
105
105
105
Table 4: Computational Results with SAA (SPAF). (Cnt’d)
35
kroE100
rd100
eil101
Number of Sample Scenarios N
500
600
700
200
300
400
800
900
1000
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
3531.5
6.32
3533.1
0.69
3532.2
2.74
3533.5
0.69
3536.8
2.71
3532.9
0.69
3537.5
2.94
3532.8
0.69
3532.5
2.70
3532.8
0.69
3532.1
2.97
3532.9
0.69
3535.3
1.85
3533.8
0.69
3532.2
2.21
3532.4
0.69
3528.8
1.91
3532.4
0.69
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
63.9
17.27
73.6
6.85
13.2
1.15
5h29m
1165.5
1.11
1166.3
0.15
57.0
9.13
103.8
9.56
12.6
0.88
5h31m
1166.0
0.73
1166.4
0.15
59.6
6.17
151.1
9.07
13.8
1.25
5h40m
1165.0
0.94
1166.3
0.15
100.2
11.12
229.4
16.34
16.1
1.43
5h55m
1166.6
0.65
1166.5
0.15
52.8
9.94
181.5
10.45
11.1
0.67
5h46m
1167.6
0.61
1166.5
0.15
62.1
7.71
251.2
20.37
13.2
1.05
5h51m
1166.8
0.58
1166.2
0.15
54.6
11.65
259.2
27.09
12.0
1.28
6h1m
1166.8
0.50
1166.4
0.15
49.8
5.28
286.0
15.96
11.7
0.67
6h2m
1166.3
0.74
1166.4
0.15
63.2
5.85
347.3
16.73
13.0
0.68
6h12m
1166.2
0.42
1166.5
0.15
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
10.3
1.76
41.4
3.95
6.2
0.53
5h19m
44.2
0.15
44.5
0.02
16.9
2.82
74.2
6.98
7.5
0.90
5h24m
44.4
0.15
44.5
0.02
12.8
1.64
95.1
9.37
7.0
0.75
5h29m
44.6
0.07
44.5
0.02
14.9
1.75
132.2
12.62
8.0
0.76
5h37m
44.5
0.10
44.5
0.02
15.7
2.13
140.7
6.50
6.7
0.65
5h36m
44.5
0.10
44.5
0.02
21.6
4.78
185.3
17.20
8.0
1.11
5h47m
44.4
0.06
44.6
0.02
18.1
2.32
209.6
12.31
8.1
0.80
5h47m
44.5
0.09
44.5
0.02
18.0
2.85
220.6
10.67
7.5
0.40
5h51m
44.7
0.06
44.5
0.02
17.9
3.46
250.8
15.13
7.7
0.52
5h57m
44.5
0.04
44.5
0.02
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
35.8
3.94
108.0
9.98
17.0
1.43
6h5m
32.9
3.66
174.4
12.11
19.3
1.27
6h14m
46.7
3.50
237.6
9.76
18.7
0.90
6h24m
40.7
3.74
270.9
17.31
17.1
1.02
6h24m
38.7
3.62
300.8
19.87
16.3
1.01
6h28m
38.5
2.12
379.8
18.08
17.3
0.84
6h44m
40.6
4.88
463.5
32.92
18.3
1.16
6h55m
46.0
1.90
532.0
23.09
19.5
0.92
7h17m
39.2
3.26
517.5
30.66
16.9
0.92
7h5m
105
105
105
Table 4: Computational Results with SAA (SPAF). (Cnt’d)
36
400
800
900
1000
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
1559.1
8.57
1565.0
1.32
1551.7
6.43
1565.0
1.32
1565.4
3.94
1564.0
1.32
1554.0
5.57
1561.3
1.30
1550.4
5.30
1564.0
1.32
1559.2
4.06
1563.7
1.32
1556.9
3.24
1564.7
1.33
1555.8
2.86
1564.2
1.33
1555.5
2.90
1562.7
1.31
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
206.2
17.26
1.7
0.13
37.3
2.91
1m21s
9290.9
31.10
9488.2
7.50
230.6
10.59
2.0
0.11
35.4
2.06
1m23s
9374.3
36.80
9501.4
7.58
232.0
22.47
2.7
0.22
40.4
3.20
1m3s
9408.1
35.54
9529.2
6.87
211.2
19.52
3.0
0.15
40.6
1.64
57s
9439.8
37.19
9493.6
7.50
183.6
22.42
3.1
0.24
36.1
2.78
1m0s
9457.7
27.46
9498.7
7.58
216.3
20.50
4.0
0.14
40.2
1.48
1m17s
9474.0
30.93
9498.2
7.53
205.9
26.43
4.2
0.23
37.9
1.95
1m15s
9457.3
19.30
9500.6
7.59
250.7
16.65
4.8
0.29
39.0
2.37
1m23s
9466.4
27.78
9497.4
7.55
180.7
13.25
4.9
0.25
37.7
2.04
1m10s
9471.5
23.08
9506.2
7.61
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
629.0
74.11
4.5
0.39
50.9
2.83
2m3s
9804.1
47.97
10023.6
6.50
774.2
122.91
5.9
0.72
52.0
4.60
1m58s
9883.3
24.34
10024.1
6.49
859.4
114.02
7.7
0.99
56.3
5.67
2m16s
9921.6
23.11
9985.6
6.50
610.7
67.21
7.1
0.73
55.8
5.61
2m34s
9934.1
18.98
10000.3
6.56
996.1
210.07
10.1
1.24
59.8
3.57
2m18s
9929.3
18.20
10009.6
6.47
968.1
183.49
10.8
1.46
59.5
5.34
2m45s
9883.8
30.12
9989.3
6.51
791.8
86.17
10.6
0.61
57.3
3.04
2m43s
9894.0
14.86
9986.9
6.46
882.4
131.60
11.0
0.85
52.1
3.09
2m55s
9961.9
21.84
9996.5
6.52
811.6
104.75
11.6
1.06
54.2
4.19
2m48s
9920.9
27.93
10000.9
6.55
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
4190.3
546.66
50.6
9.34
132.0
16.23
10m25s
512.4
3.58
550.2
0.65
6089.5
735.61
88.0
14.47
175.3
21.80
16m21s
519.5
1.90
544.1
0.64
5656.8
794.19
90.5
17.24
180.7
18.33
17m8s
519.1
2.62
547.9
0.64
6818.6
757.74
106.7
15.59
187.0
17.13
19m19s
521.1
2.41
547.0
0.64
6317.7
710.35
112.9
12.44
206.5
9.72
20m28s
522.2
2.27
543.8
0.63
6033.4
975.32
97.1
16.01
155.7
14.08
17m24s
523.5
1.78
547.0
0.64
6496.3
861.96
114.1
17.18
177.9
13.38
20m31s
522.7
1.95
544.7
0.64
7602.9
729.32
143.9
13.59
203.5
14.93
25m40s
523.9
2.26
546.7
0.64
6038.7
919.16
129.3
17.00
192.1
12.15
23m7s
526.0
1.69
543.9
0.64
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
10000.0
0.00
1283.6
63.79
250.8
18.40
3h40m
9036.3
19.36
9496.3
8.57
10000.0
0.00
1117.4
69.31
220.0
15.00
3h12m
9141.6
33.09
9495.3
7.83
10000.0
0.00
1170.1
73.93
240.0
11.35
3h20m
9148.0
24.08
9496.8
8.56
10000.0
0.00
1150.9
61.68
239.9
18.00
3h16m
9186.4
32.23
9476.0
8.48
10000.0
0.00
1230.8
64.08
254.4
14.40
3h31m
9188.0
18.85
9469.4
8.45
10000.0
0.00
1315.0
62.83
268.5
15.88
3h45m
9183.1
33.27
9453.1
8.46
10000.0
0.00
1171.6
58.17
242.4
13.07
3h21m
9221.7
18.23
9482.9
8.51
10000.0
0.00
1256.2
51.38
244.2
10.46
3h35m
9200.2
24.77
9482.0
8.47
10000.0
0.00
1337.7
57.49
264.5
12.68
3h49m
9172.4
20.14
9481.0
8.53
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
10000.0
0.00
1018.8
111.23
207.1
31.90
2h55m
747.1
3.84
860.6
0.97
10000.0
0.00
1317.9
99.57
268.3
20.32
3h45m
760.6
4.52
840.8
0.92
10000.0
0.00
1329.9
96.04
280.4
24.36
3h47m
761.4
1.97
839.8
0.92
10000.0
0.00
1210.4
66.56
264.7
20.10
3h27m
762.4
3.01
837.0
0.92
10000.0
0.00
1210.9
73.85
256.0
19.17
3h27m
759.6
3.31
841.0
0.93
10000.0
0.00
1125.2
77.34
219.7
21.56
3h13m
764.5
2.66
836.7
0.92
10000.0
0.00
1371.7
77.50
285.7
23.96
3h54m
769.3
2.32
842.5
0.93
10000.0
0.00
1290.0
74.58
254.6
20.47
3h40m
768.0
2.37
841.3
0.92
10000.0
0.00
1305.6
105.22
253.4
25.90
3h42m
767.6
2.31
834.5
0.91
st70
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
10000.0
0.00
1630.8
92.53
167.1
16.13
4h40m
619.4
3.45
662.6
0.59
10000.0
0.00
1494.2
75.70
164.3
11.87
4h17m
620.6
1.88
667.9
0.70
10000.0
0.00
1654.8
79.70
197.4
16.12
4h44m
624.2
2.11
661.8
0.59
10000.0
0.00
1578.5
87.62
177.3
12.37
4h31m
627.8
1.42
662.7
0.59
10000.0
0.00
1372.8
48.17
155.7
8.42
3h57m
626.4
1.83
659.9
0.59
10000.0
0.00
1508.9
76.61
162.1
13.32
4h20m
630.5
3.31
660.1
0.59
10000.0
0.00
1535.2
64.74
184.1
10.89
4h24m
629.8
2.03
661.0
0.59
10000.0
0.00
1606.5
75.29
178.6
6.75
4h36m
632.3
1.63
661.1
0.59
10000.0
0.00
1549.0
106.10
168.5
15.81
4h26m
631.0
1.44
661.0
0.59
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
10000.0
0.00
3448.6
244.44
154.9
19.90
9h44m
10000.0
0.00
3229.6
80.43
155.0
7.40
9h7m
10000.0
0.00
3167.3
114.45
163.5
8.74
8h57m
10000.0
0.00
3181.0
159.97
179.9
8.22
8h59m
10000.0
0.00
3361.5
87.97
195.8
8.13
9h29m
10000.0
0.00
3201.7
75.36
190.9
13.40
9h3m
10000.0
0.00
3421.0
150.31
195.3
15.37
9h39m
10000.0
0.00
3279.7
131.52
194.0
9.87
9h16m
10000.0
0.00
3147.5
94.59
204.3
11.04
8h54m
berlin52
eil51
ulysses22
ulysses16
burma14
300
eil76
Number of Sample Scenarios N
500
600
700
200
105
105
105
105
105
105
105
Table 5: Computational Results with SAA (TSPRT).
37
300
400
800
900
1000
114408.0
371.11
125067.0
77.59
114314.0
305.30
124206.0
74.98
114500.0
148.97
124144.0
75.91
114491.0
378.42
124426.0
76.86
114851.0
177.70
123658.0
74.59
114930.0
284.87
123586.0
72.42
114932.0
200.05
123609.0
75.06
114944.0
200.64
123480.0
74.39
115104.0
171.58
124419.0
77.06
pr76
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
10000.0
0.00
1360.8
41.90
76.9
4.47
3h55m
59866.4
141.43
62799.3
37.12
10000.0
0.00
1397.0
76.43
85.4
6.34
4h1m
59975.4
133.45
63090.6
35.65
10000.0
0.00
1319.3
50.24
88.1
6.18
3h48m
60301.5
120.02
62857.1
33.84
10000.0
0.00
1275.6
46.37
88.5
5.68
3h41m
60068.8
79.93
63050.7
35.41
10000.0
0.00
1356.4
35.01
91.3
4.16
3h54m
60387.4
102.25
63092.9
37.52
10000.0
0.00
1311.1
49.67
87.3
4.83
3h47m
60457.5
97.40
63086.8
33.11
10000.0
0.00
1243.0
44.86
84.7
4.94
3h35m
60443.3
98.30
63155.8
34.63
10000.0
0.00
1333.6
41.11
90.7
5.99
3h50m
60409.5
65.73
62844.3
37.34
10000.0
0.00
1312.2
42.52
91.9
5.19
3h47m
60509.0
44.94
62837.4
34.39
gr96
Number of Sample Scenarios N
500
600
700
200
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
10000.0
0.00
2478.5
119.07
165.0
17.38
7h4m
1391.5
13.35
1469.1
1.25
10000.0
0.00
2478.5
129.06
162.7
11.33
7h4m
1388.0
4.45
1465.2
1.28
10000.0
0.00
2363.2
83.16
174.0
10.85
6h44m
1405.2
4.73
1470.3
1.20
10000.0
0.00
2343.8
93.99
162.0
8.69
6h41m
1410.1
6.06
1468.5
1.24
10000.0
0.00
2286.7
91.47
164.7
9.36
6h32m
1414.8
3.51
1466.8
1.35
10000.0
0.00
2187.4
110.78
163.5
10.53
6h15m
1408.2
5.25
1462.4
1.33
10000.0
0.00
2379.4
69.26
166.4
7.28
6h47m
1412.5
4.44
1461.0
1.28
10000.0
0.00
2274.8
51.12
169.2
7.06
6h30m
1413.8
2.99
1462.3
1.29
10000.0
0.00
2278.7
78.01
171.6
9.30
6h30m
1417.0
2.18
1464.4
1.24
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
10000.0
0.00
3153.4
96.82
178.3
13.62
8h57m
24141.1
112.48
25807.5
19.23
10000.0
0.00
2768.7
135.33
174.7
10.05
7h53m
24155.7
87.66
25860.3
21.23
9716.7
283.30
2632.0
238.09
170.6
14.07
7h30m
24381.7
68.61
26004.5
20.57
10000.0
0.00
3099.2
106.67
189.2
9.30
8h48m
24388.7
73.86
25697.8
19.10
10000.0
0.00
2949.8
55.83
173.5
9.41
8h23m
24369.9
58.01
25918.0
19.36
10000.0
0.00
2828.6
99.21
176.8
14.63
8h3m
24335.0
73.18
25810.7
17.95
10000.0
0.00
2688.9
120.97
182.5
11.25
7h40m
24448.1
57.04
25793.2
20.03
10000.0
0.00
3067.2
168.82
200.9
19.92
8h42m
24580.2
42.44
25803.5
20.21
10000.0
0.00
2809.8
172.06
176.4
12.77
8h0m
24541.3
62.13
25775.1
20.19
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
10000.0
0.00
2681.1
119.95
154.6
12.74
7h39m
24043.4
106.38
26319.7
18.57
10000.0
0.00
2581.2
103.15
164.6
6.97
7h22m
24506.2
53.81
26732.1
18.21
10000.0
0.00
2497.5
129.11
159.4
11.39
7h8m
24353.1
63.92
26418.0
19.77
10000.0
0.00
2418.5
113.43
156.5
8.58
6h55m
24475.2
47.58
26466.2
20.53
10000.0
0.00
2311.9
94.54
164.0
11.76
6h37m
24407.7
53.02
26286.0
15.80
10000.0
0.00
2439.4
69.39
166.2
6.11
6h58m
24457.3
40.70
26470.1
17.47
10000.0
0.00
2390.2
76.86
169.1
10.99
6h50m
24453.6
42.31
26551.4
21.37
10000.0
0.00
2401.8
98.14
175.6
10.46
6h52m
24655.2
41.62
26356.6
19.26
10000.0
0.00
2263.0
97.48
165.6
10.44
6h29m
24549.0
47.49
26411.4
18.32
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
10000.0
0.00
3114.9
155.59
178.1
11.58
8h51m
23372.6
97.70
24792.5
19.42
10000.0
0.00
3156.8
108.03
198.8
7.43
8h57m
23556.9
69.81
24902.9
19.71
10000.0
0.00
2828.6
88.09
206.0
7.38
8h3m
23516.8
66.86
24732.4
18.33
10000.0
0.00
3107.4
155.32
234.1
13.69
8h49m
23679.1
83.78
24640.2
17.07
10000.0
0.00
2854.8
119.37
205.4
12.46
8h7m
23656.5
59.53
24655.1
18.21
10000.0
0.00
2796.4
71.70
202.3
5.42
7h58m
23680.3
49.01
24681.6
18.19
10000.0
0.00
2936.4
84.19
207.1
8.53
8h21m
23680.1
80.80
24692.3
18.13
10000.0
0.00
2941.2
96.50
218.7
8.57
8h22m
23649.7
33.33
24628.5
17.18
10000.0
0.00
2754.5
99.13
202.3
11.66
7h51m
23710.0
36.78
24660.3
17.19
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
10000.0
0.00
2503.8
137.82
169.7
20.82
7h9m
23713.4
79.48
25719.1
18.00
10000.0
0.00
2597.9
99.35
201.6
12.95
7h25m
23716.1
81.50
25896.5
19.42
10000.0
0.00
2471.9
137.65
226.0
24.52
7h4m
23930.4
64.31
25774.7
20.04
10000.0
0.00
2426.4
100.09
205.9
16.28
6h56m
23977.2
69.41
25538.6
17.54
10000.0
0.00
2278.0
106.83
207.6
13.19
6h31m
23952.7
45.74
25570.3
18.72
10000.0
0.00
2266.1
88.59
188.2
11.80
6h29m
24043.7
39.65
25500.6
16.52
10000.0
0.00
2120.2
54.51
181.0
11.43
6h5m
24096.5
37.34
25629.6
18.67
10000.0
0.00
2383.2
69.71
200.1
11.00
6h49m
24136.7
32.03
25499.0
18.52
10000.0
0.00
2406.5
86.42
220.4
11.35
6h53m
24118.7
35.72
25468.7
19.91
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
10000.0
0.00
3338.1
108.45
195.4
14.15
9h28m
10000.0
0.00
3069.2
139.67
186.6
16.23
8h43m
10000.0
0.00
3087.8
114.60
196.7
7.03
8h46m
10000.0
0.00
3069.9
113.59
208.6
11.08
8h43m
10000.0
0.00
2967.9
91.74
199.3
14.25
8h26m
10000.0
0.00
2812.1
152.52
194.7
13.55
8h0m
10000.0
0.00
2985.9
72.02
205.6
6.11
8h29m
10000.0
0.00
2927.2
119.28
189.6
12.65
8h19m
10000.0
0.00
3055.0
109.84
208.0
10.96
8h41m
kroD100
kroC100
kroB100
kroA100
rat99
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
105
105
105
105
105
105
105
Table 5: Computational Results with SAA (TSPRT). (Cnt’d)
38
kroE100
rd100
eil101
Number of Sample Scenarios N
500
600
700
200
300
400
800
900
1000
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
23424.6
64.11
26007.8
17.93
23712.8
67.07
26134.5
18.23
23644.5
57.41
25894.7
19.75
23745.5
30.93
25773.3
17.60
23826.0
60.42
25933.4
19.82
23744.6
45.91
25963.4
18.04
23766.0
38.93
25766.3
17.57
23856.1
35.65
25843.9
19.43
23800.4
35.55
25775.5
19.43
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
10000.0
0.00
2790.3
118.09
73.2
5.53
7h57m
8903.9
42.51
9557.9
7.01
10000.0
0.00
2719.1
58.24
77.8
2.30
7h45m
8930.1
20.26
9526.7
7.33
10000.0
0.00
2487.3
39.96
75.9
2.98
7h6m
9002.5
31.60
9502.9
7.68
10000.0
0.00
2624.0
64.77
83.9
2.62
7h29m
8958.3
25.14
9486.1
6.59
10000.0
0.00
2613.7
80.87
88.7
4.92
7h27m
9037.0
29.54
9517.4
6.62
10000.0
0.00
2632.9
57.89
83.3
5.56
7h30m
9000.0
22.30
9486.0
6.59
10000.0
0.00
2542.5
88.79
83.3
4.15
7h15m
9044.6
28.43
9500.6
7.35
10000.0
0.00
2446.4
73.44
84.0
3.52
6h59m
9015.0
19.47
9463.5
7.24
10000.0
0.00
2518.9
65.66
86.7
3.51
7h11m
9039.1
16.50
9478.4
7.31
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
z
¯N
σ
ˆ z¯N
z
ˆ105 (ˆ
x∗ )
σ
ˆ zˆ
(ˆ
x∗ )
10000.0
0.00
2573.2
155.82
107.2
5.66
7h20m
701.7
3.37
771.7
0.77
10000.0
0.00
2168.6
92.40
90.0
8.39
6h13m
708.7
0.89
766.5
0.76
10000.0
0.00
2281.4
152.26
122.9
13.21
6h32m
714.7
1.94
768.5
0.76
10000.0
0.00
2165.2
72.75
105.1
11.88
6h12m
712.8
2.58
761.3
0.75
10000.0
0.00
2362.2
126.68
127.3
11.93
6h45m
718.5
3.27
764.4
0.75
10000.0
0.00
2099.5
136.68
122.0
12.74
6h1m
721.0
2.34
761.1
0.75
10000.0
0.00
2057.1
113.71
107.7
10.66
5h54m
720.9
2.61
759.6
0.74
10000.0
0.00
2149.1
114.70
120.7
8.40
6h10m
720.6
1.44
763.8
0.75
10000.0
0.00
2058.1
86.07
117.0
9.46
5h55m
721.2
2.33
763.1
0.75
Nodes
σ
ˆ Nodes
CPU
σ
ˆ CPU
Cuts
σ
ˆ Cuts
Total CPU
10000.0
0.00
4745.7
234.78
72.4
6.64
13h23m
10000.0
0.00
4060.1
151.95
78.2
4.83
11h29m
10000.0
0.00
4354.6
277.75
102.6
8.88
12h18m
10000.0
0.00
3966.6
184.83
94.1
5.59
11h13m
10000.0
0.00
3964.8
157.15
91.1
4.43
11h13m
10000.0
0.00
4261.1
168.96
95.6
6.18
12h3m
10000.0
0.00
4058.6
176.00
107.3
6.37
11h29m
10000.0
0.00
4170.6
138.10
106.7
6.21
11h47m
10000.0
0.00
4051.9
191.90
100.4
8.01
11h28m
105
105
105
Table 5: Computational Results with SAA (TSPRT). (Cnt’d)
39