
Sensitivity analysis for production planning model of an oil company

Thesis

By Yu Da
Supervised by Assoc. Professor Tibor Illés
Eötvös Loránd University
Faculty of Natural Sciences
2004
ACKNOWLEDGEMENT

I take this occasion to thank MOL RT. for giving me the opportunity to join in the research work and for supporting me with a ten-month scholarship.
I am indebted to Assoc. Prof. Illés Tibor, my supervisor, who let me take part in the project of MOL RT. and gave me many pieces of good advice.
Last but not least, I thank my friends Csizmadia Zsolt, Nagy Mariann and Mincsovics Gergely for their help.
CONTENTS

Introduction

1 Linear Programming Problems
  1.1 The primal and dual linear programming problems
  1.2 The basis and basic solution
  1.3 Duality Theory for Linear Programming
  1.4 The Geometry of Linear Programming Problems

2 Linear Programming Pivot Algorithms
  2.1 Simplex Method and its variants
  2.2 Criss-Cross Method
  2.3 Monotonic Build-Up Simplex Algorithm

3 Degeneracy in Linear Programming
  3.1 The Case of Degeneracy
    3.1.1 (Primal) Degeneracy and Nondegeneracy
    3.1.2 Dual Degeneracy and Nondegeneracy
  3.2 Geometry of Degeneracy
  3.3 Anticycling Pivot Rules
    3.3.1 Perturbation method
    3.3.2 Lexicographic Methods
    3.3.3 Bland Pivoting Rule
  3.4 The Alternative Optimal Solutions

4 Parametric Linear Programming
  4.1 The Parametric Cost Problem
  4.2 The Parametric Right-Hand-Side Problem
  4.3 Resolution of Cycling in the Parametric Simplex Algorithms
    4.3.1 Resolution of Cycling in the Parametric Right-Hand-Side Simplex Algorithm
    4.3.2 Resolution of Cycling in the Parametric Cost Simplex Algorithm

5 Sensitivity Analysis
  5.1 The Fundamental Principle of Sensitivity Analysis
  5.2 The sensitivity analysis of coefficients of the objective function
  5.3 The sensitivity analysis of constants on the right-hand side
  5.4 The sensitivity analysis of data in the matrix
    5.4.1 The change of one matrix element
    5.4.2 Adding a new variable
    5.4.3 Adding a new constraint

6 Analysis of an oil production planning model
  6.1 Short analysis of the structure and size of the model
  6.2 Flow chart of the solution by PIMS
  6.3 The oil industrial product planning model
  6.4 Problems with the model and the appropriate analysis
    6.4.1 Local optimum and alternative optimal solutions
    6.4.2 Infeasibility and unboundedness
  6.5 Model collecting
  6.6 User documentation for the analyzing program
INTRODUCTION

Optimization, or mathematical programming, is the branch of mathematics dealing with methods for optimizing some decision variables that are restricted by given constraints. Interest in optimization methods and their applications has become very widespread; they are used in almost all areas of industry. Still, the applications must not be isolated from theory, otherwise the solutions can be inaccurate, inexact or even misleading. Therefore, it is always necessary for applied mathematicians to give mathematical arguments and analysis for each step of the resolution process in order to approximate real-life problems in a correct way.
In this thesis, a production planning model of an oil company and related topics, such as solution methods and important properties of the problem, are introduced. This is a typical example of applying optimization in real life. In order to discuss the model in depth, the specification of linear programming, linear programming pivot algorithms and degeneracy are studied in the first three sections. Afterwards, sensitivity analysis and parametric programming are presented.
Computer programs have become essential tools for modelling and analysis, as in the case studied here. The last part of this thesis includes the documentation of our program, with which it is possible to analyze the model and the solutions. Our program is a general-purpose LP modelling and analyzing tool. The linear programming problem can be described by defining its constraints and variables. The program uses the MPS file format, as the models were given in this format. With the program we can modify the model description, e.g. any value given in the model can be altered, the order of the variables can be changed, and the objective function can be transformed into a constraint. With the program we can read the solutions of the model and check whether a solution really satisfies the constraints. Furthermore, if the solution is outside the feasible solution set, the right-hand side can be changed by a single click to make it feasible.
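To give an idea of the input format, a minimal hypothetical MPS file might look as follows (the names TINY, COST, C1, X1, X2 are invented for this sketch and do not come from the MOL model):

```text
NAME          TINY
ROWS
 N  COST
 L  C1
COLUMNS
    X1        COST      1.0   C1        1.0
    X2        COST      2.0   C1        1.0
RHS
    RHS1      C1        10.0
ENDATA
```

This describes the problem min x1 + 2*x2 subject to x1 + x2 ≤ 10: the ROWS section declares the objective (N) and constraint (L) rows, COLUMNS gives the nonzero matrix entries column by column, and RHS gives the right-hand side.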
1 Linear Programming Problems
In this section we introduce the two forms of linear programming problems, primal and dual. A linear optimization problem consists of optimizing a linear function over a certain domain; the domain is given by a set of linear constraints. Here we give the definitions of basic solution, basis, duality, etc. The feasibility problem is shown, the duality theorems are discussed, and the connection between the geometry and algebra of linear programming problems is also treated in this section.
1.1 The primal and dual linear programming problems

Linear programs are usually stated and analyzed in the standard form:

    (P)    min  c^T x
           s.t. Ax = b
                x ≥ 0

where c is a vector in R^n, b is a vector in R^m, A is an m × n real matrix with rank(A) = m, and x ∈ R^n is the unknown vector. We call this problem the primal linear programming problem.
Simple devices can be used to transform any linear program into this form. For example, given the problem

    min  c^T x
    s.t. Ax ≤ b

we can convert the inequality constraints to equalities by adding a vector of slack variables s and writing:

    min  c^T x
    s.t. Ax + s = b
         s ≥ 0

For the standard formulation, we will assume throughout that m ≤ n. Otherwise, the system Ax = b contains redundant rows or is infeasible.
Given the data c, b and A, which define the standard primal problem, we can state another, closely related problem:

    (D)    max  b^T y
           s.t. A^T y + s = c
                s ≥ 0

where y ∈ R^m and s ∈ R^n, and A is the same m × n real matrix with rank(A) = m. This problem is called the dual linear programming problem.
If the feasible solution set of the linear programming problem in primal form is denoted by P, the primal feasible solution set, then it can be defined as follows:

    P := {x ∈ R^n : Ax = b, x ≥ 0}

and if we denote the feasible solution set of the linear programming problem in dual form by D, the dual feasible solution set, then

    D := {(y, s) ∈ R^{m+n} : A^T y + s = c, s ≥ 0}.
In the pivot algorithms, the linear programming problem in standard form may be written in tableau form (in the next section we will say more about linear programming pivot algorithms):

    |  A  | b |
    | c^T | * |
First let us recall some basic concepts of linear algebra which are necessary for understanding pivot algorithms. For more detail see [4].
Let a_i ∈ R^m be the ith column of the matrix A. Denote the set of indices belonging to the vectors a_i by I. Then I = {1, 2, ..., n}.

Definition 1 A linear combination of the vectors {a_i : i ∈ I} is any vector b ∈ R^m of the form

    b = Σ_{i∈I} λ_i a_i

where λ_i ∈ R for all i ∈ I.
Using this definition the generating system can be defined, which can express any vector in the set.

Definition 2 Let I_G ⊂ I be given. The set {a_i : i ∈ I_G} is called a generating system of {a_i : i ∈ I} if any a_j, j ∈ Ī_G (where Ī_G = I \ I_G), can be expressed as a linear combination of the vectors {a_i : i ∈ I_G}. That means the set {a_i : i ∈ I} is spanned by the vectors {a_i : i ∈ I_G}.
Suppose that I_G ⊂ I contains the indices of a generating system. Then

    a_j = Σ_{i∈I_G} a_{ij} a_i ,   for j ∈ Ī_G.

Using this expression we can produce the following tableau, which is called the short pivot tableau: its rows are indexed by i ∈ I_G, its columns by j ∈ Ī_G, and the entry in row i and column j is the coefficient a_{ij}.
Now let us introduce the primal full pivot tableau, which is used in solving the primal linear programming problems. It extends the short tableau with the columns of the generating vectors themselves: the columns indexed by I_G carry an identity submatrix (a 1 in the own row of each a_i, i ∈ I_G, and 0 elsewhere), while the columns indexed by j ∈ Ī_G carry the coefficients a_{ij}.
In the tableau, an element a_{ij} ≠ 0 is called a pivot element; the row i ∈ I_G is called the pivot row, while the column j ∈ Ī_G is called the pivot column. A pivot operation and pivot step are uniquely defined by the pivot element. The primal pivot operation consists of:
• a series of elementary row operations on the primal pivot tableau, to transform the pivot column into one containing an entry 1 in the pivot row (i ∈ I_G) and an entry 0 in all the other rows;
• exchanging the unit vector obtained in the pivot column (j ∈ Ī_G) by −a_{kj}/a_{ij} when k ≠ i and by 1/a_{ij} when k = i;
• I_G := (I_G \ {i}) ∪ {j} and Ī_G := (Ī_G \ {j}) ∪ {i}.
The result of the pivot operation can be summarized in the following theorem.

Theorem 3 If a_{rs} ≠ 0 then a_r, r ∈ I_G, can be exchanged with a_s, s ∈ Ī_G, in the following way:

    1. a'_{sr} = 1/a_{rs}
    2. a'_{ir} = −a_{is}/a_{rs} ,                    i ∈ I'_G, i ≠ s
    3. a'_{sj} = a_{rj}/a_{rs} ,                     j ∈ Ī'_G, j ≠ r
    4. a'_{ij} = a_{ij} − (a_{rj} a_{is})/a_{rs} ,   i ∈ I'_G, i ≠ s, j ∈ Ī'_G, j ≠ r

The proof can be found in [2], [1].
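As an illustration, rules 1–4 of the theorem can be carried out as one array update. The sketch below (using NumPy, with a hypothetical tableau T; it is not part of the thesis's program) exchanges the row and column of the pivot element of a short pivot tableau:

```python
import numpy as np

def pivot(T, r, s):
    """Exchange basic index r (row r) with nonbasic index s (column s)
    of a short pivot tableau T, pivoting on T[r, s] != 0.

    Implements rules 1-4 of Theorem 3:
      pivot element  ->  1 / a_rs
      pivot column   -> -a_is / a_rs            (i != r)
      pivot row      ->  a_rj / a_rs            (j != s)
      other entries  ->  a_ij - a_rj * a_is / a_rs
    """
    a_rs = T[r, s]
    assert a_rs != 0, "pivot element must be nonzero"
    Tn = T - np.outer(T[:, s], T[r, :]) / a_rs   # rule 4 everywhere
    Tn[r, :] = T[r, :] / a_rs                    # rule 3 (pivot row)
    Tn[:, s] = -T[:, s] / a_rs                   # rule 2 (pivot column)
    Tn[r, s] = 1.0 / a_rs                        # rule 1 (pivot element)
    return Tn
```

Note that applying the same pivot twice restores the original tableau, which is a convenient check of the formulas.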
1.2 The basis and basic solution

Assume that a generating system I_G ⊂ I is given. Obviously, if there exists another set I'_G such that I_G ⊂ I'_G ⊂ I, then I'_G is also a generating system. But if {a_i : i ∈ I_G} is linearly dependent, then {a_i : i ∈ I'_G} cannot be linearly independent.

Definition 4 The set {a_i ∈ R^m : i ∈ I} of vectors, where |I| ≥ 2, is said to be linearly independent if there does not exist a vector a_r, r ∈ I, which can be expressed as a linear combination of the vectors {a_i ∈ R^m : i ∈ I \ {r}}.

Thus, if I is finite, then the smallest generating system is linearly independent.
Definition 5 If a system of vectors {a_i : i ∈ I_B}, I_B ⊆ I, is linearly independent and is also a generating system of {a_i : i ∈ I}, then it is called a basis.

Let A be an m × n real matrix (m ≤ n) that has m linearly independent columns. Let us denote by I_B the set of indices of such m columns, and by A_B the corresponding m × m submatrix of A. In this case A_B is nonsingular, therefore invertible. The columns of A whose indices are not in I_B form an m × (n − m) submatrix A_N of A; the corresponding index set is denoted by I_N.
If a nonhomogeneous linear equation is given as Ax = b, then we may split the unknowns into two different vectors according to the index sets I_B and I_N. The vector x_B contains those variables whose indices are in I_B, so x_B is the corresponding basic vector, while x_N is the nonbasic vector. Thus, x = (x_B, x_N). In a similar way, c = (c_B, c_N), in which c_B is the vector of the basic cost coefficients. Then the nonhomogeneous linear equation can be expressed as
    A_B x_B + A_N x_N = b.

Using the invertibility of A_B we can solve the equation as

    x_B = A_B^{-1} b − A_B^{-1} A_N x_N .

If x_N = 0 and x_B = A_B^{-1} b, then the solution x = (A_B^{-1} b, 0) is called the primal basic solution corresponding to the given basis A_B. If A_B^{-1} b ≥ 0, then the basic solution x = (x_B, x_N) is called a basic feasible solution and A_B is called a primal feasible basis. We write BFS for basic feasible solution.
The vector y = A_B^{-T} c_B ∈ R^m is called the dual basic solution. Let s_N = c_N − A_N^T A_B^{-T} c_B; if the condition s_N ≥ 0 is satisfied, then A_B is called a dual feasible basis, and the solution (y, s) is called a dual feasible solution.
The solutions are optimal if they are both primal and dual feasible.
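For a concrete illustration, consider the hypothetical standard-form problem min −x_1 − 2x_2 subject to x_1 + x_2 + x_3 = 4, x_1 + 3x_2 + x_4 = 6, x ≥ 0 (x_3, x_4 are slacks). The sketch below computes the primal and dual basic solutions for the basis {a_1, a_2} and checks both feasibility conditions:

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([-1.0, -2.0, 0.0, 0.0])

IB, IN = [0, 1], [2, 3]           # 0-based indices of the basic columns
AB, AN = A[:, IB], A[:, IN]

xB = np.linalg.solve(AB, b)       # x_B = A_B^{-1} b        -> [3., 1.]
y = np.linalg.solve(AB.T, c[IB])  # y   = A_B^{-T} c_B
sN = c[IN] - AN.T @ y             # s_N = c_N - A_N^T y

primal_feasible = bool(np.all(xB >= 0))  # A_B is a primal feasible basis
dual_feasible = bool(np.all(sN >= 0))    # A_B is a dual feasible basis
optimal = primal_feasible and dual_feasible
```

Since both conditions hold for this basis, it is optimal; indeed c^T x = b^T y = −5 here, in accordance with the duality theory of the next subsection.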
An equivalent form of the standard linear programming problem, in view of the definitions above, is:

    (P)    min  z = c_B^T A_B^{-1} b + (c_N − A_N^T A_B^{-T} c_B)^T x_N
           s.t. x_B + A_B^{-1} A_N x_N = A_B^{-1} b
                x_B, x_N ≥ 0
The pivot tableau belonging to the basis A_B is as follows:

    |        A_B^{-1} A         |     A_B^{-1} b     |
    | c^T − c_B^T A_B^{-1} A    | −c_B^T A_B^{-1} b  |

1.3 Duality Theory for Linear Programming
The primal and the dual problems are generally two distinct linear programming problems. (It is notable that there is a small class of linear programming problems, called self-dual linear programs, in which a problem is equivalent to its dual.) However, there is a close relation between them.

Theorem 6 (Weak duality theorem) Consider a primal-dual pair of linear programming problems and let x ∈ P be a primal feasible solution and (y, s) ∈ D a dual feasible solution. Then c^T x ≥ b^T y, thus x^T s ≥ 0, and equality holds only if x^T (c − A^T y) = 0.
Corollary 7 The objective value of any primal feasible solution is an upper bound on the maximum value of the dual objective in the dual problem. The objective value of any dual feasible solution is a lower bound on the minimum value of the primal objective in the primal problem. If the primal problem is feasible and its objective function is unbounded below on the primal feasible solution set, then the dual problem cannot have a feasible solution. If the dual problem is feasible and its objective function is unbounded above on the dual feasible solution set, then the primal problem cannot have a feasible solution.
If the optimal solution set of the linear programming problem in primal form is denoted by P*, the primal optimal solution set, it can be defined as follows:

    P* := {x* ∈ P : c^T x* ≤ c^T x, ∀x ∈ P}.

If we denote the optimal solution set of the linear programming problem in dual form by D*, the dual optimal solution set, then

    D* := {(y*, s*) ∈ D : b^T y* ≥ b^T y, ∀(y, s) ∈ D}.

Assume we have x̄ ∈ P, (ȳ, s̄) ∈ D and c^T x̄ = b^T ȳ, thus x̄^T s̄ = 0. In this case x̄ ∈ P* and (ȳ, s̄) ∈ D*. Let us denote the optimal value by ζ. Consequently

    P* = {x ∈ R^n : Ax = b, x ≥ 0, c^T x = ζ}.
The optimality criterion with the basic solution is: if A_B is a primal and dual feasible basis, then

    x* = (x_B, x_N) = (A_B^{-1} b, 0) ∈ P*   and   (y, s) = (A_B^{-T} c_B, c − (c_B^T A_B^{-1} A)^T) ∈ D*.

On the other hand, if the basis is infeasible, we have the following criteria.
Primal infeasibility criterion: if for some i ∈ I_B

    (A_B^{-1} b)_i < 0   and   (A_B^{-1} A)^{(i)} ≥ 0,

where (A_B^{-1} A)^{(i)} denotes the ith row, then the primal problem cannot have a feasible solution, thus P = ∅.
Let us consider this in the pivot tableau belonging to the basis A_B: the primal infeasibility criterion appears as a row i whose entries are all nonnegative (⊕ ... ⊕) while the right-hand-side entry in that row is negative.
Dual infeasibility criterion: if for some j ∈ I_N

    (c − (c_B^T A_B^{-1} A)^T)_j < 0   and   (A_B^{-1} A)_j ≤ 0,

where (A_B^{-1} A)_j denotes the jth column, then the dual problem cannot have a feasible solution, thus D = ∅.
In the pivot tableau belonging to the basis A_B, the dual infeasibility criterion appears as a column j whose entries are all nonpositive while the reduced cost entry of that column is negative.
In conclusion, we have the following optimality criterion. For the detailed proof, see [1], [3], [2].

Theorem 8 (Sufficient optimality criterion) If x* and (y*, s*) are a pair of primal and dual feasible solutions satisfying c^T x* = b^T y*, then x* is an optimal solution of the primal and (y*, s*) is an optimal solution of the dual. Thus (x*)^T s* = 0, and x* ∈ P*, (y*, s*) ∈ D*.

The converse of this sufficient optimality criterion is the strong duality theorem.

Theorem 9 (The strong (fundamental) duality theorem) In a primal-dual pair of linear programming problems, if either the primal or the dual problem has an optimal solution, then the other does also, and the two optimal objective values are equal.
In the weak duality theorem we saw that the minimum value of c^T x is greater than or equal to the maximum value of b^T y. The strong duality theorem states that the minimum value of c^T x is equal to the maximum value of b^T y. This explains the names weak and strong duality theorems.
The optimality criterion can also be written as follows:

    Ax = b,          x ≥ 0
    A^T y + s = c,   s ≥ 0
    x^T s = 0
1.4 The Geometry of Linear Programming Problems

Let us take a look at an important relationship between the algebraic and geometric viewpoints of linear programming problems; it will help us to understand the problems and the solution methods better. The feasible solution set of a linear programming problem defined by the linear constraints is geometrically a convex polyhedron, an intersection of halfspaces. In order to introduce the relationship, it is necessary to define the extreme points, which play an important role in solving problems related to convex polyhedra.
Definition 10 Let Γ be a convex subset of R^n. A point x ∈ Γ is called an extreme point of Γ if it is impossible to express it as a convex combination of two other distinct points in Γ. That means x is an extreme point of Γ iff x_1, x_2 ∈ Γ and 0 < λ < 1 such that x = λx_1 + (1 − λ)x_2 implies that x = x_1 = x_2.

The extreme points of a convex set are the points that do not lie on a straight line between two other points of the set.
In much of the linear programming literature, extreme points defined in an algebraic way are called basic feasible solutions. Here we have a theorem explaining the connection. Before that, let us give a definition of basic feasible solutions in another form.

Definition 11 Let K be the set of feasible solutions of (P). A feasible solution x ∈ K is called a basic feasible solution of (P) iff the set of column vectors of A that x uses is linearly independent, i.e., {a_j : x_j > 0} is a linearly independent set.
The following theorem shows that extreme point and basic feasible solution are the same notion, only defined from different viewpoints. For more detail and the proof see [3], [2], [1].

Theorem 12 Let K be the set of feasible solutions of (P). A point x ∈ K is an extreme point iff it is a basic feasible solution of (P).

On the other hand, if the optimal value of problem (P) is finite, then the optimal value is attained at an extreme point as well.

[Figure: the extreme points of the feasible region.]
2 Linear Programming Pivot Algorithms
In this chapter we introduce three pivot algorithms for linear programming problems. The Simplex Method, which was developed by George B. Dantzig (1947), is widely used today in many linear optimization application areas. The Criss-Cross Method was worked out by Tamás Terlaky (1986) [5]. The last one is the Monotonic Build-Up Simplex Algorithm for Linear Programming (1991) [7], which was established by Kurt M. Anstreicher and Tamás Terlaky.
2.1 Simplex Method and its variants

This is an algorithm that can be used to solve any LP in standard form. Although there are examples, e.g. the Klee–Minty problem, on which the simplex method takes exponentially many iterations, it is still the most widely used method nowadays.
For the simplex method, suppose the starting feasible basic vector is denoted by (x_1, ..., x_m).

Definition 13 The primal linear programming problem is in canonical form if the following two conditions are satisfied:
1. b ≥ 0
2. the matrix A contains an m × m unity submatrix, where A ∈ R^{m×n}, b ∈ R^m, c ∈ R^n, x ∈ R^n.

Primal Simplex Method
Let the primal linear programming problem be given in canonical form.
Initialization
    A_B is a primal feasible basis;
    I_B and I_N are the index sets of the basic and nonbasic indices
begin
    while I_c := {i : c̄_i < 0} ≠ ∅ do
        choose an arbitrary entering variable r ∈ I_c
        if I_r := {i ∈ I_B : ā_ir > 0} = ∅ then
            STOP: D = ∅, the dual infeasibility criterion is satisfied
        else
            K := arg min {b̄_j / ā_jr : ā_jr > 0}
            choose an arbitrary leaving variable s ∈ K
            pivot on ā_sr
        endif
    endwhile
    STOP: the solution is optimal
end
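A compact sketch of this algorithm in Python (NumPy) may make the steps concrete. It is not the thesis's program, just an illustration: it recomputes A_B^{-1} from scratch in each iteration, and replaces the arbitrary choices of the pseudocode with the smallest-index (Bland) choice, so that it cannot cycle:

```python
import numpy as np

def primal_simplex(A, b, c, basis):
    """Primal simplex sketch for min c^T x, Ax = b, x >= 0.
    `basis` lists the column indices of a primal feasible basis."""
    m, n = A.shape
    basis = list(basis)
    while True:
        AB = A[:, basis]
        xB = np.linalg.solve(AB, b)            # current basic values
        y = np.linalg.solve(AB.T, c[basis])    # simplex multipliers
        s = c - A.T @ y                        # reduced costs c-bar
        entering = [j for j in range(n) if j not in basis and s[j] < -1e-12]
        if not entering:                       # all reduced costs >= 0
            x = np.zeros(n); x[basis] = xB
            return x, basis                    # optimal
        r = min(entering)                      # entering variable (Bland)
        d = np.linalg.solve(AB, A[:, r])       # pivot column a-bar_r
        if np.all(d <= 1e-12):                 # dual infeasibility criterion
            raise ValueError("unbounded: D is empty")
        ratios = [(xB[i] / d[i], basis[i], i) for i in range(m) if d[i] > 1e-12]
        _, _, i = min(ratios)                  # minimum ratio test, ties by index
        basis[i] = r                           # leaving variable replaced

# Demo: min -x1 - 2x2  s.t.  x1 + x2 + x3 = 4,  x1 + 3x2 + x4 = 6,  x >= 0.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([-1.0, -2.0, 0.0, 0.0])
x, basis = primal_simplex(A, b, c, [2, 3])     # start from the slack basis
```

Starting from the canonical-form slack basis {a_3, a_4}, the method reaches the optimum x = (3, 1, 0, 0) with objective value −5 in two pivots.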
Let us introduce a definition used in the algorithm.

Definition 14 The ratio b̄_s / ā_sr = min {b̄_j / ā_jr : ā_jr > 0} is called the minimum ratio test.
Obviously 0 ≤ b̄_l / ā_lr ≤ b̄_j / ā_jr for all j ∈ I_r, thus the value of the objective function does not increase after a pivot iteration. Consider the pivot tableau before the pivot iteration: the pivot column r contains ā_lr in row l and the reduced cost c̄_r, while the right-hand-side column contains b̄_l in row l and the objective cell contains z_0.
Suppose we choose ā_lr as the pivot element; according to the algorithm, we have c̄_r < 0, ā_lr > 0 and b̄_l ≥ 0. After the pivot iteration we obtain z^new = z_0 − c̄_r b̄_l / ā_lr; since c̄_r b̄_l / ā_lr ≤ 0, this results in z^new ≥ z_0. Since we read out the value of the objective function with a negative sign, the increase here means a decrease in reality. Hence in each iteration the objective value does not increase.
The primal simplex algorithm starts with a primal feasible basis and moves to a terminal basis by walking through a sequence of primal feasible bases along the edges of the set of feasible solutions. All the bases obtained in the primal algorithm, with the possible exception of the terminal basis, are dual infeasible. At each pivot iteration, the primal algorithm makes an attempt to reduce dual infeasibility while retaining primal feasibility.
However, the dual simplex algorithm does just the opposite.
Dual Simplex Method
The dual simplex method starts with a dual feasible, but not necessarily primal feasible, basis. At each pivot iteration, this algorithm tries to reduce primal infeasibility while retaining dual feasibility. If a primal feasible basis is reached, the dual simplex algorithm terminates.

Initialization
    A_B is a dual feasible basis;
    I_B and I_N are the index sets of the basic and nonbasic indices
begin
    while I_r := {i : b̄_i < 0} ≠ ∅ do
        choose an arbitrary leaving variable r ∈ I_r
        if I_c := {i ∈ I_N : ā_ri < 0} = ∅ then
            STOP: P = ∅, the primal infeasibility criterion is satisfied
        else
            K := arg min {−c̄_j / ā_rj : ā_rj < 0}
            choose an arbitrary entering variable s ∈ K
            pivot on ā_rs
        endif
    endwhile
    STOP: the solution is optimal
end
In analogy with the primal method, for the dual simplex algorithm we also define the dual simplex minimum ratio test, which is used in the algorithm.

Definition 15 The minimum ratio

    −c̄_s / ā_rs = min {−c̄_j / ā_rj : ā_rj < 0}

is called the dual simplex minimum ratio test.
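The dual method can be sketched in the same style (again an illustration, not the thesis's program). The example constraints x_1 + 2x_2 ≥ 2 and 3x_1 + x_2 ≥ 3 are hypothetical, chosen so that the all-surplus basis is dual feasible but primal infeasible:

```python
import numpy as np

def dual_simplex(A, b, c, basis):
    """Dual simplex sketch: start from a dual feasible basis and
    reduce primal infeasibility while keeping the reduced costs >= 0."""
    m, n = A.shape
    basis = list(basis)
    while True:
        AB = A[:, basis]
        xB = np.linalg.solve(AB, b)
        neg = [i for i in range(m) if xB[i] < -1e-12]
        if not neg:                                   # primal feasible: optimal
            x = np.zeros(n); x[basis] = xB
            return x, basis
        i = min(neg, key=lambda k: basis[k])          # leaving row
        y = np.linalg.solve(AB.T, c[basis])
        s = c - A.T @ y                               # reduced costs (all >= 0)
        row = np.linalg.solve(AB.T, np.eye(m)[i]) @ A # row i of A_B^{-1} A
        J = [j for j in range(n) if j not in basis and row[j] < -1e-12]
        if not J:                                     # primal infeasibility criterion
            raise ValueError("P is empty")
        t = min(J, key=lambda j: (-s[j] / row[j], j)) # dual minimum ratio test
        basis[i] = t

# Hypothetical demo: min x1 + x2, s.t. x1+2x2 >= 2, 3x1+x2 >= 3 (surplus x3, x4).
A = np.array([[1.0, 2.0, -1.0, 0.0],
              [3.0, 1.0, 0.0, -1.0]])
b = np.array([2.0, 3.0])
c = np.array([1.0, 1.0, 0.0, 0.0])
x, basis = dual_simplex(A, b, c, [2, 3])      # surplus basis: dual feasible
```

Here the surplus basis gives x_B = (−2, −3), and the method restores primal feasibility in a few pivots, ending at x = (0.8, 0.6, 0, 0).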
There is a large body of specialized literature on simplex methods; here we do not go into further detail. Let us see the next pivot algorithm, the Criss-Cross method.
2.2 Criss-Cross Method
This algorithm was introduced by Tamás Terlaky [5]. Although it is not known whether the Criss-Cross algorithm performs well numerically, it is the first algorithm which solves a general primal linear programming problem in one phase, starting from any, not necessarily primal or dual feasible, basis.
If the basic solution is optimal we are done; otherwise a primal or dual infeasible variable is chosen. After pivoting we obtain a new basis in which this variable becomes both primal and dual feasible. The process is repeated until optimality is reached or the condition of primal or dual infeasibility is established (see [6]).
Criss-Cross Algorithm
Let the initial basis A_B be given. Let I_B and I_N denote the basic and nonbasic indices, respectively (I = I_B ∪ I_N).

Initialization
    A_B is an arbitrary initial basis;
    I_B and I_N are the index sets of the basic and nonbasic indices
begin
    while Ī := {i : b̄_i < 0 or c̄_i < 0} ≠ ∅ do
        r := min {i ∈ Ī}
        if r ∈ I_N then
            if I_r := {i ∈ I_B : ā_ir > 0} = ∅ then
                STOP: D = ∅, the dual infeasibility criterion is satisfied
            else
                s := min {j ∈ I_r} and I_B := (I_B \ {s}) ∪ {r}
                pivot on position (s, r)
            endif
        else (r ∈ I_B)
            if I_r := {i ∈ I_N : ā_ri < 0} = ∅ then
                STOP: P = ∅, the primal infeasibility criterion is satisfied
            else
                s := min {j ∈ I_r} and I_B := (I_B \ {r}) ∪ {s}
                pivot on position (r, s)
            endif
        endif
    endwhile
    STOP: the current solution solves the problem
end
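The least-index rule above can be sketched in a few lines of Python (an illustration only; the demo problem min −x_1 − 2x_2 with the two constraints below is hypothetical):

```python
import numpy as np

def criss_cross(A, b, c, basis):
    """Least-index criss-cross sketch: starts from an arbitrary basis and
    pivots on the smallest primal- or dual-infeasible variable index."""
    m, n = A.shape
    basis = list(basis)
    while True:
        AB = A[:, basis]
        xB = np.linalg.solve(AB, b)
        y = np.linalg.solve(AB.T, c[basis])
        s = c - A.T @ y
        bad = sorted([basis[i] for i in range(m) if xB[i] < -1e-12] +
                     [j for j in range(n) if j not in basis and s[j] < -1e-12])
        if not bad:                                   # optimal basis reached
            x = np.zeros(n); x[basis] = xB
            return x, basis
        r = bad[0]                                    # smallest infeasible index
        T = np.linalg.solve(AB, A)                    # current tableau A_B^{-1} A
        if r not in basis:                            # r is dual infeasible
            I = [i for i in range(m) if T[i, r] > 1e-12]
            if not I:
                raise ValueError("D is empty: dual infeasibility criterion")
            basis[min(I, key=lambda i: basis[i])] = r
        else:                                         # r is primal infeasible
            i = basis.index(r)
            J = [j for j in range(n) if j not in basis and T[i, j] < -1e-12]
            if not J:
                raise ValueError("P is empty: primal infeasibility criterion")
            basis[i] = min(J)

# Hypothetical demo, started from the slack basis (here already primal feasible):
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([-1.0, -2.0, 0.0, 0.0])
x, basis = criss_cross(A, b, c, [2, 3])
```

Note that, unlike the simplex variants, the intermediate bases visited by this rule may be neither primal nor dual feasible.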
By now there are several variants of the finiteness proof for criss-cross type pivot rules; see e.g. [8].
2.3 Monotonic Build-Up Simplex Algorithm
This algorithm begins with a basic feasible solution. For any nonbasic variable having a negative reduced cost, the pivot rule produces a sequence of pivots such that ultimately the originally chosen nonbasic variable enters the basis, while all reduced costs which were originally nonnegative remain nonnegative. The pivot rule thus monotonically builds up to a dual feasible, and hence optimal, basis. For more details see [7].
The MBU Pivot Rule
Given any feasible basis, for problem (P) the pivot rule is:

Initialization
    A_B is an initial primal feasible basis;
    I_B and I_N are the index sets of the basic and nonbasic indices
begin
    while Ī := {i : c̄_i < 0} ≠ ∅ do
        choose s ∈ Ī as driving variable; θ_1 := 1, θ_2 := 0
        while θ_1 > θ_2 do
            if ā_is ≤ 0 for i = 1, ..., m then
                STOP: D = ∅, the dual infeasibility criterion is satisfied
            else
                let r := arg min {b̄_i / ā_is : ā_is > 0} and θ_1 := |c̄_s| / ā_rs
            endif
            if ā_rj ≥ 0 for every j with c̄_j ≥ 0 then
                θ_2 := +∞
            else
                let t := arg min {c̄_j / |ā_rj| : c̄_j ≥ 0 and ā_rj < 0} and θ_2 := c̄_t / |ā_rt|
            endif
            if θ_1 > θ_2 then
                pivot on ā_rt, with x_t as the blocking variable
            else
                pivot on ā_rs
            endif
        endwhile
    endwhile
    STOP: the current solution solves the problem
end

The result of a pivot in the first branch of the last "if" in the algorithm will be a basis which is neither primal nor dual feasible. This pivot selection can also be viewed as a "dual" pivot, since the purpose of such a pivot is to maintain as much dual feasibility as currently exists.
The Dual MBU Pivot Rule
The dual MBU rule can easily be derived from the primal rule, and has the "build-up" feature with respect to the number of nonnegative primal variables. Assume an initial dual feasible basis is available.

Initialization
    A_B is an initial dual feasible basis;
    I_B and I_N are the index sets of the basic and nonbasic indices
begin
    while Ī := {i : b̄_i < 0} ≠ ∅ do
        choose r ∈ Ī as driving row; θ_1 := 1, θ_2 := 0
        while θ_1 > θ_2 do
            if ā_rj ≥ 0 for j = 1, ..., n then
                STOP: P = ∅, the primal infeasibility criterion is satisfied
            else
                let s := arg min {c̄_j / |ā_rj| : ā_rj < 0} and θ_1 := b̄_r / ā_rs
            endif
            if ā_is ≤ 0 for every i with b̄_i ≥ 0 then
                θ_2 := +∞
            else
                let q := arg min {b̄_i / ā_is : b̄_i ≥ 0 and ā_is > 0} and θ_2 := b̄_q / ā_qs
            endif
            if θ_1 > θ_2 then
                pivot on ā_qs, with row q as the blocking row
            else
                pivot on ā_rs
            endif
        endwhile
    endwhile
    STOP: the current solution solves the problem
end
Certainly, there are also many other linear programming algorithms, for example interior point methods and the ellipsoid method. Some of them can be applied more effectively; others give us an exact theoretical background. We do not go into details about them here. However, all pivot algorithms can suffer from degeneracy, which is treated in the next section.
3 Degeneracy in Linear Programming
In the preceding section several linear programming pivot algorithms were introduced. However, all of them have the same weak point, degeneracy, which can cause an algorithm to cycle so that it never terminates. Its description can be found in much of the literature, e.g. [11], [3], [1], etc.
In the first part of this section we discuss degeneracy in detail, also in a geometric sense. After that, several anticycling pivot rules are introduced. Finally, one important consequence of degeneracy, alternative optimal solutions, is discussed.
3.1 The Case of Degeneracy

In the first section we gave the linear programming problem in the standard form

    (P)    min  c^T x
           s.t. Ax = b
                x ≥ 0

where c and x are vectors in R^n, b is a vector in R^m, and A is an m × n real matrix with rank(A) = m.
Degeneracy in linear programming can be classified as primal and dual.
3.1.1 (Primal) Degeneracy and Nondegeneracy
A basic feasible solution (BFS) of a linear programming problem in standard form is said to be (primal) degenerate if the value of at least one of the basic variables is zero. Suppose x* is a BFS of (P), and the rank of A is m. Since any set of m + 1 or more column vectors in R^m is linearly dependent, the number of positive variables x*_j cannot exceed m. The solution x* is said to be a nondegenerate BFS of (P), or primal nondegenerate BFS, if exactly m of the x*_j are positive. If the number of positive x*_j is less than m, x* is known as a primal degenerate BFS of (P).
If x* is a nondegenerate BFS of (P), the column vectors of A corresponding to positive x*_j form a basis of (P) for which x* is the associated basic solution.
If x* is a degenerate BFS of (P), the set of column vectors of A corresponding to positive x*_j is linearly independent, but does not consist of enough vectors to form a basis. It is possible to expand this set with some more column vectors of A to a basis.
Hence a nondegenerate BFS has a unique basis associated with it; a degenerate BFS may have many bases associated with it.
A primal feasible basis A_B of (P) is said to be a primal nondegenerate basis if A_B^{-1} b > 0. It is said to be a primal degenerate basis if at least one of the components of the vector A_B^{-1} b is zero. In the pivot tableau this appears as a right-hand-side column whose entries are nonnegative but contain a zero in some row i.
In a degenerate BFS, if we perform the standard pivot operation to improve the objective value, the minimum ratio can turn out to be zero. When this happens, the pivot operation just changes the basic vector, but the BFS corresponding to it and the objective value remain unchanged. The same might happen in the next pivot step, and after a series of such pivot steps we might return to the basic vector which started these degenerate pivot steps, e.g. A_B1 → A_B2 → ... → A_B1. In such a case, it is said that the algorithm cycles due to degeneracy.
In a nondegenerate BFS the values of all the basic variables are positive. Therefore, the minimum ratio in a pivot step from a nondegenerate BFS is always positive, and such a step in the course of the simplex algorithm is guaranteed to result in a strict decrease in the objective value. Since the objective value never increases in the course of the algorithm, once a strict decrease occurs, we will never go back to any previous basis in subsequent steps. Cycling can never begin at a nondegenerate BFS.
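The degenerate case is easy to observe numerically. In the hypothetical system below, the constraints x_1 + x_2 + x_3 = 2 and x_1 − x_2 + x_4 = 2 meet the basis {a_1, a_2} at a point where one basic variable vanishes:

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, -1.0, 0.0, 1.0]])
b = np.array([2.0, 2.0])

basis = [0, 1]                        # columns a_1 and a_2
xB = np.linalg.solve(A[:, basis], b)  # basic values -> [2., 0.]
degenerate = bool(np.any(np.isclose(xB, 0.0)))  # a zero basic variable
```

Here x_B = (2, 0) ≥ 0, so the basis is feasible, yet a basic variable is zero: the BFS is degenerate, and a minimum ratio test at this basis can return zero, so a pivot may leave the solution and the objective value unchanged.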
3.1.2 Dual Degeneracy and Nondegeneracy

A basis A_B of (P) is said to be dual degenerate if the reduced cost coefficient of at least one nonbasic variable is zero in the canonical tableau with respect to the basis A_B.
A basis A_B of (P) is said to be dual nondegenerate if the reduced cost coefficients of all the nonbasic variables are nonzero in the canonical tableau with respect to the basis A_B.
In the pivot tableau, dual degeneracy appears as a zero entry in the reduced cost row at some nonbasic position j.
A linear programming problem is said to be dual nondegenerate if in every solution (y*, s*) of A^T y* + s* = c, in which the variables are y* and s*, at least n − m of the s*_j are nonzero; otherwise, it is dual degenerate. If (P) is dual nondegenerate, then the reduced cost coefficients of all the nonbasic variables will be nonzero in the canonical tableau with respect to any basis for (P).
A sufficient condition for (P) to have a unique optimal solution is that there exists an optimal basis for (P) that is dual nondegenerate.
Suppose an optimal solution is obtained when a linear programming problem is solved. If some of the nonbasic variables have zero reduced cost coefficients in the final optimal canonical tableau, there is dual degeneracy. In this case, alternative optimal solutions for the problem can be obtained by bringing a nonbasic variable with zero reduced cost coefficient into the basis. We will discuss alternative optimal solutions later.
3.2 Geometry of Degeneracy
A primal feasible basis A_B of (P) is said to be a primal nondegenerate basis if all the components of the vector A_B^{-1} b are nonzero. The linear programming problem is said to be totally (primal) nondegenerate if every basis for (P) is nondegenerate.

Equivalently, (P) is totally nondegenerate if b cannot be expressed as a linear combination of any subset of m − 1 or fewer column vectors of A, that is, if b does not lie in any subspace that is the linear hull of a set of m − 1 or fewer column vectors of the matrix A.




Example 16

    A = [ 1  0  0  0 ]          [  1 ]
        [ 0  1  0  1 ] ,   b =  [ −1 ]
        [ 0  0  1  1 ]          [  0 ]
In this example, b can be expressed as a linear combination of a1 and a2. Hence this problem is primal degenerate, and any basis for this problem that includes a1 and a2 as basic vectors is a degenerate basis. The problem becomes primal nondegenerate if b does not lie on any of the four planes in the picture.
[Figure: the column vectors a1, a2, a3, a4 of A and the vector b drawn in the (x1, x2, x3) coordinate system; the planes are the linear hulls of pairs of column vectors.]
The problem (P) is degenerate if b is in a subspace that is the linear hull of a set of m − 1 or fewer column vectors of A. There are at most Σ_{r=1}^{m−1} (n choose r) subsets of column vectors of A with m − 1 or fewer column vectors in them. The linear hull of each of these subsets is a subspace of dimension m − 1 or less. The problem (P) is degenerate if b lies in one of these subspaces. When A is fixed, the set of all b that make (P) degenerate is the set of all b that lie in a finite number of subspaces of dimension m − 1 or less. Thus almost all b make (P) nondegenerate. Also, if (P) is degenerate, there are methods that make (P) nondegenerate; for example, b can be perturbed to a point in its neighborhood that is not in any of the finite number of subspaces. We will discuss these methods later.
We know that if a basic solution is nondegenerate, it corresponds to a unique feasible basis; however, if it is degenerate, it is determined by several feasible bases. As a consequence of this relationship, a linear programming problem in standard form has at most (n choose m) feasible bases, so the number of its basic solutions cannot exceed (n choose m); hence the polyhedron does not have more than (n choose m) vertices.
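The (n choose m) bound can be checked by brute force on a small instance: enumerate every m-subset of columns, keep the nonsingular ones, and test whether the basic solution is nonnegative. The data below are illustrative, not from the thesis.

```python
import itertools
import numpy as np

def feasible_bases(A, b, tol=1e-9):
    """Enumerate all primal feasible bases of {x : Ax = b, x >= 0}
    by trying every m-subset of columns (at most C(n, m) of them)."""
    m, n = A.shape
    bases = []
    for cols in itertools.combinations(range(n), m):
        A_B = A[:, list(cols)]
        if abs(np.linalg.det(A_B)) < tol:
            continue                       # singular: not a basis
        x_B = np.linalg.solve(A_B, b)
        if (x_B >= -tol).all():
            bases.append(cols)
    return bases

# Small hypothetical example with n = 4, m = 2 (at most C(4, 2) = 6 bases).
A = np.array([[1.0, 0.0, 1.0, 1.0],
              [0.0, 1.0, 1.0, 0.0]])
b = np.array([1.0, 1.0])
print(len(feasible_bases(A, b)), "feasible bases out of at most 6")
```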
As we know, P* = { x ∈ R^n | Ax = b, x ≥ 0, c^T x = ζ }. So the set of optimal solutions of a linear programming problem is a convex polyhedron.

It is easy to prove that if the problem has more than one optimal solution, then it is dual degenerate.
Theorem 17 Let a problem (P) be given and suppose that the optimal value is attained at several vertices. Then every convex combination of these optimal solutions is also an optimal solution, and the problem (P) is dual degenerate.

How many optimal solutions does a linear programming problem have? The following corollary gives an answer to this question.
Corollary 18 Let a problem (P) be given and assume that the feasible solution set P is bounded. If there are at least two optimal solutions, then there are infinitely many optimal solutions, too.
3.3 Anti-cycling Pivot Rule
We know that if the linear programming problem is degenerate, then the algorithm may cycle. That means the algorithm goes through a series of bases and returns to the basis which started the degenerate pivot steps; therefore the iteration never ends. Several methods have been developed to prevent this phenomenon, e.g. the perturbation method, the lexicographic method, and Bland's pivoting rule.

First let us see a cycling example.
Example 19 Let us see an example for cycling and the perturbation method:

maximize    10x1 − 57x2 − 9x3 − 24x4
subject to  0.5x1 − 5.5x2 − 2.5x3 + 9x4 + x5 = 0
            0.5x1 − 1.5x2 − 0.5x3 +  x4 + x6 = 0
             x1                         + x7 = 1
            x1, ..., x7 ≥ 0
The initial tableau is:

1.    x1    x2    x3    x4   x5  x6  x7    b
x5   0.5  −5.5  −2.5     9    1   0   0    0
x6   0.5  −1.5  −0.5     1    0   1   0    0
x7    1     0     0      0    0   0   1    1
c    10   −57    −9    −24    0   0   0    0
The pivot rules:
(1) The entering variable is always the nonbasic variable that has the largest positive coefficient in the objective function row.
(2) If two or more basic variables compete for leaving the basis, then the candidate with the smallest subscript leaves.

Now the entering variable should be x1.
After the first iteration:
2.   x1   x2   x3    x4   x5  x6  x7    b
x1    1  −11   −5    18    2   0   0    0
x6    0    4    2    −8   −1   1   0    0
x7    0   11    5   −18   −2   0   1    1
c     0   53   41  −204  −20   0   0    0

After the second iteration:

3.   x1  x2    x3   x4     x5      x6   x7    b
x1    1   0   0.5   −4  −0.75    2.75    0    0
x2    0   1   0.5   −2  −0.25    0.25    0    0
x7    0   0  −0.5    4   0.75   −2.75    1    1
c     0   0  14.5  −98  −6.75  −13.25    0    0
After the third iteration:
4.   x1   x2  x3  x4   x5    x6  x7    b
x3    2    0   1  −8  −1.5   5.5   0    0
x2   −1    1   0   2   0.5  −2.5   0    0
x7    1    0   0   0    0     0    1    1
c   −29    0   0  18   15   −93    0    0
After the fourth iteration:
5.    x1   x2   x3  x4    x5     x6   x7    b
x3    −2    4    1   0   0.5   −4.5   0     0
x4   −0.5  0.5   0   1  0.25  −1.25   0     0
x7     1    0    0   0    0      0    1     1
c    −20   −9    0   0  10.5  −70.5   0     0
After the fifth iteration:
6.   x1    x2    x3   x4  x5  x6  x7    b
x5   −4     8     2    0   1  −9   0    0
x4   0.5  −1.5  −0.5   1   0   1   0    0
x7    1     0     0    0   0   0   1    1
c    22   −93   −21    0   0  24   0    0
After the sixth iteration:
7.    x1    x2    x3    x4   x5  x6  x7    b
x5   0.5  −5.5  −2.5     9    1   0   0    0
x6   0.5  −1.5  −0.5     1    0   1   0    0
x7    1     0     0      0    0   0   1    1
c    10   −57    −9    −24    0   0   0    0
Since the basic variables after the sixth iteration are the same as in the initial basis, the method will go through the same six iterations again and again. This is known as cycling. Cycling can occur only in the presence of degeneracy: since the value of the objective function increases with each nondegenerate iteration and remains unchanged after each degenerate one, all the iterations in the sequence leading from a basic variable set back to itself must be degenerate.
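The six iterations above can be replayed mechanically. The following sketch (our own minimal tableau bookkeeping, implementing pivot rules (1) and (2)) runs Example 19 and observes that the initial basis reappears after six pivots:

```python
import numpy as np

def pivot(T, r, c):
    """Standard pivot on tableau row r, column c (last row = cost, last col = b)."""
    T[r] /= T[r, c]
    for i in range(T.shape[0]):
        if i != r:
            T[i] -= T[i, c] * T[r]

# Example 19 (maximization): three constraint rows, then the objective row.
T = np.array([
    [0.5, -5.5, -2.5,   9.0, 1, 0, 0, 0],
    [0.5, -1.5, -0.5,   1.0, 0, 1, 0, 0],
    [1.0,  0.0,  0.0,   0.0, 0, 0, 1, 1],
    [10., -57., -9.,  -24.0, 0, 0, 0, 0],
])
basis = [4, 5, 6]                          # x5, x6, x7 (0-indexed variables)
seen = [tuple(basis)]
for step in range(6):
    cost = T[-1, :-1]
    c = int(np.argmax(cost))               # rule (1): largest positive coefficient
    ratios = [(T[i, -1] / T[i, c], basis[i], i)
              for i in range(3) if T[i, c] > 1e-9]
    _, _, r = min(ratios)                  # rule (2): ties by smallest subscript
    pivot(T, r, c)
    basis[r] = c
    seen.append(tuple(basis))
print(seen[0] == seen[6])                  # the initial basis reappears: cycling
```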
3.3.1 Perturbation method
Suppose a degenerate linear programming problem is given. One way to avoid cycling of the algorithm is to modify the data a little; such methods are called perturbation methods.

The basic variables currently at zero would most likely assume small nonzero values if the right-hand side b changed slightly. If the change is very small, the problem can be considered unchanged for all practical purposes.
One such perturbation that makes (P) nondegenerate is to replace b by

b(ε) = b + (ε, ε², ..., ε^m)^T,

where ε is an arbitrarily small positive number.
Theorem 20 Given any b ∈ R^m, there exists a positive number ε0 such that whenever 0 < ε < ε0, the perturbed problem

(Pε)   min c^T x,   Ax = b(ε),   x ≥ 0

is primal nondegenerate.

Proof: Let A_B1, A_B2, ..., A_Bl be all the bases for (Pε); then l ≤ (n choose m), the number of bases. Let A_Br^{-1} = (β^r_ij). Since A_Br is regular, (β^r_i1, ..., β^r_im) ≠ 0 for i = 1...m. Let b̄^r = A_Br^{-1} b. Then

A_Br^{-1} b(ε) = ( b̄^r_1 + β^r_11 ε + ... + β^r_1m ε^m , ... , b̄^r_m + β^r_m1 ε + ... + β^r_mm ε^m )^T.
For i = 1...m and r = 1...l, the polynomial b̄^r_i + β^r_i1 η + ... + β^r_im η^m is a nonzero polynomial in η. Therefore it has at most m roots. If ε is not equal to one of these roots, then b̄^r_i + β^r_i1 ε + ... + β^r_im ε^m ≠ 0. The collection of the roots of all these polynomials in η is a finite set consisting of at most lm² real numbers. Hence it is possible to choose a positive number ε0 such that no root lies in the open interval 0 < ε < ε0. By the construction of this set and of the positive number ε0, it is clear that in any basic solution of (Pε) all the basic variables are nonzero and, hence, that (Pε) is nondegenerate whenever 0 < ε < ε0.

Suppose (P) is degenerate. The general strategy is as follows. Replace b by b(ε) and get the perturbed problem. We know that there exists a positive number ε0 such that (Pε) is nondegenerate for all 0 < ε < ε0. Thus, for ε positive but arbitrarily small, (Pε) can be solved by applying the simplex algorithm with no possibility of cycling occurring. It is not necessary to give ε any specific value as long as it is treated as arbitrarily small, which means that it is strictly smaller than any other positive number with which it may be compared during the course of the algorithm.
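Numerically, even a concrete small ε already breaks the degenerate ties. Taking ε = 10⁻³ in Example 19, the first ratio test has a unique minimizer (a plain arithmetic sketch, with constraint rows indexed from 0):

```python
eps = 1e-3
# b(eps) = b + (eps, eps^2, eps^3) for b = (0, 0, 1)
b_eps = [0.0 + eps, 0.0 + eps ** 2, 1.0 + eps ** 3]

# First pivot step of Example 19: x1 enters; its column is (0.5, 0.5, 1).
col = [0.5, 0.5, 1.0]
ratios = [b_eps[i] / col[i] for i in range(3)]
print(ratios.index(min(ratios)))   # row 1: x6 leaves uniquely, no more tie
```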
Theorem 21 If A_B is a feasible basis for (Pε), then A_B is a feasible basis for (P) as well.

Proof: Let A_B^{-1} = (β_ij) and b̄ = A_B^{-1} b. Then

A_B^{-1} b(ε) = ( b̄_1 + β_11 ε + ... + β_1m ε^m , ... , b̄_m + β_m1 ε + ... + β_mm ε^m )^T.

Suppose A_B is an infeasible basis for (P), that is, b̄_i < 0 for some i. Then for sufficiently small ε we have b̄_i + β_i1 ε + ... + β_im ε^m < 0, contradicting the feasibility of A_B for (Pε). Hence b̄_i ≥ 0 for all i, so A_B must be a feasible basis for (P).
3.3.2 Lexicographic Methods
The perturbation and the lexicographic methods are closely related to each other. The lexicographic method can be seen as an implementation of the perturbation method.

Definition 22 A vector a = (a1, ..., an) is said to be lexico-positive if a ≠ 0 and the first nonzero component of a is positive. Let us indicate this by a ≻ 0. The vector a is said to be lexico-negative (a ≺ 0) if −a ≻ 0. Given two vectors a = (a1, ..., an) and b = (b1, ..., bn), a ≻ b if a − b ≻ 0.
Theorem 23 A_B is a feasible basis for (Pε) with arbitrarily small ε ⇐⇒

(b̄_i, β_i1, ..., β_im) ≻ 0   for all i.

Proof: The regularity of A_B implies that (β_i1, ..., β_im) ≠ 0 for i = 1...m, so the vector (b̄_i, β_i1, ..., β_im) is nonzero for all i. When ε is arbitrarily small, the sign of b̄_i + β_i1 ε + ... + β_im ε^m is the same as the sign of the first nonzero coefficient in this polynomial, i.e., of the first nonzero component of (b̄_i, β_i1, ..., β_im). So b̄_i + β_i1 ε + ... + β_im ε^m is positive iff the first nonzero component of the vector (b̄_i, β_i1, ..., β_im) is positive.

With this theorem it is easy to test the feasibility of a basis for (Pε).
There is no need to give any specific value to ε; only the right-hand-side vector and the inverse of the basis are required to test feasibility.

Let A_B be a basis for (P), A_B^{-1} = (β_ij), b̄ = A_B^{-1} b. The basis A_B is (primal) feasible if b̄ ≥ 0; it is called lexico (primal) feasible if (b̄_i, β_i1, ..., β_im) ≻ 0 for each i = 1 to m. If A_B is a nondegenerate feasible basis for (P), then b̄ > 0, and A_B is automatically a lexico feasible basis for (P). If A_B is a degenerate feasible basis for (P), it is lexico feasible iff the first nonzero entry of (β_i1, ..., β_im) is strictly positive for every i with b̄_i = 0. In conclusion, every feasible basis for the perturbed problem (Pε) with arbitrarily small ε is automatically lexico feasible for (P). Conversely, a feasible basis for (P) is feasible for (Pε) with arbitrarily small ε iff it is lexico feasible for (P).
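The lexico feasibility test of Theorem 23 needs only b̄ and the basis inverse. A direct transcription (the function names are ours):

```python
def lexico_positive(v, tol=1e-9):
    """A vector is lexico-positive if it is nonzero and its first
    nonzero component is positive (Definition 22)."""
    for x in v:
        if abs(x) > tol:
            return x > 0
    return False            # the zero vector is not lexico-positive

def is_lexico_feasible(b_bar, B_inv, tol=1e-9):
    """Theorem 23: A_B is feasible for (P_eps) with arbitrarily small eps
    iff (b_bar_i, beta_i1, ..., beta_im) is lexico-positive for every i."""
    return all(lexico_positive([b_bar[i]] + list(B_inv[i]), tol)
               for i in range(len(b_bar)))

# Degenerate but lexico feasible: b_bar has a zero entry, yet the first
# nonzero entry of the corresponding row of the inverse is positive.
print(is_lexico_feasible([0.0, 2.0], [[1.0, 0.0], [0.0, 1.0]]))    # True
print(is_lexico_feasible([0.0, 2.0], [[-1.0, 3.0], [0.0, 1.0]]))   # False
```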
Lexico Primal Simplex Algorithm.

Suppose A_B is a feasible basis for (Pε), A_B^{-1} = (β_ij), and the nonbasic variable x_j will be brought into the basic vector; ā_j = A_B^{-1} a_j is the pivot column. If ā_ij > 0 for some i, then there is a unique lexico minimum among the nonempty set of vectors

{ (b̄_i, β_i1, ..., β_im)/ā_ij : i such that ā_ij > 0 }.
Indeed, assume that

(b̄_i, β_i1, ..., β_im)/ā_ij = (b̄_k, β_k1, ..., β_km)/ā_kj   for some i ≠ k;

then

(b̄_i, β_i1, ..., β_im) = (ā_ij/ā_kj)(b̄_k, β_k1, ..., β_km),

so two rows of A_B^{-1} would be proportional, but A_B^{-1} is a nonsingular matrix.
The minimum ratio in the pivot step when solving (Pε) is

min { (b̄_i + β_i1 ε + ... + β_im ε^m)/ā_ij : i such that ā_ij > 0 }.

Thus when ε is small, the pivot row is uniquely determined, independently of the actual value of ε.
Lexico Primal Simplex Minimum Ratio Pivot Row Choice Rule:
Initialization
    A_B is a primal feasible basis, i.e. x_B ≥ 0;
    the entering variable is x_j
begin
    I_0 := { i : ā_ij > 0 and b̄_i/ā_ij = min{ b̄_k/ā_kj : ā_kj > 0 } }
    t := 0
    while |I_t| ≠ 1 do
        t := t + 1 (the next column of A_B^{-1})
        I_t := { i ∈ I_{t−1} : β_it/ā_ij = min{ β_kt/ā_kj : k ∈ I_{t−1} } }
    endwhile
    STOP: the unique i ∈ I_t determines the pivot row
end
Let A_B^{-1} = (β_ij), b̄ = A_B^{-1} b, and let c_B be the row vector of basic cost coefficients. The corresponding dual solution is y^T = c_B^T A_B^{-1} and the objective value is z̄ = c_B^T b̄. The corresponding basic vector is

x_B = ( b̄_1 + β_11 ε + ... + β_1m ε^m , ... , b̄_m + β_m1 ε + ... + β_mm ε^m )^T,

while all nonbasic variables are zero. The objective value is

z̄_0 = z̄ + y_1 ε + ... + y_m ε^m.
Suppose the nonbasic variable x_j is the entering variable for this pivot step and row i is the pivot row. Denoting the objective value after the pivot step by z_1, we obtain

z_1 = z̄_0 + c̄_j (b̄_i + β_i1 ε + ... + β_im ε^m)/ā_ij = z̄ + c̄_j b̄_i/ā_ij + Σ_{t=1}^m ε^t ( y_t + c̄_j β_it/ā_ij ).

For arbitrarily small ε, z_1 < z̄_0. Consequently,

( z̄ + c̄_j b̄_i/ā_ij , y_1 + c̄_j β_i1/ā_ij , ... , y_m + c̄_j β_im/ā_ij ) ≺ ( z̄, y_1, ..., y_m ).
Remark 1 A basis is feasible for (Pε) if the values of all the basic variables are lexico positive; when this happens, we say that the basis is lexico feasible. The algorithm begins with a lexico feasible basis, and all the bases in the course of the algorithm remain lexico feasible.

The termination criteria and the pivot column choice are the same as in the regular simplex method.

When the entering variable has been selected, the pivot row is chosen by the lexico minimum ratio rule; it determines the leaving basic variable uniquely.

There is a strict lexico decrease in the vector (z̄, y_1, ..., y_m) after each pivot step. Hence a basis that appeared once can never reappear, and cycling cannot occur; the algorithm must terminate after a finite number of pivot steps. When the simplex algorithm terminates, the terminal tableau is a terminal tableau for (Pε) for all 0 < ε < ε0.

The terminal basis for (Pε) obtained by the simplex algorithm with the lexico minimum ratio rule is also a terminal basis for (P).
Example 24 Now let us see how to solve the previous cycling problem with the perturbation method. The initial tableau is:

1.    x1    x2    x3    x4   x5  x6  x7      b
x5   0.5  −5.5  −2.5     9    1   0   0      ε
x6   0.5  −1.5  −0.5     1    0   1   0      ε²
x7    1     0     0      0    0   0   1   1 + ε³
c    10   −57    −9    −24    0   0   0      0
The entering variable is again x1. The constraints x5 ≥ 0, x6 ≥ 0 and x7 ≥ 0 limit the increase of x1 to 2ε, 2ε² and 1 + ε³, respectively. Since 2ε² < 2ε < 1 + ε³, the leaving variable is x6.
2.   x1   x2  x3   x4  x5   x6  x7        b
x5    0   −4  −2    8   1   −1   0      ε − ε²
x1    1   −3  −1    2   0    2   0       2ε²
x7    0    3   1   −2   0   −2   1   1 − 2ε² + ε³
c     0  −27   1  −44   0  −20   0      20ε²
3.   x1   x2  x3   x4  x5   x6  x7          b
x5    0    2   0    4   1   −5   2   2 + ε − 5ε² + 2ε³
x1    1    0   0    0   0    0   1        1 + ε³
x3    0    3   1   −2   0   −2   1     1 − 2ε² + ε³
c     0  −30   0  −42   0  −18  −1     1 + 18ε² + ε³
This is the optimal tableau for the perturbed problem. It can be converted into the optimal tableau for the original problem by simply disregarding all the terms involving ε.

The lexicographic method is that implementation of the perturbation method in which ε is kept as a symbol and quantities are compared by the lexicographic rule. It is always possible to choose the leaving variable by the lexicographic rule: there is always one candidate that is lexicographically smaller than or equal to all the others. At any time, when the terms involving ε are deleted, the tableau for the perturbed problem reduces to a tableau for the original problem.
Theorem 25 The simplex algorithm terminates, if the leaving variable is
selected by the lexicographic rule in each iteration.
Proof: We shall prove that the perturbed problem is nondegenerate. For this it suffices to consider an arbitrary row

x_k = (r_0 + r_1 ε + ... + r_m ε^m) − Σ_{j ∉ I_B} d_j x_j

and prove that at least one of the m + 1 numbers r_0, r_1, ..., r_m is different from 0. Setting d_k = 1 and d_i = 0 for all the other basic variables x_i, i ≠ k, this results in

Σ_{j=1}^{n+m} d_j x_j = r_0 + r_1 ε + ... + r_m ε^m.
The slack variables satisfy x_{n+i} = b_i + ε^i − Σ_{j=1}^n a_ij x_j (i = 1, 2, ..., m). We get the equation

Σ_{j=1}^n d_j x_j + Σ_{i=1}^m d_{n+i} ( b_i + ε^i − Σ_{j=1}^n a_ij x_j ) = r_0 + r_1 ε + ... + r_m ε^m,

which holds for all choices of the numbers x_1, x_2, ..., x_n and ε. Write it as

Σ_{j=1}^n ( d_j − Σ_{i=1}^m d_{n+i} a_ij ) x_j + Σ_{i=1}^m ( d_{n+i} − r_i ) ε^i = r_0 − Σ_{i=1}^m d_{n+i} b_i.
Since this identity holds for all choices, the coefficient of each x_j and each ε^i, as well as the right-hand side, must equal zero. Thus

d_{n+i} = r_i   (i = 1, 2, ..., m),
d_j = Σ_{i=1}^m d_{n+i} a_ij   (j = 1, 2, ..., n).

If all the numbers r_1, ..., r_m were equal to zero, then d_{n+i} = 0 for all i = 1, 2, ..., m and d_j = 0 for all j = 1, 2, ..., n, in contradiction with the fact that d_k = 1.
Lexico Dual Simplex Algorithm

First we eliminate a variable from a positively dependent pair of variables. We say that two variables x_p and x_q are positively dependent if there exists a constant α > 0 such that the column vector of x_p in the original tableau is α times the column vector of x_q. Let y = αx_p + x_q, eliminate both x_p and x_q from the problem, and introduce y into the problem. The result is an LP with fewer variables than (P). If ȳ is the value of y in an optimal solution of the reduced problem, then replace ȳ in that solution by values x̄_p and x̄_q with αx̄_p + x̄_q = ȳ, in particular x̄_p = 0 and x̄_q = ȳ. Thus, if the LP contains a positively dependent pair of variables, one of the variables in the pair can be deleted by setting it equal to zero.
If (P) is not totally dual nondegenerate, the dual simplex algorithm applied to (P), starting with a dual feasible basic vector, can cycle.

The basis A_B and the corresponding basic vector x_B are dual feasible for (P) if c̄_j ≥ 0 for each j = 1 to n. The basis A_B and the corresponding basic vector x_B are said to be lexico dual feasible for (P) iff (c̄_j, ā_1j, ..., ā_mj) ≻ 0 for each j = 1 to n.
Obviously, if (P) is totally dual nondegenerate, every dual feasible basis for (P) is lexico dual feasible and vice versa. If (P) is not totally dual nondegenerate, a basis A_B is lexico dual feasible for (P) only if all of the reduced cost coefficients are nonnegative and, in every column of the canonical tableau in which the reduced cost coefficient is zero, the topmost nonzero entry is strictly positive.
In order to resolve dual degeneracy in the dual simplex algorithm, the following pivot column choice rule is applied. The initial basis and all the bases obtained during the algorithm are lexico dual feasible.

Pivot Column Choice Rule to Resolve Dual Degeneracy in the Dual Simplex Algorithm:
Initialization
    x_B is a lexico dual feasible basic vector, A_B a corresponding basis;
    the leaving variable is x_r
begin
    J_0 := { j : ā_rj < 0 and c̄_j/(−ā_rj) = min{ c̄_k/(−ā_rk) : ā_rk < 0 } }
    t := 0
    while |J_t| ≠ 1 do
        t := t + 1 (the next row)
        J_t := { j ∈ J_{t−1} : ā_tj/(−ā_rj) = min{ ā_tk/(−ā_rk) : k ∈ J_{t−1} } }
    endwhile
    STOP: the unique j ∈ J_t determines the pivot column
end
In this algorithm the pivot row is chosen according to the dual simplex algorithm discussed in the second section; suppose it is row r. If the primal infeasibility criterion is satisfied, i.e. (ā_r1, ..., ā_rn) ≥ 0, then the problem is infeasible. On the other hand, if there exists j with ā_rj < 0, then the entering variable is x_s, where

(c̄_s, ā_1s, ..., ā_ms)/(−ā_rs) = lexico min { (c̄_j, ā_1j, ..., ā_mj)/(−ā_rj) : j such that ā_rj < 0 }.

If the lexico minimum is not unique, let S := { j : j is tied for it }. The variables x_j for j ∈ S are pairwise positively dependent in this case. Select any one of the variables x_s for some s ∈ S as the entering variable into the basic vector x_B, set all other x_j for j ∈ S, j ≠ s, equal to zero, and delete them together with the corresponding column vectors. The column vector of x_s is the pivot column.
If x_B is a lexico dual feasible basic vector and the pivot column is chosen by the above procedure, then the basic vector obtained after the pivot remains lexico dual feasible. The vector (−z̄, b̄_1, ..., b̄_m) undergoes a strict lexico decrease in each pivot step. Hence, if (P) is solved by the dual simplex algorithm which starts with a lexico dual feasible basic vector and determines the pivot column in each pivot step by this procedure, then cycling cannot occur, the algorithm must terminate after a finite number of pivot steps, and it satisfies one of the termination conditions of the dual simplex algorithm.
3.3.3 Bland Pivoting Rule
In the preceding section we discussed the lexico minimum ratio rule. One disadvantage of it is that it may require the entries of the basis inverse in each step; if the problem is solved using the product form of the inverse, computing the basis inverse is computationally expensive. Another method for resolving degeneracy has been developed by R. G. Bland [9]. In this method, both the entering nonbasic variable and the leaving basic variable are selected uniquely by the rule. The method requires that an ordering of the variables of the problem be chosen before the algorithm is initiated. According to Bland, this ordering can be arbitrary, but once it is selected, it is fixed and cannot be changed after the algorithm is initiated. Recent research showed, however, that the ordering of the variables need not be fixed in the beginning; consider, for instance, the new criss-cross algorithm [8].
In each pivot step, the entering nonbasic variable and the leaving basic
variable are chosen to be the first variables that are eligible to enter and leave
the basic vector in that step, when the variables are examined in the specific
ordering chosen for the variables.
Without any loss of generality we assume that the specific ordering chosen for the variables is the natural order x1 , x2 , ..., xn . In this case, Bland’s
entering and leaving variable choice rule is called the least subscript or least
index rule.
Theorem 26 (R. G. Bland) The simplex method terminates when the entering and leaving variables are selected by the least index rule in each iteration.

Proof: We need to show that cycling cannot occur when the smallest subscript rule is used, because if the simplex method fails to terminate, then it must cycle.

Suppose the algorithm cycles through the sequence of bases A_B0, A_B1, A_B2, ..., A_Bk, with A_B0 = A_Bk, in a sequence of degenerate iterations. Let x_l have the largest subscript among the variables that are nonbasic in some of these bases and basic in others. In the basis sequence A_B0, A_B1, A_B2, ..., A_Bk there is a basis A_B in which x_l is the leaving variable, with some x_s the entering variable. Because the algorithm cycles, in the sequence A_B0, A_B1, ..., A_Bk, A_B0, A_B1, ..., A_Bk there has to be another basis A*_B in which x_l is the entering variable.
Therefore, we can record A_B as

x_i = b_i − Σ_{j ∉ I_B} a_ij x_j   (i ∈ I_B),

with objective function value

z = v + Σ_{j ∉ I_B} c_j x_j.

The iterations between A_B and A*_B are degenerate, so the objective function keeps the same value v. Thus z may also be recorded, with respect to A*_B, as

z = v + Σ_{j=1}^{n+m} c*_j x_j,

where c*_j = 0 whenever j ∈ I*_B. This equation is satisfied by every solution of the system of A_B, in particular by x_j = 0 for j ∉ I_B, j ≠ s, together with x_i = b_i − a_is x_s (i ∈ I_B) and z = v + c_s x_s, for every value of x_s.
Thus we obtain

v + c_s x_s = v + c*_s x_s + Σ_{i ∈ I_B} c*_i (b_i − a_is x_s),

therefore

( c_s − c*_s + Σ_{i ∈ I_B} c*_i a_is ) x_s = Σ_{i ∈ I_B} c*_i b_i

for every x_s. The right-hand side of the last equation is a constant independent of x_s; in conclusion,

c_s − c*_s + Σ_{i ∈ I_B} c*_i a_is = 0.
x_s is the entering variable in A_B; according to the entering variable choice rule, c_s > 0. The variable x_s is not the entering variable in A*_B and s < l, so we have c*_s ≤ 0. Since c_s > 0 and c*_s ≤ 0, the sum Σ_{i ∈ I_B} c*_i a_is = c*_s − c_s is negative, so c*_r a_rs < 0 for some r ∈ I_B. As c*_r ≠ 0, we get r ∉ I*_B; since x_l has the largest subscript among such variables, r ≤ l. But r ≠ l, because x_l is the leaving variable in A_B, so a_ls > 0, and x_l is the entering variable in A*_B, so c*_l > 0; hence c*_l a_ls > 0. Now r < l and x_r is not the entering variable in A*_B, thus c*_r ≤ 0; together with c*_r a_rs < 0 we obtain a_rs > 0.
As all the iterations between A_B and A*_B are degenerate, both bases describe the same solution. The value of x_r is zero in both pivot tableaux, because x_r is a nonbasic variable in A*_B, so b_r = 0. Since a_rs > 0 and b_r = 0, row r is tied in the minimum ratio test, so x_r should be the leaving variable in A_B; but we chose x_l, even though r < l. Here we get the contradiction.

The difference between the lexicographic minimum ratio version and Bland's version of the simplex algorithm is the following. The lexicographic version starts with an initial lexico primal feasible basic vector; the entering variable may be chosen freely, but the leaving variable must be chosen by the lexico minimum ratio test. Bland's pivoting rule, on the other hand, begins with any feasible basis, and in each pivot iteration both the entering and the leaving variable are uniquely selected by the rule.
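Bland's least index rule is easy to state in code. The sketch below (our own minimal tableau implementation, for the maximization form of Example 19) terminates at the optimal basis {x1, x3, x5}, where Dantzig's largest-coefficient rule cycled:

```python
import numpy as np

def bland_simplex(T, basis, max_iter=100):
    """Primal simplex (maximization tableau) with Bland's least index rule:
    the entering and leaving variables both have the smallest eligible index."""
    for _ in range(max_iter):
        cost = T[-1, :-1]
        entering = [j for j in range(len(cost)) if cost[j] > 1e-9]
        if not entering:
            return basis                     # optimal: no positive reduced cost
        c = min(entering)                    # least-index entering rule
        rows = [i for i in range(T.shape[0] - 1) if T[i, c] > 1e-9]
        if not rows:
            raise RuntimeError("unbounded")
        ratio = min(T[i, -1] / T[i, c] for i in rows)
        # among the tied rows, the basic variable of least index leaves
        r = min((i for i in rows if T[i, -1] / T[i, c] == ratio),
                key=lambda i: basis[i])
        T[r] /= T[r, c]
        for i in range(T.shape[0]):
            if i != r:
                T[i] -= T[i, c] * T[r]
        basis[r] = c
    raise RuntimeError("iteration limit reached")

# Example 19 again (variables x1..x7 are indices 0..6).
T = np.array([
    [0.5, -5.5, -2.5,   9.0, 1, 0, 0, 0],
    [0.5, -1.5, -0.5,   1.0, 0, 1, 0, 0],
    [1.0,  0.0,  0.0,   0.0, 0, 0, 1, 1],
    [10., -57., -9.,  -24.0, 0, 0, 0, 0],
])
print(sorted(bland_simplex(T, [4, 5, 6])))   # optimal basis indices
```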
3.4 The Alternative Optimal Solutions
Suppose an optimal solution is obtained when a linear programming problem is solved. Let x_s be a nonbasic variable in the final optimal canonical tableau. If the reduced cost c̄_s = 0, and if the operation of bringing x_s into the basic vector involves a nondegenerate pivot step, this step yields an alternative optimal BFS. The alternative optimal bases are obtained by bringing into an optimal basic vector any nonbasic variable with a zero reduced cost coefficient. Therefore, a sufficient condition for (P) to have a unique optimal solution is:

Theorem 27 If there exists an optimal basic vector for a linear programming problem in which the reduced cost coefficients of all the nonbasic variables are strictly positive, then the corresponding BFS is the unique optimal solution of this LP.
The set of optimal solutions of the primal problem is

P* := { x ∈ R^n | Ax = b, x ≥ 0, c^T x = ζ }.

• If P* = ∅, then the problem does not have any optimal solution.
• If P* ≠ ∅, then the linear programming problem has an optimal solution.
• If |P*| = 1, then it has a unique optimal solution.
• If |P*| ≥ 2, then it has alternative optimal solutions.
Example 28 Let us see the following linear programming problem:

maximize    4x1 + 3x2 − x3 + x4
subject to   x1 −  x2 + 2x3 + x4 ≤ 8
             x1 +  x2 −  x3 + x4 = 10
            2x1 +  x2 +  x3 − x4 = 16
            x1, x2, x3, x4 ≥ 0

The initial full tableau is:

1.   x1  x2  x3  x4  s1    b
s1    1  −1   2   1   1    8
      1   1  −1   1   0   10
      2   1   1  −1   0   16
c     4   3  −1   1   0    0
Since the objective function is to be maximized, using the least index pivoting rule the variable x1 enters the basis, because c̄_1 = 4 > 0. The leaving variable can be determined by the ratio test: min{8/1, 10/1, 16/2} = min{8, 10, 8} = 8. So the pivot element is ā_11.
2.   x1  x2  x3  x4  s1     b
x1    1  −1   2   1   1     8
      0   2  −3   0  −1     2
      0   3  −3  −3  −2     0
c     0   7  −9  −3  −4   −32
3.   x1  x2  x3  x4    s1      b
x1    1   0   1   0   1/3      8
      0   0  −1   2   1/3      2
x2    0   1  −1  −1  −2/3      0
c     0   0  −2   4   2/3    −32
4.   x1  x2    x3   x4    s1      b
x1    1   0    1     0   1/3      8
x4    0   0  −1/2    1   1/6      1
x2    0   1  −3/2    0  −1/2      1
c     0   0    0     0    0     −36
The basic variables are x1, x4, x2; the nonbasic variables are x3, s1. The optimal basic solution is x̄1 = (8, 1, 0, 1) and the optimal objective function value is 36. In this pivot tableau the reduced cost coefficients of the nonbasic variables are zero, so the optimal solution is dual degenerate; thus there exist alternative optimal solutions. If we continue pivoting with the entering variable x3:
5.   x1   x2  x3  x4   s1      b
x3    1    0   1   0  1/3      8
x4   1/2   0   0   1  1/3      5
x2   3/2   1   0   0   0      13
c     0    0   0   0   0     −36
The solution can be read from this pivot tableau: x¯2 = (0, 13, 8, 5). But
if the entering variable is s1 , the optimal solution will be x¯3 = (6, 4, 0, 0).
5′.  x1  x2  x3  x4  s1     b
x1    1   0   2  −2   0     6
s1    0   0  −3   6   1     6
x2    0   1  −3   3   0     4
c     0   0   0   0   0   −36
Furthermore, the convex combinations of the optimal basic solutions are also optimal solutions of the original problem. For instance,

x̄ = (1/2) x̄1 + (1/2) x̄2 = (4, 7, 4, 3),

and the corresponding value of the objective function is 36.
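One can verify directly that the convex combinations of the vertices found above are optimal. A small check using the constraint data of Example 28 in equality form (s1 is the slack of the first constraint):

```python
import numpy as np

# Equality form of Example 28: variables (x1, x2, x3, x4, s1).
A = np.array([[1., -1.,  2.,  1., 1.],
              [1.,  1., -1.,  1., 0.],
              [2.,  1.,  1., -1., 0.]])
b = np.array([8., 10., 16.])
c = np.array([4., 3., -1., 1., 0.])

x1bar = np.array([8.,  1., 0., 1., 0.])   # optimal vertex from tableau 4
x2bar = np.array([0., 13., 8., 5., 0.])   # optimal vertex from tableau 5

for lam in (0.0, 0.25, 0.5, 1.0):
    x = lam * x1bar + (1 - lam) * x2bar
    assert np.allclose(A @ x, b) and (x >= -1e-9).all()   # feasibility
    print(c @ x)   # the objective value stays 36 for every convex combination
```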
Consider now the pivot ordering x2 ≺ x3 ≺ x1 ≺ x4 ≺ s1; then we have:

1°.  x2  x3  x1  x4  s1    b
s1   −1   2   1   1   1    8
      1  −1   1   1   0   10
      1   1   2  −1   0   16
c     3  −1   4   1   0    0
2°.  x2  x3  x1  x4  s1     b
s1    0   1   2   2   1    18
x2    1  −1   1   1   0    10
      0   2   1  −2   0     6
c     0   2   1  −2   0   −30

3°.  x2  x3   x1   x4  s1     b
s1    0   0  3/2    3   1    15
x2    1   0  3/2    0   0    13
x3    0   1  1/2   −1   0     3
c     0   0   0     0   0   −36
In this pivot tableau we can read out another optimal solution, the vector x̄4 = (0, 13, 3, 0).
Assume a linear programming problem is given. If it is dual degenerate, then it has alternative optimal solutions. There are several methods by which the alternative optimal solutions can be found:

1. Modify some data of the problem a little (the so-called perturbation method).
2. Modify the ordering of the linear programming problem data; in other words, permute the rows and columns of the pivot table. Fukuda and Namiki showed this result for the linear complementarity problem [10].
3. Once the optimal objective value is obtained, fix the objective function at this value and add it as a constraint to the other constraints. In this way we obtain the optimal solution set in an implicit form.
4. Once the optimal objective value is obtained, if the problem is dual degenerate, we can continue pivoting to obtain other alternative solutions.
4 Parametrical Linear Programming
In linear programming problems arising from applications, the numerical data often represent only rough estimates of quantities that are inherently difficult to predict. It often happens that the coefficients of the objective function or the right-hand-side constants are not precisely known at the time the problem is being solved. For example, market prices may fluctuate, supplies of raw materials and demands for finished products are often unknown in advance, and production may be affected by a variety of accidental events such as machine breakdowns. Each variation changes the objective function and thus creates a new problem. Since each of these new linear programming problems might possibly represent the actual situation, it is useful to find out how the optimal solutions vary with the changes in the data. The relevant literature can be found in [1], [3], [11].

Since we are dealing only with linear models here, it is assumed that the coefficients of the objective function or the right-hand-side constants vary linearly with the parameters. We consider the case where there is only one parameter λ; problems with more than one parameter are much more difficult. The two types of parametrical linear programming are the parametric cost problem and the parametric right-hand-side problem.
4.1 The Parametric Cost Problem
The problem to be solved is of the following form:

(Pobj)   min zλ(x) = (c + λc′)^T x,   Ax = b,   x ≥ 0,

where c and x are vectors in R^n, b is a vector in R^m, and A is an m × n real matrix with rank(A) = m. Here λ is a parameter that takes real values.
In (Pobj), if z(x) = c^T x and z′(x) = c′^T x, then zλ(x) = z(x) + λz′(x). For any fixed value of λ, (Pobj) is a standard linear programming problem. If A_B is a feasible basis for (Pobj), it is an optimal basis for a specified value of λ if all the reduced cost coefficients are nonnegative for that value of λ. The updating operation can be carried out by treating each cost coefficient as a function of λ, without giving any specific value to it. All of these reduced cost coefficients turn out to be functions of λ, and A_B is an optimal basis for all values of λ that keep all the reduced cost coefficients nonnegative. In the tableau we write the parametric objective function in two rows: one row contains the constant terms of the objective function, corresponding to z(x); the other row contains the coefficients of λ in the objective function, corresponding to z′(x). As follows:
    a_11  ...  a_1j  ...  a_1n | b_1
     .          .          .   |  .
    a_m1  ...  a_mj  ...  a_mn | b_m
    c_1   ...  c_j   ...  c_n  |  0
    c′_1  ...  c′_j  ...  c′_n |  0
The feasibility of (P_obj) does not depend on λ, since λ affects only the objective function of (P_obj). Thus if (P_obj) is infeasible for some λ, it is infeasible for all values of λ and the algorithm terminates.
The parametric cost simplex algorithm begins with an optimal basis of (P_obj) for some fixed value of the parameter λ, for example λ = 0, and then solves (P_obj) for all values of λ, using only primal simplex pivot steps.
Assume that the problem is solved for a specified value λ = λ_0. Let x_B = (x_1, ..., x_m), associated with the basis A_B, be an optimal basic vector. If c_B, c′_B are the cost vectors associated with the basic vector x_B, let y^T = c_B^T A_B^{-1} and y′^T = c′_B^T A_B^{-1}. The reduced cost coefficients of x_j as functions of the parameter λ are

    c̄_j = c_j − y^T a_{·j}   and   c̄′_j = c′_j − y′^T a_{·j}.

The basis A_B remains optimal exactly for those λ satisfying the optimality criterion

    c̄_j + λ c̄′_j ≥ 0    for all j.

In particular, a necessary condition for A_B to be an optimal basis for some λ is

    c̄_j ≥ 0    for all j such that c̄′_j = 0.
If c̄′_j > 0, the criterion gives λ ≥ −c̄_j/c̄′_j; if c̄′_j < 0, it gives λ ≤ −c̄_j/c̄′_j. Let

    λ_B = max { −c̄_j/c̄′_j : j such that c̄′_j > 0 },   λ_B = −∞ if c̄′_j ≤ 0 for all j,
    λ̄_B = min { −c̄_j/c̄′_j : j such that c̄′_j < 0 },   λ̄_B = +∞ if c̄′_j ≥ 0 for all j.
Definition 29 The interval {λ : λ_B ≤ λ ≤ λ̄_B} = [λ_B, λ̄_B] is called the optimality interval of the basis A_B; on it, A_B is an optimal basis. The value λ_B is called the lower characteristic value and λ̄_B the upper characteristic value of the basis A_B.
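The two characteristic values can be read off mechanically from the two objective rows of the tableau. The following short Python sketch is only an illustration of the definition, not part of the original method description; the helper name `characteristic_interval` is our own, and the numeric data come from the reduced costs that arise in tableau 3 of Example 30 below.

```python
import math

def characteristic_interval(c_bar, c_bar_prime):
    """Return (lambda_lower, lambda_upper): the set of lambda for which
    c_bar[j] + lambda * c_bar_prime[j] >= 0 holds for every j."""
    lam_lo, lam_hi = -math.inf, math.inf
    for cj, cpj in zip(c_bar, c_bar_prime):
        if cpj > 0:            # constraint reads lambda >= -cj/cpj
            lam_lo = max(lam_lo, -cj / cpj)
        elif cpj < 0:          # constraint reads lambda <= -cj/cpj
            lam_hi = min(lam_hi, -cj / cpj)
    return lam_lo, lam_hi

# reduced costs of the nonbasic columns x1, x4, x5 in tableau 3 of Example 30:
lo, hi = characteristic_interval([4, 1, 2], [41/4, -5/2, 9/4])
print(lo, hi)   # -16/41 and 2/5
```

An empty optimality interval simply shows up as `lo > hi`.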
If λ̄_B < λ_B, the optimality interval of the basis A_B is empty and A_B can never be an optimal basis.
If the optimality interval of a basis A_B is nonempty, then for λ ∈ [λ_B, λ̄_B] an optimal basic feasible solution is:

    basic vector:             x_B = b̄ = A_B^{-1} b
    all nonbasic variables:   0
    optimal objective value:  z̄ + λ z̄′
    optimal dual solution:    y + λ y′

Therefore, inside the optimality interval the optimal solution is the same, but the optimal objective value is a linear function of λ.
If λ lies outside the optimality interval of A_B, then A_B is not an optimal basis for the linear programming problem.
Assume λ > λ̄_B, where A_B is a primal feasible basis for the parametric problem (P_obj) with a nonempty optimality interval. Suppose λ̄_B is finite and the index k attains the minimum, so that λ̄_B = −c̄_k/c̄′_k. When λ > λ̄_B, then c̄_k + λc̄′_k < 0, therefore x_k has to be brought into the basic vector. If ā_ik ≤ 0 for all i, then the unboundedness criterion is satisfied; that means the parametric objective function z_λ(x) is unbounded below on the set of feasible solutions when λ > λ̄_B. If the unboundedness criterion is not satisfied, we use the simplex algorithm: suppose x_r is the leaving variable and x_k is the entering variable. We obtain a new basis A_B̄. The optimality interval of A_B̄ is an adjacent interval whose left end point coincides with λ̄_B, the right end point of the interval of the previous basis A_B. The situation for λ < λ_B is similar.
Remark 2 To obtain optimal solutions of the parametric cost problem (P_obj) for λ > λ̄_B, after the entering variable x_k is determined, the leaving variable has to be determined by the primal simplex minimum ratio test in order to retain primal feasibility. When there is a tie, the leaving basic variable can be chosen arbitrarily among those eligible, but this could result in cycling at the value λ = λ̄_B. The algorithm could then keep going indefinitely through a cycle of feasible bases, every one of which is optimal only for λ = λ̄_B. We will return to this problem a bit later.
Example 30 Let us solve the following parametric linear programming problem:

    max (3 − 6λ)x_1 + (2 − 5λ)x_2 + (5 + 2λ)x_3
    subject to   x_1 + 2x_2 +  x_3 ≤ 430
                3x_1        + 2x_3 ≤ 460
                 x_1 + 4x_2        ≤ 420
                 x_j ≥ 0    (j = 1, 2, 3)
At first let λ = 0. With slack variables x_4, x_5, x_6, the initial pivot tableau is (the max problem is converted to a min problem, so the c and c′ rows contain the negated objective coefficients):

    1.    x_1  x_2  x_3  x_4  x_5  x_6 |   b
    x_4    1    2    1    1    0    0  |  430
    x_5    3    0    2    0    1    0  |  460
    x_6    1    4    0    0    0    1  |  420
    c     −3   −2   −5    0    0    0  |    0
    c′     6    5   −2    0    0    0  |    0
    2.    x_1  x_2  x_3  x_4  x_5  x_6 |     b
    x_2   1/2   1   1/2  1/2   0    0  |   215
    x_5    3    0    2    0    1    0  |   460
    x_6   −1    0   −2   −2    0    1  |  −440
    c     −2    0   −4    1    0    0  |   430
    c′    7/2   0  −9/2 −5/2   0    0  | −1075
    3.    x_1   x_2  x_3  x_4   x_5  x_6 |    b
    x_2  −1/4    1    0   1/2  −1/4   0  |  100
    x_3   3/2    0    1    0    1/2   0  |  230
    x_6    2     0    0   −2     1    1  |   20
    c      4     0    0    1     2    0  | 1350
    c′   41/4    0    0  −5/2   9/4   0  |  −40
Now, in the case λ = 0 we obtain the optimal solution x*_1 = (0, 100, 230, 0, 0, 20)^T. If the following conditions are satisfied, this solution is also optimal for the parametric problem:

    c̄_1 + λc̄′_1 = 4 + (41/4)λ ≥ 0  ⇒  λ ≥ −4/(41/4) = −16/41
    c̄_4 + λc̄′_4 = 1 − (5/2)λ  ≥ 0  ⇒  λ ≤ −1/(−5/2) = 2/5
    c̄_5 + λc̄′_5 = 2 + (9/4)λ  ≥ 0  ⇒  λ ≥ −2/(9/4) = −8/9

Consequently, λ̄_B^1 = min { −1/(−5/2) } = 2/5 and λ_B^1 = max { −4/(41/4), −2/(9/4) } = −16/41.
Thus for all λ in the interval [−16/41, 2/5], the optimal solution of the original parametric LP problem is x*_1, and the value of the objective function is z*_1 = 1350 − 40λ.
If λ > 2/5, then the condition c̄_4 + λc̄′_4 ≥ 0 is not satisfied, so we bring x_4 into the basic vector; according to the minimum ratio rule x_2 is the leaving basic variable, and we obtain the following pivot tableau:
    4.    x_1  x_2  x_3  x_4   x_5  x_6 |    b
    x_4  −1/2   2    0    1   −1/2   0  |  200
    x_3   3/2   0    1    0    1/2   0  |  230
    x_6    1    4    0    0     0    1  |  420
    c     9/2  −2    0    0    5/2   0  | 1150
    c′     9    5    0    0     1    0  |  460
The current optimal solution is x*_2 = (0, 0, 230, 200, 0, 420)^T. According to the definitions of λ_B and λ̄_B:

    λ̄_B^2 = +∞   and   λ_B^2 = max { −(9/2)/9, 2/5, −(5/2)/1 } = 2/5.

The value of the objective function is z*_2 = 1150 + 460λ for all λ ∈ [2/5, +∞).
If λ < −16/41, then the condition c̄_1 + λc̄′_1 ≥ 0 is not satisfied, so we bring x_1 into the basic vector; according to the minimum ratio rule x_6 is the leaving basic variable, and we obtain the following pivot tableau:
    5.    x_1  x_2  x_3   x_4    x_5    x_6  |     b
    x_2    0    1    0    1/4   −1/8    1/8  |  205/2
    x_3    0    0    1    3/2   −1/4   −3/4  |   215
    x_1    1    0    0    −1     1/2    1/2  |    10
    c      0    0    0     5      0     −2   |   1310
    c′     0    0    0   31/4  −23/8  −41/8  | −285/2
The current optimal solution is x*_3 = (10, 205/2, 215, 0, 0, 0)^T. If the following conditions are satisfied, this solution is also optimal for the parametric problem:

    c̄_4 + λc̄′_4 =  5 + (31/4)λ ≥ 0
    c̄_5 + λc̄′_5 =    − (23/8)λ ≥ 0
    c̄_6 + λc̄′_6 = −2 − (41/8)λ ≥ 0
According to the definitions of λ_B and λ̄_B:

    λ̄_B^3 = min { −0/(−23/8), −(−2)/(−41/8) } = min { 0, −16/41 } = −16/41,
    λ_B^3 = max { −5/(31/4) } = −20/31.

The value of the objective function is z*_3 = 1310 − (285/2)λ for all λ ∈ [−20/31, −16/41].
If λ < −20/31, then the condition c̄_4 + λc̄′_4 ≥ 0 is not satisfied, so we bring x_4 into the basic vector; according to the minimum ratio rule x_3 is the leaving basic variable, and we obtain the following pivot tableau:
    6.    x_1  x_2   x_3   x_4    x_5    x_6  |      b
    x_2    0    1  −1/6     0   −1/12    1/4  |   200/3
    x_4    0    0   2/3     1    −1/6   −1/2  |   430/3
    x_1    1    0   2/3     0     1/3     0   |   460/3
    c      0    0 −10/3     0     5/6    1/2  |  1780/3
    c′     0    0 −31/6     0   −19/12  −5/4  | −3760/3
The current optimal solution is x*_4 = (460/3, 200/3, 0, 430/3, 0, 0)^T. According to the definitions of λ_B and λ̄_B:

    λ̄_B^4 = min { −(−10/3)/(−31/6), −(5/6)/(−19/12), −(1/2)/(−5/4) } = −20/31   and   λ_B^4 = −∞.

The value of the objective function is z*_4 = 1780/3 − (3760/3)λ for all λ ∈ (−∞, −20/31].
Now we have solved the whole parametric LP problem; the solution is summarized in the following table:

    optimal interval     optimal solution (x_1, x_2, x_3)   objective value
    (−∞, −20/31]         (460/3, 200/3, 0)^T                1780/3 − (3760/3)λ
    [−20/31, −16/41]     (10, 205/2, 215)^T                 1310 − (285/2)λ
    [−16/41, 2/5]        (0, 100, 230)^T                    1350 − 40λ
    [2/5, +∞)            (0, 0, 230)^T                      1150 + 460λ
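As a sanity check (ours, not the thesis'), one can re-solve the example for a few fixed values of λ with an off-the-shelf LP solver and compare the optimal values with the rows of the table above. The sketch below assumes SciPy is available.

```python
from scipy.optimize import linprog

def solve_example30(lam):
    """Solve Example 30 for a fixed lambda; linprog minimizes, so the
    max objective is negated and the optimal value negated back."""
    c = [-(3 - 6*lam), -(2 - 5*lam), -(5 + 2*lam)]
    A_ub = [[1, 2, 1], [3, 0, 2], [1, 4, 0]]
    b_ub = [430, 460, 420]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
    return -res.fun

print(solve_example30(0))    # 1350 = 1350 - 40*0
print(solve_example30(1))    # 1610 = 1150 + 460*1
print(solve_example30(-1))   # 5540/3 = 1780/3 + 3760/3
```

Each value agrees with the piecewise linear formula of the interval containing the chosen λ.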
4.2 Parametric Right-Hand-Side Problem

Now let us see the other type of parametric linear programming problem, the so-called parametric right-hand-side problem:

    min z(x) = c^T x
    subject to  Ax = b + λb′              (P_b)
                x ≥ 0
In the parametric right-hand-side problem, the constraints involve the parameter λ. However, it can be verified from standard results in linear algebra that if b, b′, or both do not lie in the linear hull of the columns of A, then Ax = b + λb′ has a solution for at most a single value of the parameter λ. This is a trivial case; hence we assume that both b and b′ are in the linear hull. Then, if a constraint in Ax = b + λb′ is redundant for some value of λ, it is redundant for all values of λ, and it can be eliminated. Thus we can assume rank(A) = m.
We write the parametric right-hand-side problem in the original tableau:

    a_11  ...  a_1j  ...  a_1n | b_1  b′_1
     .          .          .   |  .    .
    a_m1  ...  a_mj  ...  a_mn | b_m  b′_m
    c_1   ...  c_j   ...  c_n  |  0    0
The parametric right-hand-side simplex algorithm begins with an optimal basis of (P_b) for some fixed value of the parameter λ, for example λ = 0, and then solves (P_b) for all values of λ, using only dual simplex pivot steps.
Let x_B be an optimal basic vector for some specified value λ = λ_0, and let A_B be the associated optimal basis. Let c_B be the corresponding basic cost vector, b̄ = A_B^{-1} b, b̄′ = A_B^{-1} b′, and z̄ = c_B^T A_B^{-1} b, z̄′ = c_B^T A_B^{-1} b′. Since A_B is an optimal basis when λ = λ_0, A_B is dual feasible. The parameter λ appears only in the right-hand-side constant vector, hence dual feasibility of a basis is independent of λ. Thus, A_B is an optimal basis of the parametric right-hand-side problem for all values of λ for which it is primal feasible, that is, for all λ satisfying b̄ + λb̄′ ≥ 0.
Since the basis A_B is primal feasible when λ = λ_0, we must have b̄_i + λb̄′_i ≥ 0 for all i; the optimality criterion for A_B is b̄_i + λb̄′_i ≥ 0 for all i. In particular, a necessary condition for A_B to be an optimal basis is

    b̄_i ≥ 0    for all i such that b̄′_i = 0.
If b̄′_i > 0, the criterion gives λ ≥ −b̄_i/b̄′_i; if b̄′_i < 0, it gives λ ≤ −b̄_i/b̄′_i. Let

    λ_B = max { −b̄_i/b̄′_i : i such that b̄′_i > 0 },   λ_B = −∞ if b̄′_i ≤ 0 for all i,
    λ̄_B = min { −b̄_i/b̄′_i : i such that b̄′_i < 0 },   λ̄_B = +∞ if b̄′_i ≥ 0 for all i.
If λ_B > λ̄_B, A_B can never be a primal feasible basis for any λ. Otherwise, if λ_B ≤ λ̄_B, then the interval [λ_B, λ̄_B] is called the characteristic interval or the optimality interval of the basis A_B. For all values of λ in this interval, the BFS x_B is an optimal solution, and the optimal objective value and the values of the basic variables in the optimal BFS vary linearly in λ.
Suppose λ̄_B is finite and we have to solve the problem for λ > λ̄_B. Assume the index k attains the minimum in the definition of λ̄_B; then λ̄_B = −b̄_k/b̄′_k, and for λ > λ̄_B we have b̄_k + λb̄′_k < 0. If the primal infeasibility criterion (ā_kj ≥ 0 for all j) is satisfied, then the problem is infeasible whenever λ > λ̄_B. If the primal infeasibility criterion is not satisfied, we can make a dual simplex pivot step in the kth row. It gives a consecutive optimal basis A_B̃ whose characteristic interval lies to the right of λ̄_B. The same procedure is repeated with the new basis A_B̃ until optimal solutions are obtained for all required λ > λ̄_B.
Returning to the original basis A_B, suppose λ_B is finite, we have to solve the problem for λ < λ_B, and the index t attains the maximum in the definition of λ_B. If the primal infeasibility criterion (ā_tj ≥ 0 for all j) is not satisfied for λ < λ_B, a dual simplex pivot step is made in the tth row, and the procedure is continued in a similar manner.
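The primal-feasibility interval of a basis can be computed in exactly the same way as in the cost-parametric case, now from b̄ and b̄′. A small illustrative sketch (our own helper with hypothetical data, not from the thesis):

```python
import math

def rhs_characteristic_interval(b_bar, b_bar_prime):
    """Return (lambda_lower, lambda_upper): the lambda-range on which
    b_bar[i] + lambda * b_bar_prime[i] >= 0 for every row i."""
    lam_lo, lam_hi = -math.inf, math.inf
    for bi, bpi in zip(b_bar, b_bar_prime):
        if bpi > 0:            # constraint reads lambda >= -bi/bpi
            lam_lo = max(lam_lo, -bi / bpi)
        elif bpi < 0:          # constraint reads lambda <= -bi/bpi
            lam_hi = min(lam_hi, -bi / bpi)
    return lam_lo, lam_hi

# hypothetical data: b_bar = (2, 3), b_bar' = (1, -1)
print(rhs_characteristic_interval([2, 3], [1, -1]))   # (-2.0, 3.0)
```

Since dual feasibility is independent of λ, this interval is the whole optimality interval of the basis.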
Remark 3 As pointed out in the preceding remark about the parametric cost simplex algorithm, in trying to obtain optimal solutions for λ > λ̄_B, if there is a tie in the dual simplex minimum ratio test for the choice of the pivot column, and the choice is made arbitrarily, cycling can occur at the value λ = λ̄_B here too. The same thing can happen when trying to obtain optimal solutions for λ < λ_B. Methods for resolving this problem of cycling will be discussed later.
Lemma 31 Every parametric right-hand-side problem is the dual of a parametric cost problem and vice versa.
Let us see some properties of the optimal objective value function in parametric linear programming problems.
Theorem 32 The optimal objective value in a parametric right-hand-side linear programming problem is
(1) a piecewise linear convex function of the parameter, if the objective function is being minimized in the problem;
(2) a piecewise linear concave function of the parameter, if the objective function is being maximized in the problem.
Theorem 33 The optimal objective value in a parametric cost linear programming problem is
(1) a piecewise linear concave function of the parameter, if the objective function is being minimized in the problem;
(2) a piecewise linear convex function of the parameter, if the objective function is being maximized in the problem.
For the proofs of these theorems and the lemma, see [1].
4.3 Resolution of Cycling in the Parametric Simplex Algorithms

Consider the parametric right-hand-side linear programming problem, to be solved with the simplex algorithm. The parametric right-hand-side simplex algorithm starts with a basis A_B that is optimal for a fixed value of λ and determines the optimality interval of A_B. If λ̄_B ≠ +∞, let J = {i : i ties for the minimum in the definition of λ̄_B}. It selects an r ∈ J and performs a dual simplex pivot step in row r in the canonical tableau with respect to the basis A_B. This yields an adjacent basis A_B̃ that is also optimal for λ = λ̄_B. The lower characteristic value of the basis A_B̃ is λ̄_B, and the algorithm continues with A_B̃. However, cycling may occur at λ = λ̄_B.
4.3.1 Resolution of Cycling in the Parametric Right-Hand-Side Simplex Algorithm

Let I = {1, 2, ..., m} and let x_B be an optimal basic vector whose optimality interval is [λ_B, λ̄_B]. The method for resolving cycling in the parametric right-hand-side simplex algorithm requires the use of the lexico dual simplex algorithm, which we discussed in the third section. Assume the initial basic vector x_B is lexico dual feasible, A_B is the associated basis, and b̄ = A_B^{-1} b, b̄′ = A_B^{-1} b′. Suppose λ̄_B < +∞; let T = {i : b̄_i + λ̄_B b̄′_i = 0} and J = {i : b̄_i + λ̄_B b̄′_i = 0 and b̄′_i < 0}, the set of indices that tie for the minimum in determining λ̄_B. So J ⊆ T. As long as λ = λ̄_B, any pivot step performed in rows i ∈ T, beginning with x_B, will not change the primal basic solution: when the basic variable in the pivot row is zero in the basic solution, the pivot step is a primal degenerate pivot step that leaves the primal basic solution unchanged. Let x_B̃ denote any dual feasible basic vector containing all the variables x_j for j ∈ I\T as basic variables, and let b̃ and b̃′ be the associated right-hand-side constant vectors. Consequently, the basic solution with respect to x_B̃ at λ = λ̄_B is

    x_B̃ = b̃ + λ̄_B b̃′,

and all of the nonbasic variables are equal to 0. For all i ∈ T, b̃_i + λ̄_B b̃′_i = 0, and for i ∉ T, b̃_i + λ̄_B b̃′_i > 0. Thus the upper characteristic value associated with x_B̃ is greater than λ̄_B if b̃′_i ≥ 0 for all i ∈ T. In order to obtain optimal solutions for values of λ > λ̄_B, we consider the following subproblem at λ = λ̄_B:
    min   Σ_{i ∉ I\T} c̄_i x_i
    s.t.  Σ_{i ∉ I\T} ā_ji x_i = b̄′_j    (j ∈ T)       (SPRHSP)
          x_i ≥ 0    (i ∉ I\T)
The vector (x_i : i ∈ T) is a dual feasible basic vector for (SPRHSP). The subproblem can be solved by the lexico dual simplex algorithm in a finite number of steps without the possibility of cycling. If this process terminates by satisfying the primal infeasibility criterion in some row, that same row satisfies the primal infeasibility criterion for the original problem for values of λ > λ̄_B. On the other hand, if it terminates with an optimal basic vector (x_j : j ∈ D), then the basic vector (x_j : j ∈ D ∪ (I\T)) is an optimal basic vector for the original problem at λ = λ̄_B, for which the upper characteristic value is greater than λ̄_B.
However, it is not necessary to solve the subproblem separately; it can be carried out on the original problem itself. At each pivot, it is necessary to check whether the updated b̄′_j ≥ 0 for all j ∈ T. If so, the subproblem pivots are done and we resume the parametric right-hand-side simplex algorithm. If not, we continue the subproblem pivots by selecting as pivot row any row j ∈ T in which the present updated b̄′_j < 0, with the pivot column chosen by the lexico dual simplex minimum ratio test. After at most a finite number of such subproblem pivot steps at λ = λ̄_B, this procedure will either determine that the original problem is infeasible for λ > λ̄_B, or find an optimal basis for it at λ = λ̄_B whose upper characteristic value is greater than λ̄_B. Then the parametric right-hand-side simplex algorithm can be continued in the same way.
In a similar way the method can be used to determine optimal solutions for λ < λ_B.
4.3.2 Resolution of Cycling in the Parametric Cost Simplex Algorithm

Let x_B be an optimal basic vector whose optimality interval is [λ_B, λ̄_B]. The method for resolving cycling in the parametric cost simplex algorithm requires the use of the lexico primal simplex algorithm, which we discussed in the third section. Assume the initial basic vector x_B is lexico feasible, A_B is the associated basis, and c̄ = c − c_B^T A_B^{-1} A, c̄′ = c′ − c′_B^T A_B^{-1} A. Suppose λ̄_B < +∞; let T = {j : c̄_j + λ̄_B c̄′_j = 0} and J = {j : c̄_j + λ̄_B c̄′_j = 0 and c̄′_j < 0}, the set of indices that tie for the minimum in determining λ̄_B. So J ⊆ T. Let x_B̃ denote any primal feasible basic vector consisting of variables x_j with j ∈ T, and let c̃ and c̃′ be the associated updated cost vectors. Then for all j ∈ T, c̃_j + λ̄_B c̃′_j = 0, and c̃_j + λ̄_B c̃′_j > 0 if j ∉ T. The vector x_B̃ is another optimal basic vector at the value λ = λ̄_B. Thus the upper characteristic value associated with x_B̃ is greater than λ̄_B if c̃′_j ≥ 0 for all j ∈ T. In order to obtain optimal solutions for values of λ > λ̄_B, we consider the following subproblem at λ = λ̄_B:
    min   Σ_{j ∈ T} c̄′_j x_j
    s.t.  Σ_{j ∈ T} ā_ij x_j = b̄_i    (i = 1, ..., m)       (SPCP)
          x_j ≥ 0    (j ∈ T)
The basic vector of the current basis is a lexico primal feasible basic vector for (SPCP). The subproblem can be solved by the lexico primal simplex algorithm in a finite number of steps without the possibility of cycling. If this process terminates by satisfying the unboundedness criterion in some column, that same column satisfies the unboundedness criterion for the original problem for values of λ > λ̄_B. On the other hand, if it terminates with an optimal basic vector, the same basis is an optimal basis for the original problem (P_obj) at λ = λ̄_B, for which the upper characteristic value is greater than λ̄_B.
It is not necessary to solve the subproblem separately; it can be carried out on the original problem itself. At each pivot, it is necessary to check whether the updated c̄′_j ≥ 0 for all j ∈ T. If so, the subproblem pivots are done and we resume the parametric cost simplex algorithm. If not, we continue the subproblem pivots by selecting as pivot column any column j ∈ T in which the present updated c̄′_j < 0, with the pivot row chosen by the lexico primal simplex minimum ratio test. After at most a finite number of such subproblem pivot steps at λ = λ̄_B, this procedure will either determine that the original problem (P_obj) is unbounded below for λ > λ̄_B, or find an optimal basis for it at λ = λ̄_B whose upper characteristic value is greater than λ̄_B. Then the parametric cost simplex algorithm can be continued in the same way.
In a similar way the method can be used to determine optimal solutions for λ < λ_B.
5 Sensitivity Analysis
Suppose we have a practical linear programming problem given in standard form (P), and an optimal solution has been obtained for it. The data in A, c, b are estimated from practical considerations. Sometimes, after the final optimal solution has been obtained, they have to be changed, or extra constraints or variables have to be added to the model.
This chapter deals with the problem of obtaining an optimal solution of the modified problem starting from the optimal solution of the original problem; this is also called postoptimality analysis. We consider the case in which only one piece of data changes at a time; if several changes have to be made, make them one at a time. For more detail about this topic, see [1], [11], [3].
In the previous section we have already seen two types of postoptimality analysis, the parametric right-hand-side problem and the parametric cost problem. In this section we discuss various other types of postoptimality analysis. First, let us see the fundamental principle of sensitivity analysis; then we will discuss sensitivity analysis of the coefficients of the objective function, of the right-hand side, and of the matrix.
5.1 The Fundamental Principle of Sensitivity Analysis

Assume a linear programming problem is given in standard form (P). From the first section we know that if a basic solution x = (A_B^{-1} b, 0) satisfies the following two conditions, then it is an optimal solution of the problem (P):

    x_B = A_B^{-1} b ≥ 0                    (feasibility)
    s_N = c_N − A_N^T A_B^{-T} c_B ≥ 0      (optimality)

Let us recall the pivot tableau in this case:

    A_B^{-1} A              | A_B^{-1} b
    c^T − c_B^T A_B^{-1} A  | −c_B^T A_B^{-1} b

If the data in the pivot tableau satisfy these conditions, then the basic solution is optimal, and the corresponding basis A_B is an optimal basis.
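These two conditions are easy to test numerically for a concrete basis. A minimal NumPy sketch, with a made-up one-constraint problem; the helper name `is_optimal_basis` is ours, not the thesis':

```python
import numpy as np

def is_optimal_basis(A, b, c, basis, tol=1e-9):
    """Check feasibility x_B = A_B^{-1} b >= 0 and nonnegativity of the
    reduced costs c - A^T y, where y solves A_B^T y = c_B."""
    A = np.asarray(A, float)
    AB = A[:, basis]
    x_B = np.linalg.solve(AB, np.asarray(b, float))
    y = np.linalg.solve(AB.T, np.asarray(c, float)[basis])
    reduced = np.asarray(c, float) - A.T @ y    # zero on the basic columns
    return bool((x_B >= -tol).all() and (reduced >= -tol).all())

# min -x1 - x2  s.t.  x1 + x2 + x3 = 1, x >= 0  (x3 is a slack)
A, b, c = [[1, 1, 1]], [1], [-1, -1, 0]
print(is_optimal_basis(A, b, c, [0]))   # True:  x1 = 1 is optimal
print(is_optimal_basis(A, b, c, [2]))   # False: the slack basis has negative reduced costs
```

Sensitivity analysis, as described next, asks how far the data may move before one of these two checks starts to fail.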
Sometimes we have to modify the data in the problem, so it is useful to find out how the optimal solutions and the optimal basis vary with the changes in data. For instance, a change in b influences whether x_B = A_B^{-1} b ≥ 0 is still satisfied; a change in c affects whether the optimality condition s_N ≥ 0 still holds; and a change of data in A can influence both conditions. To analyze how these modified data affect the optimal solutions is the job of sensitivity analysis. The analysis is based on the already obtained solution and basis, which is why it is also called postoptimality analysis.
The basic idea of sensitivity analysis is to check whether the two conditions are still satisfied after the change of the data. In summary, the task of sensitivity analysis is:
1. Find the change limits of the data, in other words the stable interval, within which the optimal solution or the optimal basis remains the same.
2. If the change of data exceeds these limits, obtain the new optimal solution quickly, starting from the original optimal solution or basis.
As mentioned above, in practical problems the following data or conditions are modified frequently:
1. the coefficients of the objective function, c_j;
2. the right-hand-side constants, b_i;
3. the data in the matrix, a_ij, including the introduction of an additional constraint or a new variable.
In the following, we discuss the effect of these changes of data on the optimal solution.
5.2 The Sensitivity Analysis of the Coefficients of the Objective Function

From the pivot tableau we can read off that a change of the coefficients of the objective function may change the values s_j, and hence may affect whether the optimality condition is satisfied. There are two cases: first, when the corresponding variable is in the basis, and second, when the corresponding variable is nonbasic.
Suppose x_r is a variable that is not in the optimal basis A_B. Assuming that all the other cost coefficients except c_r remain fixed at their specified values, determine the interval of values of c_r within which A_B remains an optimal basis. If the cost coefficient of the nonbasic variable x_r is changed from c_r into c_r + Δc_r, then the basis A_B remains an optimal basis as long as

    c̄′_r = c_r + Δc_r − c_B^T A_B^{-1} a_r ≥ 0,

consequently,

    Δc_r ≥ −c̄_r.

The reduced cost coefficients of all the other variables are independent of the value of c_r, hence they remain nonnegative. If Δc_r is not in the interval [−c̄_r, +∞), then x_r is the only nonbasic variable that has a negative reduced cost coefficient in the modified problem. We bring x_r into the basis and continue the algorithm until a terminal basis for the modified problem is obtained.
Example 34 Suppose the optimal solution of the following linear programming problem has been obtained [11]:

    min  −x_1 − 5x_2 − 3x_3 − 4x_4
    subject to   2x_1 + 3x_2 +  x_3 + 2x_4 ≤  800
                 5x_1 + 4x_2 + 3x_3 + 4x_4 ≤ 1200
                 3x_1 + 4x_2 + 5x_3 + 3x_4 ≤ 1000
                 x_j ≥ 0    (j = 1, 2, 3, 4)
and we have the optimal tableau (x_5, x_6, x_7 are the slack variables):

    opt   x_1   x_2    x_3   x_4  x_5   x_6   x_7 |    b
    x_5   1/4    0   −13/4    0    1    1/4   −1  |  100
    x_4    2     0    −2      1    0     1    −1  |  200
    x_2  −3/4    1    11/4    0    0   −3/4    1  |  100
    c    13/4    0    11/4    0    0    1/4    1  | 1300
The questions are:
(1) What are the stable intervals of the values of c_1 and c_3 within which the current optimal basis remains optimal?
(2) If the value of c_1 changes to −5, what is the optimal solution?
From the optimal tableau we can read off that c̄_1 = 13/4 and c̄_3 = 11/4. For the basis to remain optimal, Δc_r ≥ −c̄_r has to be satisfied. So we get Δc_1 ≥ −13/4 and Δc_3 ≥ −11/4, namely

    c′_1 = c_1 + Δc_1 ≥ −1 − 13/4 = −17/4   and   c′_3 = c_3 + Δc_3 ≥ −3 − 11/4 = −23/4.
If c′_1 = −5 < −17/4, then it is outside of the stable interval, and the optimal solution cannot remain the same. The new value of the reduced cost coefficient is

    c̄_1 = c′_1 − c_B^T A_B^{-1} a_1 = −5 − (0, −4, −5)(1/4, 2, −3/4)^T = −3/4 < 0,

so the nonbasic variable x_1 should be the entering variable:
         x_1   x_2    x_3   x_4  x_5   x_6   x_7 |    b
    x_5   1/4    0   −13/4    0    1    1/4   −1  |  100
    x_4    2     0    −2      1    0     1    −1  |  200
    x_2  −3/4    1    11/4    0    0   −3/4    1  |  100
    c    −3/4    0    11/4    0    0    1/4    1  | 1300
Continuing the simplex algorithm, we obtain the new optimal tableau:

        x_1  x_2  x_3    x_4   x_5   x_6   x_7 |    b
    x_5  0    0   −3    −1/8    1    1/8  −7/8 |   75
    x_1  1    0   −1     1/2    0    1/2  −1/2 |  100
    x_2  0    1    2     3/8    0   −3/8   5/8 |  175
    c    0    0    2     3/8    0    5/8   5/8 | 1375
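Both answers of Example 34 can be cross-checked with an off-the-shelf LP solver. The sketch below is our verification (assuming SciPy), solving the problem once with the original costs and once with c_1 = −5:

```python
from scipy.optimize import linprog

A_ub = [[2, 3, 1, 2], [5, 4, 3, 4], [3, 4, 5, 3]]
b_ub = [800, 1200, 1000]

# original costs: optimal value matches the objective entry 1300 of the tableau
res0 = linprog([-1, -5, -3, -4], A_ub=A_ub, b_ub=b_ub, method="highs")
print(res0.fun)   # -1300

# c1 changed to -5: matches the re-optimized tableau entry 1375
res1 = linprog([-5, -5, -3, -4], A_ub=A_ub, b_ub=b_ub, method="highs")
print(res1.fun)   # -1375
```

(The tableau's objective entry stores −z for this min problem, hence the sign difference.)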
Now suppose all the cost coefficients except c_p, which is a basic cost coefficient, remain fixed at their present values. Determine the interval of values of c_p for which A_B remains an optimal basis. Since c_p is a basic cost coefficient, any change in c_p changes the dual solution corresponding to the basis A_B and all the reduced cost coefficients. Suppose c_p changes into c′_p = c_p + Δc_p. Because p ∈ I_B,

    (c_B + Δc_B)^T A_B^{-1} A = c_B^T A_B^{-1} A + (0, ..., Δc_p, ..., 0) A_B^{-1} A
                              = c_B^T A_B^{-1} A + Δc_p (ā_p1, ā_p2, ..., ā_pn),

where (ā_p1, ..., ā_pn) is the pth row of the updated matrix A_B^{-1} A, and the new reduced cost coefficients are

    c̄′_j = c̄_j − Δc_p ā_pj    (j = 1, 2, ..., n).

If we want the optimal basis to remain optimal, then the following condition has to be satisfied:

    c̄′_j = c̄_j − Δc_p ā_pj ≥ 0    for all nonbasic j.

From this we obtain: if ā_pj < 0, then Δc_p ≥ c̄_j/ā_pj, and if ā_pj > 0, then Δc_p ≤ c̄_j/ā_pj. Therefore

    max_j { c̄_j/ā_pj : ā_pj < 0 } ≤ Δc_p ≤ min_j { c̄_j/ā_pj : ā_pj > 0 },

where j runs over the nonbasic indices. If the changed data is not in this interval any more, then we continue the algorithm.
Example 35 For the previous example, the stable interval of the cost coefficient of the basic variable x_2 is:

    max { (13/4)/(−3/4), (1/4)/(−3/4) } ≤ Δc_2 ≤ min { (11/4)/(11/4), 1/1 },

thus −1/3 ≤ Δc_2 ≤ 1; if −16/3 ≤ c′_2 ≤ −4, then the optimal solution remains.
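The same computation can be written out as a small helper (ours, not from the thesis); it takes the reduced costs and the tableau row of the basic variable, both restricted to the nonbasic columns:

```python
import math

def basic_cost_interval(c_bar_N, a_bar_pN):
    """Stable range [lo, hi] of Delta c_p for a basic cost coefficient,
    from c_bar and tableau row p on the NONBASIC columns only."""
    lo, hi = -math.inf, math.inf
    for cj, apj in zip(c_bar_N, a_bar_pN):
        if apj < 0:            # constraint reads Delta c_p >= cj/apj
            lo = max(lo, cj / apj)
        elif apj > 0:          # constraint reads Delta c_p <= cj/apj
            hi = min(hi, cj / apj)
    return lo, hi

# Example 35: nonbasic columns x1, x3, x6, x7 of the x2-row
lo, hi = basic_cost_interval([13/4, 11/4, 1/4, 1], [-3/4, 11/4, -3/4, 1])
print(lo, hi)   # -1/3 and 1
```

Restricting to nonbasic columns matters: basic columns have ā_pj = 0 (or j = p, whose reduced cost stays 0), so they impose no constraint.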
5.3 The Sensitivity Analysis of the Right-Hand-Side Constants

In the linear programming problem (P), suppose we want to determine the interval of values of one of the right-hand-side constants, say b_r, for which the basis A_B remains optimal.
Since x_B = A_B^{-1} b and z = c_B^T A_B^{-1} b, a change of b_r influences the primal feasibility of the original optimal solution and the optimal value, while A_B remains dual feasible.
Suppose the right-hand-side constant b_r changes into b′_r = b_r + Δb_r and assume the other data in the problem stay fixed. Then the basic solution changes into

    x′_B = A_B^{-1}(b + Δb),

in which b = (b_1, ..., b_r, ..., b_m)^T and Δb = (0, ..., Δb_r, ..., 0)^T. In this case

    x′_B = A_B^{-1} b + A_B^{-1} Δb = b̄ + Δb_r (a^{-1}_{1r}, a^{-1}_{2r}, ..., a^{-1}_{mr})^T,

that is, the ith component of x′_B is b̄_i + a^{-1}_{ir} Δb_r,
where (a^{-1}_{1r}, a^{-1}_{2r}, ..., a^{-1}_{mr})^T is the rth column of the inverse matrix A_B^{-1}. If the optimal basis A_B is to remain optimal, then the feasibility condition x′_B ≥ 0 has to be satisfied, in other words,

    b̄_i + a^{-1}_{ir} Δb_r ≥ 0    (i = 1, 2, ..., m).

From this we can conclude: if a^{-1}_{ir} > 0, then Δb_r ≥ −b̄_i/a^{-1}_{ir}, and if a^{-1}_{ir} < 0, then Δb_r ≤ −b̄_i/a^{-1}_{ir}. Therefore

    max_i { −b̄_i/a^{-1}_{ir} : a^{-1}_{ir} > 0 } ≤ Δb_r ≤ min_i { −b̄_i/a^{-1}_{ir} : a^{-1}_{ir} < 0 }.

Remark that in the optimal pivot tableau the inverse matrix A_B^{-1} can be read out in the place of the initial basis.
After b is changed into b + Δb, if the optimal basis does not change, then the optimal value is

    z′ = c_B^T A_B^{-1}(b + Δb) = z* + c_B^T A_B^{-1} Δb.

If the changed data is not in the interval any more, then we continue with the dual simplex algorithm until a new optimal basis is obtained.
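The interval formula above can be coded in a few lines (our own helper; the numeric data are the third column of the basis inverse used in Example 36 below):

```python
import math

def rhs_stable_interval(b_bar, inv_col_r):
    """Stable range [lo, hi] of Delta b_r, given the basic values b_bar
    and the r-th column of the basis inverse."""
    lo, hi = -math.inf, math.inf
    for bi, air in zip(b_bar, inv_col_r):
        if air > 0:            # constraint reads Delta b_r >= -bi/air
            lo = max(lo, -bi / air)
        elif air < 0:          # constraint reads Delta b_r <= -bi/air
            hi = min(hi, -bi / air)
    return lo, hi

# third column of the basis inverse in Example 36 below:
lo, hi = rhs_stable_interval([100, 200, 100], [-1, -1, 1])
print(lo, hi)   # -100 and 100
```

Note the structural symmetry with the cost-coefficient case: there the reduced cost row had to stay nonnegative, here the updated right-hand side does.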
Example 36 Let us continue the previous Example 35. From the optimal pivot tableau, A_B^{-1} can be read out:

    A_B^{-1} = ( 1   1/4  −1
                 0    1   −1
                 0  −3/4   1 )

Then the stable intervals of the values of b_1, b_2, b_3 are:

    max { −100/1 } ≤ Δb_1 ≤ +∞,                                namely −100 ≤ Δb_1;
    max { −100/(1/4), −200/1 } ≤ Δb_2 ≤ min { −100/(−3/4) },   namely −200 ≤ Δb_2 ≤ 400/3;
    max { −100/1 } ≤ Δb_3 ≤ min { −100/(−1), −200/(−1) },      namely −100 ≤ Δb_3 ≤ 100.

If Δb_3 = −150, then the original optimal solution is no longer optimal; it is not even primal feasible:
    x′_B = (x_5, x_4, x_2)^T = A_B^{-1}(b + Δb) = A_B^{-1} (800, 1200, 850)^T = (250, 350, −50)^T,

and

    z′ = c_B^T A_B^{-1}(b + Δb) = (0, −4, −5)(250, 350, −50)^T = −1150.

Let us write these new data into the pivot tableau instead of the previous data; then we get the following pivot tableau:
         x_1   x_2    x_3   x_4  x_5   x_6   x_7 |    b
    x_5   1/4    0   −13/4    0    1    1/4   −1  |  250
    x_4    2     0    −2      1    0     1    −1  |  350
    x_2  −3/4    1    11/4    0    0   −3/4    1  |  −50
    c    13/4    0    11/4    0    0    1/4    1  | 1150
This pivot tableau is dual feasible but not primal feasible, so let us make a dual simplex pivot iteration (x_2 leaves the basis and x_6 enters):

         x_1   x_2    x_3   x_4  x_5  x_6   x_7  |     b
    x_5   0    1/3   −7/3    0    1    0   −2/3  |  700/3
    x_4   1    4/3    5/3    1    0    0    1/3  |  850/3
    x_6   1   −4/3  −11/3    0    0    1   −4/3  |  200/3
    c     3    1/3   11/3    0    0    0    4/3  | 3400/3

Now we get x′* = (0, 0, 0, 850/3, 700/3, 200/3, 0)^T, and the optimal value is z′* = −3400/3.
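Again the re-optimized value can be cross-checked directly (our check, assuming SciPy): with b_3 lowered by 150 to 850, the minimum is −3400/3 ≈ −1133.33.

```python
from scipy.optimize import linprog

# Example 34's problem with the modified right-hand side b3 = 850
res = linprog([-1, -5, -3, -4],
              A_ub=[[2, 3, 1, 2], [5, 4, 3, 4], [3, 4, 5, 3]],
              b_ub=[800, 1200, 850], method="highs")
print(res.fun)   # -3400/3
```

This matches the tableau obtained by the single dual simplex pivot above (the tableau stores −z).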
5.4 The Sensitivity Analysis of Data in the Matrix

This analysis deals with the change of one matrix element, adding constraints, or adding variables to the linear programming problem. Let us discuss them one after another.

5.4.1 The Change of One Matrix Element

According to which column the altered element belongs to, the cases can be categorized as follows: changes in the coefficients of a nonbasic column vector or of a basic column.
Changes in the coefficients in a nonbasic column vector
Let x_j be a variable that is not in the optimal basis of the given linear programming problem, and let a_ij be one of the coefficients of the column vector of x_j. The question is: if all the other data in the problem except a_ij remain fixed at their present values, what is the stable interval of values of a_ij within which A_B remains an optimal basis? Since x_j is a nonbasic variable, a change in a_ij does not affect the primal feasibility of A_B; it can only change the reduced cost coefficient of x_j.
Suppose the column of x_j, a_j, changes into a′_j = a_j + Δa_j. Then

    c̄′_j = c_j − c_B^T A_B^{-1} a′_j = c_j − c_B^T A_B^{-1}(a_j + Δa_j) = c̄_j − c_B^T A_B^{-1} Δa_j.

The value of c̄′_j has to be nonnegative for A_B to remain an optimal basis, in other words,

    c_B^T A_B^{-1} Δa_j ≤ c̄_j.
Especially, if ∆ā_j = (0, ..., ∆a_ij, ..., 0)^T, then

    c_B^T A_B^{-1} ∆ā_j = ( (c_B^T A_B^{-1})_1, ..., (c_B^T A_B^{-1})_i, ..., (c_B^T A_B^{-1})_m ) (0, ..., ∆a_ij, ..., 0)^T = (c_B^T A_B^{-1})_i ∆a_ij ≤ c̄_j,
from that, we obtain

    if (c_B^T A_B^{-1})_i > 0, then ∆a_ij ≤ c̄_j / (c_B^T A_B^{-1})_i,
    if (c_B^T A_B^{-1})_i < 0, then ∆a_ij ≥ c̄_j / (c_B^T A_B^{-1})_i.
That determines a closed interval for ∆aij; as long as ∆aij lies in this interval, AB remains an optimal basis. But if it is outside this interval, make the change, bring xj into the basis, and continue with the application of the algorithm until a new optimal basis is obtained.
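The two inequalities above can be packaged as a small helper. In this sketch, y_i stands for (c_B^T A_B^{-1})_i and cbar_j for the reduced cost c̄_j; the sample values are illustrative, not taken from the thesis:

```python
import math

def stable_interval(y_i, cbar_j):
    """Return (lo, hi) such that A_B stays optimal while lo <= Delta a_ij <= hi."""
    if y_i > 0:
        return (-math.inf, cbar_j / y_i)   # Delta a_ij <= cbar_j / y_i
    if y_i < 0:
        return (cbar_j / y_i, math.inf)    # Delta a_ij >= cbar_j / y_i
    return (-math.inf, math.inf)           # y_i = 0: the reduced cost never changes

print(stable_interval(2.0, 4.0))   # (-inf, 2.0)
print(stable_interval(-2.0, 4.0))  # (-2.0, inf)
```

Note that c̄_j ≥ 0 at an optimal basis, so zero always lies in the returned interval, as expected.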
Changes in the coefficients in a basic column vector
Suppose xj is a basic variable. A change of its coefficients ā_j affects both the basis AB and the inverse A_B^{-1}. Therefore it influences not only the optimality of the solution but also its feasibility, that is, both of the two conditions discussed at the beginning of this section. We will not present the general solution here; in this case, the solution depends on the changed pivot table.
5.4.2 Adding a new variable
Suppose a new variable x_{n+1} has to be added to a given linear programming problem; the corresponding cost value is c_{n+1}, and its column vector in the matrix is a_{n+1} = (a_{1,n+1}, a_{2,n+1}, ..., a_{m,n+1})^T. Then we attach x_{n+1} as a nonbasic variable and add the following column to the original optimal tableau:

    ā_{n+1} = A_B^{-1} a_{n+1} = (ā_{1,n+1}, ā_{2,n+1}, ..., ā_{m,n+1})^T

and

    c̄_{n+1} = c_{n+1} − c_B^T A_B^{-1} a_{n+1} = c_{n+1} − c_B^T ā_{n+1}.
In this way, we obtain the new pivot tableau:

    [ A_B^{-1} A              |  ā_{n+1}  |  A_B^{-1} b      ]
    [ c^T − c_B^T A_B^{-1} A  |  c̄_{n+1} | −c_B^T A_B^{-1} b ]
If c̄_{n+1} ≥ 0, then the original optimal solution remains optimal; otherwise we continue with the application of the algorithm until a new optimal basis is obtained.
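Pricing out a new column can be sketched as follows, using A_B^{-1} and c_B from the example of this chapter; the new column a_new and cost c_new are invented for illustration:

```python
import numpy as np

AB_inv = np.array([[1.0, 1/4, -1.0],
                   [0.0, 1.0, -1.0],
                   [0.0, -3/4, 1.0]])
c_B = np.array([0.0, -4.0, -5.0])   # costs of the basic variables x5, x4, x2

a_new = np.array([2.0, 1.0, 1.0])   # hypothetical column of x_{n+1}
c_new = -1.0                        # hypothetical cost coefficient

abar = AB_inv @ a_new               # transformed column, added to the tableau
cbar = c_new - c_B @ abar           # reduced cost of x_{n+1}

# Here abar = (1.25, 0, 0.25) and cbar = 0.25 >= 0, so for this particular
# column the original optimal solution stays optimal; a negative cbar would
# require bringing x_{n+1} into the basis.
print(abar, cbar)
```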
5.4.3 Adding a new constraint
Consider the linear programming problem (P) and the optimal basis AB for
it. Suppose the additional constraint is
am+1,1 x1 + am+1,2 x2 + ... + am+1,n xn ≤ bm+1,
in which am+1,j (j = 1, 2, ..., n) and bm+1 are constants.
Let P′ denote the set of feasible solutions of the modified problem; then P′ ⊂ P and z(x*) = min {z(x) : x ∈ P} ≤ min {z(x) : x ∈ P′}. Hence, if x* ∈ P′, that is, x* satisfies the additional constraint, then x* remains optimal for the modified problem. On the other hand, if x* does not satisfy the additional constraint, then

    x*_{n+1} = −(a_{m+1,1} x*_1 + a_{m+1,2} x*_2 + ... + a_{m+1,n} x*_n) + b_{m+1} < 0,

in which x_{n+1} is the slack variable corresponding to the new additional constraint.
The modified problem is

    min  z(x) = Σ_{j=1}^{n} c_j x_j + 0·x_{n+1}
    subject to
        [    A       0 ] [   x     ]   [   b     ]
        [ a^{(m+1)}  1 ] [ x_{n+1} ] = [ b_{m+1} ]
        x_j ≥ 0   (for all j),

where a^{(m+1)} = (a_{m+1,1}, ..., a_{m+1,n}).
To solve this, we add one row and one column to the original optimal tableau, so that the slack variable x_{n+1} can be added as a basic variable. In the column corresponding to x_{n+1} there is only a single "1", in the row corresponding to the basic variable x_{n+1}; in the same row, the constants a_{m+1,j} have to be written under the corresponding variables x_j (j = 1, 2, ..., n).

In the bigger pivot tableau, pivot iterations should be made in the (i, i) positions, where i ∈ IB is a basic index, because the new row destroys the unit matrix in the pivot tableau. Notice that this pivot tableau is obtained by adding one row and one column to the original optimal tableau; this is why all of the computations involve only the new row. After these pivot steps a new pivot tableau is obtained. From this tableau we can read off that the reduced cost coefficients remain the same, thus the modified problem is still dual feasible. If the value of x_{n+1} in the new tableau is nonnegative, then the tableau is still primal feasible, which also means that the optimal solution remains optimal. But if it is negative, then we can solve the problem by using the dual simplex algorithm.
To understand more about this problem and the method, let us consider
the following example.
Example 37 Let us add the constraint
4x1 + 2x2 − 2x3 + 4x4 ≤ 600
to the example in subsection 5.2. The original optimal table is:
         x1    x2     x3    x4   x5    x6    x7 |    b
  x5    1/4     0  -13/4     0    1   1/4    -1 |  100
  x4      2     0     -2     1    0     1    -1 |  200
  x2   -3/4     1   11/4     0    0  -3/4     1 |  100
  c    13/4     0   11/4     0    0   1/4     1 | 1300

Now let us add a row and a column into it.
         x1    x2     x3    x4   x5    x6    x7   x8 |    b
  x5    1/4     0  -13/4     0    1   1/4    -1    0 |  100
  x4      2     0     -2     1    0     1    -1    0 |  200
  x2   -3/4     1   11/4     0    0  -3/4     1    0 |  100
  x8      4     2     -2     4    0     0     0    1 |  600
  c    13/4     0   11/4     0    0   1/4     1    0 | 1300
After the pivoting:

         x1    x2     x3    x4   x5    x6    x7   x8 |    b
  x5    1/4     0  -13/4     0    1   1/4    -1    0 |  100
  x4      2     0     -2     1    0     1    -1    0 |  200
  x2   -3/4     1   11/4     0    0  -3/4     1    0 |  100
  x8   -5/2     0    1/2     0    0  -5/2     2    1 | -400
  c    13/4     0   11/4     0    0   1/4     1    0 | 1300
Making a dual pivot iteration (x8 leaves the basis and, by the ratio test, x6 enters), we obtain:

         x1    x2     x3    x4   x5   x6    x7     x8 |    b
  x5      0     0  -16/5     0    1    0  -4/5   1/10 |   60
  x4      1     0   -9/5     0    0    0  -1/5    2/5 |   40
  x2      0     1   13/5     0    0    0   2/5  -3/10 |  220
  x6      1     0   -1/5     0    0    1  -4/5   -2/5 |  160
  c       3     0   14/5     0    0    0   6/5   1/10 | 1260

So, the optimal solution is x* = (0, 220, 0, 40, 60, 160, 0, 0)^T and the optimal value is z* = 1260.
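The row elimination performed in Example 37 can be verified numerically. The new row of x8 is the added constraint row minus 2 times the x2-row and 4 times the x4-row, because x2 and x4 are basic and appear with coefficients 2 and 4 in the added constraint:

```python
import numpy as np

# rows of the optimal tableau (columns x1..x8, then b), taken from Example 37
x4_row = np.array([2.0, 0.0, -2.0, 1.0, 0.0, 1.0, -1.0, 0.0, 200.0])
x2_row = np.array([-3/4, 1.0, 11/4, 0.0, 0.0, -3/4, 1.0, 0.0, 100.0])
x8_row = np.array([4.0, 2.0, -2.0, 4.0, 0.0, 0.0, 0.0, 1.0, 600.0])

new_row = x8_row - 2*x2_row - 4*x4_row   # eliminate the basic variables x2, x4
print(new_row)
# the last entry (the right-hand side of the x8 row) becomes -400, so the
# tableau is primal infeasible and a dual simplex step is needed
```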
Now suppose the additional constraint is an equality
am+1,1 x1 + am+1,2 x2 + ... + am+1,n xn = bm+1 .
If the optimal solution x* satisfies the additional constraint, then it remains the optimal solution; otherwise we can write the coefficients into the pivot tableau and solve the problem by using the simplex algorithm.
Remark 4 When we add a constraint to the problem, the rank of the matrix usually increases. If the additional constraint is an inequality, the slack variable simply becomes a basic variable. On the other hand, if the additional constraint is an equality, there is no slack variable to become a basic variable, so we need an auxiliary variable as a basic variable. After that, we use the two-phase simplex algorithm to eliminate it.
6 Analysis of an oil production planning model
The linear programming system Aspen PIMS constructs from the problem a file in MPS format, MPSPROB.mps. We analyze this large oil production planning model from structural and quantitative points of view, especially with regard to feasibility and local optimality.

With CPLEX we tried to solve this model with several linear programming methods; unfortunately, each of them signalled the infeasibility of the model. During the analysis we chiefly want to find out the reasons that can cause the infeasibility. If the model were feasible, the solution structure and the optimal value could vary depending on the applied method.
We processed an article by Michael A. Tucker (LP Modelling: Past, Present and Future), taking feasibility and the mathematical model into consideration. In the article the author gives a summary of the widespread modelling methods in the oil industry and their numerical experiences in the past, present and future. In order to analyze the file in MPS format, we introduce the MPS format, the industrial standard description of linear programming problems.
The PIMS program applies a recursion model, with which we obtain alternate optimal solutions. We analyze them with respect to the objective function and structure. Besides that, we examine how much the solutions obtained by the recursion model violate the conditions. We come to the conclusion that the solutions violate a great percentage of the conditions.

In order to obtain a more accurate model and solutions, it was essential to build an analyzing system with which the feasibility can be checked and the structure of the MPS file can be changed. In this section we will also discuss this analyzing program, which is developed in the C++ programming language for the Windows platform, and we will see its documentation.
6.1 Short analysis about the structure and size of the model
Currently, the Aspen PIMS program is used to build the product planning models and also to solve them. Aspen PIMS is a commercial LP system, a registered trademark of Aspen Technology; besides it there are Petrofine, a registered trademark of KBC Process Technologies, and GRTMPS, a registered trademark of Haverly Systems. These systems are all developed for the oil industry in order to generate product planning models. We will discuss the program and its background in the next part of this section.
The Aspen PIMS program produces linear programming models in MPS format, an industrial standard description of linear programming problems used in widespread areas. This format is adopted by most commercial codes and is not restricted to the MPS series of codes developed originally for IBM computers. For more information, see [12].
Now let us see a short survey of the structure and qualitative properties of a linear programming model file generated by PIMS. Our analysis is based on this file. At first, let us have a look at the structure of the input matrix, which can be seen in the following picture.
With this color picture the structure is easy to understand. The blue points represent entries with value −1, the red points entries with value +1, and all other non-zero values are symbolized by black points. Accordingly, the null elements are not drawn in the picture. The green zone means that the right-hand-side constant of that constraint condition is non-zero. The violet columns correspond to negative coefficients in the objective function, and the yellow columns to positive coefficients in the objective function. It is noticeable that in some places one point covers up another in the picture; this is due to the big size of the input matrix and the limited resolution. In the following, let us look over some characteristic data.
The number of decision variables: 6471
The number of constraint conditions: 5601
The number of non-zero elements in the input matrix: 37603
The number of elements having value 1: 6318
The number of elements having value −1: 3630
The number of positive elements (but not 1): 6774
The number of negative elements (but not −1): 20881
The number of equality conditions: 5051
The number of smaller-or-equal conditions: 295
The number of greater-or-equal conditions: 239
The number of neutral conditions (objective functions): 16

Conditions according to the value of the right-hand-side constants:
the right-hand-side constant is 0: 5051
the right-hand-side constant is nonnegative: 238
the right-hand-side constant is nonpositive: 167
the right-hand-side constant is 1: 0
the right-hand-side constant is −1: 0
the right-hand-side constant is greater than b (b ≠ 0): 1
the right-hand-side constant is smaller than b (b ≠ 0): 128

The number of nonnegative variables: 6345
The number of free variables: 41
The number of variables with fixed value: 85
The number of variables having a lower bound: 3329
The number of variables having an upper bound: 3346
Variables having zero coefficients in the objective function: 6250
Variables having positive coefficients in the objective function: 149
Variables having negative coefficients in the objective function: 72
Variables with fixed positive coefficients in the objective function: 41
Variables with fixed negative coefficients in the objective function: 41
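Counts like the ones in the table above can be collected by a small scanner over the COLUMNS section of an MPS file. The sketch below uses an invented toy file; for the real model, MPSPROB.mps would be read from disk instead:

```python
mps = """NAME          TOY
ROWS
 N  COST
 E  BAL
 L  CAP
COLUMNS
    X1  COST  -2.0   BAL  1.0
    X1  CAP   1.0
    X2  COST  1.0    BAL  -1.0
RHS
    RHS  CAP  10.0
ENDATA
"""

def column_stats(text):
    """Tally +1, -1 and other non-zero entries in the COLUMNS section."""
    plus_one = minus_one = other = 0
    in_columns = False
    for line in text.splitlines():
        if not line.startswith((" ", "\t")):       # section header line
            in_columns = line.strip() == "COLUMNS"
            continue
        if not in_columns:
            continue
        fields = line.split()
        # fields: column name, then (row name, value) pairs
        for value in fields[2::2]:
            v = float(value)
            if v == 1.0:
                plus_one += 1
            elif v == -1.0:
                minus_one += 1
            elif v != 0.0:
                other += 1
    return plus_one, minus_one, other

print(column_stats(mps))  # (3, 1, 1)
```

This only reads free-format two-pair COLUMNS records; a full MPS reader would also handle the RHS, RANGES and BOUNDS sections, which the analyzing program mentioned above does.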
First of all, it is worth noting that many important decision variables, those with non-zero coefficients in the objective function, are fixed in the model.

From the viewpoint of the results, the most important variables are those whose coefficients in the objective function are non-zero. The number of such variables is small relative to the whole problem; in our example it is only about two hundred. Among them 72 coefficients are negative; in the case of a maximized objective function, such variables play important roles. The positive coefficients number 149 (such variables generally model the realization decisions).
We can remark here on the similar proportion between right-hand-side constants with value 0 and with non-zero value. According to the analysis, very many constraints (5051) have right-hand-side constants with value zero. Such equality conditions generally represent balance conditions: processes, possibly material balances. The constraint conditions with non-zero right-hand-side constants, whose number is 129, generally model qualitative or quantitative conditions.
It is important to highlight the big number of fixed variables: there are 85 of them. The overwhelming majority, 82 according to the data, have non-zero coefficients in the objective function; in other words, they could be treated as essential decision variables. Such variables with fixed value do not participate in the optimization.
Similar consideration is deserved by the big number of neutral conditions, which is 16. According to the description of the MPS format for linear programming problems [12], neutral conditions are objective functions. Another possibility is that if some conditions are set to be neutral, then they do not participate in the optimization; in other words, they are deleted from the viewpoint of the results. Among the 16 neutral rows in our MPS file there are 2 objective functions and 14 constraint conditions.
Other conclusions can also be drawn from the structure of the MPSPROB.mps file. It is remarkable that there are a lot of null elements in both the objective function and the right-hand side of the LP model. That tells us that this problem is both primal and dual degenerate; consequently, it is numerically difficult to handle. In other words, if the problem has an optimal solution, then it is very likely that the problem has several alternate optimal solutions of different structures.
6.2 Flow chart of solution by PIMS
Now let us inspect how the product planning models for the oil industry are produced with the help of the PIMS programming system. We shall consider it from the modelling, optimization and solution analysis aspects. At first, let us have a look at the following picture of the flow chart of the solution, which can give us a more accurate view of the current modelling steps.
[Flow chart of PIMS: data collection and processing → choice of algorithm, parameter settings → LP generation → distributive recursion ⇄ solving the LP by XPressMP → solution report]
In the previous subsection we saw that the PIMS program is used to generate product planning models. Actually, PIMS is only a modelling language program, which can generate the model in many formats, e.g. the MPS format, which we have already discussed. Through the previous picture, the structure of the PIMS program becomes perspicuous.
At first, the collected data are sent to PIMS as input. PIMS gives users options to choose the algorithms and parameters. From the initial data PIMS generates a linear programming problem. Because there are nonlinear constraints in the model, they are linearized with guessed values. After that, PIMS calls the XPressMP linear programming solver to solve the problem. If XPressMP finds it unsolvable, then the algorithm enters a so-called distributive recursion, which modifies the values of some data in the matrix or in the right-hand side. Afterwards PIMS begins the cycle again from the parameter setting. The cycle is repeated until XPressMP can solve the modified model, or PIMS finds the problem "strongly infeasible", which means that the problem is still unsolvable after many cycle iterations.
As previously mentioned, there are nonlinear, nonconvex constraints in the model, which are linearized with approximated values. This process can cause not only numerical instability but also infeasibility, which means that the modified problem is unsolvable. A bit later we will discuss the linearization in detail. About the distributive recursion (DR) we do not want to spend more words here; because of the lack of specialized literature about this commercial technique, it is still like a black box for us.
6.3 The oil industrial product planning model
Let us have a closer, more detailed look at the oil industrial product planning model and see what role the linear programming problem plays in it. In order to become more familiar with the oil industrial product planning model, we looked for reference material. Unfortunately, because this area involves many commercial aspects, very little material is public. This section is based on the article by Michael A. Tucker [13].
In this article the past, present and future of the roles of linear programming in oil industrial modelling are surveyed. The LP modelling systems have changed substantially: nonlinear distributive recursion techniques have been developed, swing cuts have been semi-automated, multi-period and multi-refinery modelling capabilities have been improved, and reformulated gasoline blending capabilities have been incorporated into the systems. Because of the substantial change in LP model structure, the yield-driven models with fixed stream properties have moved to recursed property-driven models.
In the past, a typical oil industrial model had the following characteristics. The crude unit models were represented by a series of yield structures for each crude:
• Each cutting scheme (including max naphtha, max kerosene, etc.) had a separate yield structure for each crude.
• The major side-cuts from each crude (and from certain cuts from each
cutting scheme) are unique streams, with unique names and unique
(fixed) qualities.
• The crude assay table data was generally generated by a crude assay
system containing a library of crude oils. Yields and properties from
the assay system were generally those predicted from laboratory TBP
distillations.
The downstream process unit sub-models were also represented by series of multiple yield structures. In order to distinguish one crude from another, hundreds of yield structures for the process unit sub-models were frequently generated to represent each possible crude side-cut at multiple operating conditions and multiple cutting schemes. In many cases, the process unit might be structured to produce between 4 and 20 versions of each product stream. The main characteristics of this type of model are the following:
• Generally, there were 3 to 10 times as many individual streams in the model as there were streams in the "real" refinery.
• Generally, there were 10 to 20 times as many final product blending
options as there were actual component blending options available in
the ”real” refinery.
Models having the above mentioned characteristics tended to over-optimize the processing and blending of individual crudes and their associated side-cuts throughout the refinery. Those models frequently overstated the value difference among individual crudes. Distributive recursion (DR) was found to handle such problems; it is based on the ideas of the mathematician Bill Hart¹. The distributive recursion technique provides the critical linkage between upstream changes in pool qualities and the quality changes in the downstream destinations. Early versions of the DR technique were slow and difficult to manage; today it has become faster, better and more reliable.
Currently, a great variety of modelling techniques is being used in different companies. We can categorize current modelling in the various areas of the LP model structure.
Distributive recursion techniques are frequently used to set the properties of the most important products. The base, the feed property drivers, and the operating variable shifts all contain coefficients that calculate the appropriate recursed product properties. In such a model the coefficients can be either fixed or determined by recursion. In general, if a product property remains relatively constant regardless of the feed or operating shifts, then the property may be fixed; otherwise it may be determined by recursion. The application of distributive recursion has eliminated the need for multiple versions of the same stream; the obtained streams match more closely the streams in the "real" refinery.
6.4 Problems about the model and the appropriate analysis
Most current oil industrial linear programming models have some degree of pooling and property recursion. The distributive recursion technique requires initial estimates for some data; distributive recursion is a method for nonlinear programming problems. As with many nonlinear programming methods, there is a risk of local optimal solutions: a solution may be optimal only in its own neighbourhood, not from the viewpoint of the whole problem. In such a case, the value and features of the solutions depend on the initial estimated values determined by the recursion, on the applied recursion technique, and even on the properties of the applied linear programming method.

¹ Bill Hart, a senior mathematician from Haverly Systems.
Example 38 (a nonlinear modelling technique) Suppose one constraint is

    (b1 x1 + b2 x2 + ... + bk xk) / (x1 + x2 + ... + xk) ≤ 0,   where x1 + x2 + ... + xk > 0.

It follows that b1 x1 + b2 x2 + ... + bk xk ≤ 0.
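The equivalence in Example 38 can be checked by sampling (the values of b below are illustrative): for every x with a positive sum, the ratio constraint and its linearized form accept exactly the same points, since multiplying by the positive denominator does not change the sign.

```python
import random

def ratio_ok(b, x):
    """Original nonlinear (fractional) constraint."""
    return sum(bi * xi for bi, xi in zip(b, x)) / sum(x) <= 0

def linear_ok(b, x):
    """Linearized constraint obtained by clearing the denominator."""
    return sum(bi * xi for bi, xi in zip(b, x)) <= 0

random.seed(0)
b = [1.5, -2.0, 0.5]
samples = [[random.uniform(0.01, 5.0) for _ in range(3)] for _ in range(1000)]
print(all(ratio_ok(b, x) == linear_ok(b, x) for x in samples))  # True
```

Constraints of this fractional form are the benign case; the pooling constraints handled by distributive recursion are nonconvex and do not admit such an exact linearization.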
It is important to emphasize that the local optimum phenomenon is not a property of linear programming problems: in every linear programming problem, all optimal solutions are at the same time globally optimal. The local optimum phenomenon is caused by the nonlinear technique underlying the distributive recursion, which may change the constraint matrix and the right-hand-side vector of the original linear problem.
The following picture shows an example of local optima, caused by nonlinear, nonconvex modelling techniques.
For a model containing a moderate degree of recursion, one of the following two situations may occur:

• The model users have experienced local optimum problems, and they know about the existence of local optima.

• The model users do not know about the existence of local optimum problems; the local optima exist, but they have not yet been discovered by accident or by testing.
According to our experience, almost every model that uses recursion has local optimum problems. This is a very serious problem, which can even lead to a lack of confidence in the results of the model. Although local optimum problems in linear programming models supplied with a recursion technique cannot be completely eliminated, they can be managed. The handling of this problem can be a combination of the following:
• A well-built model structure, which avoids unnecessary recursion iterations;

• Correctly set initial estimated values and recursion control parameters; the system default recursion control parameters are not always the best settings;

• Testing for, searching for and then analyzing the local optimal conditions: finding the conditions under which local optima occur and correcting these conditions.
There is a trend in applied oil industrial linear programming that more and more multi-period and multi-refinery models are used. However, we have to call attention to the fact that such recursed models tend to be unstable when attempting to use both multi-period and multi-refinery features in the same case. There are also limitations on the use of multi-refinery models: in general, no more than 3 refineries should be attempted with recursed models. A model using large-scale recursion can be unstable, as represented in the following picture.
At the present time, there are two main best-practice methods for monitoring LP performance and for maintaining and updating LP models.

The first is an overall refinery backcasting process, which may be used to identify overall model weaknesses, for example a refinery gain higher than actual, or missed profit opportunities. Backcasting compares the actual results of the previous month with the plan and with an updated LP case, where the LP attempts to match the actual results.
The second major activity is for process engineers to monitor their process units on a weekly or bi-weekly basis. Lab samples, along with plant historian data, are used to compare the actual process unit results against the LP predicted results for the same period. Process engineers and planners monitor and compare the actual results against the LP predictions and decide if the LP sub-model needs updating. In most cases, a process simulator model may be re-calibrated and used to generate the LP base and delta correctors on an as-needed basis. With this technique, process unit sub-models may be updated as frequently as several times a year using plant data and tuned simulator models.
One possible solution to these problems is to work out simulator interfaces, with which the LP system may be directed to an outside application during each recursion pass. This outside application may be as simple as a single non-linear equation or as complex as several large process simulation models.
The current concept is that model coefficients that were formerly static now become dynamic, in other words, they change after each recursion iteration. Without such an interface, the correctors are calculated externally from the model and built back into the model; these yield correctors are not updated during recursion. In this way, the LP model's predicted yields become less accurate if the LP-optimized feed deviates significantly from the base feed quality assumptions.
With the help of a simulator interface, the linear feed property correctors may now be calculated and updated during each recursion pass. This improves the accuracy, since the final converged solution of the LP more closely matches the results of the simulation prediction. However, the risk of local optimal solutions may increase: since more and more coefficients are determined by the recursion method, the initial assumptions for feed quality become increasingly important, as they may influence the final solution.
6.4.1 Local optimum and alternative optimal solutions
As we saw, one of the most important problems of the oil industrial product planning model is the existence of local optimal solutions. In principle, the local optimum problem belongs to nonlinear programming and never occurs in linear programming. In general, a big nonlinear programming problem is very difficult to solve, especially due to the local optima; it is hard to obtain the global optimal solution. If we are at an optimal solution, we are not able to decide whether we are at the global optimum or at a local one. In addition, we have no information about other local optima or about the global optimum at all.
In many commercial LP systems, such as Aspen PIMS, nonlinear distributive recursion techniques have been built into the linear programming models. This means exactly that the nonlinear programming system is transformed into a linear programming model with the help of the distributive recursion technique. The algorithm starts with initial estimated data. If the produced linear programming model is not good enough, then the data are modified until the model can be solved. Consequently, the solution of the produced linear programming model usually differs from the solution of the original nonlinear programming problem. If the model is made through many recursion iterations, it is possible that a completely different problem is solved than the original one. It is therefore essential to verify whether the obtained solution is really a solution of the original problem.
In the second section we discussed that a linear programming problem can have alternative optimal solutions when it is dual degenerate. In our MPS file it is notable that there are a lot of zero elements in both the objective function and the right-hand side of the LP model. That shows that this problem is both primal and dual degenerate. If such a linear problem has an optimal solution, then due to the dual degeneracy it also has alternate optimal solutions, so the model is difficult to handle numerically.
Suppose a linear programming problem is given; the question is how we can obtain its alternative optimal solutions. The answers are the following:

• It is known from the literature of linear programming that if the columns of the input matrix or tableau are permuted and the same algorithm is applied, and the problem is dual degenerate, so alternative optimal solutions exist, then different optimal solutions might be computed depending on the permutation.
• As we have mentioned in the second section, if there exist alternate optimal solutions, then the convex combinations of them are also optimal
solutions.
• If we turn the objective function into an equality constraint, add a new objective function to the problem and solve it again, then the objective value for the original problem is preserved. Because of the change in structure, we can find alternate optimal solutions.
• If we solve the problem with a pivot algorithm and have established that the problem has alternate optima, then we can obtain alternate optimal solutions through further pivot iterations.
• If we solve the problem with different algorithms.
In practice, it is very useful to find different alternate optima. Because of the different structures of the solutions, comparing and analyzing them can help us make more appropriate decisions.
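The objective-fixing idea above can be illustrated on an invented two-variable LP (using brute-force vertex enumeration in place of a pivot algorithm). The problem min −x1 − x2 subject to x1 + x2 ≤ 4, 0 ≤ x1, x2 ≤ 4 is dual degenerate, so the whole edge between (4, 0) and (0, 4) is optimal, and a secondary objective selects among the alternate optima:

```python
from itertools import combinations

A = [(1, 1), (1, 0), (0, 1), (-1, 0), (0, -1)]   # rows of a.x <= beta
beta = [4, 4, 4, 0, 0]
c = (-1, -1)                                      # primary objective (minimize)

def vertices(A, beta):
    """Enumerate feasible vertices as intersections of constraint pairs (2D only)."""
    vs = set()
    for (a1, b1), (a2, b2) in combinations(zip(A, beta), 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < 1e-9:
            continue                               # parallel constraints
        x = ((b1 * a2[1] - b2 * a1[1]) / det,      # Cramer's rule
             (a1[0] * b2 - a2[0] * b1) / det)
        if all(a[0] * x[0] + a[1] * x[1] <= b + 1e-9 for a, b in zip(A, beta)):
            vs.add((round(x[0], 9), round(x[1], 9)))
    return vs

vs = vertices(A, beta)
z_star = min(c[0] * x + c[1] * y for x, y in vs)
optimal = [v for v in vs if abs(c[0] * v[0] + c[1] * v[1] - z_star) < 1e-9]
# several optimal vertices signal alternate optima; a secondary objective
# (here: minimize x1) picks one of them while keeping the value z_star
best = min(optimal, key=lambda v: v[0])
print(z_star, sorted(optimal), best)
```

Both (4, 0) and (0, 4) attain z* = −4; fixing the objective and minimizing x1 selects (0, 4). In the large model, the same selection would be done by a solver run with the original objective turned into an equality constraint.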
We know that the objective values of the different alternate optimal solutions of a linear programming problem are the same. However, if we change the structure of the original oil industrial product planning nonlinear programming model, e.g. by permuting the columns, then we can also obtain different optimal solutions from the recursed linear programming problems. It is remarkable, though, that the objective values of those solutions are generally not equal. The explanation is the following: from the same nonlinear programming problem with different structure, different linear programming problems are obtained by the recursion, and the optimal solutions of such linear programming problems are generally only local optimal solutions of the initial nonlinear problem.
As we have mentioned, the local optimum is a difficult problem to manage in nonlinear programming. At the same time, the local optimal solutions contain much useful information, with which we can make more accurate decisions. Unfortunately, on the current market there is no program for big nonlinear programming problems as reliable as the CPLEX² software is for linear programming systems. At first we did not have any tools to obtain and analyze the local optima; even CPLEX cannot show us the alternative optimal solutions automatically. Therefore, a serviceable tool that can give us the alternative optimal solutions is very much in demand.
6.4.2 Infeasibility and unboundedness
We know that oil industrial product planning problems generally lead to nonlinear programming with given objective functions. The nonlinear constraints are usually replaced with linear constraints with guessed data, so we get a linear programming problem. Such problems are solved by commercial LP systems like Aspen PIMS, in which the distributive recursion technique is applied. Some data of the initial linear programming model, which represent the nonlinear constraint functions, are estimated and not exact. If necessary, the distributive recursion changes the guessed or estimated data of the problem until the model is solvable; sometimes many recursion iterations are needed.

² www.ilog.com; CPLEX solves linear programming problems even of great size with the primal simplex, dual simplex and Newton barrier methods.
Unfortunately, this process carries the risk that a slightly different problem is solved than the original one. It can even happen that PIMS finds the modified, recursed problem infeasible or unbounded, so that it cannot solve the problem at all. To avoid these dangers, as we mentioned above, we should pay more attention to setting up a more accurate initial model and avoid unnecessary recursions.
Besides these efforts, it is essential to take the necessary steps to analyze the problem and its solutions. For example, it is important to answer questions like: are these really solutions of the original problem, how many conditions are violated, and how are the solutions situated relative to the feasible solution set?

Therefore, a post-analysis and checking system that provides the functions mentioned above is unavoidably required.
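Such a check, whether a given solution satisfies the constraints and how far it lies from the feasible set, can be sketched as follows (a minimal illustration with NumPy; the matrix, bounds and solution below are made-up toy data, not the MOL model):

```python
import numpy as np

def check_solution(A, b_lo, b_up, x, tol=1e-8):
    """Check whether x satisfies b_lo <= A x <= b_up, and report the
    violated rows together with the total violation, a simple distance
    measure from the feasible set along the constraint values."""
    Ax = A @ x
    lower_viol = np.maximum(b_lo - Ax, 0.0)   # shortfall below the lower bound
    upper_viol = np.maximum(Ax - b_up, 0.0)   # excess above the upper bound
    viol = lower_viol + upper_viol
    bad_rows = np.where(viol > tol)[0]
    return bad_rows, viol.sum()

# Made-up 2x2 example: row 0 is satisfied, row 1 is violated.
A = np.array([[1.0, 1.0],
              [2.0, 1.0]])
b_lo = np.array([0.0, 5.0])
b_up = np.array([4.0, 6.0])
x = np.array([1.0, 1.0])          # A x = [2, 3]

rows, total = check_solution(A, b_lo, b_up, x)
print(rows, total)                # row 1 violates its lower bound by 2.0
```

This measures violation in the constraint values; a distance in the space of the variables x would need an extra projection step.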
6.5 Model collecting
Above we have discussed the program system PIMS, its technique, and its advantages and disadvantages. We have also discussed the problem in detail. Now it is time to summarize our analysis and the steps of collecting the model.

The following flow chart shows the process, completed with the necessary complementary steps.
Completed Flow Chart (PIMS):

1. Data collection and processing
2. Report, classifying changes
3. Choice of algorithm, parameter settings
4. LP generation
5. Distributive Recursion
6. Solving LP by XPressMP
7. Solution report
8. Solution analysis, parameter resetting
9. Look-back analysis
It should be noted that model analysis, performed before all the algorithmic steps, is an essential step for obtaining a correct initial model. Because our model is of huge size and must be updated regularly, automation is necessary. The following viewpoints are important for a correct model.
• Zero constraints
• Repeated constraints
• Always-true constraints
• Possible typing mistakes
• Badly scaled rows
• Free variables occurring only once
• Neutral rows
• Fixed variables
• Tight cone-like constraints
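A few of these checks, e.g. zero rows, repeated rows and badly scaled rows, can be automated along the following lines (a rough sketch on a dense toy matrix; the thresholds are arbitrary choices, and an MPS-sized model would need sparse data structures):

```python
import numpy as np

def screen_model(A, tol=1e-10):
    """Heuristic checks on a dense constraint matrix: zero rows,
    duplicate rows, and badly scaled rows (large spread between the
    smallest and largest nonzero magnitude in a row)."""
    issues = {"zero_rows": [], "duplicate_rows": [], "badly_scaled": []}
    seen = {}
    for i, row in enumerate(A):
        nz = np.abs(row[np.abs(row) > tol])
        if nz.size == 0:
            issues["zero_rows"].append(i)
            continue
        key = tuple(np.round(row, 12))        # crude fingerprint of the row
        if key in seen:
            issues["duplicate_rows"].append((seen[key], i))
        else:
            seen[key] = i
        if nz.max() / nz.min() > 1e6:         # scaling threshold, arbitrary
            issues["badly_scaled"].append(i)
    return issues

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 0.0, 0.0],      # zero row
              [1.0, 2.0, 0.0],      # duplicate of row 0
              [1e-8, 3.0, 1.0]])    # badly scaled row
print(screen_model(A))
```

Checks for always-true, neutral or cone-like constraints need the bounds as well, not only the matrix, so they are omitted from this sketch.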
In the step Solution analysis, parameter resetting, we test the model. Suppose our LP solver is fixed and the implementation of DR is analyzed. As discussed above, the initial nonlinear programming model is solved as a sequence of LPs, with the linearization carried out by DR. Using many local optima and deterministic algorithms, we analyze different guessed values, different parameters and different LP methods, so that we can obtain more accurate optimal solutions.
Sensitivity analysis also belongs to this analysis step. As we have seen in the last section, sensitivity analysis and parametric programming are post-optimal techniques; they make the modification of the final linear programming model easier.
For the time being, the distributive recursion seems like a black box to us. However, by checking and analyzing we can become more and more familiar with it. We know that if PIMS finds the model unsolvable, it enters the DR, which changes some data values; we want to know how this works. So we can build the model with different guessed initial values, or change the right-hand-side vector after the algorithm according to the bounds, and then solve the new model with PIMS again. Of course, such checks and reports after optimization should be carried out by an automated program.
Finally, the look-back analysis should not be ignored. The model changed by DR may not be identical to the original problem, so the look-back analysis should be made on the modified model. We want to know whether the realized production is feasible, what the corresponding objective value would be, and what the correct guessed values for the realized production are.
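The RHS regeneration used in the look-back analysis can be illustrated as follows (toy data; for "less than or equal" constraints, b = Ax is the tightest right-hand side that keeps the realized plan x feasible):

```python
import numpy as np

def regenerate_rhs(A, x_realized):
    """RHS vector implied by a realized production plan: b = A x."""
    return A @ x_realized

# Made-up data: two constraints, two activities.
A = np.array([[1.0, 0.5],
              [0.0, 2.0]])
x_realized = np.array([4.0, 2.0])
b_orig = np.array([4.5, 5.0])             # original <= right-hand sides

b_new = regenerate_rhs(A, x_realized)     # implied RHS: [5.0, 4.0]
relax = np.maximum(b_new - b_orig, 0.0)   # how much each bound must grow
print(b_new, relax)                       # row 0 must be relaxed by 0.5
```

Comparing b_new with the original bounds shows directly which constraints the realized production violates and by how much.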
6.6 User documentation about the analyzing program
In the last part of this thesis, let us have a look at the analyzing program, which was developed as a tool for our analysis and for studying the quality of the solutions; it also makes it possible to obtain further local optima and alternative optimal solutions of the models.

First we discuss the problem statement and the goal of the system. Afterwards, the system requirements and a general user's guide of the program are presented.
Problem statement and the goal of the system
The tasks of the system, i.e. the problems to solve, are the following: handling MPS file data directly.

Principal tasks:
• Read MPS format files and load them.
• Modify the structure of the linear programming model; e.g. the order of the variables can be changed, or the objective function can be transformed into a constraint.
• Read a solution of the model.
• Check whether a given solution satisfies the constraints determined by the model.
• Compute the right-hand-side vector corresponding to a given solution.
• Compute, if necessary, how far the solution lies from the feasible solution set of the model.
• Change the right-hand side if the solution is outside the feasible solution set.
• Modify the model and the MPS file.
• Save the computed right-hand-side vectors.
• Save the modified model.
• Compare MPS models.
• Regenerate RHS values from solutions, since the distributive recursion has changed the constraint matrix.
• Permute data to find alternate optimal solutions.
• Compare the structure and values of the obtained alternate local optimal solutions.
• Fix the objective value and define a new objective function: a way to find alternate optimal solutions of an LP problem.
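The last item, fixing the objective value and optimizing a new objective over the optimal face, can be sketched with SciPy's `linprog` (a made-up toy LP, assuming `scipy` is available; this is one standard way to expose alternate optima, not the program's actual implementation):

```python
import numpy as np
from scipy.optimize import linprog

# Toy LP with alternate optima: min x1 + x2  s.t.  x1 + x2 >= 2,  0 <= x <= 2.
c = np.array([1.0, 1.0])
A_ub = np.array([[-1.0, -1.0]])    # -(x1 + x2) <= -2  encodes  x1 + x2 >= 2
b_ub = np.array([-2.0])
bounds = [(0, 2), (0, 2)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
z_star = res.fun                   # optimal value z* = 2; every point of the
                                   # segment x1 + x2 = 2 is optimal

# Fix the objective value as an equality constraint and optimize a
# different objective (here: maximize x1) over the optimal face.
res2 = linprog(np.array([-1.0, 0.0]), A_ub=A_ub, b_ub=b_ub,
               A_eq=c.reshape(1, -1), b_eq=np.array([z_star]),
               bounds=bounds)
print(res2.x)                      # an alternate optimum, here (2, 0)
```

Repeating the second solve with different secondary objectives traces out different vertices of the optimal face.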
System requirements

Hardware requirements:

• Pentium II or Celeron processor
• 5 MB free space on the hard disk
• 64 MB RAM
• Mouse

Software requirements:

• Windows 98 or a later version
General user's guide
Starting the program:

Start the program by double-clicking the mpseditor.exe file, or by running mpseditor.exe.

The functions:

From the menu File / Open we can select the input MPS file. In order to check whether a solution is feasible, the program can read the solution through the button "VectorInput". The button on the right side, "Ax=b", gives us the corresponding "b" vector. Using the scroll bars we can glance through the rows or columns, together with the objective function, the bounds for the variables and the bounds for the constraints. Because of the large size, it is more efficient to use the search button when we want to check one particular row or column. On the vectors "x" and "b" there are two pop-up menus, from which we can save the violated bounds into a txt file, or change the bounds according to the vector "x" or "b"; the modified model can then be saved in MPS format. If there are several objective functions, we can choose which one appears on the screen.
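Reading MPS data, as the file-loading function does, can be sketched with a very small free-format parser (a toy sketch covering only the ROWS and RHS sections; real MPS files also carry COLUMNS, RANGES and BOUNDS sections and fixed column positions, which a full parser must handle):

```python
def parse_mps_rows(text):
    """Very small free-format MPS reader: collect row types from the
    ROWS section and RHS entries from the RHS section."""
    rows, rhs = {}, {}
    section = None
    for line in text.splitlines():
        if not line.strip() or line.startswith("*"):
            continue                      # skip blank lines and comments
        if not line[0].isspace():         # section headers start in column 1
            section = line.split()[0]
            continue
        f = line.split()
        if section == "ROWS":
            rows[f[1]] = f[0]             # row name -> type (N/L/G/E)
        elif section == "RHS":
            for name, val in zip(f[1::2], f[2::2]):
                rhs[name] = float(val)    # (row name, value) pairs
    return rows, rhs

sample = """NAME          TOY
ROWS
 N  COST
 L  CAP1
RHS
    RHS1      CAP1      10.0
ENDATA
"""
rows, rhs = parse_mps_rows(sample)
print(rows, rhs)   # {'COST': 'N', 'CAP1': 'L'} {'CAP1': 10.0}
```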
Below, a screen shot of the program can be seen.
References

[1] Katta G. Murty, Linear Programming, John Wiley & Sons, New York, (1983), ISBN 0-471-09725-X.

[2] Illés Tibor, Lineáris Programozás, kézirat (manuscript), (2002).

[3] Vašek Chvátal, Linear Programming, W. H. Freeman, New York, (1980), ISBN 0-7167-1195-8.

[4] Fried Ervin, Algebra I. (Elemi és lineáris algebra), National Textbook Publisher, Budapest, (2000), ISBN 9-6319-1176-4.

[5] Terlaky Tamás, Egy véges criss-cross módszer és alkalmazásai, MTA SZTAKI Tanulmányok 179/1986, Budapest, (1986).

[6] Fukuda Komei and Terlaky Tamás, Criss-cross methods: A fresh view on pivot algorithms, Mathematical Programming 79, 369-395, (1997).

[7] Kurt M. Anstreicher and Terlaky Tamás, A Monotonic Build-Up Simplex Algorithm for Linear Programming, Operations Research 42, 556-561, (1991).

[8] Illés Tibor and Mészáros Katalin, A New and Constructive Proof of Two Basic Results of Linear Programming, Yugoslav Journal of Operations Research 11, 15-30, (2001).

[9] R. G. Bland, New Finite Pivoting Rules for the Simplex Method, Mathematics of Operations Research 2, 103-107, (1977).

[10] Komei Fukuda and Makoto Namiki, On extremal behaviors of Murty's least index method, Mathematical Programming, (1994).

[11] Deng Chengliang, The theory and method of operational research, Hua Zhong Technical University Publisher, (2001), ISBN 7-5609-1279-6.

[12] Bruce A. Murtagh, Advanced Linear Programming: Computation and Practice, McGraw-Hill International Book, New York and London, (1981).

[13] Michael A. Tucker, LP modelling - Past, Present and Future, www.kbcat.com/pdfs/tech, (Dec. 2001).