NATCOR Convex Optimization: Linear Programming 1
Julian Hall
School of Mathematics, University of Edinburgh
[email protected]
Wednesday 13:30–15:30, 24th June 2014

Aim

What should a 1-hour PhD lecture on LP achieve? Audience members have
- different backgrounds in LP
- different reasons to know about LP

Answer
- A "mature" review of LP theory (don't worry if you don't follow!)
- Identify an alternative to the tableau simplex method
- Establish a result which
  - is nice in itself
  - leads into "Structure and matrix sparsity": Wednesday 13:30–15:30

Overview

- What is LP?
- General LP problems
  - The feasible region
  - Basic solutions
  - Fundamental results
- The simplex method
  - Algorithm
  - Implementation
  - Linear algebra

What is linear programming (LP)?

- The most important model used in optimal decision-making
- Grew out of US Army Air Force logistics problems in WW2
- The first practical LP problem was formulated by George Dantzig in 1946
- Dantzig invented the principal solution technique, the simplex algorithm, in 1947
- In the "Top 10 algorithms of the 20th century"
- "The algorithm that runs the world", New Scientist (2012)
- G. B. Dantzig (1914-2005)
- LP and the simplex method have revolutionised organised human decision-making

General LP problems: Introduction

The general LP problem is

    maximize f = c^T x subject to Ax ≤ b, x ≥ 0

- The problem has n variables and m constraints
- The feasible region is a convex polyhedron in R^n
- Vertices are at the intersection of n constraint/bound planes

Aid vertex characterization and algebraic analysis by introducing slack variables
- For each inequality introduce a slack variable x_{n+i}
- This transforms the inequality into an equation and a bound on x_{n+i}
- The inherent LP problem is unchanged, but is now in standard form

    maximize f = c^T x subject to Ax = b, x ≥ 0

  where c = [c; 0] and A = [A  I] has rank m

General LP problems: The feasible region, vertices and solutions

Characterization of the feasible region
- The feasible region K is the intersection of
  - the orthant x ≥ 0
  - the hyperplane of solutions of Ax = b
- K is a convex set

[Figure: the feasible region K in R^3, axes x_1, x_2, x_3]

Definition of a feasible vertex
- A feasible vertex is a point x ∈ K which does not lie strictly within any line segment joining two points in K

Result: If an LP has an optimal solution then there is an optimal solution at a vertex

General LP problems: Basic solutions

For an LP in standard form, Ax = b where A = [A  I] ∈ R^{m×(n+m)}

The point x ∈ R^{n+m} is a basic solution of an LP problem in standard form if there is a partition of {1, 2, ..., n+m} into
- a set N of n indices of nonbasic variables with value zero at x
- a set B of m indices of basic variables whose values are then uniquely defined by the m equations

Corresponding to N and B the following are defined

Corresponding to the n indices in N
- The matrix N ∈ R^{m×n} is the n columns of A
- The vector x_N of nonbasic variables is the n components of x
- The vector c_N of nonbasic costs is the n components of c

Corresponding to the m indices in B
- The basis matrix B ∈ R^{m×m} is the m columns of A and is nonsingular
- The vector x_B of basic variables is the m components of x
- The vector c_B of basic costs is the m components of c
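As a small illustration of the standard-form construction and of a basic solution, here is a minimal numpy sketch. The LP data, the variable names (A_bar, c_bar, basic, nonbasic) and the choice of basic/nonbasic index sets are made up for illustration and are not part of the slides.

```python
import numpy as np

# Made-up illustrative data: maximize c^T x subject to A x <= b, x >= 0
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([4.0, 5.0])
c = np.array([2.0, 1.0])
m, n = A.shape

# Standard form: append slack variables, A_bar = [A I], c_bar = [c; 0]
A_bar = np.hstack([A, np.eye(m)])
c_bar = np.concatenate([c, np.zeros(m)])

# A partition of the indices {0, ..., n+m-1}: here the slacks are basic
basic = [2, 3]          # indices in B
nonbasic = [0, 1]       # indices in N

B = A_bar[:, basic]     # basis matrix, m x m, must be nonsingular
N = A_bar[:, nonbasic]  # nonbasic columns, m x n

# Basic solution: x_N = 0, x_B = B^{-1} b
x_B = np.linalg.solve(B, b)
x = np.zeros(n + m)
x[basic] = x_B
print("basic solution x =", x)   # feasible here, since x_B >= 0
```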
General LP problems: The partitioned LP

The partitioned LP in standard form is

    maximize f = c_B^T x_B + c_N^T x_N
    subject to B x_B + N x_N = b
               x_B ≥ 0, x_N ≥ 0

- This is the same LP as in standard form
- It corresponds to reordering the components of x according to the sets B and N
- The equations are B x_B + N x_N = b, so x_B = B^{-1} b − B^{-1} N x_N
- x_N = 0 at x, so the values of the basic variables are x_B = \hat{b}, where \hat{b} = B^{-1} b
- Substituting for x_B in the objective function gives f = \hat{f} + \hat{c}_N^T x_N, where
  - \hat{f} = c_B^T \hat{b} is the objective value when x_N = 0
  - \hat{c}_N = c_N − N^T B^{-T} c_B is the vector of reduced costs
- If \hat{b} ≥ 0 then x ≥ 0 is referred to as a basic feasible solution

General LP problems: Results

Result: x is a vertex of K iff x is a basic feasible solution
- Consequence: Solution methods need only consider basic feasible solutions

Result: A point x ∈ K is an optimal solution of an LP problem if it is a basic feasible solution with non-positive reduced costs \hat{c}_N = c_N − N^T B^{-T} c_B
- Consequence: The condition \hat{c}_N ≤ 0 allows the optimality of a basic feasible solution to be checked
- If \hat{c}_N ≰ 0 the simplex algorithm identifies a basic feasible solution with better objective value

This is the key to solving LP problems

The simplex method

Simplex method: Choosing an improving direction

- Observe: if \hat{c}_N ≰ 0 there exists q such that \hat{c}_q > 0
- Let x_{q'} be the qth nonbasic variable
- If x is partitioned as [x_B; x_N] then consider x + αd for d partitioned as [d_B; d_N]
- Only the nonbasic variable x_{q'} is increased from zero if d_N = e_q [e_q is column q of I]
- For feasibility, Ad = 0 iff B d_B + N e_q = 0, i.e. d_B = −B^{-1} N e_q = −\hat{a}_q, where B \hat{a}_q = a_q and a_q is column q of N
- If x + αd is feasible for α > 0 then the objective increases strictly by α \hat{c}_q

Simplex method: Identifying the step length in the improving direction

- On x + αd for α ≥ 0, the components of x_N remain feasible since d_N = e_q
- Any limit on the feasibility of x + αd is given by the values of the basic variables x_B = \hat{b} − α \hat{a}_q
- If \hat{a}_q has positive components
  - for α sufficiently large, at least one component of x_B will be zeroed
  - the smallest of these values of α is the greatest step α which can be made in the direction d whilst maintaining feasibility
- If \hat{a}_q ≤ 0 then no basic variable is zeroed, so the LP is unbounded

Simplex method: Identifying the new basic feasible solution

- At x + αd let x_{p'} be the zeroed basic variable
- Interchanging p' and q' between B and N yields a partition at x + αd with x_B ≥ 0, x_N = 0
- x + αd is a new basic feasible solution since its basis matrix is nonsingular
- If α > 0 then
  - x + αd is a distinct, "new" vertex
  - the objective at x + αd is strictly greater (by α \hat{c}_q) than the objective at x
  - if α > 0 always holds then termination is provable
- If α = 0 then x is degenerate and the simplex algorithm can cycle/stall
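The pricing step, improving direction and ratio test just described can be sketched in a few lines of numpy. This continues the illustrative example above (A_bar, c_bar, b, basic, nonbasic, B, N and x_B come from that sketch, not from the slides).

```python
import numpy as np

# Continues the illustrative sketch above: A_bar, c_bar, b, basic, nonbasic, B, N, x_B

# Reduced costs: c_hat_N = c_N - N^T B^{-T} c_B, via B^T pi = c_B
pi = np.linalg.solve(B.T, c_bar[basic])
c_hat_N = c_bar[nonbasic] - N.T @ pi

if np.all(c_hat_N <= 0):
    print("optimal")                          # non-positive reduced costs
else:
    q = int(np.argmax(c_hat_N))               # entering index: most positive reduced cost
    a_q = A_bar[:, nonbasic[q]]
    a_hat_q = np.linalg.solve(B, a_q)         # pivotal column: B a_hat_q = a_q

    if np.all(a_hat_q <= 0):
        print("unbounded")                    # no basic variable is ever zeroed
    else:
        # Ratio test: largest alpha keeping x_B = b_hat - alpha * a_hat_q >= 0
        ratios = np.full(len(x_B), np.inf)
        pos = a_hat_q > 0
        ratios[pos] = x_B[pos] / a_hat_q[pos]
        p = int(np.argmin(ratios))            # leaving basic variable
        alpha = ratios[p]                     # alpha == 0 would indicate degeneracy
        basic[p], nonbasic[q] = nonbasic[q], basic[p]
        x_B = np.linalg.solve(A_bar[:, basic], b)   # values at the new vertex
        print("step length", alpha, "new basic set", basic)
```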
Simplex method: Algorithm description

Description of the simplex algorithm. Given a basic feasible solution x:
1. If the reduced costs are non-positive then stop: the solution is optimal
2. Determine the nonbasic variable x_{q'} with most positive reduced cost
3. Determine the feasible direction d when x_{q'} is increased from zero
4. If no basic variable is zeroed on x + αd then stop: the LP is unbounded
5. Determine the first basic variable x_{p'} to be zeroed on x + αd
6. Make x_{p'} nonbasic and x_{q'} basic
7. Go to 1

[Figure: unbounded step from x in the feasible region K if q' = 3]
[Figure: bounded step from x to the new vertex x + αd if q' = 1]

Simplex method: Algorithm definition

Definition of the simplex algorithm. Given a basic feasible solution x with B and N:
1. If \hat{c}_N ≤ 0 then stop: the solution is optimal
2. Determine the index q' ∈ N of the variable x_{q'} with most positive reduced cost \hat{c}_q
3. Let \hat{a}_q = B^{-1} a_q, where a_q is column q of N
4. If \hat{a}_q ≤ 0 then stop: the LP is unbounded
5. Determine the index p' ∈ B of the variable x_{p'} corresponding to

       p = argmin_{i = 1, ..., m : \hat{a}_{iq} > 0} \hat{b}_i / \hat{a}_{iq}

6. Exchange the indices p' and q' between B and N to yield a new basic feasible solution
7. Go to 1

Simplex method: Basis matrix update

Updating B
- Each simplex iteration exchanges the pth entry of B with the qth entry of N
- Column p of B is replaced by the vector a_q [column q of N]
- The updated basis matrix B' is given by

      B' = B + (a_q − a_p) e_p^T = B (I + (\hat{a}_q − e_p) e_p^T) = BE

  where E = I + (\hat{a}_q − e_p) e_p^T
- This result has fundamental theoretical and practical consequences

Simplex method: B' is nonsingular

- B' is nonsingular iff E is nonsingular
  [since B' = BE, where E = I + (\hat{a}_q − e_p) e_p^T, and B is nonsingular]
- The nonsingularity of E may be established as follows
- For a matrix A = I + u v^T the (Sherman-Morrison) formula for A^{-1} is

      A^{-1} = I − (1 / (1 + v^T u)) u v^T   when v^T u ≠ −1

- Hence, using u = \hat{a}_q − e_p and v = e_p,

      E^{-1} = I − (1 / \hat{a}_{pq}) (\hat{a}_q − e_p) e_p^T

- Since the pivot value in the simplex iteration is \hat{a}_{pq} ≠ 0, it follows that E is nonsingular
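This rank-one structure is easy to check numerically. Below is a small sketch with made-up B, a_q and pivot position p (not data from the slides) that builds E and its Sherman-Morrison inverse, and verifies B' = BE and E E^{-1} = I.

```python
import numpy as np

def eta_matrices(a_hat_q, p):
    """E = I + (a_hat_q - e_p) e_p^T and its Sherman-Morrison inverse."""
    m = a_hat_q.shape[0]
    e_p = np.zeros(m); e_p[p] = 1.0
    E = np.eye(m) + np.outer(a_hat_q - e_p, e_p)
    # E^{-1} = I - (1 / a_hat_pq) (a_hat_q - e_p) e_p^T, valid since the pivot is nonzero
    E_inv = np.eye(m) - np.outer(a_hat_q - e_p, e_p) / a_hat_q[p]
    return E, E_inv

# Made-up example data
B = np.array([[1.0, 0.0],
              [0.0, 3.0]])
a_q = np.array([2.0, 1.0])             # column entering the basis
p = 1                                  # position of the leaving column
a_hat_q = np.linalg.solve(B, a_q)

E, E_inv = eta_matrices(a_hat_q, p)
B_new = B.copy(); B_new[:, p] = a_q    # column p of B replaced by a_q

print(np.allclose(B_new, B @ E))          # B' = B E
print(np.allclose(E @ E_inv, np.eye(2)))  # E is nonsingular, with the inverse as claimed
```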
The simplex method: Implementation

Simplex method: Implementation

The data for a simplex iteration are
- the reduced costs \hat{c}_N = c_N − N^T B^{-T} c_B
- the pivotal column \hat{a}_q = B^{-1} a_q
- the reduced RHS \hat{b} = B^{-1} b

How to obtain these vectors determines the efficiency of the simplex implementation

Standard simplex method (SSM): maintains B^{-1} N, \hat{b} and \hat{c}_N in a rectangular tableau
- Requires O(mn) storage and O(mn) computation per iteration
- Inefficient and prohibitively expensive for large problems

Revised simplex method (RSM): computes \hat{a}_q = B^{-1} a_q and \hat{b} = B^{-1} b as required, and forms \hat{c}_N
- Requires up to O(m^2) storage and O(m^2) computation per iteration, but vastly less for sparse LP problems
- Efficient for large (sparse) problems

Simplex method: Implementation (RSM)

- For the reduced costs \hat{c}_N = c_N − N^T B^{-T} c_B, observe that π = B^{-T} c_B, so \hat{c}_N may be formed as
  - solve B^T π = c_B
  - form z = N^T π
  - then \hat{c}_N = c_N − z
- For the pivotal column \hat{a}_q = B^{-1} a_q
  - solve B \hat{a}_q = a_q
- For the reduced RHS \hat{b} = B^{-1} b
  - exploit x_B = \hat{b} − α \hat{a}_q to update \hat{b}

Simplex method: Implementation (RSM)

Each iteration of the revised simplex method requires
- solution of B \hat{a}_q = a_q
- solution of B^T π = c_B

Gaussian elimination (GE) can be used to solve individual systems at cost O(m^3)

The nature of the systems in the revised simplex method allows solution at cost O(m^2)
- Obtain an invertible representation of B via LU decomposition
- Exploit B' = B (I + (\hat{a}_q − e_p) e_p^T) to update the invertible representation of B

Simplex method: Implementation (RSM)

LU decomposition
- GE applied to B yields the decomposition B = LU at cost O(m^3)
  - L is the lower triangular matrix of elimination multipliers; its diagonal entries are all one
  - U is the upper triangular matrix after GE
- Linear systems with a lower (upper) triangular coefficient matrix can be solved by forward (backward) substitution
- Given B = LU, solve B \hat{a}_q = a_q as

      L y = a_q   then   U \hat{a}_q = y

- Since B^T = U^T L^T, solve B^T π = c_B as

      U^T y = c_B   then   L^T π = y

- The cost of each forward and backward substitution is O(m^2)
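A hedged sketch of these two solves, using scipy's LU routines (scipy.linalg.lu_factor and lu_solve) rather than hand-written forward and backward substitution; the basis matrix and vectors are made up for illustration.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Made-up basis matrix and right-hand sides
B = np.array([[4.0, 1.0, 0.0],
              [2.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
a_q = np.array([1.0, 0.0, 2.0])   # column of N entering the basis
c_B = np.array([3.0, 1.0, 2.0])   # basic costs

# One O(m^3) factorisation B = LU (with partial pivoting in practice)
lu, piv = lu_factor(B)

# Each subsequent O(m^2) solve reuses the factors
a_hat_q = lu_solve((lu, piv), a_q)           # B a_hat_q = a_q  (forward then backward substitution)
pi      = lu_solve((lu, piv), c_B, trans=1)  # B^T pi = c_B     (solve with the transposed factors)

print(np.allclose(B @ a_hat_q, a_q), np.allclose(B.T @ pi, c_B))
```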
Simplex method: Implementation (RSM)

Updating the invertible representation of B. Recall

    B' = BE   where   E = I + (\hat{a}_q − e_p) e_p^T   and   E^{-1} = I − (1 / \hat{a}_{pq}) (\hat{a}_q − e_p) e_p^T

- B' x = b may be solved as BE x = b, i.e. B y = b; x = E^{-1} y
- Similarly B'^T x = b may be solved as E^T B^T x = b, i.e. y = E^{-T} b; B^T x = y
- Since E has only one non-trivial column, it may be stored using a single vector and the index p, and operations with E^{-1} cost O(m)
- This product form (PF) update technique can be extended to perform multiple updates
- Eventually the cost of working with all the matrices of the form E will dominate
- It is then preferable to recompute the LU decomposition at cost O(m^3)

Simplex method: Implementation (RSM)

Solving all systems at cost O(m^2) each
- Start with B = {n+1, n+2, ..., n+m}, so B = I has the trivial LU decomposition L = U = I
- Use the Fletcher-Matthews technique to update the LU decomposition itself
  - Allows the LU decomposition of B' to be obtained by updating the LU decomposition of B at a cost of O(m^2)
  - Numerically stable (unlike the PF update)
  - No recalculation of the LU decomposition of B is required
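The product-form update described above can also be sketched numerically. In this made-up example (names and data are mine, not from the slides), E is stored only through the pivotal column and the index p, and B' x = b is solved by one solve with B followed by an O(m) operation with E^{-1}.

```python
import numpy as np

def apply_E_inv(y, a_hat_q, p):
    """Return E^{-1} y, where E = I + (a_hat_q - e_p) e_p^T; an O(m) operation."""
    e_p = np.zeros_like(y); e_p[p] = 1.0
    return y - (y[p] / a_hat_q[p]) * (a_hat_q - e_p)

# Made-up data: old basis B, entering column a_q, leaving position p, right-hand side b
B = np.array([[2.0, 0.0, 1.0],
              [1.0, 3.0, 0.0],
              [0.0, 1.0, 2.0]])
a_q = np.array([3.0, 1.0, 4.0])
p = 0
b = np.array([3.0, 2.0, 5.0])

a_hat_q = np.linalg.solve(B, a_q)          # pivotal column; pivot a_hat_q[p] is nonzero

# After the basis change, B' has column p replaced by a_q
B_new = B.copy(); B_new[:, p] = a_q

# Solve B' x = b without refactorising: B y = b, then x = E^{-1} y
y = np.linalg.solve(B, b)
x = apply_E_inv(y, a_hat_q, p)

print(np.allclose(B_new @ x, b))           # True: same solution as solving with B' directly
```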
The simplex method: Why use it?

- Interior point methods (IPM) are the "modern" alternative to the simplex method
- For single LP problems IPM are frequently faster
- For some classes of single LP problems the simplex method is faster
- When solving sequences of related LP problems the simplex method is preferable
  - Branch-and-bound for discrete optimization
  - Sequential linear programming for nonlinear optimization
- Why?
  - The simplex method yields a basic feasible solution
  - The simplex method can be re-started easily from an optimal solution of one LP to solve a related LP quickly
- 67 years old and going strong!

Linear Programming 1: Summary

- Identified that solution methods need only consider basic feasible solutions of LPs
- Described the simplex algorithm
- Identified that efficient implementation depends on techniques for solving related systems of equations
- It remains to consider how to exploit LP problem structure and matrix sparsity: Wednesday 13:30–15:30