Applied Optimization
VOLUME 97
Series Editors:
Panos M. Pardalos
University of Florida, U.S.A.
Donald W. Hearn
University of Florida, U.S.A.
Contents
PREFACE xv
TABLE OF NOTATION xix
1 INTRODUCTION 1
1.1 What is mathematical optimization? 1
1.2 Objective and constraint functions 4
1.3 Basic optimization concepts 6
1.3.1 Simplest class of problems: Unconstrained one-dimensional minimization 6
1.3.2 Contour representation of a function of two variables (n = 2) 7
1.3.3 Contour representation of constraint functions 10
1.3.4 Contour representations of constrained optimization problems 11
1.3.5 Simple example illustrating the formulation and solution of an optimization problem 12
1.3.6 Maximization 14
1.3.7 The special case of Linear Programming 14
1.3.8 Scaling of design variables 15
1.4 Further mathematical prerequisites 16
1.4.1 Convexity 16
1.4.2 Gradient vector of f(x) 18
1.4.3 Hessian matrix of f(x) 20
1.4.4 The quadratic function in R^n 20
1.4.5 The directional derivative of f(x) in the direction u 21
1.5 Unconstrained minimization 21
1.5.1 Global and local minima; saddle points 23
1.5.2 Local characterization of the behaviour of a multivariable function 24
1.5.3 Necessary and sufficient conditions for a strong local minimum at x* 26
1.5.4 General indirect method for computing x* 28
1.6 Exercises 31
2 LINE SEARCH DESCENT METHODS FOR UNCONSTRAINED MINIMIZATION 33
2.1 General line search descent algorithm for unconstrained minimization 33
2.1.1 General structure of a line search descent method 34
2.2 One-dimensional line search 35
2.2.1 Golden section method 36
2.2.2 Powell's quadratic interpolation algorithm 38
2.2.3 Exercises 39
2.3 First order line search descent methods 40
2.3.1 The method of steepest descent 40
2.3.2 Conjugate gradient methods 43
2.4 Second order line search descent methods 49
2.4.1 Modified Newton's method 50
2.4.2 Quasi-Newton methods 51
2.5 Zero order methods and computer optimization subroutines 52
2.6 Test functions 53
3 STANDARD METHODS FOR CONSTRAINED OPTIMIZATION 57
3.1 Penalty function methods for constrained minimization 57
3.1.1 The penalty function formulation 58
3.1.2 Illustrative examples 58
3.1.3 Sequential unconstrained minimization technique (SUMT) 59
3.1.4 Simple example 60
3.2 Classical methods for constrained optimization problems 62
3.2.1 Equality constrained problems and the Lagrangian function 62
3.2.2 Classical approach to optimization with inequality constraints: the KKT conditions 70
3.3 Saddle point theory and duality 73
3.3.1 Saddle point theorem 73
3.3.2 Duality 74
3.3.3 Duality theorem 74
3.4 Quadratic programming 77
3.4.1 Active set of constraints 78
3.4.2 The method of Theil and Van de Panne 78
3.5 Modern methods for constrained optimization 81
3.5.1 The gradient projection method 81
3.5.2 Multiplier methods 89
3.5.3 Sequential quadratic programming (SQP) 93
4 NEW GRADIENT-BASED TRAJECTORY AND APPROXIMATION METHODS 97
4.1 Introduction 97
4.1.1 Why new algorithms? 97
4.1.2 Research at the University of Pretoria 98
4.2 The dynamic trajectory optimization method 100
4.2.1 Basic dynamic model 100
4.2.2 Basic algorithm for unconstrained problems (LFOP) 101
4.2.3 Modification for constrained problems (LFOPC) 101
4.3 The spherical quadratic steepest descent method 105
4.3.1 Introduction 105
4.3.2 Classical steepest descent method revisited 106
4.3.3 The SQSD algorithm 107
4.3.4 Convergence of the SQSD method 108
4.3.5 Numerical results and conclusion 113
4.3.6 Test functions used for SQSD 117
4.4 The Dynamic-Q optimization algorithm 119
4.4.1 Introduction 119
4.4.2 The Dynamic-Q method 119
4.4.3 Numerical results and conclusion 123
4.5 A gradient-only line search method for conjugate gradient methods 126
4.5.1 Introduction 126
4.5.2 Formulation of optimization problem 127
4.5.3 Gradient-only line search 128
4.5.4 Conjugate gradient search directions and SUMT 133
4.5.5 Numerical results 135
4.5.6 Conclusion 139
4.6 Global optimization using dynamic search trajectories 139
4.6.1 Introduction 139
4.6.2 The Snyman-Fatti trajectory method 141
4.6.3 The modified bouncing ball trajectory method 143
4.6.4 Global stopping criterion 145
4.6.5 Numerical results 148
5 EXAMPLE PROBLEMS 151
5.1 Introductory examples 151
5.2 Line search descent methods 157
5.3 Standard methods for constrained optimization 170
5.3.1 Penalty function problems 170
5.3.2 The Lagrangian method applied to equality constrained problems 172
5.3.3 Solution of inequality constrained problems via auxiliary variables 181
5.3.4 Solution of inequality constrained problems via the Karush-Kuhn-Tucker conditions 184
5.3.5 Solution of constrained problems via the dual problem formulation 191
5.3.6 Quadratic programming problems 194
5.3.7 Application of the gradient projection method 199
5.3.8 Application of the augmented Lagrangian method 202
5.3.9 Application of the sequential quadratic programming method 203
6 SOME THEOREMS 207
6.1 Characterization of functions and minima 207
6.2 Equality constrained problem 211
6.3 Karush-Kuhn-Tucker theory 215
6.4 Saddle point conditions 218
6.5 Conjugate gradient methods 223
6.6 DFP method 227
A THE SIMPLEX METHOD FOR LINEAR PROGRAMMING PROBLEMS 233
A.1 Introduction 233
A.2 Pivoting for increase in objective function 235
A.3 Example 236
A.4 The auxiliary problem for a problem with an infeasible origin 238
A.5 Example of auxiliary problem solution 239
A.6 Degeneracy 241
A.7 The revised simplex method 242
A.8 An iteration of the RSM 244