Approximate Greatest Descent Methods for Optimization with Equality Constraints

Article ID: iaor20111976
Volume: 148
Issue: 3
Start Page Number: 505
End Page Number: 527
Publication Date: Mar 2011
Journal: Journal of Optimization Theory and Applications
Authors:
Keywords: heuristics
Abstract:

In an optimization problem with equality constraints, the optimal value function divides the state space into two parts. At a point where the value of the objective function is less than the optimal value, a good iteration must increase the value of the objective function. Thus, a good iteration must strike a balance between increasing or decreasing the objective function and decreasing a constraint violation function. This implies that, at a point where the constraint violation function is large, we should construct noninferior solutions relative to points in a local search region. By definition, an accessory function is a linear combination of the objective function and a constraint violation function. We show that one way to construct an acceptable iteration, at a point where the constraint violation function is large, is to minimize an accessory function. We develop a two-phase method. In Phase I, where some constraints may not be approximately satisfied or the current point is not close to the solution, iterations are generated by minimizing an accessory function. Once all the constraints are approximately satisfied, the initial values of the Lagrange multipliers are defined. A test with a merit function is used to determine whether or not the current point and the Lagrange multipliers are both close to the optimal solution. If not, Phase I is continued; otherwise, Phase II is activated and the Newton method is used to compute the optimal solution with fast convergence.
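
The abstract describes the algorithm only at a high level. The Python sketch below illustrates the two-phase structure under stated assumptions: the accessory function is taken in a quadratic-penalty form f(x) + mu*||c(x)||^2, Phase I uses a fixed small gradient step, the Lagrange multipliers are initialized by least squares, and a plain KKT-residual threshold stands in for the paper's merit-function test. The names two_phase_sketch and fd_jacobian, and all tolerances, are illustrative; the paper's actual approximate-greatest-descent iterations are not reproduced here.

import numpy as np

def fd_jacobian(fun, x, eps=1e-6):
    """Forward-difference Jacobian of a vector-valued function at x."""
    f0 = fun(x)
    J = np.zeros((f0.size, x.size))
    for i in range(x.size):
        xp = x.copy()
        xp[i] += eps
        J[:, i] = (fun(xp) - f0) / eps
    return J

def two_phase_sketch(grad_f, c, jac_c, x0, mu=10.0, step=0.01,
                     feas_tol=0.1, kkt_tol=1e-8, max_iter=500):
    """Illustrative two-phase scheme for min f(x) subject to c(x) = 0."""
    x = np.asarray(x0, dtype=float)

    # Phase I: descend on the accessory (penalty) function
    #   phi(x) = f(x) + mu * ||c(x)||^2          (assumed form)
    # until the constraint violation is loosely satisfied.
    for _ in range(max_iter):
        if np.linalg.norm(c(x)) < feas_tol:
            break
        g_phi = grad_f(x) + 2.0 * mu * jac_c(x).T @ c(x)
        x = x - step * g_phi                      # fixed small step (placeholder)

    # Initialize the Lagrange multipliers by least squares:
    #   jac_c(x)^T * lam ~ -grad_f(x).
    lam, *_ = np.linalg.lstsq(jac_c(x).T, -grad_f(x), rcond=None)

    # Phase II: Newton iterations on the KKT system
    #   grad_f(x) + jac_c(x)^T lam = 0,   c(x) = 0.
    n, m = x.size, lam.size
    for _ in range(max_iter):
        A = jac_c(x)
        r = np.concatenate([grad_f(x) + A.T @ lam, c(x)])   # KKT residual
        if np.linalg.norm(r) < kkt_tol:
            break
        # Hessian of the Lagrangian approximated by differencing its gradient.
        H = fd_jacobian(lambda z: grad_f(z) + jac_c(z).T @ lam, x)
        K = np.block([[H, A.T], [A, np.zeros((m, m))]])
        d = np.linalg.solve(K, -r)
        x, lam = x + d[:n], lam + d[n:]
    return x, lam

# Example: minimize x1^2 + x2^2 subject to x1 + x2 = 1; the minimizer is
# x = (0.5, 0.5) with multiplier lam = -1.
grad_f = lambda x: 2.0 * x
c      = lambda x: np.array([x[0] + x[1] - 1.0])
jac_c  = lambda x: np.array([[1.0, 1.0]])
x_star, lam_star = two_phase_sketch(grad_f, c, jac_c, x0=np.zeros(2))
print(x_star, lam_star)   # approximately [0.5 0.5] and [-1.]

On the quadratic example above, Phase II reduces to solving a linear KKT system, so a single Newton step reaches the solution once Phase I has brought the iterate near the feasible set; for general nonlinear problems the switching test and step-size rules would need the merit-function machinery described in the paper.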
