On combining feasibility, descent and superlinear convergence in inequality constrained optimization

Article ID: iaor19941954
Country: Netherlands
Volume: 59
Issue: 2
Start Page Number: 261
End Page Number: 276
Publication Date: Apr 1993
Journal: Mathematical Programming (Series A)
Authors: Panier E.R., Tits A.L.
Abstract:

Extending quasi-Newton techniques from unconstrained to constrained optimization via Sequential Quadratic Programming (SQP) presents several difficulties. Among these are the possible inconsistency, away from the solution, of the first-order approximations to the constraints, which renders the quadratic programs infeasible, and the task of selecting a suitable merit function to induce global convergence. In the case of inequality constrained optimization, both of these difficulties disappear if the algorithm is forced to generate iterates that all satisfy the constraints and that yield monotonically decreasing objective function values. (Feasibility of the successive iterates is in fact required in many contexts, such as real-time applications or problems whose objective function is not well defined outside the feasible set.) It has recently been shown that this can be achieved while preserving local two-step superlinear convergence. In this note, the essential ingredients for an SQP-based method exhibiting the desired properties are highlighted. Correspondingly, a class of such algorithms is described and analyzed. Tests performed with an efficient implementation are discussed.
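To make the idea concrete, the sketch below shows one iteration of a generic feasible-SQP scheme of the kind the abstract alludes to: the standard SQP quadratic subproblem is solved at a feasible point, the resulting direction is blended with a direction that pushes strictly into the linearized feasible region, and a backtracking line search on the objective itself provides the globalization, since every accepted iterate remains feasible. This is only an illustration under simplifying assumptions, not the authors' algorithm: the function feasible_sqp_step, the blending rule, and all parameter names are hypothetical, and the second-order correction the paper relies on for two-step superlinear convergence is omitted.

```python
# A minimal, illustrative sketch of a feasible-SQP iteration (NOT the authors'
# algorithm): solve the usual QP subproblem at a feasible point, tilt the
# direction into the linearized feasible region, and backtrack on the
# objective itself, which is a valid merit function because every accepted
# iterate stays feasible.  Names and parameters are hypothetical.
import numpy as np
from scipy.optimize import minimize

def feasible_sqp_step(f, grad_f, g, jac_g, x, H,
                      alpha=1e-4, beta=0.5, max_backtracks=30):
    """One iteration for  min f(x)  s.t.  g(x) <= 0,  starting from feasible x."""
    gx, Jg, gf = g(x), jac_g(x), grad_f(x)

    # (1) Standard SQP subproblem:  min_d 0.5 d'Hd + grad_f'd  s.t.  g + Jg d <= 0.
    qp = minimize(lambda d: 0.5 * d @ H @ d + gf @ d, np.zeros_like(x),
                  jac=lambda d: H @ d + gf, method="SLSQP",
                  constraints=[{"type": "ineq", "fun": lambda d: -(gx + Jg @ d)}])
    d0 = qp.x

    # (2) Tilted subproblem: push the linearized constraints strictly into the
    #     interior and blend.  A simplified stand-in for the paper's bending of
    #     the SQP direction into a feasible descent direction; the correction
    #     needed for two-step superlinear convergence is omitted here.
    shift = np.linalg.norm(d0)
    qp_tilt = minimize(lambda d: 0.5 * d @ H @ d + gf @ d, d0,
                       jac=lambda d: H @ d + gf, method="SLSQP",
                       constraints=[{"type": "ineq",
                                     "fun": lambda d: -(gx + Jg @ d) - shift}])
    d1 = qp_tilt.x if qp_tilt.success else d0
    rho = min(1.0, shift)                 # tilt vanishes as the iterates converge
    d = (1.0 - rho) * d0 + rho * d1

    # (3) Armijo backtracking on f, accepting only feasible trial points.
    t = 1.0
    for _ in range(max_backtracks):
        x_new = x + t * d
        if np.all(g(x_new) <= 0.0) and f(x_new) <= f(x) + alpha * t * (gf @ d):
            return x_new
        t *= beta
    return x                              # no acceptable step found

if __name__ == "__main__":
    # Toy problem:  min x1^2 + x2^2  s.t.  1 - x1 - x2 <= 0,  feasible start (1, 1).
    f = lambda x: x @ x
    grad_f = lambda x: 2.0 * x
    g = lambda x: np.array([1.0 - x[0] - x[1]])
    jac_g = lambda x: np.array([[-1.0, -1.0]])
    x = np.array([1.0, 1.0])
    for _ in range(30):
        x = feasible_sqp_step(f, grad_f, g, jac_g, x, H=np.eye(2))
    print(x)  # expected to approach the constrained minimizer (0.5, 0.5)
```

On the toy problem in the driver, every accepted iterate satisfies the constraint and the objective decreases monotonically toward the constrained minimizer, mirroring the feasibility-plus-descent behaviour emphasized in the abstract.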
