Convergence of the BFGS method for LC¹ convex constrained optimization

Article ID: iaor1997678
Country: United States
Volume: 34
Issue: 6
Start Page Number: 2051
End Page Number: 2063
Publication Date: Nov 1996
Journal: SIAM Journal on Control and Optimization
Authors:
Abstract:

This paper proposes a BFGS-SQP method for linearly constrained optimization in which the objective function f is required only to have a Lipschitz gradient. The Karush-Kuhn-Tucker system of the problem is equivalent to a system of nonsmooth equations F(v) = 0. At every step the quasi-Newton matrix is updated only when ||F(v_k)|| satisfies an update rule. The method converges globally, and the rate of convergence is superlinear when f is twice strongly differentiable at a solution of the optimization problem. No assumptions on the constraints are required. This generalizes the classical convergence theory of the BFGS method, which requires the objective function to be twice continuously differentiable. Applications to stochastic programs with recourse on a CM5 parallel computer are discussed.
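The abstract's idea of updating the quasi-Newton matrix only when a test on the iterate is satisfied can be illustrated with a minimal unconstrained BFGS sketch. This is not the paper's BFGS-SQP method for the nonsmooth KKT system; it is a standard BFGS inverse-Hessian update in which the update is skipped unless the curvature condition s'y > 0 holds, a common safeguard that plays the same structural role as the paper's update rule. All names and the test problem below are illustrative assumptions.

```python
import numpy as np

def bfgs_minimize(grad, x0, tol=1e-8, max_iter=200):
    """Minimal unconstrained BFGS sketch (illustrative; not the paper's
    BFGS-SQP method for linearly constrained problems).

    Maintains an inverse-Hessian approximation H and skips the BFGS
    update whenever the curvature condition s'y > 0 fails, echoing the
    idea of updating the quasi-Newton matrix only when a test holds.
    """
    x = np.asarray(x0, dtype=float)
    n = len(x)
    H = np.eye(n)                        # inverse-Hessian approximation
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g                       # quasi-Newton search direction
        # Crude backtracking on the gradient norm (sketch only; a real
        # implementation would use a Wolfe line search).
        t = 1.0
        while np.linalg.norm(grad(x + t * p)) > np.linalg.norm(g) and t > 1e-12:
            t *= 0.5
        s = t * p
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        sy = s @ y
        if sy > 1e-12:                   # update only when the curvature test holds
            rho = 1.0 / sy
            V = np.eye(n) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

# Usage: minimize f(x) = (x0 - 1)^2 + 2*(x1 + 3)^2 via its gradient.
sol = bfgs_minimize(lambda x: np.array([2 * (x[0] - 1), 4 * (x[1] + 3)]),
                    [0.0, 0.0])
```

For a smooth strongly convex quadratic like the usage example, the iterates converge to the unique minimizer (1, -3); the paper's contribution is establishing analogous global and superlinear convergence when f is only LC¹, via the nonsmooth equation reformulation of the KKT system.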
