A globally convergent version of the Polak–Ribière conjugate gradient method
Article ID: iaor1999956
Country: Netherlands
Volume: 78
Issue: 3
Start Page Number: 375
End Page Number: 391
Publication Date: Sep 1997
Journal: Mathematical Programming
Authors: ,
Abstract:

In this paper we propose a new line search algorithm that ensures global convergence of the Polak–Ribière conjugate gradient method for the unconstrained minimization of nonconvex differentiable functions. In particular, we show that with this line search every limit point produced by the Polak–Ribière iteration is a stationary point of the objective function. Moreover, we define adaptive rules for choosing the parameters so that the first stationary point along a search direction is eventually accepted when the algorithm is converging to a minimum point with a positive definite Hessian matrix. Under strong convexity assumptions, the known global convergence results are recovered as a special case. From a computational point of view, we may expect that an algorithm incorporating the step-size acceptance rules proposed here will retain the good features of the Polak–Ribière method while avoiding pathological situations.
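For orientation, the basic Polak–Ribière iteration discussed in the abstract can be sketched as below. This is a minimal illustration, not the paper's algorithm: it uses a plain Armijo backtracking line search and the nonnegative (PR+) clipping of the Polak–Ribière coefficient as stand-ins for the step-size acceptance rules and adaptive parameter choices the paper actually proposes.

```python
import numpy as np

def polak_ribiere_cg(f, grad, x0, tol=1e-6, max_iter=500,
                     alpha0=1.0, c1=1e-4, shrink=0.5):
    """Polak-Ribiere conjugate gradient sketch.

    Uses Armijo backtracking and the PR+ coefficient max(0, beta)
    as illustrative stand-ins for the paper's acceptance rules.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g  # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        slope = g @ d
        if slope >= 0:  # safeguard: restart along steepest descent
            d = -g
            slope = g @ d
        # Armijo backtracking line search
        alpha, fx = alpha0, f(x)
        while f(x + alpha * d) > fx + c1 * alpha * slope:
            alpha *= shrink
        x_new = x + alpha * d
        g_new = grad(x_new)
        # Polak-Ribiere coefficient, clipped at zero (PR+)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x
```

On a strongly convex quadratic such as f(x) = ½xᵀAx − bᵀx with A positive definite, this iteration converges to the unique minimizer x* = A⁻¹b, consistent with the strongly convex special case mentioned in the abstract.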
