An efficient hybrid conjugate gradient method for unconstrained optimization

Article ID: iaor20022500
Country: Netherlands
Volume: 103
Issue: 1
Start Page Number: 33
End Page Number: 47
Publication Date: Mar 2001
Journal: Annals of Operations Research
Authors: Dai, Y.H.; Yuan, Y.
Abstract:

Recently, we proposed a nonlinear conjugate gradient method that produces a descent search direction at every iteration and converges globally provided that the line search satisfies the weak Wolfe conditions. In this paper, we study methods related to this new nonlinear conjugate gradient method. Specifically, if the scalar βk is comparable in size to the one in the new method, in the sense that their ratio lies in some interval, then the corresponding methods are proved to be globally convergent; otherwise, we construct a convex quadratic counterexample showing that the methods need not converge. Numerical experiments are carried out for two combinations of the new method and the Hestenes–Stiefel conjugate gradient method. The initial results show that one of the hybrid methods is especially efficient for the given test problems.
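For a concrete picture of the approach, the Python sketch below implements a nonlinear conjugate gradient iteration with a weak Wolfe line search and the hybrid choice βk = max{0, min{βk(HS), βk(DY)}}, which combines the Hestenes–Stiefel formula with the descent (Dai–Yuan) formula. The particular combination, the bisection line search, and the Rosenbrock test function are illustrative assumptions, not details taken from the article.

import numpy as np

def weak_wolfe_line_search(f, grad, x, d, c1=1e-4, c2=0.9, max_iter=50):
    """Bisection search for a step length satisfying the weak Wolfe conditions."""
    f0, g0d = f(x), grad(x) @ d
    alpha, lo, hi = 1.0, 0.0, np.inf
    for _ in range(max_iter):
        if f(x + alpha * d) > f0 + c1 * alpha * g0d:
            hi = alpha                      # sufficient decrease violated: shrink
        elif grad(x + alpha * d) @ d < c2 * g0d:
            lo = alpha                      # curvature condition violated: grow
        else:
            return alpha                    # both weak Wolfe conditions hold
        alpha = (lo + hi) / 2.0 if np.isfinite(hi) else 2.0 * lo
    return alpha

def hybrid_cg(f, grad, x0, tol=1e-6, max_iter=500):
    """Nonlinear CG with the hybrid beta = max(0, min(beta_HS, beta_DY))."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = weak_wolfe_line_search(f, grad, x, d)
        x_new = x + alpha * d
        g_new = grad(x_new)
        dy = d @ (g_new - g)                # positive under the Wolfe conditions
        beta_hs = g_new @ (g_new - g) / dy  # Hestenes-Stiefel formula
        beta_dy = (g_new @ g_new) / dy      # Dai-Yuan formula
        beta = max(0.0, min(beta_hs, beta_dy))
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Illustrative test on the Rosenbrock function (not one of the article's problems).
rosen = lambda x: (1.0 - x[0]) ** 2 + 100.0 * (x[1] - x[0] ** 2) ** 2
rosen_grad = lambda x: np.array([
    -2.0 * (1.0 - x[0]) - 400.0 * x[0] * (x[1] - x[0] ** 2),
    200.0 * (x[1] - x[0] ** 2),
])
print(hybrid_cg(rosen, rosen_grad, [-1.2, 1.0]))  # converges near [1, 1]

Under the weak Wolfe conditions the denominator dᵀ(g_new − g) stays positive whenever d is a descent direction, which keeps both β formulas well defined; truncating at zero restarts the iteration along the steepest descent direction.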
