A new nonlinear conjugate gradient method for unconstrained optimization

Article ID: iaor20061417
Country: Japan
Volume: 48
Issue: 4
Start Page Number: 284
End Page Number: 296
Publication Date: Dec 2005
Journal: Journal of the Operations Research Society of Japan
Authors:
Keywords: Conjugate gradient method, Global convergence
Abstract:

Conjugate gradient methods are widely used for large-scale unconstrained optimization problems. Most conjugate gradient methods do not always generate a descent search direction, so the descent condition is usually assumed in analyses and implementations. Dai and Yuan proposed a conjugate gradient method that generates a descent search direction at every iteration and converges globally to the solution when the line search satisfies the Wolfe conditions. In this paper, we propose a new conjugate gradient method based on the work of Dai and Yuan, and show that our method always produces a descent search direction and converges globally if the Wolfe conditions are satisfied. Moreover, our method incorporates second-order curvature information with higher precision by using the modified secant condition proposed by Zhang et al. and by Zhang and Xu. Our numerical results show that the method is very efficient on standard test problems, provided that a parameter included in the method is chosen well.
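For readers who want to see the underlying iteration, the following is a minimal Python sketch of the Dai–Yuan conjugate gradient method that this paper builds on, with a Wolfe-condition line search provided by SciPy. This is standard background only, not the authors' new method: the function name dai_yuan_cg and the Rosenbrock test setup are illustrative choices, and the paper's contribution (injecting higher-precision curvature information via the modified secant condition into the beta formula) is not reproduced here.

```python
import numpy as np
from scipy.optimize import line_search

def dai_yuan_cg(f, grad, x0, tol=1e-6, max_iter=1000):
    """Background sketch: nonlinear CG with the Dai-Yuan beta and a
    Wolfe-condition line search (SciPy's line_search enforces the
    strong Wolfe conditions, which imply the standard ones)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g  # initial direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha, *_ = line_search(f, grad, x, d, gfk=g)
        if alpha is None:  # line search failed; restart along -g
            d = -g
            alpha, *_ = line_search(f, grad, x, d, gfk=g)
            if alpha is None:
                break
        x_new = x + alpha * d
        g_new = grad(x_new)
        # Dai-Yuan parameter: beta = ||g_{k+1}||^2 / (d_k^T (g_{k+1} - g_k));
        # under the Wolfe conditions the denominator is positive.
        beta = (g_new @ g_new) / (d @ (g_new - g))
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Usage on the 2-D Rosenbrock function
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([
    -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
    200 * (x[1] - x[0]**2),
])
print(dai_yuan_cg(f, grad, np.array([-1.2, 1.0])))  # approaches [1., 1.]
```

The descent property the abstract refers to comes from this choice of beta: whenever the line search satisfies the Wolfe conditions, the Dai–Yuan direction d_k is guaranteed to be a descent direction, so no separate descent assumption is needed.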
