A nonmonotone memory gradient method for unconstrained optimization

Article ID: iaor20082015
Country: Japan
Volume: 50
Issue: 1
Start Page Number: 31
End Page Number: 45
Publication Date: Mar 2007
Journal: Journal of the Operations Research Society of Japan
Authors:
Keywords: optimization, programming: mathematical, programming: nonlinear
Abstract:

Memory gradient methods are used for unconstrained optimization, especially for large-scale problems. They were first proposed by Miele & Cantrell and by Cragg & Levy. Recently, Narushima & Yabe proposed a memory gradient method that generates a descent search direction for the objective function at every iteration and converges globally to the solution whenever the Wolfe conditions are satisfied within the line search strategy. In this paper, we propose a nonmonotone memory gradient method based on this work. We show that our method converges globally to the solution. Our numerical results show that the proposed method is efficient for some standard test problems if the parameter included in the method is chosen suitably.
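The abstract does not give the paper's direction-update coefficients or its exact acceptance rule, so the following is only a minimal sketch of the general idea: a search direction built from the current gradient and the previous direction, combined with a nonmonotone (Grippo–Lampariello–Lucidi style) Armijo backtracking test that compares against the maximum of the last M function values rather than the current one. The function name, the simplified memory weight beta, and the parameters memory, delta, and rho are all illustrative assumptions, not the authors' method.

```python
# Illustrative sketch of a nonmonotone memory gradient iteration.
# NOT the update of Narushima & Yabe or of this paper; the memory term
# and all parameter choices below are simplified assumptions.
import numpy as np

def nonmonotone_memory_gradient(f, grad, x0, beta=0.2, memory=5,
                                delta=1e-4, rho=0.5, tol=1e-6,
                                max_iter=1000):
    """Minimize f from x0. beta weights the previous direction (an
    assumed, simplified memory term); memory is the window M of past
    function values used by the nonmonotone acceptance test."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                               # first step: steepest descent
    f_hist = [f(x)]                      # recent values for the max-based test
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        # Nonmonotone Armijo rule: accept a step that decreases f
        # sufficiently relative to the max of the last M values, which
        # permits occasional increases in f along the way.
        f_ref = max(f_hist[-memory:])
        alpha, gTd = 1.0, g @ d
        while f(x + alpha * d) > f_ref + delta * alpha * gTd:
            alpha *= rho                 # backtrack
        x = x + alpha * d
        g = grad(x)
        # Memory gradient direction: steepest descent plus a weighted
        # contribution of the previous direction (illustrative form).
        d = -g + beta * d
        if g @ d >= 0:                   # safeguard: keep d a descent direction
            d = -g
        f_hist.append(f(x))
    return x
```

The max-based reference value is the standard nonmonotone device of Grippo, Lampariello & Lucidi; the descent safeguard above stands in for the paper's actual mechanism, which by the abstract guarantees a descent direction at every iteration by construction.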
