A modified Polak‐Ribière‐Polyak conjugate gradient algorithm for nonsmooth convex programs

Article ID: iaor20141356
Volume: 255
Issue: 12
Start Page Number: 86
End Page Number: 96
Publication Date: Jan 2014
Journal: Journal of Computational and Applied Mathematics
Authors: , ,
Keywords: programming: convex
Abstract:

The conjugate gradient (CG) method is one of the most popular methods for solving smooth unconstrained optimization problems, owing to its simplicity and low memory requirement. So far, however, the use of CG methods has largely been restricted to smooth optimization problems. The purpose of this paper is to present efficient conjugate gradient-type methods for solving nonsmooth optimization problems. By using the Moreau–Yosida regularization (smoothing) approach and a nonmonotone line search technique, we propose a modified Polak–Ribière–Polyak (PRP) CG algorithm for solving a nonsmooth unconstrained convex minimization problem. Our algorithm possesses the following three desirable properties: (i) the search direction satisfies the sufficient descent property and automatically belongs to a trust region; (ii) the search direction uses not only gradient information but also function value information; and (iii) the algorithm inherits an important property of the well-known PRP method, namely the tendency to turn towards the steepest descent direction whenever a small step is generated away from the solution, which prevents a sequence of tiny steps from occurring. Under standard conditions, we show that the algorithm converges globally to an optimal solution. Numerical experiments show that our algorithm is effective and suitable for solving large-scale nonsmooth unconstrained convex optimization problems.

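As background to the abstract, the Moreau–Yosida regularization of a nonsmooth convex function f (with parameter λ > 0) and the classical PRP update applied to the resulting smooth function can be sketched as follows. These are standard textbook definitions, not the paper's specific modified direction, which is not reproduced here.

\[
F_{\lambda}(x) \;=\; \min_{y \in \mathbb{R}^n}\Bigl\{ f(y) + \tfrac{1}{2\lambda}\,\|y - x\|^{2} \Bigr\},
\qquad
\nabla F_{\lambda}(x) \;=\; \frac{x - p_{\lambda}(x)}{\lambda},
\]

where $p_{\lambda}(x)$ denotes the unique minimizer above (the proximal point of $x$). The function $F_{\lambda}$ is convex and continuously differentiable with Lipschitz continuous gradient and has the same set of minimizers as $f$, which is what allows a smooth CG-type iteration to be applied to a nonsmooth problem. The classical PRP direction for minimizing $F_{\lambda}$ is

\[
d_{k} \;=\;
\begin{cases}
-g_{k}, & k = 0,\\[2pt]
-g_{k} + \beta_{k}^{\mathrm{PRP}} d_{k-1}, & k \ge 1,
\end{cases}
\qquad
\beta_{k}^{\mathrm{PRP}} \;=\; \frac{g_{k}^{\top}(g_{k} - g_{k-1})}{\|g_{k-1}\|^{2}},
\qquad
g_{k} := \nabla F_{\lambda}(x_{k}).
\]

In practice $p_{\lambda}(x)$, and hence $\nabla F_{\lambda}(x)$, can typically only be computed approximately, so implementable methods of this type work with inexact values of $F_{\lambda}$ and its gradient.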