Computational comparisons of dual conjugate gradient algorithms for strictly convex networks

Article ID: iaor19992014
Country: United Kingdom
Volume: 25
Issue: 4
Start Page Number: 333
End Page Number: 349
Publication Date: Apr 1998
Journal: Computers and Operations Research
Authors: , ,
Keywords: programming: nonlinear
Abstract:

This paper presents a Lagrangian dual conjugate gradient algorithm for solving a variety of nonlinear network problems with strictly convex, differentiable, and separable objective functions. The proposed algorithm follows an iterative dual scheme that converges to a point whose relative residual error lies within a given tolerance. By exploiting the special structure of the network constraints, the approach can solve large problems with minimal computer memory because no matrix operations are necessary. An extensive computational study has been carried out using different direction generators, line search procedures, and restart schemes. Two conjugate gradient direction formulas, the Polak–Ribière and the memoryless BFGS, have been compared. In addition, two restart methods, Beale's restart and the gradient restart, have been tested for their effectiveness, and a Newton-method line search has been compared against a bisection line search. Computational comparisons supported by statistical analysis are also reported to illustrate the effectiveness of each combination of these procedures. The proposed method works well for network problems with quadratic, entropy, cubic, polynomial, and logarithmic objective functions. The computational experiments indicate that the most effective Lagrangian dual algorithm combines the Polak–Ribière conjugate gradient formula, the Newton line search, and the gradient restart.
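To make the dual scheme concrete, the following is a minimal sketch for the quadratic case (arc objective 0.5*w[i]*x[i]**2 + c[i]*x[i] with w[i] > 0), assuming an arc-list representation of the network; the function name dual_cg and all parameter names are illustrative and not taken from the paper. The dual variables are node potentials, the dual gradient is the flow-conservation residual, the Newton step along the search direction is exact because the dual of a quadratic program is quadratic, and the Polak–Ribière coefficient is truncated at zero, which acts as a simple gradient restart.

import numpy as np

def dual_cg(arcs, w, c, b, tol=1e-8, max_iter=500):
    # Solve  min sum_i 0.5*w[i]*x[i]**2 + c[i]*x[i]  s.t.  A x = b,
    # where A is the node-arc incidence matrix implied by `arcs`
    # (a list of (tail, head) pairs) and b holds node supplies.
    # Only arc lists are used, so no matrix is ever formed.
    pi = np.zeros(len(b))                      # dual variables (node potentials)

    def primal_and_gradient(pi):
        # Arc-wise Lagrangian minimizers: f_i'(x_i) = pi[u] - pi[v]
        x = np.array([(pi[u] - pi[v] - c[i]) / w[i]
                      for i, (u, v) in enumerate(arcs)])
        g = np.array(b, dtype=float)           # dual gradient = b - A x
        for i, (u, v) in enumerate(arcs):
            g[u] -= x[i]
            g[v] += x[i]
        return x, g

    x, g = primal_and_gradient(pi)
    d = g.copy()                               # start with steepest ascent
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol * max(1.0, np.linalg.norm(b)):
            break                              # relative residual within tolerance
        # Exact (Newton) step along d: alpha = g'd / d'Hd,
        # where d'Hd = sum_i (d[u] - d[v])**2 / w[i]  (H = A W^{-1} A').
        dHd = sum((d[u] - d[v]) ** 2 / w[i] for i, (u, v) in enumerate(arcs))
        if dHd <= 0.0:
            break                              # no curvature along d; stop
        alpha = g.dot(d) / dHd
        pi = pi + alpha * d
        x, g_new = primal_and_gradient(pi)
        # Polak-Ribiere coefficient, truncated at zero (gradient restart).
        beta = max(0.0, g_new.dot(g_new - g) / g.dot(g))
        d = g_new + beta * d
        g = g_new
    return x, pi

# Example: 3 nodes, a supply of 2 at node 0 routed to node 2.
arcs = [(0, 1), (0, 2), (1, 2)]
x, pi = dual_cg(arcs, w=[1.0, 1.0, 1.0], c=[0.0, 0.0, 0.0],
                b=np.array([2.0, 0.0, -2.0]))

For the other separable objectives named in the abstract (entropy, cubic, polynomial, logarithmic), only the arc-wise minimization and the line search change: x[i] becomes the inverse of f_i' evaluated at pi[u] - pi[v], and the exact step above is replaced by a one-dimensional Newton or bisection search on the dual directional derivative.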
