Large-scale convex optimization via saddle point computation

Article ID: iaor20011010
Country: United States
Volume: 47
Issue: 1
Start Page Number: 93
End Page Number: 101
Publication Date: Jan 1999
Journal: Operations Research
Authors: ,
Keywords: optimization
Abstract:

This article proposes solving large-scale convex optimization problems via saddle points of the standard Lagrangian. A recent approach to saddle point computation is specialized, by way of a specific perturbation technique and a particular scaling method, to convex optimization problems with differentiable objective and constraint functions. In each iteration, the update directions for the primal and dual variables are determined by gradients of the Lagrangian. These gradients are evaluated at perturbed points that are generated from the current points via auxiliary mappings. The resulting algorithm is well suited to massively parallel computing, though in this article we consider only a serial implementation. We test a version of our code embedded within GAMS on 16 nonlinear problems, most of them large, which arise from multistage optimization of economic systems. For the larger problems with adequate precision requirements, our implementation appears faster than MINOS.
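The predictor-corrector structure described in the abstract (gradients of the Lagrangian evaluated at perturbed points produced by auxiliary mappings) can be illustrated with a minimal extragradient-style sketch. The toy problem, step size, and iteration count below are hypothetical choices for illustration; the article's specific perturbation technique and scaling method are not reproduced here.

```python
import numpy as np

# Toy problem (hypothetical, for illustration only):
#   min (x1 - 1)^2 + (x2 - 2)^2   s.t.   x1 + x2 <= 2
# solved via the saddle point of L(x, lam) = f(x) + lam * g(x).

def grad_x(x, lam):
    """Gradient of the Lagrangian in the primal variables."""
    return 2.0 * (x - np.array([1.0, 2.0])) + lam * np.array([1.0, 1.0])

def g(x):
    """Constraint function, g(x) <= 0."""
    return x[0] + x[1] - 2.0

x, lam, step = np.zeros(2), 0.0, 0.1
for _ in range(2000):
    # Auxiliary mapping: generate a perturbed point from the current point.
    xp = x - step * grad_x(x, lam)
    lp = max(0.0, lam + step * g(x))
    # Update: primal descent / dual ascent using gradients of the
    # Lagrangian evaluated at the perturbed point.
    x = x - step * grad_x(xp, lp)
    lam = max(0.0, lam + step * g(xp))

print(np.round(x, 3), round(lam, 3))
```

The iterates approach the constrained optimum x = (0.5, 1.5) with multiplier lam = 1, which satisfies the KKT conditions for the toy problem. The perturbed-point evaluation is what distinguishes this scheme from plain primal-dual gradient steps, which can cycle on saddle point problems.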
