A geometrically convergent subgradient optimization method for nonlinearly constrained convex programs

Article ID: iaor1988255
Country: United States
Volume: 13
Issue: 3
Start Page Number: 512
End Page Number: 523
Publication Date: Aug 1988
Journal: Mathematics of Operations Research
Authors:
Abstract:

In this paper we extend the geometrically convergent subgradient optimization method for minimizing a convex function, proposed independently by Goffin and by Shor and Gamburd, to a class of constrained convex programs. We permit both arbitrary set constraints and convex inequality constraints. Such programs arise, for example, in sizing trunks in a multi-hour alternate-routing telecommunications network and in certain location and approximation problems. The geometric convergence rate depends upon a condition number of the convex program that we define. Our proof of geometric convergence, when explicit convex inequality constraints are present, is the first such result for subgradient optimization. We compare the iterates generated by our method to the limiting behavior, as the penalty parameter approaches infinity, of the sequence of iterates obtained by applying Goffin’s method to the exact penalty function corresponding to the convex program. We show that geometric convergence still holds when ε-subgradients (for sufficiently small ε > 0) are used instead of subgradients.
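The abstract does not reproduce the authors' algorithm, but the ingredients it names — a set constraint, a convex inequality constraint, and geometrically decreasing step sizes in the style of Goffin and Shor — can be sketched with a simple switching-subgradient rule. Everything below (the sample problem, the step parameters `s0` and `q`, and the switching rule itself) is an illustrative assumption, not the paper's method:

```python
# Illustrative sketch (not the authors' algorithm): minimize the convex
# function f(x) = |x0 - 2| + |x1 - 2| subject to the convex inequality
# g(x) = x0 + x1 - 1 <= 0 and the set constraint x in the box [-5, 5]^2.
# The optimal value is 3, attained on the segment x0 + x1 = 1.

def f(x):
    return abs(x[0] - 2) + abs(x[1] - 2)

def f_subgrad(x):
    sgn = lambda t: 1.0 if t > 0 else (-1.0 if t < 0 else 0.0)
    return [sgn(x[0] - 2), sgn(x[1] - 2)]

def g(x):
    return x[0] + x[1] - 1

def g_subgrad(x):
    return [1.0, 1.0]

def project_box(x, lo=-5.0, hi=5.0):
    # handles the "arbitrary set constraint" by Euclidean projection
    return [min(max(v, lo), hi) for v in x]

def switching_subgradient(x, s0=1.0, q=0.99, iters=1500):
    """Switching rule: step against g's subgradient when infeasible,
    against f's subgradient when feasible; step sizes shrink
    geometrically, s_k = s0 * q**k."""
    best_x, best_f = None, float("inf")
    s = s0
    for _ in range(iters):
        if g(x) > 0:
            d = g_subgrad(x)
        else:
            d = f_subgrad(x)
            if f(x) < best_f:
                best_x, best_f = x[:], f(x)
        x = project_box([x[i] - s * d[i] for i in range(2)])
        s *= q
    return best_x, best_f

x_star, f_star = switching_subgradient([0.0, 0.0])
print(f_star)  # close to the optimal value 3
```

Note that a geometric (hence summable) step schedule only reaches the optimum when `s0` and `q` are matched to the problem's scale; the paper's condition number plays exactly this role in its convergence analysis.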
