Article ID: | iaor20061426 |
Country: | United States |
Volume: | 30 |
Issue: | 3 |
Start Page Number: | 765 |
End Page Number: | 784 |
Publication Date: | Aug 2005 |
Journal: | Mathematics of Operations Research |
Authors: | Levy, Adam B. |
Successive approximation methods appear throughout numerical optimization, where a solution to an optimization problem is sought as the limit of solutions to a succession of simpler approximation problems. Such methods include essentially any standard penalty method, barrier method, trust region method, augmented Lagrangian method, or sequential quadratic programming (SQP) method, among many others. The approximation problems on which a successive approximation method is based typically depend on parameters, in which case the performance of the method is related to the corresponding sequence of parameters. For many successive approximation methods, the sequence of parameters need only approach some parameter target set for the method to have desirable convergence properties. Successive approximation methods can be analyzed as examples of a generic inclusion-solving method from Levy, because the solutions to the approximation problems satisfy necessary optimality inclusions. However, the inclusion-solving method from Levy was developed for single parameter target points rather than target sets. In this paper, we extend the results from Levy to allow parameter target sets and apply these results to the convergence analysis of successive approximation methods. We focus on two important convergence issues: (1) the rate of convergence of the iterates generated by a successive approximation method, and (2) the validity of the limit as a solution to the original problem. An augmented Lagrangian method allowing quite general parameter updating is explored in detail to illustrate how the framework presented here can expose interesting new alternatives for numerical optimization.
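To make the generic pattern concrete, the following is a minimal sketch, not the algorithm or parameter-updating scheme analyzed in the article, of a successive approximation loop of the augmented Lagrangian type: each iterate solves a simpler unconstrained approximation problem that depends on parameters (a multiplier estimate and a penalty value), and those parameters are updated between iterations. The toy problem, the specific multiplier and penalty updates, and all names below are illustrative assumptions.

```python
# Illustrative sketch only (assumed toy problem, not the paper's method):
# augmented Lagrangian iteration for  min f(x)  s.t.  c(x) = 0,
# where each approximation problem is an unconstrained minimization.
import numpy as np
from scipy.optimize import minimize

def f(x):                       # objective: f(x) = x1^2 + x2^2
    return x[0]**2 + x[1]**2

def c(x):                       # equality constraint: c(x) = x1 + x2 - 1 = 0
    return x[0] + x[1] - 1.0

def augmented_lagrangian(x, y, r):
    # L_r(x, y) = f(x) + y * c(x) + (r/2) * c(x)^2
    return f(x) + y * c(x) + 0.5 * r * c(x)**2

x, y, r = np.zeros(2), 0.0, 1.0           # initial iterate and parameters
for k in range(20):
    # Approximation problem: minimize the augmented Lagrangian over x
    res = minimize(augmented_lagrangian, x, args=(y, r))
    x = res.x
    y = y + r * c(x)                      # first-order multiplier update
    r = min(10.0 * r, 1e6)                # penalty update (one of many choices)
    if abs(c(x)) < 1e-8:                  # stop when the constraint is met
        break

print("x* ~", x, " multiplier ~", y)      # expect x* near (0.5, 0.5)
```

In the article's terminology, the pair (y, r) plays the role of the parameter sequence; the convergence analysis concerns how the iterates behave when such parameters approach a target set rather than a single target point.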