Article ID: iaor19951902
Country: Netherlands
Volume: 63
Issue: 1
Start Page Number: 65
End Page Number: 82
Publication Date: Jan 1994
Journal: Mathematical Programming (Series A)
Authors: Higle, Julia L.; Sen, Suvrajeet; Au, Kelly T.
In many instances, the exact evaluation of an objective function and its subgradients can be computationally demanding. By way of example, the authors cite problems arising in stochastic optimization, where the objective function is typically defined via multi-dimensional integration. In this paper, they address the solution of such problems by exploring the use of successive approximation schemes within subgradient optimization methods, and refer to this new class of methods as inexact subgradient algorithms. Under relatively mild conditions on the approximations, they show that the inexact subgradient algorithms inherit the convergence properties of their traditional (i.e., exact) counterparts. Within the context of stochastic optimization, the conditions imposed here allow a relaxation of the steplength requirements traditionally used in stochastic quasi-gradient methods. Additionally, the authors study methods in which steplengths may be defined adaptively, in a manner that reflects the improving accuracy of the objective function approximations as the iterations proceed. They illustrate the applicability of the approach by proposing an inexact subgradient optimization method for the solution of stochastic linear programs.
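The abstract describes the algorithmic idea only at a high level. As a rough illustration of what an inexact subgradient iteration looks like (a minimal sketch, not the authors' algorithm from the paper), the following Python code runs a projected subgradient method in which each subgradient is a Monte Carlo estimate whose sample size grows with the iteration counter, so the approximation error shrinks as the iterations proceed. The test problem (a newsvendor-style expectation under an Exp(1) random variable), the 10·k sample-size schedule, and the simple diminishing steplength are illustrative assumptions; the paper's adaptive steplength rules are not implemented here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sampled_subgradient(x, n_samples):
    """Monte Carlo estimate of a subgradient of
    f(x) = E[max(w - x, 0)] + 0.5 * E[max(x - w, 0)],  w ~ Exp(1).
    An exact subgradient is -P(w > x) + 0.5 * P(w < x)."""
    w = rng.exponential(scale=1.0, size=n_samples)
    return np.mean(-1.0 * (w > x) + 0.5 * (w < x))

def inexact_subgradient_method(x0, n_iters=500):
    """Projected subgradient iteration with inexact subgradients.

    The subgradient error is driven down by letting the Monte Carlo
    sample size grow with the iteration counter while the steplength
    diminishes, so the iterates behave like those of the exact
    subgradient method in the limit."""
    x = x0
    for k in range(1, n_iters + 1):
        g = sampled_subgradient(x, n_samples=10 * k)  # improving approximation
        step = 1.0 / k                                # diminishing steplength
        x = max(x - step * g, 0.0)                    # project onto x >= 0
    return x

# Sanity check: the true minimizer is the 2/3 quantile of Exp(1),
# i.e. ln 3, approximately 1.0986.
print(inexact_subgradient_method(x0=1.0))
```

In this sketch the two ingredients highlighted in the abstract are visible directly: the approximation scheme (sample averages of increasing accuracy standing in for an exactly computed expectation) and the steplength schedule that must be coordinated with it.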