Convergence analysis of stochastic algorithms

Article ID: iaor20041239
Country: United States
Volume: 21
Issue: 3
Start Page Number: 615
End Page Number: 628
Publication Date: Aug 1996
Journal: Mathematics of Operations Research
Authors: ,
Abstract:

This paper investigates asymptotic convergence to stationary points of gradient descent algorithms in which the functions involved are not available in closed form but are approximated by sequences of random functions. The algorithms take large stepsizes and use progressively finer precision at successive iterations. Two kinds of optimization algorithms are considered: one where the limiting function (to be minimized) is differentiable but the random approximating functions may be non-differentiable, and another where both the limiting and approximating functions are non-differentiable and convex. Such functions often arise in discrete event dynamic system models in various application areas. The analysis brings together ideas and techniques from nonlinear programming and nondifferentiable optimization on the one hand, and from stochastic processes, especially uniform laws of large numbers, on the other. A general algorithmic framework is first developed and analyzed, and then applied to the specific algorithms considered. The analysis extends the results derived to date for similar algorithms, and has potential for further extension to proving convergence of other algorithms as well.
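The scheme described in the abstract can be illustrated with a minimal sketch: gradient descent where the gradient of the limiting objective is estimated by a sample average whose sample size grows with the iteration count (progressively finer precision). The objective below, `f(x) = E[(x - xi)^2]` with Gaussian `xi`, and the schedule `n_k = k` are purely illustrative assumptions, not taken from the paper.

```python
import random

def sgd_increasing_precision(grad_sample, x0, steps=200, step=0.1, seed=0):
    """Gradient descent on a function known only through random samples.

    At iteration k the gradient is approximated by an average of k samples,
    so the approximation becomes progressively finer as the iterates advance.
    """
    rng = random.Random(seed)
    x = x0
    for k in range(1, steps + 1):
        n = k  # illustrative precision schedule: sample size grows with k
        g = sum(grad_sample(x, rng) for _ in range(n)) / n
        x -= step * g
    return x

# Hypothetical example: minimize f(x) = E[(x - xi)^2] with xi ~ N(2, 1).
# The limiting function has gradient f'(x) = 2(x - 2), so x* = 2.
def grad_sample(x, rng):
    xi = rng.gauss(2.0, 1.0)
    return 2.0 * (x - xi)  # unbiased sample of f'(x)

x_star = sgd_increasing_precision(grad_sample, x0=0.0)
```

With a fixed (large) stepsize, the iterates still settle near the minimizer because the averaging noise shrinks as the per-iteration sample size grows, which is the intuition behind the precision schedule the paper analyzes.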
