On the convergence of algorithms with implications for stochastic and nondifferentiable optimization

Article ID: iaor19921524
Country: United States
Volume: 17
Issue: 1
Start Page Number: 112
End Page Number: 131
Publication Date: Feb 1992
Journal: Mathematics of Operations Research
Authors:
Abstract:

Studies of the convergence of algorithms often revolve around the existence of a function with respect to which monotonic descent is required. This paper shows that, under relatively lenient conditions, ‘stage-dependent descent’ (not necessarily monotonic) is sufficient to guarantee convergence. This development also provides the impetus to reexamine optimization algorithms: it shows that one of the important avenues in the study of convergence, namely the theory of epi-convergence, imposes stronger conditions than are necessary to establish the convergence of an optimization algorithm. Working from a relaxation of epi-convergence, the paper introduces the notion of ∂-compatibility and proves several results that permit relaxations of the conditions imposed by previous approaches to algorithmic convergence. Finally, to illustrate the usefulness of these concepts, it combines stage-dependent descent with results derivable from ∂-compatibility to provide a basis for the convergence of a general algorithmic statement that may be used for stochastic and nondifferentiable optimization.
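The abstract's central point — that convergence can hold even when the objective does not decrease at every step — can be illustrated with a generic stochastic subgradient method. The sketch below is not the paper's algorithm or its ∂-compatibility conditions (those are not reproduced here); it is a minimal example, with hypothetical names (`noisy_subgradient_descent`, step sizes 1/(k+1)), showing non-monotonic descent on the nondifferentiable objective f(x) = |x|.

```python
import random

def f(x):
    """Nondifferentiable objective f(x) = |x|, minimized at x = 0."""
    return abs(x)

def subgradient(x):
    """A subgradient of |x|: sign(x), taking the value 0 at the kink."""
    return (x > 0) - (x < 0)

def noisy_subgradient_descent(x0, iters=5000, seed=0):
    """Stochastic subgradient method with diminishing steps a_k = 1/(k+1).

    Each subgradient is corrupted by Gaussian noise, so the objective
    values f(x_k) need not decrease at every step (descent is
    non-monotonic), yet the iterates still drift toward the minimizer
    because the steps sum to infinity while their squares are summable.
    """
    random.seed(seed)
    x = x0
    values = [f(x)]
    for k in range(iters):
        g = subgradient(x) + random.gauss(0.0, 1.0)  # noisy subgradient
        x -= g / (k + 1)                             # step size 1/(k+1)
        values.append(f(x))
    return x, values

x_final, values = noisy_subgradient_descent(5.0)
```

Running this, the recorded `values` contain many increases along the way (the monotone-descent requirement fails), while the final iterate nevertheless ends up near the minimizer — the kind of behavior that a stage-dependent descent condition, rather than a monotone one, is designed to accommodate.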
