Globally convergent stochastic optimization with optimal asymptotic distribution

Article ID: iaor19992650
Country: United Kingdom
Volume: 35
Issue: 2
Start Page Number: 395
End Page Number: 406
Publication Date: Jun 1998
Journal: Journal of Applied Probability
Authors:
Abstract:

A stochastic gradient descent method is combined with a consistent auxiliary estimate to achieve global convergence of the recursion. Using step lengths that converge to zero more slowly than 1/n, and averaging the trajectories, yields the optimal convergence rate of 1/√n and the optimal variance of the asymptotic distribution. Possible applications can be found in maximum likelihood estimation, regression analysis, training of artificial neural networks, and stochastic optimization.
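The averaging scheme summarized in the abstract is in the spirit of Polyak–Ruppert averaging. A minimal sketch is below; the noisy quadratic objective, the step-size exponent 2/3, and the iteration count are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative noisy objective: minimize f(x) = 0.5 * (x - 2)^2,
# observing only gradients corrupted by zero-mean noise.
def noisy_grad(x):
    return (x - 2.0) + rng.normal(scale=1.0)

x = 0.0      # current iterate
avg = 0.0    # running average of the trajectory
n_steps = 20000
for n in range(1, n_steps + 1):
    # Step length a_n = n^(-2/3) converges to zero more slowly than 1/n,
    # as required for the averaged iterates to attain the optimal rate.
    step = n ** (-2.0 / 3.0)
    x -= step * noisy_grad(x)
    # Polyak-Ruppert averaging: average the trajectory, not just
    # the final iterate, to get the 1/sqrt(n) rate and optimal variance.
    avg += (x - avg) / n

print(f"averaged iterate: {avg:.3f}")  # close to the minimizer 2.0
```

The individual iterates remain noisy because the step lengths decay slowly, but the trajectory average smooths that noise out, which is the mechanism behind the optimal asymptotic distribution claimed above.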
