New error bounds and their applications to convergence analysis of iterative algorithms

Article ID: iaor20013610
Country: Germany
Volume: 88
Issue: 2
Start Page Number: 341
End Page Number: 355
Publication Date: Jan 2000
Journal: Mathematical Programming
Authors:
Abstract:

We present two new error bounds for optimization problems over a convex set whose objective function f is either semianalytic or γ-strictly convex, with γ ≥ 1. We then apply these error bounds to analyze the rate of convergence of a wide class of iterative descent algorithms for the aforementioned optimization problem. Our analysis shows that the function sequence {f(x^k)} converges at least at the sublinear rate of k^(−ε) for some positive constant ε, where k is the iteration index. Moreover, the distances from the iterate sequence {x^k} to the set of stationary points of the optimization problem converge to zero at least sublinearly.
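The sublinear behavior described in the abstract can be illustrated numerically. The sketch below (an illustration, not the paper's method) runs plain gradient descent on f(x) = x⁴, a convex objective that is not strongly convex at its minimizer; the resulting sequence {f(x_k)} decays only sublinearly, roughly like k^(−ε), so the successive ratio f(x_{k+1})/f(x_k) tends to 1 rather than to a constant below 1 as it would under a linear rate.

```python
# Minimal sketch: gradient descent on f(x) = x**4, whose minimizer x* = 0
# lacks strong convexity, forcing a sublinear decay of {f(x_k)}.
def f(x):
    return x ** 4

def grad(x):
    return 4 * x ** 3

def descent(x0, step=0.01, iters=2000):
    """Run fixed-step gradient descent and record the objective values."""
    x, vals = x0, []
    for _ in range(iters):
        vals.append(f(x))
        x -= step * grad(x)
    return vals

vals = descent(1.0)
# vals decreases monotonically toward 0, but the ratio vals[k+1]/vals[k]
# approaches 1, the signature of a sublinear (power-law) rate.
```

The step size 0.01 and starting point 1.0 are arbitrary choices for the illustration; any sufficiently small step gives the same qualitative power-law decay.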
