We present two new error bounds for optimization problems over a convex set whose objective function f is either semianalytic or γ-strictly convex, with γ ≥ 1. We then apply these error bounds to analyze the rate of convergence of a wide class of iterative descent algorithms for the aforementioned optimization problem. Our analysis shows that the function sequence {f(x^k)} converges at least at the sublinear rate of k^{-ϵ} for some positive constant ϵ, where k is the iteration index. Moreover, the distances from the iterate sequence {x^k} to the set of stationary points of the optimization problem converge to zero at least sublinearly.
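As an illustrative, hedged restatement of these rate claims (not the paper's exact theorem statement), the conclusions can be read as follows, where the constants C₁, C₂ > 0, the exponents ϵ, ϵ' > 0, the limit value f̄ of {f(x^k)}, and the stationary set X̄ are placeholders introduced here for exposition:

% Sketch of the claimed sublinear rates under the above placeholder notation;
% C_1, C_2, \epsilon, \epsilon' are unspecified positive constants, \bar{f} is
% the limit of {f(x^k)}, and \bar{X} denotes the set of stationary points.
\[
  f(x^k) - \bar{f} \;\le\; C_1\, k^{-\epsilon}
  \qquad \text{and} \qquad
  \operatorname{dist}\bigl(x^k, \bar{X}\bigr) \;\le\; C_2\, k^{-\epsilon'}
  \qquad \text{for all sufficiently large } k.
\]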