Restricted strong convexity and its applications to convergence analysis of gradient-type methods in convex optimization

Article ID: iaor201526240
Volume: 9
Issue: 5
Start Page Number: 961
End Page Number: 979
Publication Date: Jun 2015
Journal: Optimization Letters
Authors:
Keywords: programming: convex
Abstract:

A great deal of interest in solving large-scale convex optimization problems has recently turned to the gradient method and its variants. To ensure linear convergence rates, current theory regularly assumes that the objective functions are strongly convex. This paper goes beyond this traditional assumption by studying a strictly weaker concept, restricted strong convexity, which was recently proposed and is satisfied by a much broader class of functions. Using restricted strong convexity, we derive linear convergence rates for (in)exact gradient-type methods. In addition, we obtain two by-products: (1) we rederive linear convergence rates of the inexact gradient method for a class of structured smooth convex optimization problems; (2) we improve the linear convergence rate of the linearized Bregman algorithm.
