A sparsity preserving stochastic gradient methods for sparse regression

Article ID: iaor2014702
Volume: 58
Issue: 2
Start Page Number: 455
End Page Number: 482
Publication Date: Jun 2014
Journal: Computational Optimization and Applications
Authors: , ,
Keywords: statistics: regression, stochastic processes
Abstract:

We propose a new stochastic first‐order algorithm for solving sparse regression problems. In each iteration, our algorithm utilizes a stochastic oracle of the subgradient of the objective function. Our algorithm is based on a stochastic version of the estimate sequence technique introduced by Nesterov (Introductory lectures on convex optimization: a basic course, Kluwer, Amsterdam, 2003). The convergence rate of our algorithm depends continuously on the noise level of the gradient. In particular, in the limiting case of a noiseless gradient, the convergence rate of our algorithm matches that of optimal deterministic gradient algorithms. We also establish some large deviation properties of our algorithm. Unlike existing stochastic gradient methods with optimal convergence rates, our algorithm has the advantage of readily enforcing sparsity at all iterations, which is a critical property for applications of sparse regression.
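The abstract gives no pseudocode, so the following is only a minimal sketch of the general sparsity-preserving idea it describes: a stochastic gradient step followed by a soft-thresholding (proximal) step, which keeps every iterate sparse. It is not the authors' estimate-sequence algorithm; the lasso objective, function names, step size, and mini-batch setup are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def stochastic_prox_gradient_lasso(A, b, lam, step=0.01, batch=1, iters=1000, seed=0):
    """Minimize (1/(2n)) * ||A x - b||^2 + lam * ||x||_1 with stochastic gradients.

    The prox (soft-thresholding) step after each gradient step zeroes out small
    coordinates, so sparsity is enforced at every iteration -- unlike plain
    stochastic subgradient methods, whose averaged iterates are typically dense.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(iters):
        idx = rng.integers(0, n, size=batch)          # sample a mini-batch of rows
        Ai, bi = A[idx], b[idx]
        g = Ai.T @ (Ai @ x - bi) / batch              # stochastic gradient of the smooth part
        x = soft_threshold(x - step * g, step * lam)  # prox step enforces sparsity
    return x
```

The paper's algorithm additionally builds on Nesterov-style estimate sequences to obtain an optimal convergence rate that degrades gracefully with the gradient noise level; the sketch above only illustrates why a prox-based update keeps all iterates sparse.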
