A coordinate gradient descent method for linearly constrained smooth optimization and support vector machines training

Article ID: iaor20106365
Volume: 47
Issue: 2
Start Page Number: 179
End Page Number: 206
Publication Date: Oct 2010
Journal: Computational Optimization and Applications
Authors: Tseng P, Yun S
Keywords: support vector machines
Abstract:

Support vector machine (SVM) training may be posed as a large quadratic program (QP) with bound constraints and a single linear equality constraint. We propose a (block) coordinate gradient descent method for solving this problem and, more generally, linearly constrained smooth optimization. Our method is closely related to decomposition methods currently popular for SVM training. We establish global convergence and, under a local error bound assumption (which is satisfied by the SVM QP), a linear rate of convergence for our method when the coordinate block is chosen by a Gauss-Southwell-type rule to ensure sufficient descent. We show that, for the SVM QP with n variables, this rule can be implemented in O(n) operations using Rockafellar's notion of conformal realization. Thus, for SVM training, our method requires only O(n) operations per iteration and, in contrast to existing decomposition methods, achieves linear convergence without additional assumptions. We report our numerical experience with the method on some large SVM QPs arising from two-class data classification. Our experience suggests that the method can be efficient for SVM training with nonlinear kernels.
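The QP in the abstract is the standard SVM dual: minimize (1/2) a'Qa - e'a subject to 0 <= a <= C and y'a = 0, where Q_ij = y_i y_j K(x_i, x_j) for a kernel K. As a concrete but hedged illustration of coordinate descent on this problem, the sketch below performs two-coordinate updates (the smallest block that can preserve the equality constraint) selected by the common maximal-violating-pair rule. This is not the paper's Gauss-Southwell-type rule implemented via conformal realization; the function names, parameters, and RBF kernel choice are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, gamma=0.5):
    # Illustrative Gram matrix: K[i, j] = exp(-gamma * ||x_i - x_j||^2).
    sq = np.sum(X ** 2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))

def two_coordinate_svm(K, y, C=1.0, tol=1e-6, max_iter=100000):
    # Minimize 0.5*a'Qa - e'a  s.t.  0 <= a <= C, y'a = 0, where
    # Q[i, j] = y[i]*y[j]*K[i, j]. Each iteration updates one pair (i, j),
    # the smallest coordinate block compatible with the equality constraint.
    n = len(y)
    a = np.zeros(n)           # feasible start: y'a = 0
    g = -np.ones(n)           # gradient Qa - e at a = 0
    for _ in range(max_iter):
        # Maximal-violating-pair selection (a simple working-set rule, not
        # the paper's O(n) Gauss-Southwell rule): i may move "up" and j may
        # move "down" without leaving the box [0, C]^n.
        F = -y * g
        up = ((y > 0) & (a < C)) | ((y < 0) & (a > 0))
        lo = ((y > 0) & (a > 0)) | ((y < 0) & (a < C))
        i = int(np.argmax(np.where(up, F, -np.inf)))
        j = int(np.argmin(np.where(lo, F, np.inf)))
        if F[i] - F[j] < tol:               # approximate KKT point: stop
            break
        # Exact minimizer along d with d_i = y_i, d_j = -y_j, a direction
        # that keeps y'a fixed; curvature d'Qd = K_ii + K_jj - 2*K_ij.
        curv = K[i, i] + K[j, j] - 2.0 * K[i, j]
        t = (F[i] - F[j]) / max(curv, 1e-12)
        # Clip the step so both updated coordinates stay in [0, C].
        t = min(t, C - a[i] if y[i] > 0 else a[i])
        t = min(t, a[j] if y[j] > 0 else C - a[j])
        a[i] += y[i] * t
        a[j] -= y[j] * t
        g += t * y * (K[:, i] - K[:, j])    # rank-two gradient update
    return a

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 2))
    y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)
    a = two_coordinate_svm(rbf_kernel(X), y, C=1.0)
    print("support vectors:", int(np.sum(a > 1e-8)))
```

Each pair update costs O(n) once the two kernel columns are available, matching the per-iteration order the abstract cites; the paper's contribution is a Gauss-Southwell-type block choice that achieves this cost while also guaranteeing a linear convergence rate.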
