Optimization problems in statistical learning: Duality and optimality conditions

Article ID: iaor20115398
Volume: 213
Issue: 2
Start Page Number: 395
End Page Number: 404
Publication Date: Sep 2011
Journal: European Journal of Operational Research
Authors: ,
Keywords: duality
Abstract:

Regularization methods are techniques for learning functions from given data. We consider regularization problems whose objective function consists of a cost function and a regularization term, with the aim of selecting a prediction function f with a finite representation $f(\cdot) = \sum_{i=1}^{n} c_i\, k(\cdot, X_i)$ which minimizes the error of prediction. Here the role of the regularizer is to avoid overfitting. In general these are convex optimization problems with not necessarily differentiable objective functions. Thus, in order to provide optimality conditions for this class of problems, one needs to appeal to specific techniques from convex analysis. In this paper we provide a general approach for deriving necessary and sufficient optimality conditions for the regularized problem via the so-called conjugate duality theory. Afterwards we apply the obtained results to the Support Vector Machines problem and the Support Vector Regression problem formulated for different cost functions.
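
For orientation, the following LaTeX sketch writes down a generic regularized learning problem of the kind described in the abstract, together with the Fenchel (conjugate) dual obtained for a quadratic regularizer; the loss L, its conjugate L*, the parameter λ and the Gram matrix K are standard notation assumed here and need not coincide with the authors' exact formulation or sign conventions.

% Primal regularized problem over an RKHS \mathcal{H} with kernel k
% (a hedged sketch; L, \lambda and K are assumed notation, not the paper's):
\[
  \min_{f \in \mathcal{H}} \; \sum_{i=1}^{n} L\bigl(Y_i, f(X_i)\bigr) + \lambda \lVert f \rVert_{\mathcal{H}}^{2},
  \qquad
  f(\cdot) = \sum_{i=1}^{n} c_i\, k(\cdot, X_i).
\]
% With the Gram matrix K_{ij} = k(X_i, X_j), the problem reduces to a
% finite-dimensional convex (not necessarily differentiable) program in c:
\[
  \min_{c \in \mathbb{R}^{n}} \; \sum_{i=1}^{n} L\bigl(Y_i, (Kc)_i\bigr) + \lambda\, c^{\top} K c .
\]
% Fenchel duality (under a suitable constraint qualification) then yields
\[
  \max_{p \in \mathbb{R}^{n}} \; -\sum_{i=1}^{n} L^{*}\bigl(Y_i, p_i\bigr) - \frac{1}{4\lambda}\, p^{\top} K p ,
\]
% where L^{*}(Y_i, \cdot) is the convex conjugate of L(Y_i, \cdot); equality of the
% primal and dual optimal values underlies the optimality conditions.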
