Tuning parameter selection in penalized generalized linear models for discrete data

Article ID: iaor201522342
Volume: 68
Issue: 4
Start Page Number: 276
End Page Number: 292
Publication Date: Nov 2014
Journal: Statistica Neerlandica
Authors: , ,
Keywords: estimation, penalty functions, regularisation techniques, logistic regression
Abstract:

In recent years, there has been increased interest in the penalized likelihood methodology, which can be used efficiently for shrinkage and selection purposes and can also yield unbiased, sparse, and continuous estimators. However, the performance of the penalized likelihood approach depends on the proper choice of the regularization parameter, so it is important to select it appropriately. To this end, the generalized cross-validation method is commonly used. In this article, we first propose new estimates of the norm of the error in the generalized linear model framework, obtained through Kantorovich inequalities. These estimates are then used to derive a tuning parameter selector for penalized generalized linear models. Unlike the standard methods, the proposed selector does not depend on resampling and therefore yields a considerable gain in computational time while producing improved results. A thorough simulation study is conducted to support the theoretical findings, and a comparison of the penalized methods with the L1, hard thresholding, and smoothly clipped absolute deviation (SCAD) penalty functions is performed for penalized logistic regression and penalized Poisson regression. A real data example is analyzed, and a discussion follows. © 2014 The Authors. Statistica Neerlandica © 2014 VVS.
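
The article's selector is derived without resampling; by way of contrast, the sketch below illustrates the standard resampling-based baseline it is compared against, namely choosing the tuning parameter lambda of L1-penalized logistic regression by K-fold cross-validation over a grid. The simulated data, the grid of lambda values, and the scikit-learn calls are illustrative assumptions, not the authors' implementation.

# Minimal sketch: standard CV-based tuning parameter selection for
# L1-penalized logistic regression (the resampling baseline, not the
# proposed Kantorovich-inequality-based selector).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegressionCV

# Simulated sparse logistic-regression data (assumption, for illustration).
X, y = make_classification(n_samples=200, n_features=20, n_informative=5,
                           random_state=0)

# Grid of tuning parameters; scikit-learn parameterizes by C = 1/lambda,
# so small C corresponds to heavy penalization.
lambdas = np.logspace(-3, 2, 30)
model = LogisticRegressionCV(Cs=1.0 / lambdas, penalty="l1",
                             solver="liblinear", cv=5,
                             scoring="neg_log_loss")
model.fit(X, y)

chosen_lambda = 1.0 / model.C_[0]
print(f"lambda selected by 5-fold CV: {chosen_lambda:.4f}")
print("number of nonzero coefficients:", np.flatnonzero(model.coef_).size)

Each refit over the lambda grid and the K folds is what the resampling step costs; the selector proposed in the article avoids this loop, which is the source of the computational savings reported in the abstract.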
