Regularization in Regression: Comparing Bayesian and Frequentist Methods in a Poorly Informative Situation
We propose a global noninformative approach for Bayesian variable selection that builds on Zellner's g-priors and is similar to Liang et al. (2008). Our proposal does not require any kind of calibration. In a benchmark setting, we compare Bayesian and frequentist regularization approaches under a low-information constraint, when the number of variables is almost equal to the number of observations. The simulated and real-dataset experiments we present here highlight the appeal of Bayesian regularization methods compared with the alternatives: they dominate frequentist methods in the sense that they provide smaller prediction errors while selecting the most relevant variables in a parsimonious way.
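To illustrate the kind of g-prior-based variable selection the abstract refers to, the sketch below scores every candidate subset of predictors by its marginal likelihood under Zellner's g-prior, using the closed form popularized by Liang et al. (2008) for a fixed g. This is a minimal illustration, not the paper's actual procedure: the fixed unit-information choice g = n, the simulated data, and the exhaustive model enumeration are all assumptions made here for the example.

```python
import numpy as np
from itertools import combinations

def log_marginal_g_prior(y, X, g):
    """Log Bayes factor of model X against the null (intercept-only) model
    under Zellner's g-prior with fixed g:
        (n-1-k)/2 * log(1+g) - (n-1)/2 * log(1 + g*(1 - R^2)).
    """
    n = len(y)
    yc = y - y.mean()
    if X.shape[1] == 0:
        return 0.0  # null model is the reference
    Xc = X - X.mean(axis=0)                      # intercept handled by centering
    beta, *_ = np.linalg.lstsq(Xc, yc, rcond=None)
    rss = np.sum((yc - Xc @ beta) ** 2)
    r2 = 1.0 - rss / np.sum(yc ** 2)
    k = X.shape[1]
    return 0.5 * (n - 1 - k) * np.log1p(g) - 0.5 * (n - 1) * np.log1p(g * (1.0 - r2))

# Simulated data (hypothetical): only the first 2 of 4 candidate predictors matter.
rng = np.random.default_rng(0)
n, p = 50, 4
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.standard_normal(n)

g = n  # unit-information prior, one common default choice of g
models = [c for r in range(p + 1) for c in combinations(range(p), r)]
scores = {m: log_marginal_g_prior(y, X[:, list(m)], g) for m in models}
best = max(scores, key=scores.get)
print("highest-scoring subset:", best)
```

The `(1+g)` term acts as an automatic penalty on model size, so larger subsets must buy a real increase in R² to win; this is the parsimony mechanism that the frequentist penalties (e.g., the lasso's L1 term) achieve through explicit tuning.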
Date of creation: 2010
Provider: CREST, 15 Boulevard Gabriel Peri, 92245 Malakoff Cedex
Phone: 01 41 17 60 81
Web page: http://www.crest.fr
Handle: RePEc:crs:wpaper:2010-43