Bayesian adaptive Lasso quantile regression
Recently, variable selection by penalized likelihood has attracted much research interest. In this paper, we propose Bayesian adaptive Lasso quantile regression (BALQR). The method extends Bayesian Lasso quantile regression by allowing different penalization parameters for different regression coefficients. Inverse gamma prior distributions are placed on the penalty parameters; we treat the hyperparameters of this prior as unknowns and estimate them along with the other parameters. A Gibbs sampler is developed to simulate the parameters from their posterior distributions. Through simulation studies and an analysis of a prostate cancer data set, we compare the proposed BALQR method with six existing Bayesian and non-Bayesian methods. Both the simulations and the prostate cancer analysis indicate that BALQR performs well in comparison to the other approaches.
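To make the idea concrete, the following is a minimal sketch of the adaptive-Lasso-penalized check-loss objective that underlies the method: quantile regression loss plus a separate L1 penalty weight for each coefficient. This is an illustrative frequentist analogue, not the paper's Gibbs sampler; the data, penalty values, and function names are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def check_loss(u, tau):
    # rho_tau(u) = u * (tau - I(u < 0)): the asymmetric absolute loss
    # minimized by the tau-th conditional quantile.
    return u * (tau - (u < 0))

def adaptive_lasso_qr_objective(beta, X, y, tau, lam):
    # Quantile check loss plus coefficient-specific L1 penalties lam_j --
    # the adaptive-Lasso feature that BALQR encodes via separate
    # penalization parameters with inverse gamma hyperpriors.
    resid = y - X @ beta
    return check_loss(resid, tau).sum() + np.sum(lam * np.abs(beta))

# Synthetic example: one truly zero coefficient (beta_2).
rng = np.random.default_rng(0)
n, p = 200, 3
X = rng.standard_normal((n, p))
beta_true = np.array([2.0, 0.0, -1.5])
y = X @ beta_true + rng.standard_normal(n)

tau = 0.5                          # median regression
lam = np.array([0.1, 5.0, 0.1])    # illustrative per-coefficient penalties
fit = minimize(adaptive_lasso_qr_objective, np.zeros(p),
               args=(X, y, tau, lam), method="Nelder-Mead")
print(fit.x)  # beta_1, beta_3 near the truth; beta_2 shrunk toward zero
```

In the Bayesian formulation the per-coefficient penalties are not fixed as above but are given inverse gamma priors and updated within the Gibbs sampler, which is what allows differing amounts of shrinkage to be learned from the data.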
Date of creation: Jul 2011
Contact details of provider:
Postal: Hoveniersberg 4, B-9000 Gent
Phone: +32 (0)9 264 34 61
Fax: +32 (0)9 264 35 92
Web page: http://www.ugent.be/eb
References:
- Zou, Hui, 2006. "The Adaptive Lasso and Its Oracle Properties," Journal of the American Statistical Association, American Statistical Association, vol. 101, pages 1418-1429, December.
- D. F. Benoit & D. Van Den Poel, 2010. "Binary quantile regression: A Bayesian approach based on the asymmetric Laplace density," Working Papers of Faculty of Economics and Business Administration, Ghent University, Belgium 10/662, Ghent University, Faculty of Economics and Business Administration.
- Park, Trevor & Casella, George, 2008. "The Bayesian Lasso," Journal of the American Statistical Association, American Statistical Association, vol. 103, pages 681-686, June.
- Wang, Hansheng & Li, Guodong & Jiang, Guohua, 2007. "Robust Regression Shrinkage and Consistent Variable Selection Through the LAD-Lasso," Journal of Business & Economic Statistics, American Statistical Association, vol. 25, pages 347-355, July.
- Koenker, Roger, 2004. "Quantile regression for longitudinal data," Journal of Multivariate Analysis, Elsevier, vol. 91(1), pages 74-89, October.
- Koenker, Roger, 2005. "Quantile Regression," Cambridge Books, Cambridge University Press, number 9780521845731.
- Roger Koenker & Kevin F. Hallock, 2001. "Quantile Regression," Journal of Economic Perspectives, American Economic Association, vol. 15(4), pages 143-156, Fall.
- Chris Hans, 2009. "Bayesian lasso regression," Biometrika, Biometrika Trust, vol. 96(4), pages 835-845.
- Hideo Kozumi & Genya Kobayashi, 2009. "Gibbs Sampling Methods for Bayesian Quantile Regression," Discussion Papers 2009-02, Kobe University, Graduate School of Business Administration.
- Fan, Jianqing & Li, Runze, 2001. "Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties," Journal of the American Statistical Association, American Statistical Association, vol. 96, pages 1348-1360, December.
- Yu, Keming & Moyeed, Rana A., 2001. "Bayesian quantile regression," Statistics & Probability Letters, Elsevier, vol. 54(4), pages 437-447, October.
- Yu, Keming & Stander, Julian, 2007. "Bayesian analysis of a Tobit quantile regression model," Journal of Econometrics, Elsevier, vol. 137(1), pages 260-276, March.