Cross-validated bagged learning
Many applications aim to learn a high-dimensional parameter of a data-generating distribution based on a sample of independent and identically distributed observations. For example, the goal might be to estimate the conditional mean of an outcome given a list of input variables. In this prediction context, bootstrap aggregating (bagging) has been introduced as a method to reduce the variance of a given estimator at little cost in bias. Bagging involves applying an estimator to multiple bootstrap samples and averaging the result across bootstrap samples. To address the curse of dimensionality, a common practice has been to apply bagging to estimators which themselves use cross-validation, thereby using cross-validation within each bootstrap sample to select fine-tuning parameters trading off the bias and variance of the bootstrap sample-specific candidate estimators. In this article we point out that, in order to achieve the correct bias-variance trade-off for the parameter of interest, one should apply the cross-validation selector externally to candidate bagged estimators indexed by these fine-tuning parameters. We use three simulations to compare the new cross-validated bagging method with bagging of cross-validated estimators and bagging of non-cross-validated estimators.
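The distinction the abstract draws can be sketched in code. Below is a minimal numpy illustration of the proposed ordering: the cross-validation selector is applied externally to candidate bagged estimators, one per value of the tuning parameter. Ridge regression is used here only as a hypothetical base learner with a single penalty parameter; all function names and the choice of learner are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def ridge_fit(X, y, lam):
    """Ridge coefficients: illustrative base learner with tuning parameter lam."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

def bagged_coef(X, y, lam, B=25):
    """Bagging: average ridge fits over B bootstrap resamples of (X, y)."""
    n = len(y)
    coefs = []
    for _ in range(B):
        idx = rng.integers(0, n, size=n)      # bootstrap sample with replacement
        coefs.append(ridge_fit(X[idx], y[idx], lam))
    return np.mean(coefs, axis=0)

def cv_bagged_learner(X, y, lambdas, V=5, B=25):
    """Cross-validated bagging: the CV selector is applied *externally*
    to the candidate bagged estimators indexed by lambda."""
    n = len(y)
    folds = np.arange(n) % V
    cv_risk = []
    for lam in lambdas:
        risk = 0.0
        for v in range(V):
            tr, va = folds != v, folds == v
            beta = bagged_coef(X[tr], y[tr], lam, B)   # bag on training fold
            risk += np.mean((y[va] - X[va] @ beta) ** 2)
        cv_risk.append(risk / V)
    best = lambdas[int(np.argmin(cv_risk))]
    # Refit the selected bagged estimator on the full sample.
    return bagged_coef(X, y, best, B), best

# Toy data: sparse linear signal plus unit-variance noise.
n, p = 200, 10
X = rng.standard_normal((n, p))
y = X @ np.array([1.0] * 3 + [0.0] * 7) + rng.standard_normal(n)
beta_hat, lam_star = cv_bagged_learner(X, y, lambdas=[0.1, 1.0, 10.0])
```

The contrast with the common practice criticized in the abstract is the nesting order: there, cross-validation runs inside each bootstrap sample to pick a sample-specific lambda before averaging; here, a single lambda is selected by cross-validating the bagged estimators themselves.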
Journal of Multivariate Analysis, Volume 98, Issue 9 (October 2007), pages 1693-1704.
RePEc handle: RePEc:eee:jmvana:v:98:y:2007:i:9:p:1693-1704.