
Variable selection by Random Forests using data with missing values


  • Hapfelmeier, A.
  • Ulm, K.


Variable selection has been suggested for Random Forests to improve data prediction and interpretation. However, its basic element, the variable importance measure, cannot be computed in a straightforward way when there are missing values in the predictor variables. Possible solutions are multiple imputation, complete case analysis, and the use of a self-contained importance measure that is able to deal with missing values. Simulation and application studies were conducted to investigate the properties of these procedures when combined with two popular variable selection methods. Findings and recommendations: complete case analysis should not be used, as it led to inaccurate variable selection. Multiple imputation is the method of choice if the selection of a variable is supposed to reflect its potential relevance in a complete-data setting. However, Random Forests are commonly used without any preprocessing of the data, as they are known to handle missing values implicitly. In that case, the self-contained importance measure permits the selection of variables that are relevant in these actual prediction models.
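The contrast between complete case analysis and imputation described above can be illustrated with a short sketch. This is not the authors' code (their work builds on conditional inference forests and mice in R); it is a hypothetical Python example using scikit-learn, with a single mean imputation standing in for the multiple imputation the paper studies, and permutation importance as the variable importance measure.

```python
# Hypothetical sketch, not the paper's implementation: compare
# complete case analysis with imputation when computing Random
# Forest variable importances on data with missing values.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 400
X = rng.normal(size=(n, 4))
# Only the first two predictors carry signal.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Introduce 20% missing values completely at random in the predictors.
X_miss = X.copy()
X_miss[rng.random(X.shape) < 0.2] = np.nan

# Complete case analysis: drop every row with any missing value.
# With 4 predictors at 20% missingness, roughly 60% of rows are lost,
# which is why the paper finds this approach unreliable.
complete = ~np.isnan(X_miss).any(axis=1)
rf_cc = RandomForestClassifier(n_estimators=200, random_state=0)
rf_cc.fit(X_miss[complete], y[complete])

# Imputation: fill in the missing values, then fit on all rows.
# (Multiple imputation would repeat this with several draws and
# pool the resulting importances.)
X_imp = SimpleImputer(strategy="mean").fit_transform(X_miss)
rf_imp = RandomForestClassifier(n_estimators=200, random_state=0)
rf_imp.fit(X_imp, y)

# Permutation importance as the variable importance measure.
imp = permutation_importance(rf_imp, X_imp, y, n_repeats=10, random_state=0)
print("rows kept by complete case analysis:", int(complete.sum()), "of", n)
print("permutation importances:", imp.importances_mean.round(3))
```

Under these assumptions, the informative predictors (the first two columns) should receive clearly higher importances than the noise variables, and the complete-case fit is left with a much smaller sample.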

Suggested Citation

  • Hapfelmeier, A. & Ulm, K., 2014. "Variable selection by Random Forests using data with missing values," Computational Statistics & Data Analysis, Elsevier, vol. 80(C), pages 129-139.
  • Handle: RePEc:eee:csdana:v:80:y:2014:i:c:p:129-139
    DOI: 10.1016/j.csda.2014.06.017

    Download full text from publisher

    Download Restriction: Full text for ScienceDirect subscribers only.

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. van Buuren, Stef & Groothuis-Oudshoorn, Karin, 2011. "mice: Multivariate Imputation by Chained Equations in R," Journal of Statistical Software, Foundation for Open Access Statistics, vol. 45(i03).
    2. Strobl, Carolin & Boulesteix, Anne-Laure & Augustin, Thomas, 2007. "Unbiased split selection for classification trees based on the Gini Index," Computational Statistics & Data Analysis, Elsevier, vol. 52(1), pages 483-501, September.
    3. Kung, Yi-Hung & Lin, Chang-Ting & Shih, Yu-Shan, 2012. "Split variable selection for tree modeling on rank data," Computational Statistics & Data Analysis, Elsevier, vol. 56(9), pages 2830-2836.
    4. Horton, Nicholas J. & Kleinman, Ken P., 2007. "Much Ado About Nothing: A Comparison of Missing Data Methods and Software to Fit Incomplete Data Regression Models," The American Statistician, American Statistical Association, vol. 61, pages 79-90, February.
    5. Doove, L.L. & Van Buuren, S. & Dusseldorp, E., 2014. "Recursive partitioning for missing data imputation in the presence of interaction effects," Computational Statistics & Data Analysis, Elsevier, vol. 72(C), pages 92-104.
    6. Archer, Kellie J. & Kimes, Ryan V., 2008. "Empirical characterization of random forest variable importance measures," Computational Statistics & Data Analysis, Elsevier, vol. 52(4), pages 2249-2260, January.
    7. Hapfelmeier, A. & Hothorn, T. & Ulm, K., 2012. "Recursive partitioning on incomplete data using surrogate decisions and multiple imputation," Computational Statistics & Data Analysis, Elsevier, vol. 56(6), pages 1552-1565.
    8. Lee, Tzu-Haw & Shih, Yu-Shan, 2006. "Unbiased variable selection for classification trees with multivariate responses," Computational Statistics & Data Analysis, Elsevier, vol. 51(2), pages 659-667, November.
    9. Shih, Yu-Shan & Tsai, Hsin-Wen, 2004. "Variable selection bias in regression trees with constant fits," Computational Statistics & Data Analysis, Elsevier, vol. 45(3), pages 595-607, April.
    10. Hapfelmeier, A. & Ulm, K., 2013. "A new variable selection approach using Random Forests," Computational Statistics & Data Analysis, Elsevier, vol. 60(C), pages 50-69.
    Full references (including those not matched with items on IDEAS)


    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.

    Cited by:

    1. repec:eee:jotrge:v:78:y:2019:i:c:p:70-86 is not listed on IDEAS
    2. Lee, Min Cherng & Mitra, Robin, 2016. "Multiply imputing missing values in data sets with mixed measurement scales using a sequence of generalised linear models," Computational Statistics & Data Analysis, Elsevier, vol. 95(C), pages 24-38.


    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:csdana:v:80:y:2014:i:c:p:129-139. See general information about how to correct material in RePEc.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact Dana Niculescu.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service hosted by the Research Division of the Federal Reserve Bank of St. Louis. RePEc uses bibliographic data supplied by the respective publishers.