Printed from https://ideas.repec.org/a/spr/advdac/v12y2018i4d10.1007_s11634-016-0276-4.html

A computationally fast variable importance test for random forests for high-dimensional data

Author

Listed:
  • Silke Janitza

    (University of Munich)

  • Ender Celik

    (University of Munich)

  • Anne-Laure Boulesteix

    (University of Munich)

Abstract

Random forests are a commonly used tool for classification and for ranking candidate predictors by means of so-called variable importance measures, which assign each variable a score reflecting its importance. A drawback of these measures is that there is no natural cutoff for discriminating between important and unimportant variables. Several approaches, for example based on hypothesis testing, have been developed to address this problem. The existing testing approaches require the repeated computation of random forests. While this may be computationally tractable in low-dimensional settings, in high-dimensional settings, which typically include thousands of candidate predictors, the computing time is enormous. In this article a computationally fast heuristic variable importance test is proposed that is appropriate for high-dimensional data in which many variables do not carry any information. The testing approach is based on a modified version of the permutation variable importance measure, inspired by cross-validation procedures. The new approach is evaluated and compared to the approach of Altmann and colleagues in simulation studies based on real data from high-dimensional binary classification settings. In these studies the new approach controls the type I error and has at least comparable power at a substantially smaller computation time. It may thus serve as a computationally fast alternative to existing procedures for high-dimensional data settings in which many variables do not carry any information. The new approach is implemented in the R package vita.
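The authors' method is implemented in the R package vita; as a rough illustration of the underlying idea only, not their exact cross-validation-based variant, the following Python sketch computes permutation variable importance on held-out data and builds a heuristic null distribution by mirroring the non-positive importance scores of the many uninformative variables (all data and variable names here are simulated and illustrative; the p-value uses a +1 correction so that no p-value is exactly zero, in the spirit of Phipson and Smyth's recommendation cited below):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Simulated high-dimensional setting: 2 informative variables, 98 pure noise.
n, p_noise = 300, 98
X_inf = rng.normal(size=(n, 2))
y = (X_inf[:, 0] + X_inf[:, 1] > 0).astype(int)
X = np.hstack([X_inf, rng.normal(size=(n, p_noise))])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
base_acc = rf.score(X_te, y_te)

# Permutation importance on held-out data: the drop in accuracy after
# shuffling one variable, which breaks its association with the response.
imp = np.empty(X.shape[1])
for j in range(X.shape[1]):
    X_perm = X_te.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    imp[j] = base_acc - rf.score(X_perm, y_te)

# Heuristic null distribution: mirror the non-positive importances of the
# (presumably uninformative) variables around zero, then compare each
# observed importance against it. The +1 terms keep p-values above zero.
nonpos = imp[imp <= 0]
null = np.concatenate([nonpos, -nonpos])
p_values = (1 + (null[None, :] >= imp[:, None]).sum(axis=1)) / (1 + null.size)
```

Because the null distribution is assembled from a single forest's importance scores rather than from repeated refitting under permuted responses, the cost stays at one (or a few) forest fits, which is what makes this style of test attractive in high dimensions.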

Suggested Citation

  • Silke Janitza & Ender Celik & Anne-Laure Boulesteix, 2018. "A computationally fast variable importance test for random forests for high-dimensional data," Advances in Data Analysis and Classification, Springer;German Classification Society - Gesellschaft für Klassifikation (GfKl);Japanese Classification Society (JCS);Classification and Data Analysis Group of the Italian Statistical Society (CLADAG);International Federation of Classification Societies (IFCS), vol. 12(4), pages 885-915, December.
  • Handle: RePEc:spr:advdac:v:12:y:2018:i:4:d:10.1007_s11634-016-0276-4
    DOI: 10.1007/s11634-016-0276-4

    Download full text from publisher

    File URL: http://link.springer.com/10.1007/s11634-016-0276-4
    File Function: Abstract
    Download Restriction: Access to the full text of the articles in this series is restricted.

    File URL: https://libkey.io/10.1007/s11634-016-0276-4?utm_source=ideas
    LibKey link: If access is restricted and your library uses this service, LibKey will redirect you to a page where you can use your library subscription to access this item.

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Ruoqing Zhu & Donglin Zeng & Michael R. Kosorok, 2015. "Reinforcement Learning Trees," Journal of the American Statistical Association, Taylor & Francis Journals, vol. 110(512), pages 1770-1784, December.
    2. Phipson Belinda & Smyth Gordon K, 2010. "Permutation P-values Should Never Be Zero: Calculating Exact P-values When Permutations Are Randomly Drawn," Statistical Applications in Genetics and Molecular Biology, De Gruyter, vol. 9(1), pages 1-16, October.
    3. Kim H. & Loh W.Y., 2001. "Classification Trees With Unbiased Multiway Splits," Journal of the American Statistical Association, American Statistical Association, vol. 96, pages 589-604, June.
    4. Anne-Laure Boulesteix, 2015. "Ten Simple Rules for Reducing Overoptimistic Reporting in Methodological Computational Research," PLOS Computational Biology, Public Library of Science, vol. 11(4), pages 1-6, April.
    5. Hapfelmeier, A. & Ulm, K., 2013. "A new variable selection approach using Random Forests," Computational Statistics & Data Analysis, Elsevier, vol. 60(C), pages 50-69.
    6. Janitza, Silke & Tutz, Gerhard & Boulesteix, Anne-Laure, 2016. "Random forest for ordinal responses: Prediction and variable selection," Computational Statistics & Data Analysis, Elsevier, vol. 96(C), pages 57-73.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Costa, Alexandre Bonnet R. & Ferreira, Pedro Cavalcanti G. & Gaglianone, Wagner P. & Guillén, Osmani Teixeira C. & Issler, João Victor & Lin, Yihao, 2021. "Machine learning and oil price point and density forecasting," Energy Economics, Elsevier, vol. 102(C).
    2. KONDO Satoshi & MIYAKAWA Daisuke & SHIRAKI Kengo & SUGA Miki & USUKI Teppei, 2019. "Using Machine Learning to Detect and Forecast Accounting Fraud," Discussion papers 19103, Research Institute of Economy, Trade and Industry (RIETI).
    3. Hornung, Roman & Boulesteix, Anne-Laure, 2022. "Interaction forests: Identifying and exploiting interpretable quantitative and qualitative interaction effects," Computational Statistics & Data Analysis, Elsevier, vol. 171(C).
    4. Hapfelmeier, Alexander & Hornung, Roman & Haller, Bernhard, 2023. "Efficient permutation testing of variable importance measures by the example of random forests," Computational Statistics & Data Analysis, Elsevier, vol. 181(C).
    5. Riccardo Rosati & Luca Romeo & Gianalberto Cecchini & Flavio Tonetto & Paolo Viti & Adriano Mancini & Emanuele Frontoni, 2023. "From knowledge-based to big data analytic model: a novel IoT and machine learning based decision support system for predictive maintenance in Industry 4.0," Journal of Intelligent Manufacturing, Springer, vol. 34(1), pages 107-121, January.
    6. Araujo, Gustavo Silva & Gaglianone, Wagner Piazza, 2023. "Machine learning methods for inflation forecasting in Brazil: New contenders versus classical models," Latin American Journal of Central Banking (previously Monetaria), Elsevier, vol. 4(2).
    7. Oyebayo Ridwan Olaniran & Ali Rashash R. Alzahrani, 2023. "On the Oracle Properties of Bayesian Random Forest for Sparse High-Dimensional Gaussian Regression," Mathematics, MDPI, vol. 11(24), pages 1-29, December.
    8. Massimiliano Fessina & Giambattista Albora & Andrea Tacchella & Andrea Zaccaria, 2022. "Which products activate a product? An explainable machine learning approach," Papers 2212.03094, arXiv.org.
    9. Jin Yutong & Benkeser David, 2022. "Identifying HIV sequences that escape antibody neutralization using random forests and collaborative targeted learning," Journal of Causal Inference, De Gruyter, vol. 10(1), pages 280-295, January.
    10. Hediger, Simon & Michel, Loris & Näf, Jeffrey, 2022. "On the use of random forest for two-sample testing," Computational Statistics & Data Analysis, Elsevier, vol. 170(C).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Gérard Biau & Erwan Scornet, 2016. "A random forest guided tour," TEST: An Official Journal of the Spanish Society of Statistics and Operations Research, Springer;Sociedad de Estadística e Investigación Operativa, vol. 25(2), pages 197-227, June.
    2. Strobl, Carolin & Boulesteix, Anne-Laure & Augustin, Thomas, 2007. "Unbiased split selection for classification trees based on the Gini Index," Computational Statistics & Data Analysis, Elsevier, vol. 52(1), pages 483-501, September.
    3. Rina Friedberg & Julie Tibshirani & Susan Athey & Stefan Wager, 2018. "Local Linear Forests," Papers 1807.11408, arXiv.org, revised Sep 2020.
    4. Liangyuan Hu & Lihua Li, 2022. "Using Tree-Based Machine Learning for Health Studies: Literature Review and Case Series," IJERPH, MDPI, vol. 19(23), pages 1-13, December.
    5. Masha Shunko & Julie Niederhoff & Yaroslav Rosokha, 2018. "Humans Are Not Machines: The Behavioral Impact of Queueing Design on Service Time," Management Science, INFORMS, vol. 64(1), pages 453-473, January.
    6. Weijun Wang & Dan Zhao & Liguo Fan & Yulong Jia, 2019. "Study on Icing Prediction of Power Transmission Lines Based on Ensemble Empirical Mode Decomposition and Feature Selection Optimized Extreme Learning Machine," Energies, MDPI, vol. 12(11), pages 1-21, June.
    7. Yiyi Huo & Yingying Fan & Fang Han, 2023. "On the adaptation of causal forests to manifold data," Papers 2311.16486, arXiv.org, revised Dec 2023.
    8. Crystal T. Nguyen & Daniel J. Luckett & Anna R. Kahkoska & Grace E. Shearrer & Donna Spruijt‐Metz & Jaimie N. Davis & Michael R. Kosorok, 2020. "Estimating individualized treatment regimes from crossover designs," Biometrics, The International Biometric Society, vol. 76(3), pages 778-788, September.
    9. Ruoqing Zhu & Ying-Qi Zhao & Guanhua Chen & Shuangge Ma & Hongyu Zhao, 2017. "Greedy outcome weighted tree learning of optimal personalized treatment rules," Biometrics, The International Biometric Society, vol. 73(2), pages 391-400, June.
    10. Romero, Julian & Rosokha, Yaroslav, 2018. "Constructing strategies in the indefinitely repeated prisoner’s dilemma game," European Economic Review, Elsevier, vol. 104(C), pages 185-219.
    11. Shih, Yu-Shan & Tsai, Hsin-Wen, 2004. "Variable selection bias in regression trees with constant fits," Computational Statistics & Data Analysis, Elsevier, vol. 45(3), pages 595-607, April.
    12. Buczak, Philip & Horn, Daniel & Pauly, Markus, 2024. "Old but Gold or New and Shiny? Comparing Tree Ensembles for Ordinal Prediction with a Classic Parametric Approach," OSF Preprints v7bcf, Center for Open Science.
    13. Theresa Ullmann & Anna Beer & Maximilian Hünemörder & Thomas Seidl & Anne-Laure Boulesteix, 2023. "Over-optimistic evaluation and reporting of novel cluster algorithms: an illustrative study," Advances in Data Analysis and Classification, Springer;German Classification Society - Gesellschaft für Klassifikation (GfKl);Japanese Classification Society (JCS);Classification and Data Analysis Group of the Italian Statistical Society (CLADAG);International Federation of Classification Societies (IFCS), vol. 17(1), pages 211-238, March.
    14. Sameer Al-Dahidi & Piero Baraldi & Miriam Fresc & Enrico Zio & Lorenzo Montelatici, 2024. "Feature Selection by Binary Differential Evolution for Predicting the Energy Production of a Wind Plant," Energies, MDPI, vol. 17(10), pages 1-19, May.
    15. Lkhagvadorj Munkhdalai & Tsendsuren Munkhdalai & Oyun-Erdene Namsrai & Jong Yun Lee & Keun Ho Ryu, 2019. "An Empirical Comparison of Machine-Learning Methods on Bank Client Credit Assessments," Sustainability, MDPI, vol. 11(3), pages 1-23, January.
    16. Roman Hornung, 2020. "Ordinal Forests," Journal of Classification, Springer;The Classification Society, vol. 37(1), pages 4-17, April.
    17. Marcella Corduas & Alfonso Piscitelli, 2017. "Modeling university student satisfaction: the case of the humanities and social studies degree programs," Quality & Quantity: International Journal of Methodology, Springer, vol. 51(2), pages 617-628, March.
    18. Lee, Tzu-Haw & Shih, Yu-Shan, 2006. "Unbiased variable selection for classification trees with multivariate responses," Computational Statistics & Data Analysis, Elsevier, vol. 51(2), pages 659-667, November.
    19. Ollech, Daniel & Webel, Karsten, 2020. "A random forest-based approach to identifying the most informative seasonality tests," Discussion Papers 55/2020, Deutsche Bundesbank.
    20. Pedro Delicado & Daniel Peña, 2023. "Understanding complex predictive models with ghost variables," TEST: An Official Journal of the Spanish Society of Statistics and Operations Research, Springer;Sociedad de Estadística e Investigación Operativa, vol. 32(1), pages 107-145, March.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:spr:advdac:v:12:y:2018:i:4:d:10.1007_s11634-016-0276-4. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do so here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.springer.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.