
Feature screening in large scale cluster analysis

Author

Listed:
  • Banerjee, Trambak
  • Mukherjee, Gourab
  • Radchenko, Peter

Abstract

We propose a novel methodology for feature screening in the clustering of massive datasets, in which both the number of features and the number of observations can potentially be very large. Taking advantage of a fusion penalization based convex clustering criterion, we propose a highly scalable screening procedure that efficiently discards non-informative features by first computing a clustering score corresponding to the clustering tree constructed for each feature, and then thresholding the resulting values. We provide theoretical support for our approach by establishing uniform non-asymptotic bounds on the clustering scores of the “noise” features. These bounds imply perfect screening of non-informative features with high probability and are derived via careful analysis of the empirical processes corresponding to the clustering trees that are constructed for each of the features by the associated clustering procedure. Through extensive simulation experiments, we compare the performance of our proposed method with other screening approaches popularly used in cluster analysis and obtain encouraging results. We demonstrate empirically that our method is applicable to cluster analysis of big datasets arising in single-cell gene expression studies.
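
The abstract above describes a screen-then-threshold workflow: score every feature marginally and discard those whose clustering score falls below a cutoff. The sketch below is a minimal illustration of that workflow only, not the paper's method: it replaces the ℓ1-fusion clustering-tree score with a crude stand-in (the relative drop in within-cluster sum of squares over the best univariate two-way split), and the function names, threshold value, and toy data are all illustrative assumptions.

    import numpy as np

    def univariate_split_score(x):
        # Relative reduction in sum of squared errors achieved by the best single
        # split point of a sorted univariate feature -- a crude proxy for a
        # per-feature clustering score, not the paper's fusion-tree score.
        x = np.sort(np.asarray(x, dtype=float))
        n = x.size
        total_sse = np.sum((x - x.mean()) ** 2)
        if total_sse == 0.0:
            return 0.0
        csum, csq = np.cumsum(x), np.cumsum(x ** 2)
        best = 0.0
        for k in range(1, n):  # left cluster x[:k], right cluster x[k:]
            left_sse = csq[k - 1] - csum[k - 1] ** 2 / k
            right_sse = (csq[-1] - csq[k - 1]) - (csum[-1] - csum[k - 1]) ** 2 / (n - k)
            best = max(best, 1.0 - (left_sse + right_sse) / total_sse)
        return best

    def screen_features(X, threshold):
        # Score each column marginally and keep the indices above the threshold.
        scores = np.array([univariate_split_score(X[:, j]) for j in range(X.shape[1])])
        return np.flatnonzero(scores > threshold), scores

    # Toy example: two bimodal (informative) features followed by 50 Gaussian
    # noise features; the threshold 0.75 is an illustrative choice, not the
    # data-driven threshold studied in the paper.
    rng = np.random.default_rng(0)
    n = 200
    informative = np.column_stack([
        np.concatenate([rng.normal(-2, 1, n // 2), rng.normal(2, 1, n // 2)]),
        np.concatenate([rng.normal(-3, 1, n // 2), rng.normal(3, 1, n // 2)]),
    ])
    X = np.column_stack([informative, rng.normal(size=(n, 50))])
    kept, scores = screen_features(X, threshold=0.75)
    print("features kept:", kept)  # expected to recover columns 0 and 1

On this toy data the two bimodal columns score well above the Gaussian noise columns, which mirrors the intuition behind thresholding, although the paper's theory concerns the ℓ1-fusion clustering scores rather than this proxy.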

Suggested Citation

  • Banerjee, Trambak & Mukherjee, Gourab & Radchenko, Peter, 2017. "Feature screening in large scale cluster analysis," Journal of Multivariate Analysis, Elsevier, vol. 161(C), pages 191-212.
  • Handle: RePEc:eee:jmvana:v:161:y:2017:i:c:p:191-212
    DOI: 10.1016/j.jmva.2017.08.001

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0047259X17300271
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.jmva.2017.08.001?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Johnstone, Iain M. & Lu, Arthur Yu, 2009. "On Consistency and Sparsity for Principal Components Analysis in High Dimensions," Journal of the American Statistical Association, American Statistical Association, vol. 104(486), pages 682-693.
    2. Wei‐Chien Chang, 1983. "On Using Principal Components before Separating a Mixture of Two Multivariate Normal Distributions," Journal of the Royal Statistical Society Series C, Royal Statistical Society, vol. 32(3), pages 267-275, November.
    3. Shen, Xiaotong & Huang, Hsin-Cheng, 2010. "Grouping Pursuit Through a Regularization Solution Surface," Journal of the American Statistical Association, American Statistical Association, vol. 105(490), pages 727-739.
    4. T. Tony Cai & Wenguang Sun, 2017. "Optimal screening and discovery of sparse signals with applications to multistage high throughput studies," Journal of the Royal Statistical Society Series B, Royal Statistical Society, vol. 79(1), pages 197-223, January.
    5. Arias-Castro, Ery & Pu, Xiao, 2017. "A simple approach to sparse clustering," Computational Statistics & Data Analysis, Elsevier, vol. 105(C), pages 217-228.
    6. Peter Radchenko & Gourab Mukherjee, 2017. "Convex clustering via ℓ1 fusion penalization," Journal of the Royal Statistical Society Series B, Royal Statistical Society, vol. 79(5), pages 1527-1546, November.
    7. Chan, Yao-ban & Hall, Peter, 2010. "Using Evidence of Mixed Populations to Select Variables for Clustering Very High-Dimensional Data," Journal of the American Statistical Association, American Statistical Association, vol. 105(490), pages 798-809.
    8. Sijian Wang & Ji Zhu, 2008. "Variable Selection for Model-Based High-Dimensional Clustering and Its Application to Microarray Data," Biometrics, The International Biometric Society, vol. 64(2), pages 440-448, June.
    9. Howard D. Bondell & Brian J. Reich, 2008. "Simultaneous Regression Shrinkage, Variable Selection, and Supervised Clustering of Predictors with OSCAR," Biometrics, The International Biometric Society, vol. 64(1), pages 115-123, March.
    10. Xiaotong Shen & Hsin-Cheng Huang & Wei Pan, 2012. "Simultaneous supervised clustering and feature selection over a graph," Biometrika, Biometrika Trust, vol. 99(4), pages 899-914.
    11. M.‐Y. Cheng & P. Hall, 1998. "Calibrating the excess mass and dip tests of modality," Journal of the Royal Statistical Society Series B, Royal Statistical Society, vol. 60(3), pages 579-589.
    12. Jerome H. Friedman & Jacqueline J. Meulman, 2004. "Clustering objects on subsets of attributes (with discussion)," Journal of the Royal Statistical Society Series B, Royal Statistical Society, vol. 66(4), pages 815-849, November.
    13. Witten, Daniela M. & Tibshirani, Robert, 2010. "A Framework for Feature Selection in Clustering," Journal of the American Statistical Association, American Statistical Association, vol. 105(490), pages 713-726.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Peter Radchenko & Gourab Mukherjee, 2017. "Convex clustering via ℓ1 fusion penalization," Journal of the Royal Statistical Society Series B, Royal Statistical Society, vol. 79(5), pages 1527-1546, November.
    2. Floriello, Davide & Vitelli, Valeria, 2017. "Sparse clustering of functional data," Journal of Multivariate Analysis, Elsevier, vol. 154(C), pages 1-18.
    3. Arias-Castro, Ery & Pu, Xiao, 2017. "A simple approach to sparse clustering," Computational Statistics & Data Analysis, Elsevier, vol. 105(C), pages 217-228.
    4. Jeon, Jong-June & Kwon, Sunghoon & Choi, Hosik, 2017. "Homogeneity detection for the high-dimensional generalized linear model," Computational Statistics & Data Analysis, Elsevier, vol. 114(C), pages 61-74.
    5. Hosik Choi & Seokho Lee, 2019. "Convex clustering for binary data," Advances in Data Analysis and Classification, Springer;German Classification Society - Gesellschaft für Klassifikation (GfKl);Japanese Classification Society (JCS);Classification and Data Analysis Group of the Italian Statistical Society (CLADAG);International Federation of Classification Societies (IFCS), vol. 13(4), pages 991-1018, December.
    6. Ronglai Shen & Qianxing Mo & Nikolaus Schultz & Venkatraman E Seshan & Adam B Olshen & Jason Huse & Marc Ladanyi & Chris Sander, 2012. "Integrative Subtype Discovery in Glioblastoma Using iCluster," PLOS ONE, Public Library of Science, vol. 7(4), pages 1-9, April.
    7. Bouveyron, Charles & Brunet-Saumard, Camille, 2014. "Model-based clustering of high-dimensional data: A review," Computational Statistics & Data Analysis, Elsevier, vol. 71(C), pages 52-78.
    8. Mihee Lee & Haipeng Shen & Jianhua Z. Huang & J. S. Marron, 2010. "Biclustering via Sparse Singular Value Decomposition," Biometrics, The International Biometric Society, vol. 66(4), pages 1087-1095, December.
    9. Wang, Wuyi & Su, Liangjun, 2021. "Identifying latent group structures in nonlinear panels," Journal of Econometrics, Elsevier, vol. 220(2), pages 272-295.
    10. Zhang, Yingying & Wang, Huixia Judy & Zhu, Zhongyi, 2019. "Quantile-regression-based clustering for panel data," Journal of Econometrics, Elsevier, vol. 213(1), pages 54-67.
    11. Baolin Wu, 2013. "Sparse cluster analysis of large-scale discrete variables with application to single nucleotide polymorphism data," Journal of Applied Statistics, Taylor & Francis Journals, vol. 40(2), pages 358-367, February.
    12. Peña, Daniel & Prieto Fernández, Francisco Javier & Rendon Aguirre, Janeth Carolina, 2017. "Clustering Big Data by Extreme Kurtosis Projections," DES - Working Papers. Statistics and Econometrics. WS 24522, Universidad Carlos III de Madrid. Departamento de Estadística.
    13. Charles Bouveyron & Camille Brunet-Saumard, 2014. "Discriminative variable selection for clustering with the sparse Fisher-EM algorithm," Computational Statistics, Springer, vol. 29(3), pages 489-513, June.
    14. repec:jss:jstsof:47:i05 is not listed on IDEAS
    15. Maarten M. Kampert & Jacqueline J. Meulman & Jerome H. Friedman, 2017. "rCOSA: A Software Package for Clustering Objects on Subsets of Attributes," Journal of Classification, Springer;The Classification Society, vol. 34(3), pages 514-547, October.
    16. Marion, Rebecca & Lederer, Johannes & Govaerts, Bernadette & von Sachs, Rainer, 2021. "VC-PCR: A Prediction Method based on Supervised Variable Selection and Clustering," LIDAM Discussion Papers ISBA 2021040, Université catholique de Louvain, Institute of Statistics, Biostatistics and Actuarial Sciences (ISBA).
    17. Gaynor, Sheila & Bair, Eric, 2017. "Identification of relevant subtypes via preweighted sparse clustering," Computational Statistics & Data Analysis, Elsevier, vol. 116(C), pages 139-154.
    18. Sunkyung Kim & Wei Pan & Xiaotong Shen, 2013. "Network-Based Penalized Regression With Application to Genomic Data," Biometrics, The International Biometric Society, vol. 69(3), pages 582-593, September.
    19. Lu Tang & Peter X.‐K. Song, 2021. "Poststratification fusion learning in longitudinal data analysis," Biometrics, The International Biometric Society, vol. 77(3), pages 914-928, September.
    20. Thierry Chekouo & Alejandro Murua, 2018. "High-dimensional variable selection with the plaid mixture model for clustering," Computational Statistics, Springer, vol. 33(3), pages 1475-1496, September.
    21. Pedro Galeano & Daniel Peña, 2019. "Data science, big data and statistics," TEST: An Official Journal of the Spanish Society of Statistics and Operations Research, Springer;Sociedad de Estadística e Investigación Operativa, vol. 28(2), pages 289-329, June.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:jmvana:v:161:y:2017:i:c:p:191-212. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/622892/description#description.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.