Printed from https://ideas.repec.org/a/eee/csdana/v53y2009i7p2747-2753.html

A simple method for screening variables before clustering microarray data

Authors

  • Krzanowski, Wojtek J.
  • Hand, David J.

Abstract

A simple and computationally fast procedure is proposed for screening a large number of variables prior to cluster analysis. Each variable is considered in turn, the sample is divided into the two groups that maximise the ratio of between-group to within-group sum of squares for that variable, and the achieved value of this ratio is tested to see if it is significantly greater than what would be expected when partitioning a sample from a single homogeneous population. Those variables that achieve significance are then used in the cluster analysis. It is suggested that significance levels be assessed using a Monte Carlo computational procedure; by assuming within-group normality an analytical approximation is derived, but caution in its use is advocated. Computational details are provided for both the partitioning and the testing. The procedure is applied to several microarray data sets, showing that it can often achieve good results both quickly and simply.
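The procedure in the abstract is simple enough to sketch. The following is a minimal illustration, not the authors' published code: for a single variable, the two-group partition maximising the ratio of between-group to within-group sum of squares is a cut point in the sorted sample, so all n-1 cuts can be scanned; the Monte Carlo test then compares the observed ratio with ratios from homogeneous samples of the same size (the ratio is scale-invariant, so standard normal null samples suffice). Function names and the simulation count are illustrative assumptions.

```python
import numpy as np

def best_split_ratio(x):
    """Maximum between/within sum-of-squares ratio over all two-group
    partitions of a 1-D sample. The optimal partition is contiguous in
    sorted order, so we scan every cut point."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    grand_mean = x.mean()
    total_ss = np.sum((x - grand_mean) ** 2)
    csum = np.cumsum(x)
    best = 0.0
    for k in range(1, n):                      # first group = k smallest values
        m1 = csum[k - 1] / k
        m2 = (csum[-1] - csum[k - 1]) / (n - k)
        between = k * (m1 - grand_mean) ** 2 + (n - k) * (m2 - grand_mean) ** 2
        within = total_ss - between            # total SS = between + within
        if within > 0:
            best = max(best, between / within)
    return best

def screen_variable(x, n_sim=999, rng=None):
    """Monte Carlo p-value for one variable: compare the observed maximal
    ratio with ratios achieved by homogeneous normal samples. The ratio is
    invariant to location and scale, so standard normal draws are a valid
    single-population null."""
    rng = np.random.default_rng(rng)
    obs = best_split_ratio(x)
    sims = [best_split_ratio(rng.standard_normal(len(x))) for _ in range(n_sim)]
    p_value = (1 + sum(s >= obs for s in sims)) / (n_sim + 1)
    return obs, p_value
```

In a screening application, `screen_variable` would be run on each variable (gene) in turn, and only those achieving significance would be passed to the cluster analysis, as the abstract describes.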

Suggested Citation

  • Krzanowski, Wojtek J. & Hand, David J., 2009. "A simple method for screening variables before clustering microarray data," Computational Statistics & Data Analysis, Elsevier, vol. 53(7), pages 2747-2753, May.
  • Handle: RePEc:eee:csdana:v:53:y:2009:i:7:p:2747-2753

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0167-9473(09)00036-X
    Download Restriction: Full text for ScienceDirect subscribers only.

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Lawrence Hubert & Phipps Arabie, 1985. "Comparing partitions," Journal of Classification, Springer;The Classification Society, vol. 2(1), pages 193-218, December.
    2. Mahlet G. Tadesse & Joseph G. Ibrahim & George L. Mutter, 2003. "Identification of Differentially Expressed Genes in High-Density Oligonucleotide Arrays Accounting for the Quantification Limits of the Technology," Biometrics, The International Biometric Society, vol. 59(3), pages 542-554, September.
    3. Liu, Yufeng & Hayes, David Neil & Nobel, Andrew & Marron, J. S., 2008. "Statistical Significance of Clustering for High-Dimension, Low-Sample Size Data," Journal of the American Statistical Association, American Statistical Association, vol. 103(483), pages 1281-1293.
    4. Sinae Kim & Mahlet G. Tadesse & Marina Vannucci, 2006. "Variable selection in clustering via Dirichlet process mixture models," Biometrika, Biometrika Trust, vol. 93(4), pages 877-893, December.
    5. Michael Brusco & J. Cradit, 2001. "A variable-selection heuristic for K-means clustering," Psychometrika, Springer;The Psychometric Society, vol. 66(2), pages 249-270, June.
    6. E. Fowlkes & R. Gnanadesikan & J. Kettenring, 1988. "Variable selection in clustering," Journal of Classification, Springer;The Classification Society, vol. 5(2), pages 205-228, September.
    7. Tadesse, Mahlet G. & Sha, Naijun & Vannucci, Marina, 2005. "Bayesian Variable Selection in Clustering High-Dimensional Data," Journal of the American Statistical Association, American Statistical Association, vol. 100, pages 602-617, June.
    8. Douglas Steinley & Michael J. Brusco, 2007. "Initializing K-means Batch Clustering: A Critical Evaluation of Several Techniques," Journal of Classification, Springer;The Classification Society, vol. 24(1), pages 99-121, June.

    Citations

    Citations are extracted by the CitEc Project.

    Cited by:

    1. Brusco, Michael J. & Steinley, Douglas, 2011. "Exact and approximate algorithms for variable selection in linear discriminant analysis," Computational Statistics & Data Analysis, Elsevier, vol. 55(1), pages 123-131, January.
    2. Brusco, Michael J., 2014. "A comparison of simulated annealing algorithms for variable selection in principal component analysis and discriminant analysis," Computational Statistics & Data Analysis, Elsevier, vol. 77(C), pages 38-53.
    3. Gao, Jinxin & Hitchcock, David B., 2010. "James-Stein shrinkage to improve k-means cluster analysis," Computational Statistics & Data Analysis, Elsevier, vol. 54(9), pages 2113-2127, September.
    4. Pacheco, Joaquín & Casado, Silvia & Porras, Santiago, 2013. "Exact methods for variable selection in principal component analysis: Guide functions and pre-selection," Computational Statistics & Data Analysis, Elsevier, vol. 57(1), pages 95-111.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Cathy Maugis & Gilles Celeux & Marie-Laure Martin-Magniette, 2009. "Variable Selection for Clustering with Gaussian Mixture Models," Biometrics, The International Biometric Society, vol. 65(3), pages 701-709, September.
    2. Aurora Torrente & Juan Romo, 2021. "Initializing k-means Clustering by Bootstrap and Data Depth," Journal of Classification, Springer;The Classification Society, vol. 38(2), pages 232-256, July.
    3. Michael Brusco & Douglas Steinley, 2015. "Affinity Propagation and Uncapacitated Facility Location Problems," Journal of Classification, Springer;The Classification Society, vol. 32(3), pages 443-480, October.
    4. Douglas Steinley & Michael Brusco, 2008. "Selection of Variables in Cluster Analysis: An Empirical Comparison of Eight Procedures," Psychometrika, Springer;The Psychometric Society, vol. 73(1), pages 125-144, March.
    5. Isabella Morlini & Sergio Zani, 2012. "Dissimilarity and similarity measures for comparing dendrograms and their applications," Advances in Data Analysis and Classification, Springer;German Classification Society - Gesellschaft für Klassifikation (GfKl);Japanese Classification Society (JCS);Classification and Data Analysis Group of the Italian Statistical Society (CLADAG);International Federation of Classification Societies (IFCS), vol. 6(2), pages 85-105, July.
    6. Crook Oliver M. & Gatto Laurent & Kirk Paul D. W., 2019. "Fast approximate inference for variable selection in Dirichlet process mixtures, with an application to pan-cancer proteomics," Statistical Applications in Genetics and Molecular Biology, De Gruyter, vol. 18(6), pages 1-20, December.
    7. Thierry Chekouo & Alejandro Murua, 2018. "High-dimensional variable selection with the plaid mixture model for clustering," Computational Statistics, Springer, vol. 33(3), pages 1475-1496, September.
    8. Matthieu Marbac & Mohammed Sedki & Étienne Patin, 2020. "Variable Selection for Mixed Data Clustering: Application in Human Population Genomics," Journal of Classification, Springer;The Classification Society, vol. 37(1), pages 124-142, April.
    9. Tsai, Chieh-Yuan & Chiu, Chuang-Cheng, 2008. "Developing a feature weight self-adjustment mechanism for a K-means clustering algorithm," Computational Statistics & Data Analysis, Elsevier, vol. 52(10), pages 4658-4672, June.
    10. Jerzy Korzeniewski, 2016. "New Method Of Variable Selection For Binary Data Cluster Analysis," Statistics in Transition New Series, Polish Statistical Association, vol. 17(2), pages 295-304, June.
    11. Brian J. Reich & Howard D. Bondell, 2011. "A Spatial Dirichlet Process Mixture Model for Clustering Population Genetics Data," Biometrics, The International Biometric Society, vol. 67(2), pages 381-390, June.
    12. Ricardo Fraiman & Badih Ghattas & Marcela Svarc, 2013. "Interpretable clustering using unsupervised binary trees," Advances in Data Analysis and Classification, Springer;German Classification Society - Gesellschaft für Klassifikation (GfKl);Japanese Classification Society (JCS);Classification and Data Analysis Group of the Italian Statistical Society (CLADAG);International Federation of Classification Societies (IFCS), vol. 7(2), pages 125-145, June.
    13. Stefano Tonellato & Andrea Pastore, 2013. "On the comparison of model-based clustering solutions," Working Papers 2013:05, Department of Economics, University of Venice "Ca' Foscari".
    14. J. Fernando Vera & Rodrigo Macías, 2021. "On the Behaviour of K-Means Clustering of a Dissimilarity Matrix by Means of Full Multidimensional Scaling," Psychometrika, Springer;The Psychometric Society, vol. 86(2), pages 489-513, June.
    15. Chakraborty, Sounak, 2009. "Bayesian binary kernel probit model for microarray based cancer classification and gene selection," Computational Statistics & Data Analysis, Elsevier, vol. 53(12), pages 4198-4209, October.
    16. Monsuru Adepeju & Samuel Langton & Jon Bannister, 2021. "Anchored k-medoids: a novel adaptation of k-medoids further refined to measure long-term instability in the exposure to crime," Journal of Computational Social Science, Springer, vol. 4(2), pages 655-680, November.
    17. Michael Brusco & Douglas Steinley, 2007. "A Comparison of Heuristic Procedures for Minimum Within-Cluster Sums of Squares Partitioning," Psychometrika, Springer;The Psychometric Society, vol. 72(4), pages 583-600, December.
    18. Michael C. Thrun & Alfred Ultsch, 2021. "Using Projection-Based Clustering to Find Distance- and Density-Based Clusters in High-Dimensional Data," Journal of Classification, Springer;The Classification Society, vol. 38(2), pages 280-312, July.
    19. Niwan Wattanakitrungroj & Saranya Maneeroj & Chidchanok Lursinsap, 2017. "Versatile Hyper-Elliptic Clustering Approach for Streaming Data Based on One-Pass-Thrown-Away Learning," Journal of Classification, Springer;The Classification Society, vol. 34(1), pages 108-147, April.
    20. Anzanello, Michel J. & Fogliatto, Flavio S., 2011. "Selecting the best clustering variables for grouping mass-customized products involving workers' learning," International Journal of Production Economics, Elsevier, vol. 130(2), pages 268-276, April.



    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.