
Variance-Based Cluster Selection Criteria in a K-Means Framework for One-Mode Dissimilarity Data

Author

Listed:
  • J. Fernando Vera

    (University of Granada)

  • Rodrigo Macías

    (Centro de Investigación en Matemáticas, Unidad Monterrey)

Abstract

One of the main problems in cluster analysis is that of determining the number of groups in the data. In general, the approach taken depends on the cluster method used. For K-means, some of the most widely employed criteria are formulated in terms of the decomposition of the total point scatter of a two-mode data set of N points in p dimensions, optimally arranged into K classes. This paper addresses the formulation of criteria to determine the number of clusters in the general situation in which the available information for clustering is a one-mode $$N\times N$$ dissimilarity matrix describing the objects. In this framework, p and the coordinates of the points are usually unknown, and the application of criteria originally formulated for two-mode data sets depends on their possible reformulation in the one-mode situation. The decomposition of the variability of the clustered objects is proposed in terms of the corresponding block-shaped partition of the dissimilarity matrix. Within-block and between-block dispersion values for the partitioned dissimilarity matrix are derived, and variance-based criteria are then formulated in order to determine the number of groups in the data. A Monte Carlo experiment was carried out to study the performance of the proposed criteria. For simulated clustered points in p dimensions, greater efficiency in recovering the number of clusters is obtained when the criteria are calculated from the related Euclidean distances rather than from the known two-mode data set, particularly for unequal-sized clusters and in low-dimensionality situations. For simulated dissimilarity data sets, the proposed criteria always outperform the results obtained when they are calculated from their original formulation, using dissimilarities instead of distances.
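To illustrate the general idea behind variance-based criteria computed directly from a dissimilarity matrix, the sketch below uses the standard identity that, for Euclidean distances, the within-cluster sum of squares equals the sum of squared within-cluster dissimilarities divided by twice the cluster size. It is a minimal, generic illustration: the function names (block_dispersion, variance_ratio) and the Calinski-Harabasz-type ratio are assumptions chosen for illustration, not the exact within-block and between-block decomposition or the specific criteria derived in the paper.

```python
import numpy as np

def block_dispersion(D, labels):
    """Within-block (W) and between-block (B) dispersion from a one-mode
    N x N dissimilarity matrix D and a cluster assignment 'labels'.

    Assumes D holds Euclidean distances, so the identity
        W_k = sum_{i,j in C_k} d_ij^2 / (2 * n_k)
    recovers the within-cluster sum of squares without coordinates.
    Generic sketch only, not the paper's exact decomposition.
    """
    D2 = np.asarray(D, dtype=float) ** 2
    N = D2.shape[0]
    total = D2.sum() / (2.0 * N)               # total point scatter T
    within = 0.0
    for k in np.unique(labels):
        idx = np.where(labels == k)[0]
        within += D2[np.ix_(idx, idx)].sum() / (2.0 * len(idx))
    return within, total - within              # (W, B = T - W)

def variance_ratio(D, labels):
    """Calinski-Harabasz-type criterion (B/(K-1)) / (W/(N-K)),
    evaluated directly from the dissimilarity matrix (requires K >= 2)."""
    labels = np.asarray(labels)
    N, K = len(labels), len(np.unique(labels))
    W, B = block_dispersion(D, labels)
    return (B / (K - 1)) / (W / (N - K))
```

In practice, one would obtain a partition of the objects for each candidate K (for example, from a dissimilarity-based K-means or K-medoids run), compute the criterion for each K, and select the value of K that maximizes the ratio or locates an elbow, depending on the criterion considered.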

Suggested Citation

  • J. Fernando Vera & Rodrigo Macías, 2017. "Variance-Based Cluster Selection Criteria in a K-Means Framework for One-Mode Dissimilarity Data," Psychometrika, Springer;The Psychometric Society, vol. 82(2), pages 275-294, June.
  • Handle: RePEc:spr:psycho:v:82:y:2017:i:2:d:10.1007_s11336-017-9561-1
    DOI: 10.1007/s11336-017-9561-1

    Download full text from publisher

    File URL: http://link.springer.com/10.1007/s11336-017-9561-1
    File Function: Abstract
    Download Restriction: Access to the full text of the articles in this series is restricted.

    File URL: https://libkey.io/10.1007/s11336-017-9561-1?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a copy you can access with your library subscription.

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Rocci, Roberto & Vichi, Maurizio, 2008. "Two-mode multi-partitioning," Computational Statistics & Data Analysis, Elsevier, vol. 52(4), pages 1984-2003, January.
    2. J. C. Gower & W. J. Krzanowski, 1999. "Analysis of distance for structured multivariate data and extensions to multivariate analysis of variance," Journal of the Royal Statistical Society Series C, Royal Statistical Society, vol. 48(4), pages 505-519.
    3. Glenn Milligan & Martha Cooper, 1985. "An examination of procedures for determining the number of clusters in a data set," Psychometrika, Springer;The Psychometric Society, vol. 50(2), pages 159-179, June.
    4. Mark Chiang & Boris Mirkin, 2010. "Intelligent Choice of the Number of Clusters in K-Means Clustering: An Experimental Study with Different Cluster Spreads," Journal of Classification, Springer;The Classification Society, vol. 27(1), pages 3-40, March.
    5. J. Vera & Rodrigo Macías & Willem Heiser, 2009. "A Latent Class Multidimensional Scaling Model for Two-Way One-Mode Continuous Rating Dissimilarity Data," Psychometrika, Springer;The Psychometric Society, vol. 74(2), pages 297-315, June.
    6. Willem Heiser & Patrick Groenen, 1997. "Cluster differences scaling with a within-clusters loss component and a fuzzy successive approximation strategy to avoid local minima," Psychometrika, Springer;The Psychometric Society, vol. 62(1), pages 63-83, March.
    7. Robert Tibshirani & Guenther Walther & Trevor Hastie, 2001. "Estimating the number of clusters in a data set via the gap statistic," Journal of the Royal Statistical Society Series B, Royal Statistical Society, vol. 63(2), pages 411-423.
    8. Sugar, Catherine A. & James, Gareth M., 2003. "Finding the Number of Clusters in a Dataset: An Information-Theoretic Approach," Journal of the American Statistical Association, American Statistical Association, vol. 98, pages 750-763, January.
    9. Douglas Steinley & Michael J. Brusco, 2007. "Initializing K-means Batch Clustering: A Critical Evaluation of Several Techniques," Journal of Classification, Springer;The Classification Society, vol. 24(1), pages 99-121, June.
    10. Melnykov, Volodymyr & Chen, Wei-Chen & Maitra, Ranjan, 2012. "MixSim: An R Package for Simulating Data to Study Performance of Clustering Algorithms," Journal of Statistical Software, Foundation for Open Access Statistics, vol. 51(i12).
    11. Wayne DeSarbo & J. Carroll & Linda Clark & Paul Green, 1984. "Synthesized clustering: A method for amalgamating alternative clustering bases with differential weighting of variables," Psychometrika, Springer;The Psychometric Society, vol. 49(1), pages 57-78, March.
    12. Glenn Milligan, 1985. "An algorithm for generating artificial test clusters," Psychometrika, Springer;The Psychometric Society, vol. 50(1), pages 123-127, March.
    13. J. Vera & Rodrigo Macías & Willem Heiser, 2013. "Cluster Differences Unfolding for Two-Way Two-Mode Preference Rating Data," Journal of Classification, Springer;The Classification Society, vol. 30(3), pages 370-396, October.

    Citations



    Cited by:

    1. J. Fernando Vera & Rodrigo Macías, 2021. "On the Behaviour of K-Means Clustering of a Dissimilarity Matrix by Means of Full Multidimensional Scaling," Psychometrika, Springer;The Psychometric Society, vol. 86(2), pages 489-513, June.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. J. Fernando Vera & Rodrigo Macías, 2021. "On the Behaviour of K-Means Clustering of a Dissimilarity Matrix by Means of Full Multidimensional Scaling," Psychometrika, Springer;The Psychometric Society, vol. 86(2), pages 489-513, June.
    2. J. Vera & Rodrigo Macías & Willem Heiser, 2013. "Cluster Differences Unfolding for Two-Way Two-Mode Preference Rating Data," Journal of Classification, Springer;The Classification Society, vol. 30(3), pages 370-396, October.
    3. Li, Pai-Ling & Chiou, Jeng-Min, 2011. "Identifying cluster number for subspace projected functional data clustering," Computational Statistics & Data Analysis, Elsevier, vol. 55(6), pages 2090-2103, June.
    4. Yi Peng & Yong Zhang & Gang Kou & Yong Shi, 2012. "A Multicriteria Decision Making Approach for Estimating the Number of Clusters in a Data Set," PLOS ONE, Public Library of Science, vol. 7(7), pages 1-9, July.
    5. Z. Volkovich & Z. Barzily & G.-W. Weber & D. Toledano-Kitai & R. Avros, 2012. "An application of the minimal spanning tree approach to the cluster stability problem," Central European Journal of Operations Research, Springer;Slovak Society for Operations Research;Hungarian Operational Research Society;Czech Society for Operations Research;Österr. Gesellschaft für Operations Research (ÖGOR);Slovenian Society Informatika - Section for Operational Research;Croatian Operational Research Society, vol. 20(1), pages 119-139, March.
    6. Koltcov, Sergei, 2018. "Application of Rényi and Tsallis entropies to topic modeling optimization," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 512(C), pages 1192-1204.
    7. Douglas Steinley, 2007. "Validating Clusters with the Lower Bound for Sum-of-Squares Error," Psychometrika, Springer;The Psychometric Society, vol. 72(1), pages 93-106, March.
    8. Lingsong Meng & Dorina Avram & George Tseng & Zhiguang Huo, 2022. "Outcome‐guided sparse K‐means for disease subtype discovery via integrating phenotypic data with high‐dimensional transcriptomic data," Journal of the Royal Statistical Society Series C, Royal Statistical Society, vol. 71(2), pages 352-375, March.
    9. Fujita, André & Takahashi, Daniel Y. & Patriota, Alexandre G., 2014. "A non-parametric method to estimate the number of clusters," Computational Statistics & Data Analysis, Elsevier, vol. 73(C), pages 27-39.
    10. Kaczynska, S. & Marion, R. & Von Sachs, R., 2020. "Comparison of Cluster Validity Indices and Decision Rules for Different Degrees of Cluster Separation," LIDAM Discussion Papers ISBA 2020009, Université catholique de Louvain, Institute of Statistics, Biostatistics and Actuarial Sciences (ISBA).
    11. Douglas Steinley & Michael Brusco, 2008. "Selection of Variables in Cluster Analysis: An Empirical Comparison of Eight Procedures," Psychometrika, Springer;The Psychometric Society, vol. 73(1), pages 125-144, March.
    12. Julian Rossbroich & Jeffrey Durieux & Tom F. Wilderjans, 2022. "Model Selection Strategies for Determining the Optimal Number of Overlapping Clusters in Additive Overlapping Partitional Clustering," Journal of Classification, Springer;The Classification Society, vol. 39(2), pages 264-301, July.
    13. Fischer, Aurélie, 2011. "On the number of groups in clustering," Statistics & Probability Letters, Elsevier, vol. 81(12), pages 1771-1781.
    14. Fang, Yixin & Wang, Junhui, 2012. "Selection of the number of clusters via the bootstrap method," Computational Statistics & Data Analysis, Elsevier, vol. 56(3), pages 468-477.
    15. Jane L. Harvill & Priya Kohli & Nalini Ravishanker, 2017. "Clustering Nonlinear, Nonstationary Time Series Using BSLEX," Methodology and Computing in Applied Probability, Springer, vol. 19(3), pages 935-955, September.
    16. Z. Volkovich & D. Toledano-Kitai & G.-W. Weber, 2013. "Self-learning K-means clustering: a global optimization approach," Journal of Global Optimization, Springer, vol. 56(2), pages 219-232, June.
    17. Mark Chiang & Boris Mirkin, 2010. "Intelligent Choice of the Number of Clusters in K-Means Clustering: An Experimental Study with Different Cluster Spreads," Journal of Classification, Springer;The Classification Society, vol. 27(1), pages 3-40, March.
    18. Yujia Li & Xiangrui Zeng & Chien‐Wei Lin & George C. Tseng, 2022. "Simultaneous estimation of cluster number and feature sparsity in high‐dimensional cluster analysis," Biometrics, The International Biometric Society, vol. 78(2), pages 574-585, June.
    19. Fang, Yixin & Wang, Junhui, 2011. "Penalized cluster analysis with applications to family data," Computational Statistics & Data Analysis, Elsevier, vol. 55(6), pages 2128-2136, June.
    20. Henner Gimpel & Daniel Rau & Maximilian Röglinger, 2018. "Understanding FinTech start-ups – a taxonomy of consumer-oriented service offerings," Electronic Markets, Springer;IIM University of St. Gallen, vol. 28(3), pages 245-264, August.


    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.