
Optimal Landmark Point Selection Using Clustering for Manifold Modeling and Data Classification

Author

Listed:
  • Manazhy Rashmi

    (Research Scholar, NIT Calicut)

  • Praveen Sankaran

    (NIT Calicut)

Abstract

As data volume and dimensionality continue to grow, effective and efficient methods are needed to obtain low-dimensional features that describe the true structure of the data. Most nonlinear dimensionality reduction (NLDR) methods use the Euclidean distances between data points to form a general idea of the data manifold structure. Isomap uses the geodesic distances between data points and then applies classical multidimensional scaling (cMDS) to obtain low-dimensional features. As the data size increases, however, Isomap becomes computationally expensive. To overcome this disadvantage, Landmark Isomap (L-Isomap) selects a subset of data points, called landmark points, and computes the geodesic distances from these points to all other, non-landmark points. Traditionally, landmark points are chosen at random, without considering any statistical property of the data manifold. We contend that the quality of the extracted features depends on the selection of the landmark points. In applications such as data classification, the overall accuracy depends on the quality of these features, so landmark point selection can play a crucial role. In this paper, we propose a clustering approach to obtaining the landmark points. These points are then used to represent the data, and Fisher's linear discriminants are used for classification. The proposed method is tested on several datasets to verify the efficacy of the approach.
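
A minimal sketch of this pipeline, assuming scikit-learn and SciPy, the built-in digits dataset as a stand-in for the paper's datasets, and illustrative settings (n_landmarks, n_neighbors, n_components): landmarks are the data points nearest to k-means centroids, geodesic distances are shortest paths on a k-nearest-neighbour graph, the embedding follows the standard Landmark MDS step of L-Isomap, and Fisher's linear discriminant classifies the resulting features. This is a reconstruction of the general idea, not the authors' implementation.

```python
# Hedged sketch: clustering-based landmark selection + L-Isomap-style embedding + LDA.
# Dataset, parameter values, and library choices are illustrative assumptions.
import numpy as np
from scipy.sparse.csgraph import shortest_path
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import pairwise_distances
from sklearn.model_selection import train_test_split
from sklearn.neighbors import kneighbors_graph

X, y = load_digits(return_X_y=True)
n_landmarks, n_neighbors, n_components = 40, 10, 10

# 1. Select landmark points by clustering: the data point closest to each
#    k-means centroid becomes a landmark (instead of a random pick).
centers = KMeans(n_clusters=n_landmarks, n_init=10, random_state=0).fit(X).cluster_centers_
landmarks = pairwise_distances(centers, X).argmin(axis=1)

# 2. Geodesic distances from the landmarks to all points, approximated by
#    shortest paths on a symmetrised k-nearest-neighbour graph.
#    (If the graph is disconnected, distances become infinite; raise n_neighbors.)
graph = kneighbors_graph(X, n_neighbors=n_neighbors, mode="distance")
D = shortest_path(graph, directed=False, indices=landmarks)   # (n_landmarks, n_samples)

# 3. Landmark MDS: classical MDS on the landmark-landmark distances, then
#    triangulate every other point from its squared distances to the landmarks.
D2_ll = D[:, landmarks] ** 2
m = n_landmarks
J = np.eye(m) - np.ones((m, m)) / m
B = -0.5 * J @ D2_ll @ J                                      # double-centred Gram matrix
eigvals, eigvecs = np.linalg.eigh(B)
top = np.argsort(eigvals)[::-1][:n_components]
top = top[eigvals[top] > 1e-12]                               # keep only the positive spectrum
L_pinv = (eigvecs[:, top] / np.sqrt(eigvals[top])).T          # rows: v_i^T / sqrt(lambda_i)
delta_mean = D2_ll.mean(axis=0)
features = 0.5 * (L_pinv @ (delta_mean[:, None] - D ** 2)).T  # (n_samples, n_components)

# 4. Fisher's linear discriminant on the low-dimensional features.
Xtr, Xte, ytr, yte = train_test_split(features, y, random_state=0, stratify=y)
lda = LinearDiscriminantAnalysis().fit(Xtr, ytr)
print("classification accuracy:", lda.score(Xte, yte))
```

Snapping each centroid to its nearest data point keeps the landmarks on the observed manifold, which is the property a clustering-based selection is meant to exploit over a purely random choice.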

Suggested Citation

  • Manazhy Rashmi & Praveen Sankaran, 2019. "Optimal Landmark Point Selection Using Clustering for Manifold Modeling and Data Classification," Journal of Classification, Springer;The Classification Society, vol. 36(1), pages 94-112, April.
  • Handle: RePEc:spr:jclass:v:36:y:2019:i:1:d:10.1007_s00357-018-9285-7
    DOI: 10.1007/s00357-018-9285-7

    Download full text from publisher

    File URL: http://link.springer.com/10.1007/s00357-018-9285-7
    File Function: Abstract
    Download Restriction: Access to the full text of the articles in this series is restricted.

    File URL: https://libkey.io/10.1007/s00357-018-9285-7?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a source where you can use your library subscription to access this item

    As access to this document is restricted, you may want to search for a different version of it.


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Jerzy Korzeniewski, 2016. "New Method Of Variable Selection For Binary Data Cluster Analysis," Statistics in Transition New Series, Polish Statistical Association, vol. 17(2), pages 295-304, June.
    2. Ricardo Fraiman & Badih Ghattas & Marcela Svarc, 2013. "Interpretable clustering using unsupervised binary trees," Advances in Data Analysis and Classification, Springer;German Classification Society - Gesellschaft für Klassifikation (GfKl);Japanese Classification Society (JCS);Classification and Data Analysis Group of the Italian Statistical Society (CLADAG);International Federation of Classification Societies (IFCS), vol. 7(2), pages 125-145, June.
    3. Faicel Chamroukhi, 2016. "Piecewise Regression Mixture for Simultaneous Functional Data Clustering and Optimal Segmentation," Journal of Classification, Springer;The Classification Society, vol. 33(3), pages 374-411, October.
    4. Aurora Torrente & Juan Romo, 2021. "Initializing k-means Clustering by Bootstrap and Data Depth," Journal of Classification, Springer;The Classification Society, vol. 38(2), pages 232-256, July.
    5. Jiang, Yawen & Jia, Caiyan & Yu, Jian, 2013. "An efficient community detection method based on rank centrality," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 392(9), pages 2182-2194.
    6. J. Fernando Vera & Rodrigo Macías, 2021. "On the Behaviour of K-Means Clustering of a Dissimilarity Matrix by Means of Full Multidimensional Scaling," Psychometrika, Springer;The Psychometric Society, vol. 86(2), pages 489-513, June.
    7. Gianluigi Migliavacca & Marco Rossi & Dario Siface & Matteo Marzoli & Hakan Ergun & Raúl Rodríguez-Sánchez & Maxime Hanot & Guillaume Leclerq & Nuno Amaro & Aleksandr Egorov & Jawana Gabrielski & Björ, 2021. "The Innovative FlexPlan Grid-Planning Methodology: How Storage and Flexible Resources Could Help in De-Bottlenecking the European System," Energies, MDPI, vol. 14(4), pages 1-28, February.
    8. Michael Brusco & Douglas Steinley, 2015. "Affinity Propagation and Uncapacitated Facility Location Problems," Journal of Classification, Springer;The Classification Society, vol. 32(3), pages 443-480, October.
    9. Gehad Ismail Sayed & Ashraf Darwish & Aboul Ella Hassanien, 2020. "Binary Whale Optimization Algorithm and Binary Moth Flame Optimization with Clustering Algorithms for Clinical Breast Cancer Diagnoses," Journal of Classification, Springer;The Classification Society, vol. 37(1), pages 66-96, April.
    10. Dirk Depril & Iven Mechelen & Tom Wilderjans, 2012. "Lowdimensional Additive Overlapping Clustering," Journal of Classification, Springer;The Classification Society, vol. 29(3), pages 297-320, October.
    11. Monsuru Adepeju & Samuel Langton & Jon Bannister, 2021. "Anchored k-medoids: a novel adaptation of k-medoids further refined to measure long-term instability in the exposure to crime," Journal of Computational Social Science, Springer, vol. 4(2), pages 655-680, November.
    12. Michael Brusco & Douglas Steinley, 2007. "A Comparison of Heuristic Procedures for Minimum Within-Cluster Sums of Squares Partitioning," Psychometrika, Springer;The Psychometric Society, vol. 72(4), pages 583-600, December.
    13. Michael C. Thrun & Alfred Ultsch, 2021. "Using Projection-Based Clustering to Find Distance- and Density-Based Clusters in High-Dimensional Data," Journal of Classification, Springer;The Classification Society, vol. 38(2), pages 280-312, July.
    14. Simon Wiersma & Dr. Michael Heinrich & Prof. Dr. Tobias Just, 2018. "La Aplicación del Análisis Clúster en los Mercados Inmobiliarios," LARES lares_2018_paper_23-heinr, Latin American Real Estate Society (LARES).
    15. Daniel McNeish & Jeffrey R. Harring, 2017. "The Effect of Model Misspecification on Growth Mixture Model Class Enumeration," Journal of Classification, Springer;The Classification Society, vol. 34(2), pages 223-248, July.
    16. Tom Wilderjans & Dirk Depril & Iven Van Mechelen, 2013. "Additive Biclustering: A Comparison of One New and Two Existing ALS Algorithms," Journal of Classification, Springer;The Classification Society, vol. 30(1), pages 56-74, April.
    17. Niwan Wattanakitrungroj & Saranya Maneeroj & Chidchanok Lursinsap, 2017. "Versatile Hyper-Elliptic Clustering Approach for Streaming Data Based on One-Pass-Thrown-Away Learning," Journal of Classification, Springer;The Classification Society, vol. 34(1), pages 108-147, April.
    18. Junqi Wang & Rundong Liu & Linfeng Zhang & Hussain Syed ASAD & Erlin Meng, 2019. "Triggering Optimal Control of Air Conditioning Systems by Event-Driven Mechanism: Comparing Direct and Indirect Approaches," Energies, MDPI, vol. 12(20), pages 1-20, October.
    19. Gehad Ismail Sayed & Ashraf Darwish & Aboul Ella Hassanien, 2018. "A New Chaotic Whale Optimization Algorithm for Features Selection," Journal of Classification, Springer;The Classification Society, vol. 35(2), pages 300-344, July.
    20. Krzanowski, Wojtek J. & Hand, David J., 2009. "A simple method for screening variables before clustering microarray data," Computational Statistics & Data Analysis, Elsevier, vol. 53(7), pages 2747-2753, May.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:spr:jclass:v:36:y:2019:i:1:d:10.1007_s00357-018-9285-7. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.springer.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.