
Optimal Landmark Point Selection Using Clustering for Manifold Modeling and Data Classification

Author

Listed:
  • Manazhy Rashmi

    (Research Scholar, NIT Calicut)

  • Praveen Sankaran

    (NIT Calicut)

Abstract

As data volume and dimensions continue to grow, effective and efficient methods are needed to obtain the low-dimensional features of the data that describe its true structure. Most nonlinear dimensionality reduction (NLDR) methods use the Euclidean distance between data points to form a general idea of the data manifold structure. Isomap instead uses the geodesic distance between data points and then applies classical multidimensional scaling (cMDS) to obtain low-dimensional features. As the data size increases, Isomap becomes computationally expensive. To overcome this disadvantage, Landmark Isomap (L-Isomap) uses a subset of selected data points, called landmark points, and computes the geodesic distance from these points to all other non-landmark points. Traditionally, landmark points are selected at random, without considering any statistical property of the data manifold. We contend that the quality of the extracted features depends on the selection of the landmark points. In applications such as data classification, the net accuracy depends on the quality of the selected features, and hence landmark point selection may play a crucial role. In this paper, we propose a clustering approach to obtain the landmark points. These new points are then used to represent the data, and Fisher’s linear discriminants are used for classification. The proposed method is tested on different datasets to verify the efficacy of the approach.
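
The pipeline the abstract describes can be sketched in code. The following is a minimal, illustrative Python sketch and not the authors' exact procedure: k-means centroids are mapped to their nearest data points to serve as landmarks (one plausible clustering-based selection), geodesic distances are computed over a k-nearest-neighbour graph, non-landmark points are embedded with the standard landmark-MDS triangulation, and Fisher's linear discriminant is applied to the resulting features. The dataset, neighbourhood size, number of landmarks, and embedding dimension are arbitrary choices made only for this example.

```python
# Illustrative sketch of a clustering-based Landmark Isomap pipeline followed
# by Fisher's linear discriminant classification. All parameter values and
# helper names are assumptions for the example, not taken from the paper.

import numpy as np
from scipy.sparse.csgraph import shortest_path
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import pairwise_distances
from sklearn.model_selection import train_test_split
from sklearn.neighbors import kneighbors_graph


def cluster_landmarks(X, n_landmarks, random_state=0):
    """Pick the data point nearest each k-means centroid as a landmark."""
    km = KMeans(n_clusters=n_landmarks, n_init=10, random_state=random_state).fit(X)
    dists = pairwise_distances(km.cluster_centers_, X)
    return np.unique(dists.argmin(axis=1))


def landmark_isomap(X, landmark_idx, n_neighbors=10, n_components=2):
    """Embed all points from geodesic distances to the landmarks only."""
    # k-NN graph; assumed connected, otherwise infinite geodesic distances appear.
    graph = kneighbors_graph(X, n_neighbors=n_neighbors, mode="distance")
    # Geodesic (shortest-path) distances from each landmark to every point.
    D = shortest_path(graph, method="D", directed=False, indices=landmark_idx)
    # Classical MDS on the landmark-landmark squared-distance submatrix.
    Dll = D[:, landmark_idx] ** 2
    n = Dll.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ Dll @ J
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:n_components]     # leading eigenvalues,
    vals, vecs = vals[order], vecs[:, order]          # assumed positive
    # Distance-based triangulation (landmark MDS) embeds every point,
    # landmarks included, from its squared distances to the landmarks.
    L_pinv = vecs / np.sqrt(vals)                     # columns v_i / sqrt(lambda_i)
    mean_sq = Dll.mean(axis=1, keepdims=True)
    Y = -0.5 * L_pinv.T @ (D ** 2 - mean_sq)
    return Y.T                                        # one embedding row per point


if __name__ == "__main__":
    X, y = load_digits(return_X_y=True)
    landmarks = cluster_landmarks(X, n_landmarks=60)
    Z = landmark_isomap(X, landmarks, n_neighbors=10, n_components=10)
    Z_tr, Z_te, y_tr, y_te = train_test_split(Z, y, test_size=0.3, random_state=0)
    clf = LinearDiscriminantAnalysis().fit(Z_tr, y_tr)   # Fisher's LDA
    print("test accuracy:", clf.score(Z_te, y_te))
```

In practice the neighbourhood size and the number of landmark points would be tuned per dataset, and the paper itself should be consulted for the exact clustering scheme used to place the landmarks.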

Suggested Citation

  • Manazhy Rashmi & Praveen Sankaran, 2019. "Optimal Landmark Point Selection Using Clustering for Manifold Modeling and Data Classification," Journal of Classification, Springer;The Classification Society, vol. 36(1), pages 94-112, April.
  • Handle: RePEc:spr:jclass:v:36:y:2019:i:1:d:10.1007_s00357-018-9285-7
    DOI: 10.1007/s00357-018-9285-7

    Download full text from publisher

    File URL: http://link.springer.com/10.1007/s00357-018-9285-7
    File Function: Abstract
    Download Restriction: Access to the full text of the articles in this series is restricted.

    File URL: https://libkey.io/10.1007/s00357-018-9285-7?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Douglas Steinley & Michael J. Brusco, 2007. "Initializing K-means Batch Clustering: A Critical Evaluation of Several Techniques," Journal of Classification, Springer;The Classification Society, vol. 24(1), pages 99-121, June.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Faicel Chamroukhi, 2016. "Piecewise Regression Mixture for Simultaneous Functional Data Clustering and Optimal Segmentation," Journal of Classification, Springer;The Classification Society, vol. 33(3), pages 374-411, October.
    2. Jiang, Yawen & Jia, Caiyan & Yu, Jian, 2013. "An efficient community detection method based on rank centrality," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 392(9), pages 2182-2194.
    3. J. Fernando Vera & Rodrigo Macías, 2021. "On the Behaviour of K-Means Clustering of a Dissimilarity Matrix by Means of Full Multidimensional Scaling," Psychometrika, Springer;The Psychometric Society, vol. 86(2), pages 489-513, June.
    4. Gianluigi Migliavacca & Marco Rossi & Dario Siface & Matteo Marzoli & Hakan Ergun & Raúl Rodríguez-Sánchez & Maxime Hanot & Guillaume Leclerq & Nuno Amaro & Aleksandr Egorov & Jawana Gabrielski & Björ, 2021. "The Innovative FlexPlan Grid-Planning Methodology: How Storage and Flexible Resources Could Help in De-Bottlenecking the European System," Energies, MDPI, vol. 14(4), pages 1-28, February.
    5. Michael Brusco & Douglas Steinley, 2015. "Affinity Propagation and Uncapacitated Facility Location Problems," Journal of Classification, Springer;The Classification Society, vol. 32(3), pages 443-480, October.
    6. Michael Brusco & Douglas Steinley, 2007. "A Comparison of Heuristic Procedures for Minimum Within-Cluster Sums of Squares Partitioning," Psychometrika, Springer;The Psychometric Society, vol. 72(4), pages 583-600, December.
    7. Tom Wilderjans & Dirk Depril & Iven Van Mechelen, 2013. "Additive Biclustering: A Comparison of One New and Two Existing ALS Algorithms," Journal of Classification, Springer;The Classification Society, vol. 30(1), pages 56-74, April.
    8. Junqi Wang & Rundong Liu & Linfeng Zhang & Hussain Syed ASAD & Erlin Meng, 2019. "Triggering Optimal Control of Air Conditioning Systems by Event-Driven Mechanism: Comparing Direct and Indirect Approaches," Energies, MDPI, vol. 12(20), pages 1-20, October.
    9. Gehad Ismail Sayed & Ashraf Darwish & Aboul Ella Hassanien, 2018. "A New Chaotic Whale Optimization Algorithm for Features Selection," Journal of Classification, Springer;The Classification Society, vol. 35(2), pages 300-344, July.
    10. Meldrum, James R. & Champ, Patricia A. & Bond, Craig A., 2013. "Heterogeneous nonmarket benefits of managing white pine blister rust in high-elevation pine forests," Journal of Forest Economics, Elsevier, vol. 19(1), pages 61-77.
    11. Antonello Maruotti & Antonio Punzo, 2021. "Initialization of Hidden Markov and Semi‐Markov Models: A Critical Evaluation of Several Strategies," International Statistical Review, International Statistical Institute, vol. 89(3), pages 447-480, December.
    12. Jerzy Korzeniewski, 2016. "New Method Of Variable Selection For Binary Data Cluster Analysis," Statistics in Transition new series, Główny Urząd Statystyczny (Polska), vol. 17(2), pages 295-304, June.
    13. Joeri Hofmans & Eva Ceulemans & Douglas Steinley & Iven Mechelen, 2015. "On the Added Value of Bootstrap Analysis for K-Means Clustering," Journal of Classification, Springer;The Classification Society, vol. 32(2), pages 268-284, July.
    14. Juan José Fernández-Durán & María Mercedes Gregorio-Domínguez, 2021. "Consumer Segmentation Based on Use Patterns," Journal of Classification, Springer;The Classification Society, vol. 38(1), pages 72-88, April.
    15. Briamonte, Lucia & Piatto, Paolo & Macaluso, Dario & Rubertucci, Mariagrazia, 2023. "Trends and support models in public expenditure on agriculture: An Italian perspective," Economia agro-alimentare / Food Economy, Italian Society of Agri-food Economics/Società Italiana di Economia Agro-Alimentare (SIEA), vol. 25(2), October.
    16. Jaehong Yu & Hua Zhong & Seoung Bum Kim, 2020. "An Ensemble Feature Ranking Algorithm for Clustering Analysis," Journal of Classification, Springer;The Classification Society, vol. 37(2), pages 462-489, July.
    17. Ekaterina Kovaleva & Boris Mirkin, 2015. "Bisecting K-Means and 1D Projection Divisive Clustering: A Unified Framework and Experimental Comparison," Journal of Classification, Springer;The Classification Society, vol. 32(3), pages 414-442, October.
    18. Douglas Steinley & Gretchen Hendrickson & Michael Brusco, 2015. "A Note on Maximizing the Agreement Between Partitions: A Stepwise Optimal Algorithm and Some Properties," Journal of Classification, Springer;The Classification Society, vol. 32(1), pages 114-126, April.
    19. Li Lin & Daniel T. L. Shek, 2021. "Meaning-in-Life Profiles among Chinese Late Adolescents: Associations with Readiness for Political Participation," IJERPH, MDPI, vol. 18(11), pages 1-16, May.
    20. Volodymyr Melnykov & Semhar Michael, 2020. "Clustering Large Datasets by Merging K-Means Solutions," Journal of Classification, Springer;The Classification Society, vol. 37(1), pages 97-123, April.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:spr:jclass:v:36:y:2019:i:1:d:10.1007_s00357-018-9285-7. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.springer.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.