Printed from https://ideas.repec.org/a/plo/pone00/0314111.html

Optimizing deep learning models for glaucoma screening with vision transformers for resource efficiency and the pie augmentation method

Authors

Listed:
  • Sirikorn Sangchocanonta
  • Pakinee Pooprasert
  • Nichapa Lerthirunvibul
  • Kanyarak Patchimnan
  • Phongphan Phienphanich
  • Adirek Munthuli
  • Sujittra Puangarom
  • Rath Itthipanichpong
  • Kitiya Ratanawongphaibul
  • Sunee Chansangpetch
  • Anita Manassakorn
  • Visanee Tantisevi
  • Prin Rojanapongpun
  • Charturong Tantibundhit

Abstract

Glaucoma is the leading cause of irreversible vision impairment, making early detection critical. AI-based glaucoma screening typically relies on fundus imaging. To address the resource and time demands of screening with a convolutional neural network (CNN), we adopted the Data-efficient image Transformer (DeiT), a vision transformer known for its lower computational requirements, which reduced preprocessing time by a factor of 10. Our approach used the meticulously annotated GlauCUTU-DATA dataset, curated by ophthalmologists through consensus and comprising both unanimous-agreement (3/3) and majority-agreement (2/3) data. Because DeiT initially underperformed the CNN, we introduced the "pie method," an augmentation technique aligned with the ISNT rule, together with a polar transformation that improves cup-region visibility and matches the vision transformer's input format, raising performance to levels comparable with the CNN. On the 3/3 data, excluding the superior and nasal regions, sensitivity increased by 40.18%, from 47.06% to 88.24%, especially for glaucoma suspects. The average area under the curve (AUC) ± standard deviation (SD) for glaucoma, glaucoma suspects, and no glaucoma was 92.63 ± 4.39%, 92.35 ± 4.39%, and 92.32 ± 1.45%, respectively. On the 2/3 data, excluding the superior and temporal regions, sensitivity for diagnosing glaucoma increased by 11.36%, from 47.73% to 59.09%. The average AUC ± SD for glaucoma, glaucoma suspects, and no glaucoma was 68.22 ± 4.45%, 68.23 ± 4.39%, and 73.09 ± 3.05%, respectively. Across both datasets, the AUC values for glaucoma, glaucoma suspects, and no glaucoma were 84.53%, 84.54%, and 91.05%, respectively, approaching those of a CNN model, which achieved 84.70%, 84.69%, and 93.19%.
Moreover, the incorporation of attention maps from DeiT facilitated the precise localization of clinically significant areas, such as the disc rim and notching, thereby enhancing the overall effectiveness of glaucoma screening.
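The polar transformation and quadrant-wise "pie" masking described in the abstract can be sketched roughly as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the function names, the angular conventions for the ISNT quadrants (inferior, superior, nasal, temporal), and the handling of eye laterality are all hypothetical.

```python
import numpy as np

def polar_transform(img, out_shape=(224, 224)):
    """Unwrap a square optic-nerve-head crop around its center into polar
    coordinates (rows = radius, cols = angle). Pure-NumPy nearest-neighbor
    sampling; output size of 224 is an illustrative ViT-style input size."""
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    n_r, n_theta = out_shape
    radii = np.linspace(0.0, min(cy, cx), n_r)
    thetas = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(radii, thetas, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    return img[ys, xs]

def pie_mask(shape, exclude=("superior", "nasal"), right_eye=True):
    """Boolean mask that zeroes out the 90-degree 'pie slices' named in
    `exclude` (ISNT quadrants) and keeps the rest. The convention that the
    nasal quadrant lies on the image's right side for a right eye is an
    assumption for this sketch."""
    h, w = shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    # Angle in degrees: 0 = image right, counterclockwise (y axis points down).
    ang = np.degrees(np.arctan2(cy - ys, xs - cx)) % 360
    nasal, temporal = ("nasal", "temporal") if right_eye else ("temporal", "nasal")
    quadrant = {          # each quadrant spans 90 degrees centered on its axis
        nasal: (315, 45), "superior": (45, 135),
        temporal: (135, 225), "inferior": (225, 315),
    }
    keep = np.ones(shape, dtype=bool)
    for name in exclude:
        lo, hi = quadrant[name]
        sector = (ang >= lo) & (ang < hi) if lo < hi else (ang >= lo) | (ang < hi)
        keep &= ~sector
    return keep
```

A typical pipeline under these assumptions would mask the excluded quadrants first (`img * pie_mask(img.shape[:2])[..., None]` for an RGB crop) and then feed the polar-transformed result to the transformer.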

Suggested Citation

  • Sirikorn Sangchocanonta & Pakinee Pooprasert & Nichapa Lerthirunvibul & Kanyarak Patchimnan & Phongphan Phienphanich & Adirek Munthuli & Sujittra Puangarom & Rath Itthipanichpong & Kitiya Ratanawongphaibul & Sunee Chansangpetch & Anita Manassakorn & Visanee Tantisevi & Prin Rojanapongpun & Charturong Tantibundhit, 2025. "Optimizing deep learning models for glaucoma screening with vision transformers for resource efficiency and the pie augmentation method," PLOS ONE, Public Library of Science, vol. 20(3), pages 1-28, March.
  • Handle: RePEc:plo:pone00:0314111
    DOI: 10.1371/journal.pone.0314111

    Download full text from publisher

    File URL: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0314111
    Download Restriction: no

    File URL: https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0314111&type=printable
    Download Restriction: no

    File URL: https://libkey.io/10.1371/journal.pone.0314111?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a location where your library subscription provides access to this item.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:plo:pone00:0314111. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: plosone (email available below). General contact details of provider: https://journals.plos.org/plosone/ .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.