Author
Listed:
- Steve Okyere-Gyamfi
- Michael Asante
- Kwame Ofosuhene Peasah
- Yaw Marfo Missah
- Vivian Akoto-Adjepong
Abstract
To enhance crop yield, detecting leaf diseases has become a crucial research focus, and deep learning and computer vision excel at this kind of digital image processing. Various deep-learning techniques have been applied to plant leaf disease detection; however, achieving high accuracy remains a challenge. Basic convolutional neural networks (CNNs) struggle with changes in image orientation and rotation, among other variations, resulting in subpar performance, and they need extensive data covering a wide range of variations to perform well. CapsNet is an innovative deep-learning architecture designed to address these limitations of CNNs: it performs well without needing vast amounts of data covering many variations. CapsNets have limitations of their own, such as the encoder network attending to every element in the image and the crowding problem. As a result, they perform well on simple image recognition tasks but struggle with more complex images. To address these challenges, we introduce a new CapsNet model, CCFM-CapsNet. This model incorporates contrast-limited adaptive histogram equalization (CLAHE) to reduce image noise and the colour difference histogram (CDH) to extract crucial features. Max-pooling and dropout layers are also added to the original CapsNet model for identifying and classifying diseases in apples, bananas, grapes, corn, mangoes, peppers, potatoes, rice, and tomatoes, as well as for classifying the Fashion-MNIST and CIFAR-10 datasets. The proposed CCFM-CapsNet achieves significantly high validation accuracies of 99.53%, 95.24%, 99.75%, 97.40%, 99.13%, 100%, 99.77%, 100%, 98.54%, 93.48%, and 82.34%, with corresponding parameter counts in millions (M) of 4.68M, 4.68M, 4.68M, 4.68M, 4.79M, 4.63M, 4.66M, 4.68M, 4.84M, 2.39M, and 4.84M for the aforementioned datasets respectively, outperforming the traditional CapsNet and other advanced CapsNet models.
Consequently, the CCFM-CapsNet model can be utilized effectively as a smart tool for identifying plant diseases and also in achieving Sustainable Development Goal 2 (Zero Hunger), which aims to end global hunger by the year 2030.
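The abstract's CLAHE preprocessing step rests on the idea of clipping a histogram before equalizing, so that noise in near-uniform regions is not over-amplified. The sketch below is a simplified, global (single-tile) illustration of that contrast-limiting idea in NumPy; it is not the paper's implementation, and full CLAHE additionally equalizes local tiles and interpolates between them. The function name and clip value are illustrative assumptions.

```python
import numpy as np

def clip_limited_equalize(img, clip_limit=0.02):
    """Global histogram equalization with a clip limit, illustrating the
    contrast-limiting idea behind CLAHE. Full CLAHE applies this per
    local tile and bilinearly interpolates the mappings."""
    # Normalised 256-bin histogram of the 8-bit image.
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    hist = hist.astype(np.float64) / img.size
    # Clip bins at the limit and redistribute the excess mass evenly,
    # which caps how steep (noise-amplifying) the mapping can get.
    excess = np.clip(hist - clip_limit, 0, None).sum()
    hist = np.minimum(hist, clip_limit) + excess / 256.0
    # Cumulative distribution -> intensity lookup table.
    cdf = np.cumsum(hist)
    lut = np.round(255 * cdf / cdf[-1]).astype(np.uint8)
    return lut[img]

# Toy low-contrast image: values crowded in [100, 140).
rng = np.random.default_rng(0)
img = rng.integers(100, 140, size=(64, 64), dtype=np.uint8)
out = clip_limited_equalize(img)
print(img.min(), img.max(), "->", out.min(), out.max())
```

The output range is stretched well beyond the original [100, 140) band, while the clip limit keeps the remapping gentler than plain histogram equalization would be.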
Suggested Citation
Steve Okyere-Gyamfi & Michael Asante & Kwame Ofosuhene Peasah & Yaw Marfo Missah & Vivian Akoto-Adjepong, 2025.
"Contrast limited adaptive histogram equalization (CLAHE) and colour difference histogram (CDH) feature merging capsule network (CCFMCapsNet) for complex image recognition,"
PLOS ONE, Public Library of Science, vol. 20(10), pages 1-27, October.
Handle: RePEc:plo:pone00:0335393
DOI: 10.1371/journal.pone.0335393
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:plo:pone00:0335393. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: plosone (email available below). General contact details of provider: https://journals.plos.org/plosone/ .
Please note that corrections may take a couple of weeks to filter through the various RePEc services.