Author
Listed:
- Heyang Wang
(College of Information Technology, Jilin Agricultural University, Changchun 130118, China)
- Jinghuan Hu
(College of Information Technology, Jilin Agricultural University, Changchun 130118, China)
- Yunlong Ji
(College of Information Technology, Jilin Agricultural University, Changchun 130118, China)
- Chong Peng
(College of Electronic Science and Engineering, Jilin University, Changchun 130012, China)
- Yu Bao
(School of Life Science, Changchun Normal University, Changchun 130032, China)
- Hang Zhu
(College of Information Technology, Jilin Agricultural University, Changchun 130118, China)
- Caocan Zhu
(College of Information Technology, Jilin Agricultural University, Changchun 130118, China)
- Mengchao Chen
(College of Information Technology, Jilin Agricultural University, Changchun 130118, China)
- Ye Mu
(College of Information Technology, Jilin Agricultural University, Changchun 130118, China
Jilin Province Agricultural Internet of Things Technology Collaborative Innovation Center, Changchun 130118, China
Jilin Province Intelligent Environmental Engineering Research Center, Changchun 130118, China)
- Hongyu Guo
(College of Engineering and Technology, Jilin Agricultural University, Changchun 130118, China)
Abstract
High-efficiency, precise detection of crop ears in the field is a core component of intelligent agricultural yield estimation. However, challenges such as overlapping ears caused by dense planting, complex background interference, and blurred boundaries of small targets severely limit the accuracy and practicality of existing detection models. This paper introduces LiteFocus-YOLO (LF-YOLO), an efficient small-object detection model that achieves high-precision identification of maize tassels and wheat ears by synergistically enhancing feature expression through cross-scale texture optimization and attention mechanisms. The model incorporates two novel components: the Lightweight Target-Aware Attention Module (LTAM), which strengthens high-frequency feature expression for small targets while suppressing background interference, improving robustness in densely occluded scenes; and the Cross-Feature Fusion Module (CFFM), which mitigates the loss of semantic detail through deep-shallow feature fusion and modulation, improving small-target localization accuracy. Performance was validated on a drone-based maize tassel dataset: LF-YOLO achieved an mAP50 of 97.9%, and mAP50 scores of 94.6% and 95.7% on the publicly available maize tassel and wheat ear datasets, respectively, generalizing across crops while maintaining high precision and recall. Compared with current mainstream object detection models, LF-YOLO delivers higher accuracy at lower computational cost, providing efficient technical support for dense small-object detection tasks in agricultural fields.
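The abstract describes the CFFM as fusing deep (semantically strong, spatially coarse) features with shallow (spatially fine) features. The paper's actual module design is not given in this listing; the following is only a generic NumPy sketch of the common first step of such deep-shallow fusion, with hypothetical feature-map shapes, not the authors' implementation:

```python
import numpy as np

# Hypothetical feature maps as (channels, height, width) arrays:
shallow = np.random.rand(64, 80, 80)   # fine spatial detail, weak semantics
deep = np.random.rand(128, 20, 20)     # strong semantics, coarse detail

def upsample_nearest(x, factor):
    """Nearest-neighbour upsampling along both spatial axes."""
    return x.repeat(factor, axis=1).repeat(factor, axis=2)

def fuse(shallow, deep):
    """Upsample the deep map to the shallow map's resolution and
    concatenate along the channel axis -- the generic skeleton of a
    deep-shallow fusion block (the real CFFM adds learned modulation)."""
    factor = shallow.shape[1] // deep.shape[1]
    deep_up = upsample_nearest(deep, factor)
    return np.concatenate([shallow, deep_up], axis=0)

fused = fuse(shallow, deep)
print(fused.shape)  # (192, 80, 80)
```

In a real detector the concatenation would be followed by learned convolutions or attention to weight the two sources; the sketch only shows how the spatial resolutions are reconciled before fusion.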
Suggested Citation
Heyang Wang & Jinghuan Hu & Yunlong Ji & Chong Peng & Yu Bao & Hang Zhu & Caocan Zhu & Mengchao Chen & Ye Mu & Hongyu Guo, 2025.
"LiteFocus-YOLO: An Efficient Network for Identifying Dense Tassels in Field Environments,"
Agriculture, MDPI, vol. 15(19), pages 1-24, September.
Handle:
RePEc:gam:jagris:v:15:y:2025:i:19:p:2036-:d:1760446
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jagris:v:15:y:2025:i:19:p:2036-:d:1760446. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .
Please note that corrections may take a couple of weeks to filter through the various RePEc services.