Author
Listed:
- Ulugbek Hudayberdiev
(Department of Management Information Systems, Chungbuk National University, Cheongju 28644, Republic of Korea)
- Junyeong Lee
(Department of Management Information Systems, Chungbuk National University, Cheongju 28644, Republic of Korea)
- Odil Fayzullaev
(NextPath Innovations LLC., 10555 62nd Drive, Forest Hills, NY 11375, USA)
Abstract
Automated landmark recognition is a cornerstone technology for smart tourism systems, cultural heritage documentation, and enhanced visitor experiences, and deep learning methods have substantially improved the accuracy and computational efficiency of destination classification. To address gaps in existing approaches, we introduce an enhanced Samarkand_v2 dataset covering twelve distinct historical landmark categories with broad environmental variability. Our method applies a systematic multi-threshold pixel intensification strategy, using graduated enhancement transformations at intensity thresholds of 100, 150, and 225 to accentuate architectural characteristics ranging from fine-grained textural elements to prominent reflective components. Four independent YOLO11 models were trained on the original imagery and the systematically enhanced variants, with the best-performing epoch retained according to validation criteria. A key innovation is an intelligent selective ensemble mechanism that exhaustively evaluates model combinations and identifies the optimal configuration through data-driven selection rather than conventional uniform weighting. Experimental validation shows substantial gains over established baseline architectures and traditional ensemble approaches, achieving 99.24% accuracy, 99.36% precision, 99.40% recall, and a 99.36% F1-score. Paired t-tests confirm the statistical significance of the enhancement strategies, in particular the effectiveness of lower-threshold transformations in capturing architectural nuances. The framework remains robust under challenging conditions, including illumination variation, structural occlusion, and inter-class architectural similarity. These results indicate strong potential for practical smart tourism deployment, automated heritage preservation initiatives, and real-time mobile landmark recognition systems.
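The abstract names two algorithmic components: graduated pixel intensification at thresholds 100, 150, and 225, and an exhaustive search over model combinations for the selective ensemble. The two sketches below are minimal illustrations of those ideas, not the authors' implementation; the boost factor, function names, and data interfaces are assumptions made only for demonstration.

```python
# Illustrative sketch of multi-threshold pixel intensification.
# The exact transform used in the paper is not specified here; the boost
# factor and the "brighten pixels above the threshold" rule are assumptions.
import numpy as np

THRESHOLDS = (100, 150, 225)  # intensity levels reported in the abstract

def intensify(image: np.ndarray, threshold: int, boost: float = 1.2) -> np.ndarray:
    """Brighten pixels at or above `threshold`, leaving darker pixels unchanged."""
    img = image.astype(np.float32)
    mask = img >= threshold
    img[mask] = np.clip(img[mask] * boost, 0, 255)
    return img.astype(np.uint8)

def make_variants(image: np.ndarray) -> dict:
    """Produce one enhanced variant per threshold, plus the original (key 0)."""
    variants = {0: image}
    for t in THRESHOLDS:
        variants[t] = intensify(image, t)
    return variants
```

Each variant would then train its own YOLO11 classifier, giving four candidate models. The selective ensemble can be sketched as an exhaustive evaluation of every non-empty subset of those models on a validation set, keeping the subset with the best score. With four models this covers only 15 subsets, so the search is cheap. Again, the averaging rule and the `val_probs` interface below are hypothetical.

```python
# Illustrative sketch of data-driven ensemble selection over model subsets.
from itertools import combinations
import numpy as np

def ensemble_accuracy(prob_list, labels):
    """Average class-probability outputs of the chosen models and score accuracy."""
    mean_probs = np.mean(prob_list, axis=0)   # shape: (n_samples, n_classes)
    preds = mean_probs.argmax(axis=1)
    return float((preds == labels).mean())

def select_best_ensemble(val_probs: dict, labels: np.ndarray):
    """val_probs maps a model name to its (n_samples, n_classes) validation probabilities."""
    best_combo, best_acc = None, -1.0
    names = list(val_probs)
    for k in range(1, len(names) + 1):
        for combo in combinations(names, k):
            acc = ensemble_accuracy([val_probs[m] for m in combo], labels)
            if acc > best_acc:
                best_combo, best_acc = combo, acc
    return best_combo, best_acc
```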
Suggested Citation
Ulugbek Hudayberdiev & Junyeong Lee & Odil Fayzullaev, 2025.
"Smart Tourism Landmark Recognition: A Multi-Threshold Enhancement and Selective Ensemble Approach Using YOLO11,"
Sustainability, MDPI, vol. 17(17), pages 1-26, September.
Handle:
RePEc:gam:jsusta:v:17:y:2025:i:17:p:8081-:d:1744834