Authors listed:
- Yiheng Wang
(School of Electronic Information, Xijing University, Xi’an 710123, China; Xi’an Key Laboratory of Intelligent Perception and Autonomous Navigation for Low-Altitude Aircraft, Xi’an 710123, China)
- Yushuai Zhang
(School of Electronic Information, Xijing University, Xi’an 710123, China; Xi’an Key Laboratory of Intelligent Perception and Autonomous Navigation for Low-Altitude Aircraft, Xi’an 710123, China)
- Zhenyu Wang
(Institute of Defense Engineering, Academy of Military Sciences, People’s Liberation Army, Beijing 100850, China)
- Jianxin Guo
(School of Electronic Information, Xijing University, Xi’an 710123, China; Xi’an Key Laboratory of Intelligent Perception and Autonomous Navigation for Low-Altitude Aircraft, Xi’an 710123, China)
- Feng Wang
(School of Electronic Information, Xijing University, Xi’an 710123, China; Xi’an Key Laboratory of Intelligent Perception and Autonomous Navigation for Low-Altitude Aircraft, Xi’an 710123, China)
- Rui Zhu
(School of Electronic Information, Xijing University, Xi’an 710123, China; Xi’an Key Laboratory of Intelligent Perception and Autonomous Navigation for Low-Altitude Aircraft, Xi’an 710123, China)
- Dejing Lin
(College of Artificial Intelligence, Yango University, Fuzhou 350015, China)
Abstract
For power-grid applications such as transmission corridor inspection, substation asset inspection, and post-disaster emergency repair, reliable UAV self-localization under GNSS-degraded or GNSS-denied conditions is critical to ensuring operational safety and accurate defect geotagging. Due to substantial discrepancies in viewpoint, scale, and geometric structure between oblique UAV images and nadir satellite images, conventional RGB-based cross-view retrieval methods often suffer from unstable alignment and insufficient geometric modeling, particularly in scenarios with repetitive textures and partial overlap. To address these challenges, we propose a cross-view visual geo-localization model that integrates RGBD multimodal inputs with multi-scale attention enhancement. Specifically, MiDaS is used to estimate relative depth from UAV imagery, which is concatenated with RGB to form a four-channel input, while satellite images are padded with an additional zero channel to maintain dimensional consistency. A shared-weight ViTAdapter is adopted to learn joint semantic–geometric representations, and a lightweight Efficient Multi-scale Attention (EMA) module is applied to the spatial feature maps to strengthen multi-scale spatial consistency. In addition, an IoU-weighted InfoNCE loss is employed to accommodate partial matching during training, thereby improving the robustness of feature alignment. Experiments on the GTA-UAV dataset under the cross-area protocol show stable performance across both retrieval and localization metrics. Specifically, Recall@1, Recall@5, and Recall@10 reach 18.12%, 38.83%, and 49.47%, respectively; AP is 28.01 and SDM@3 is 0.53; meanwhile, the top-1 geodesic distance error Dis@1 is 1052.73 m. These results indicate that explicit geometric priors combined with multi-scale spatial enhancement can effectively improve cross-view feature alignment, leading to enhanced robustness and accuracy for localization in challenging power inspection scenarios.
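The four-channel input construction described in the abstract (UAV depth appended as a fourth channel; satellite images padded with a zero channel) can be sketched as below. This is a minimal illustration, not the authors' code: the function names, CHW tensor layout, and the min-max depth normalization are assumptions.

```python
# Sketch of the RGBD input construction: UAV RGB + estimated relative depth
# form a 4-channel tensor; satellite RGB is padded with an all-zero channel
# so both branches share one 4-channel input format.
import torch


def make_rgbd_input(rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
    """rgb: (3, H, W) image; depth: (1, H, W) relative depth (e.g. from MiDaS).
    Returns a (4, H, W) tensor with depth rescaled to [0, 1]."""
    assert rgb.shape[0] == 3 and depth.shape[0] == 1
    d = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
    return torch.cat([rgb, d], dim=0)


def pad_satellite_input(rgb: torch.Tensor) -> torch.Tensor:
    """rgb: (3, H, W) satellite image. Returns (4, H, W) with a zero channel."""
    zeros = torch.zeros(1, *rgb.shape[1:], dtype=rgb.dtype)
    return torch.cat([rgb, zeros], dim=0)
```

Keeping both branches at four channels lets a single shared-weight backbone process UAV and satellite inputs without architectural changes.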
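The IoU-weighted InfoNCE loss mentioned in the abstract can likewise be sketched. The paper's exact weighting scheme is not reproduced here; the version below assumes each positive (diagonal) pair's log-probability is scaled by the spatial IoU between the UAV footprint and its satellite tile, normalized over the batch, so partially overlapping pairs contribute proportionally. The function name, temperature value, and normalization are illustrative.

```python
# Hedged sketch of an IoU-weighted InfoNCE objective for cross-view retrieval.
import torch
import torch.nn.functional as F


def iou_weighted_infonce(uav_feats: torch.Tensor,
                         sat_feats: torch.Tensor,
                         iou: torch.Tensor,
                         temperature: float = 0.07) -> torch.Tensor:
    """uav_feats, sat_feats: (B, D) embeddings of paired UAV/satellite images.
    iou: (B,) overlap ratio in [0, 1] for each pair."""
    u = F.normalize(uav_feats, dim=1)
    s = F.normalize(sat_feats, dim=1)
    logits = u @ s.t() / temperature          # (B, B) similarity matrix
    log_probs = F.log_softmax(logits, dim=1)  # row-wise contrastive softmax
    # Weight each positive (diagonal) term by its batch-normalized IoU,
    # so high-overlap pairs dominate and partial matches are down-weighted.
    w = iou / (iou.sum() + 1e-8)
    return -(w * log_probs.diag()).sum()
```

Compared with plain InfoNCE (uniform weights of 1/B on the diagonal), this variant tolerates partial UAV/satellite overlap instead of treating every pair as a perfect match.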
Suggested Citation
Yiheng Wang & Yushuai Zhang & Zhenyu Wang & Jianxin Guo & Feng Wang & Rui Zhu & Dejing Lin, 2026.
"UAV Visual Localization via Multimodal Fusion and Multi-Scale Attention Enhancement,"
Sustainability, MDPI, vol. 18(9), pages 1-22, April.
Handle:
RePEc:gam:jsusta:v:18:y:2026:i:9:p:4277-:d:1928552