Author
Listed:
- KAMAL M. OTHMAN
(Department of Electrical Engineering, College of Engineering and Islamic Architecture, Umm Al-Qura University, Makkah, Saudi Arabia)
- NADA ALZABEN
(Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia)
- NUHA ALRUWAIS
(Department of Computer Science and Engineering, College of Applied Studies and Community Services, King Saud University, P.O. Box 22459, Riyadh 11495, Saudi Arabia)
- MOHAMMED MARAY
(Department of Information Systems, College of Computer Science, King Khalid University, Abha, Saudi Arabia)
- ABDULBASIT A. DAREM
(Center for Scientific Research and Entrepreneurship, Northern Border University, Arar 73213, Saudi Arabia)
- ABDULLAH MOHAMED
(Research Centre, Future University in Egypt, New Cairo 11845, Egypt)
Abstract
Unmanned aerial vehicles (UAVs) can monitor traffic in scenarios such as surveillance, control, and security. Object detection with vision-sensor-equipped UAVs has received significant attention in intelligent transportation systems (ITSs): UAVs can monitor road traffic over long distances and provide vital data for downstream intelligent traffic-supervision tasks such as traffic situational awareness, sudden-accident detection, and traffic-flow estimation. Nevertheless, most vehicle targets appear small and exhibit few distinguishing features in the UAV overhead view, which makes accurate vehicle recognition challenging. Vehicle recognition and tracking in UAV imagery with modern computer vision (CV) models involves detecting and following vehicles in aerial footage captured by UAVs. The procedure leverages deep learning (DL) approaches to detect vehicles accurately and a robust tracking method to monitor their movement across frames, providing vital information for traffic management, surveillance, and urban planning. Therefore, this study designs an Advanced DL-based Vehicle Detection and Tracking on UAV Imagery (ADLVDT-UAVI) approach. The aim of the ADLVDT-UAVI technique is to detect and classify distinct vehicles in UAV images correctly, serving as a brain-like computing technique for traffic-flow optimization in smart cities. In this approach, Gaussian filtering (GF) first removes noise. The ADLVDT-UAVI technique then uses a squeeze-and-excitation capsule network (SE-CapsNet) to derive feature vectors, while the coati optimization algorithm (COA) handles hyperparameter selection. Finally, a self-attention bi-directional long short-term memory (SA-BiLSTM) model classifies the detected vehicles. To validate the improved results of the ADLVDT-UAVI approach, a wide range of experiments is performed on the VEDAI and ISPRS Potsdam datasets. The experimental validation shows superior accuracy outcomes of 98.35% and 98.96%, respectively, compared with recent models.
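The processing chain named in the abstract (Gaussian-filter denoising, SE-CapsNet feature derivation, COA hyperparameter tuning, SA-BiLSTM classification) can be illustrated with a minimal sketch. The snippet below is only an assumption-laden illustration of two of those stages, Gaussian preprocessing and a self-attention BiLSTM classification head; the feature dimension, hidden size, class count, and layer choices are hypothetical placeholders and are not taken from the paper's actual ADLVDT-UAVI implementation.

```python
# Illustrative sketch only: Gaussian denoising + a self-attention BiLSTM head.
# Sizes and the feature-extraction stand-in are assumptions, not the authors' model.
import cv2
import numpy as np
import torch
import torch.nn as nn

def denoise_frame(frame: np.ndarray, ksize: int = 5, sigma: float = 1.0) -> np.ndarray:
    """Suppress sensor noise in a UAV frame with a Gaussian filter (preprocessing step)."""
    return cv2.GaussianBlur(frame, (ksize, ksize), sigma)

class SABiLSTMClassifier(nn.Module):
    """Self-attention BiLSTM head that classifies a sequence of per-frame feature vectors.
    In the paper the features come from an SE-CapsNet; here feat_dim is a placeholder
    for whatever feature extractor precedes this head."""
    def __init__(self, feat_dim: int = 256, hidden: int = 128, num_classes: int = 9):
        super().__init__()
        self.bilstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(embed_dim=2 * hidden, num_heads=4, batch_first=True)
        self.fc = nn.Linear(2 * hidden, num_classes)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, time, feat_dim) features of a tracked vehicle across frames
        h, _ = self.bilstm(feats)
        a, _ = self.attn(h, h, h)      # self-attention over the temporal dimension
        pooled = a.mean(dim=1)         # average-pool the attended sequence
        return self.fc(pooled)         # class logits per vehicle track

if __name__ == "__main__":
    frame = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)
    _ = denoise_frame(frame)
    model = SABiLSTMClassifier()
    logits = model(torch.randn(4, 16, 256))  # 4 tracks, 16 frames, 256-dim features
    print(logits.shape)                      # torch.Size([4, 9])
```

Under these assumptions, the hyperparameters exposed here (kernel size, hidden size, attention heads) would be the kind of quantities the COA-based selection step in the paper is described as tuning.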
Suggested Citation
Kamal M. Othman & Nada Alzaben & Nuha Alruwais & Mohammed Maray & Abdulbasit A. Darem & Abdullah Mohamed, 2025.
"Smart Surveillance: Advanced Deep Learning-Based Vehicle Detection And Tracking Model On Uav Imagery,"
FRACTALS (fractals), World Scientific Publishing Co. Pte. Ltd., vol. 33(02), pages 1-18.
Handle:
RePEc:wsi:fracta:v:33:y:2025:i:02:n:s0218348x25400274
DOI: 10.1142/S0218348X25400274
Download full text from publisher
As the access to this document is restricted, you may want to search for a different version of it.
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:wsi:fracta:v:33:y:2025:i:02:n:s0218348x25400274. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Tai Tone Lim (email available below). General contact details of provider: https://www.worldscientific.com/worldscinet/fractals .
Please note that corrections may take a couple of weeks to filter through the various RePEc services.