Printed from https://ideas.repec.org/a/plo/pone00/0342084.html

FRCP-YOLO: Road object detection algorithm based on improved YOLOv8n

Authors:
  • Dongmei Liu
  • Changchun Wang
  • Xuejun Li
  • Xiguo Zhao
  • Chuanli Yin
  • Yuchi Liu
  • Shuai Li
  • Xuan Li

Abstract

The accuracy of road object detection is crucial for the safe driving of autonomous vehicles. Current road object detection models commonly suffer from missed detections of small objects, excessive parameter counts, low accuracy, and poor robustness. To address these problems, this study proposes a road object detection model named FRCP-YOLO, built on YOLOv8n. First, to reduce the model's parameters and complexity, the C2f module in the backbone network is replaced with a lightweight FasterNet Block, which speeds up image feature extraction; the proposed R-CA module, a residual block incorporating the Coordinate Attention (CA) mechanism, is then introduced to sharpen the model's focus on objects of interest and improve its feature-learning capability. Second, to improve small object detection, a high-resolution feature-extraction branch and a detection head for processing these features are added, improving the model's robustness. Finally, PIoU v2 is adopted as the bounding box regression loss function to prevent anchor box enlargement, strengthen the focus on anchor boxes, and further raise overall detection accuracy. In comparison experiments on the KITTI dataset against other mainstream algorithms, FRCP-YOLO achieves detection accuracies of 0.924 mAP@50 and 0.667 mAP@50–95 on the test set, improvements of 5.0% and 6.6% over the baseline model, while reducing parameters by 4%. Comparative experiments were also conducted on the BDD100K dataset of complex road scenes.
FRCP-YOLO outperforms other mainstream algorithms in challenging scenarios such as dense traffic, occlusion, and night conditions, which verifies its generalization and demonstrates reliable, effective object detection in complex scenes.
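Intersection over Union (IoU) underlies both the mAP@50 metric reported above (a detection counts as correct when IoU with the ground-truth box is at least 0.5) and IoU-based regression losses such as the PIoU v2 loss the paper adopts, which augments plain IoU with a size-adaptive penalty and a non-monotonic attention term. A minimal sketch of the base IoU computation (the exact PIoU v2 formulation is in the cited paper, not reproduced here):

```python
def iou_xyxy(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2) tuples."""
    # Intersection rectangle, clamped to zero when the boxes are disjoint
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    # Union = sum of areas minus the double-counted intersection
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Two 10x10 boxes overlapping by half: intersection 50, union 150
print(iou_xyxy((0, 0, 10, 10), (5, 0, 15, 10)))  # 0.3333333333333333
```

At an mAP@50 threshold this pair would not match, since 1/3 < 0.5.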
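The Coordinate Attention mechanism at the heart of the R-CA module factorizes attention into two direction-aware maps, so positional information along each axis is preserved. A simplified NumPy sketch of the idea (the real module concatenates the pooled features and passes them through a shared 1x1 conv and nonlinearity before splitting; here two plain weight matrices `w_h` and `w_w` stand in for those convs, and the residual path of R-CA is omitted):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def coordinate_attention(x, w_h, w_w):
    """Simplified Coordinate Attention over a feature map x of shape (C, H, W).

    w_h, w_w: (C, C) matrices standing in for the 1x1 convolutions that
    produce the height-wise and width-wise attention maps.
    """
    # Direction-aware pooling: average over width -> (C, H), over height -> (C, W)
    pool_h = x.mean(axis=2)
    pool_w = x.mean(axis=1)
    # Per-direction attention weights in (0, 1)
    a_h = sigmoid(w_h @ pool_h)  # (C, H): where to look along the height
    a_w = sigmoid(w_w @ pool_w)  # (C, W): where to look along the width
    # Re-weight the input with both coordinate attention maps
    return x * a_h[:, :, None] * a_w[:, None, :]

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16, 16))
out = coordinate_attention(x, rng.standard_normal((8, 8)), rng.standard_normal((8, 8)))
print(out.shape)  # (8, 16, 16)
```

Because both attention maps lie in (0, 1), the output never exceeds the input in magnitude; the module only suppresses or passes features, keyed to position along each axis.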

Suggested Citation

  • Dongmei Liu & Changchun Wang & Xuejun Li & Xiguo Zhao & Chuanli Yin & Yuchi Liu & Shuai Li & Xuan Li, 2026. "FRCP-YOLO: Road object detection algorithm based on improved YOLOv8n," PLOS ONE, Public Library of Science, vol. 21(2), pages 1-20, February.
  • Handle: RePEc:plo:pone00:0342084
    DOI: 10.1371/journal.pone.0342084

    Download full text from publisher

    File URL: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0342084
    Download Restriction: no

    File URL: https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0342084&type=printable
    Download Restriction: no

    File URL: https://libkey.io/10.1371/journal.pone.0342084?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item



      IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.