
A Multi-Resolution Attention U-Net for Pavement Distress Segmentation in 3D Images: Architecture and Data-Driven Insights

Author

Listed:
  • Haitao Gong

    (Ingram School of Engineering, Texas State University, San Marcos, TX 78666, USA)

  • Jueqiang Tao

    (College of Engineering, Zhejiang Normal University, Jinhua 321004, China)

  • Xiaohua Luo

    (Ingram School of Engineering, Texas State University, San Marcos, TX 78666, USA)

  • Feng Wang

    (Ingram School of Engineering, Texas State University, San Marcos, TX 78666, USA)

Abstract

High-resolution 3D pavement images have become a valuable data source for automated surface distress detection and assessment. However, accurately identifying and segmenting cracks in pavement images remains challenging due to factors such as low contrast and hair-like thinness. This study investigates key factors affecting segmentation performance and proposes a novel deep learning architecture designed to enhance segmentation robustness under these challenging conditions. The proposed model integrates a multi-resolution feature extraction stream with gated attention mechanisms to improve spatial awareness and selectively fuse information across feature levels. Our extensive experiments on a 3D pavement dataset demonstrated that the proposed method outperformed several state-of-the-art architectures, including FCN, U-Net, DeepLab, DeepCrack, and CrackFormer. Compared with U-Net, it improved F1 from 0.733 to 0.780; the gains were most pronounced on thin cracks, where F1 rose from 0.531 to 0.626. Paired t-tests across folds showed that the method is statistically significantly better than U-Net and DeepCrack on Recall, IoU, Dice, and F1. These findings highlight the effectiveness of attention-guided, multi-scale feature fusion for robust crack segmentation using 3D pavement data.
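The "gated attention" idea the abstract describes can be illustrated with a short sketch. Below is a minimal, hedged PyTorch example of an additive attention gate in the style of Attention U-Net, which gates skip-connection features with a coarser decoder signal before fusion; the class and parameter names (AttentionGate, in_ch_skip, in_ch_gate, inter_ch) are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of an attention gate on skip-connection features,
# assuming an Attention U-Net-style additive gating scheme.
# Names and channel sizes are illustrative, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionGate(nn.Module):
    def __init__(self, in_ch_skip: int, in_ch_gate: int, inter_ch: int):
        super().__init__()
        # 1x1 convolutions project skip and gating features into a shared space.
        self.theta_x = nn.Conv2d(in_ch_skip, inter_ch, kernel_size=1, bias=False)
        self.phi_g = nn.Conv2d(in_ch_gate, inter_ch, kernel_size=1, bias=False)
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)

    def forward(self, x: torch.Tensor, g: torch.Tensor) -> torch.Tensor:
        # x: skip-connection features (B, C_skip, H, W)
        # g: coarser gating signal from the decoder, any spatial size
        g = F.interpolate(g, size=x.shape[2:], mode="bilinear", align_corners=False)
        attn = torch.sigmoid(self.psi(F.relu(self.theta_x(x) + self.phi_g(g))))
        return x * attn  # re-weight skip features; irrelevant background is suppressed

# Toy usage: gate 64-channel skip features with a 128-channel decoder signal.
gate = AttentionGate(in_ch_skip=64, in_ch_gate=128, inter_ch=32)
skip, decoder = torch.randn(1, 64, 128, 128), torch.randn(1, 128, 64, 64)
print(gate(skip, decoder).shape)  # torch.Size([1, 64, 128, 128])
```

The abstract also reports paired t-tests across folds. A hedged illustration of that kind of test using SciPy, with made-up per-fold F1 scores that are placeholders for demonstration only, not the paper's data:

```python
# Paired t-test over per-fold scores for two models; the numbers below
# are illustrative placeholders, not results from the paper.
from scipy.stats import ttest_rel

f1_unet     = [0.731, 0.728, 0.740, 0.735, 0.729]  # hypothetical folds
f1_proposed = [0.778, 0.781, 0.784, 0.776, 0.782]  # hypothetical folds

t_stat, p_value = ttest_rel(f1_proposed, f1_unet)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")  # small p => significant difference
```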

Suggested Citation

  • Haitao Gong & Jueqiang Tao & Xiaohua Luo & Feng Wang, 2025. "A Multi-Resolution Attention U-Net for Pavement Distress Segmentation in 3D Images: Architecture and Data-Driven Insights," Mathematics, MDPI, vol. 13(17), pages 1-18, August.
  • Handle: RePEc:gam:jmathe:v:13:y:2025:i:17:p:2752-:d:1733521

    Download full text from publisher

    File URL: https://www.mdpi.com/2227-7390/13/17/2752/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2227-7390/13/17/2752/
    Download Restriction: no


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jmathe:v:13:y:2025:i:17:p:2752-:d:1733521. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do so here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager. General contact details of provider: https://www.mdpi.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.