
Deep USRNet Reconstruction Method Based on Combined Attention Mechanism

Author

Listed:
  • Long Chen

    (Hubei Provincial Key Laboratory of Intelligent Robots, Wuhan Institute of Technology, Wuhan 430205, China
    School of Computer Science and Engineering, Wuhan Institute of Technology, Wuhan 430205, China)

  • Shuiping Zhang

    (Hubei Provincial Key Laboratory of Intelligent Robots, Wuhan Institute of Technology, Wuhan 430205, China
    School of Computer Science and Engineering, Wuhan Institute of Technology, Wuhan 430205, China)

  • Haihui Wang

    (Hubei Provincial Key Laboratory of Intelligent Robots, Wuhan Institute of Technology, Wuhan 430205, China
    School of Computer Science and Engineering, Wuhan Institute of Technology, Wuhan 430205, China)

  • Pengjia Ma

    (Hubei Provincial Key Laboratory of Intelligent Robots, Wuhan Institute of Technology, Wuhan 430205, China
    School of Computer Science and Engineering, Wuhan Institute of Technology, Wuhan 430205, China)

  • Zhiwei Ma

    (Hubei Provincial Key Laboratory of Intelligent Robots, Wuhan Institute of Technology, Wuhan 430205, China
    School of Computer Science and Engineering, Wuhan Institute of Technology, Wuhan 430205, China)

  • Gonghao Duan

    (Hubei Provincial Key Laboratory of Intelligent Robots, Wuhan Institute of Technology, Wuhan 430205, China
    School of Computer Science and Engineering, Wuhan Institute of Technology, Wuhan 430205, China)

Abstract

Single image super-resolution (SISR) based on deep learning is a key research problem in computer vision. However, existing super-resolution reconstruction algorithms often rely on network depth alone to improve reconstruction quality, neglecting the recovery of image texture structure and the tendency of the network to overfit during training. This paper therefore proposes a deep unfolding super-resolution network (USRNet) reconstruction method that integrates a channel attention mechanism, with the aim of improving image resolution and restoring the high-frequency information of the image so that it appears sharper. First, by assigning different weights to features, emphasizing the more important features and suppressing the unimportant ones, details such as image edges and textures are recovered more faithfully and generalization to more complex scenes is improved. Then, the CA (Channel Attention) module is added to USRNet and the network depth is increased to better express high-frequency features; multi-channel mapping is introduced to extract richer features and enhance the model's super-resolution reconstruction. The experimental results show that the USRNet with integrated channel attention converges faster, is not prone to overfitting, and converges within 10,000 iterations; for ×2 upscaling, the average peak signal-to-noise ratios on the Set5 and Set12 datasets are 32.23 dB and 29.72 dB, respectively, a marked improvement over SRCNN, SRMD, PAN, and RCAN. The algorithm generates high-resolution images with clear outlines and a better super-resolution effect.
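
The channel attention described in the abstract is, in essence, a gating block that learns one weight per feature channel and rescales the channels accordingly. The paper's exact layer configuration is not given on this page; the following PyTorch sketch of a generic squeeze-and-excitation style CA block (with an assumed reduction ratio of 16) only illustrates the kind of module the abstract refers to.

```python
# Hypothetical channel-attention (CA) block of the kind the abstract describes.
# Layer sizes and the reduction ratio are illustrative assumptions, not taken
# from the paper.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # "Squeeze": global average pooling collapses each channel to a scalar.
        self.pool = nn.AdaptiveAvgPool2d(1)
        # "Excitation": a bottleneck MLP predicts one weight per channel.
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Per-channel weights in [0, 1] emphasize informative channels
        # (e.g., those carrying edge and texture detail) and suppress the rest.
        return x * self.fc(self.pool(x))

if __name__ == "__main__":
    feats = torch.randn(1, 64, 48, 48)   # a batch of 64-channel feature maps
    out = ChannelAttention(64)(feats)
    print(out.shape)                     # torch.Size([1, 64, 48, 48])
```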

Suggested Citation

  • Long Chen & Shuiping Zhang & Haihui Wang & Pengjia Ma & Zhiwei Ma & Gonghao Duan, 2022. "Deep USRNet Reconstruction Method Based on Combined Attention Mechanism," Sustainability, MDPI, vol. 14(21), pages 1-19, October.
  • Handle: RePEc:gam:jsusta:v:14:y:2022:i:21:p:14151-:d:957869

    Download full text from publisher

    File URL: https://www.mdpi.com/2071-1050/14/21/14151/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2071-1050/14/21/14151/
    Download Restriction: no

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Peixin Qu & Zhen Tian & Ling Zhou & Jielin Li & Guohou Li & Chenping Zhao, 2023. "SCDNet: Self-Calibrating Depth Network with Soft-Edge Reconstruction for Low-Light Image Enhancement," Sustainability, MDPI, vol. 15(2), pages 1-13, January.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jsusta:v:14:y:2022:i:21:p:14151-:d:957869. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.