
DSTnet: Deformable Spatio-Temporal Convolutional Residual Network for Video Super-Resolution

Authors

Listed:
  • Anusha Khan

    (Department of Computer Science, COMSATS University Islamabad, Lahore 54000, Pakistan)

  • Allah Bux Sargano

    (Department of Computer Science, COMSATS University Islamabad, Lahore 54000, Pakistan)

  • Zulfiqar Habib

    (Department of Computer Science, COMSATS University Islamabad, Lahore 54000, Pakistan)

Abstract

Video super-resolution (VSR) aims to generate high-resolution (HR) video frames with plausible and temporally consistent details from their low-resolution (LR) counterparts and neighboring frames. The key challenge in VSR lies in effectively exploiting both the spatial relations within a frame and the temporal dependency between consecutive frames. Many existing techniques utilize spatial and temporal information separately and compensate for motion via alignment; such methods cannot fully exploit the spatio-temporal information that significantly affects the quality of the resulting HR videos. In this work, a novel deformable spatio-temporal convolutional residual network (DSTnet) is proposed to overcome the drawbacks of separate motion estimation and compensation methods for VSR. The proposed framework consists of 3D convolutional residual blocks decomposed into spatial and temporal (2+1)D streams, a decomposition that exploits the input video's spatial and temporal features simultaneously without requiring a separate motion estimation and compensation module. Furthermore, deformable convolution layers are used in the proposed model to enhance its motion-awareness capability. Our contribution is twofold: first, the proposed approach overcomes the challenge of modeling complex motion by using spatio-temporal information efficiently; second, the proposed model has fewer parameters to learn than state-of-the-art methods, making it a computationally lean and efficient framework for VSR. Experiments are conducted on the benchmark Vid4 dataset to evaluate the efficacy of the proposed approach. The results demonstrate that the proposed approach achieves superior quantitative and qualitative performance compared with state-of-the-art methods.
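
To make the (2+1)D decomposition and the deformable layers concrete, the following is a minimal PyTorch sketch, not the authors' published implementation. The class names (R2Plus1DBlock, DeformLayer) and all channel counts and kernel sizes are illustrative assumptions; the deformable layer uses torchvision's DeformConv2d with a learned offset map, which is one standard way to realize motion-aware sampling.

    # Illustrative sketch (not the paper's code): a 3D conv of shape (t, k, k)
    # is factored into a spatial (1, k, k) conv followed by a temporal
    # (t, 1, 1) conv, wrapped in an identity skip connection.
    import torch
    import torch.nn as nn
    from torchvision.ops import DeformConv2d

    class R2Plus1DBlock(nn.Module):          # hypothetical name
        def __init__(self, channels, k=3, t=3):
            super().__init__()
            # Spatial stream: convolves within each frame only.
            self.spatial = nn.Conv3d(channels, channels, kernel_size=(1, k, k),
                                     padding=(0, k // 2, k // 2))
            # Temporal stream: convolves across neighboring frames only.
            self.temporal = nn.Conv3d(channels, channels, kernel_size=(t, 1, 1),
                                      padding=(t // 2, 0, 0))
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):                # x: (N, C, T, H, W)
            out = self.relu(self.spatial(x))
            out = self.temporal(out)
            return self.relu(out + x)        # residual connection

    class DeformLayer(nn.Module):            # hypothetical name
        def __init__(self, channels, k=3):
            super().__init__()
            # Predict 2 offsets (dx, dy) per kernel sampling position.
            self.offset = nn.Conv2d(channels, 2 * k * k, k, padding=k // 2)
            self.deform = DeformConv2d(channels, channels, k, padding=k // 2)

        def forward(self, x):                # x: (N, C, H, W), one frame
            return self.deform(x, self.offset(x))

    x = torch.randn(1, 32, 5, 64, 64)        # 5 LR frames, 32 feature channels
    print(R2Plus1DBlock(32)(x).shape)        # torch.Size([1, 32, 5, 64, 64])

The (2+1)D factoring preserves the receptive field of a full 3D convolution while adding a nonlinearity between the spatial and temporal steps, and the learned offsets let the kernel sample along motion trajectories rather than a fixed grid.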

Suggested Citation

  • Anusha Khan & Allah Bux Sargano & Zulfiqar Habib, 2021. "DSTnet: Deformable Spatio-Temporal Convolutional Residual Network for Video Super-Resolution," Mathematics, MDPI, vol. 9(22), pages 1-15, November.
  • Handle: RePEc:gam:jmathe:v:9:y:2021:i:22:p:2873-:d:677315

    Download full text from publisher

    File URL: https://www.mdpi.com/2227-7390/9/22/2873/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2227-7390/9/22/2873/
    Download Restriction: no

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jmathe:v:9:y:2021:i:22:p:2873-:d:677315. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.