IDEAS home Printed from https://ideas.repec.org/a/gam/jmathe/v11y2022i1p209-d1021279.html

DARI-Mark: Deep Learning and Attention Network for Robust Image Watermarking

Authors:
  • Yimeng Zhao

    (School of Mechanical, Electrical and Information Engineering, Shandong University, Weihai 264209, China)

  • Chengyou Wang

    (School of Mechanical, Electrical and Information Engineering, Shandong University, Weihai 264209, China; Shandong University–Weihai Research Institute of Industry Technology, Weihai 264209, China)

  • Xiao Zhou

    (School of Mechanical, Electrical and Information Engineering, Shandong University, Weihai 264209, China; Shandong University–Weihai Research Institute of Industry Technology, Weihai 264209, China)

  • Zhiliang Qin

    (School of Mechanical, Electrical and Information Engineering, Shandong University, Weihai 264209, China; Weihai Beiyang Electric Group Co., Ltd., Weihai 264209, China)

Abstract

Deep learning has achieved excellent results in image processing and computer vision and is widely used in watermarking, yet the attention mechanism, a research hot spot in deep learning, has not yet been applied to watermarking. In this paper, we propose DARI-Mark, a deep learning and attention network for robust image watermarking. The framework consists of four parts: an attention network, a watermark embedding network, a watermark extraction network, and an attack layer. The attention network is a channel and spatial attention network: it computes attention weights along the channel and spatial dimensions, assigning different weights to pixels in different channels and at different positions, and it is applied in both the watermark embedding and extraction stages. Through end-to-end training, the attention network locates nonsignificant regions that are insensitive to the human eye and assigns them larger weights during embedding, so that the embedding network places the watermark in these regions and improves imperceptibility. During extraction, the loss function is designed so that larger weights are assigned to watermark-bearing features and smaller weights to noise, allowing the extraction network to focus on watermark-related features and suppress noise in the attacked image, which improves robustness. To avoid vanishing or exploding gradients when the network is deep, residual modules are added to both the embedding and extraction networks. Experiments show that DARI-Mark embeds the watermark without affecting subjective human perception and has good robustness. Compared with other state-of-the-art watermarking methods, the proposed framework is more robust to JPEG compression, sharpening, cropping, and noise attacks.
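The channel-then-spatial attention the abstract describes can be sketched as follows. This is a simplified, parameter-free stand-in (the paper's attention weights are learned end-to-end through trained layers, which are omitted here); the function and variable names are our own illustration, not the authors' code:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def channel_spatial_attention(feat):
    """feat: a feature map as a list of C channels, each an HxW grid.
    Re-weights the map along the channel dimension, then the spatial
    dimension, mimicking the two-stage attention in the abstract."""
    C = len(feat)
    H = len(feat[0])
    W = len(feat[0][0])
    # Channel attention: squeeze each channel by global average pooling,
    # then gate it with a sigmoid (the learned MLP is omitted in this sketch).
    ch_w = [sigmoid(sum(sum(row) for row in ch) / (H * W)) for ch in feat]
    gated = [[[ch_w[c] * feat[c][i][j] for j in range(W)]
              for i in range(H)] for c in range(C)]
    # Spatial attention: pool across channels at each pixel location,
    # then gate every location with a sigmoid.
    sp_w = [[sigmoid(sum(gated[c][i][j] for c in range(C)) / C)
             for j in range(W)] for i in range(H)]
    return [[[sp_w[i][j] * gated[c][i][j] for j in range(W)]
             for i in range(H)] for c in range(C)]
```

In the paper's pipeline such weights would steer embedding toward visually nonsignificant regions and, at extraction, emphasize watermark-bearing features over noise; a learned version would replace the plain sigmoids with trained convolutional or fully connected layers.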

Suggested Citation

  • Yimeng Zhao & Chengyou Wang & Xiao Zhou & Zhiliang Qin, 2022. "DARI-Mark: Deep Learning and Attention Network for Robust Image Watermarking," Mathematics, MDPI, vol. 11(1), pages 1-16, December.
  • Handle: RePEc:gam:jmathe:v:11:y:2022:i:1:p:209-:d:1021279

    Download full text from publisher

    File URL: https://www.mdpi.com/2227-7390/11/1/209/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2227-7390/11/1/209/
    Download Restriction: no

    References listed on IDEAS

    1. Qiumei Zheng & Nan Liu & Fenghua Wang, 2020. "An Adaptive Embedding Strength Watermarking Algorithm Based on Shearlets’ Capture Directional Features," Mathematics, MDPI, vol. 8(8), pages 1-19, August.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.

      Corrections

      All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jmathe:v:11:y:2022:i:1:p:209-:d:1021279. See general information about how to correct material in RePEc.

      If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

      If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

      If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

      For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

      Please note that corrections may take a couple of weeks to filter through the various RePEc services.

      IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.