IDEAS home Printed from https://ideas.repec.org/a/plo/pone00/0329759.html

Hazediff: A training-free diffusion-based image dehazing method with pixel-level feature injection

Authors

Listed:
  • Xiaoxia Lin
  • Zhengao Li
  • Dawei Huang
  • Wancheng Feng
  • XinJun An
  • Lin Sun
  • Niuzhen Yu
  • Yan Li
  • Chunwei Leng

Abstract

In the current environmental context, heavy emissions from industrial and transportation activities, combined with an unbalanced energy structure, have led to recurrent haze. Haze degrades the contrast and resolution of captured images, significantly hindering subsequent mid- and high-level vision tasks, which has made image dehazing a pivotal research frontier in computer vision. Nevertheless, current dehazing approaches have notable limitations. Deep learning-based methods demand extensive paired hazy-clean training datasets, which remain particularly difficult to acquire; moreover, synthetically generated data often differ markedly from real scenes, limiting model generalizability. Although diffusion-based approaches demonstrate superior image reconstruction performance, their data-driven implementations face comparable limitations. To overcome these challenges, we propose HazeDiff, a training-free dehazing method based on the diffusion model, which provides a novel perspective for image dehazing research. Unlike existing approaches, it eliminates the need for hard-to-obtain paired training data, which reduces computational costs, improves generalization ability and stability across datasets, and ultimately yields more reliable and effective dehazing results. The proposed Pixel-Level Feature Injection (PFI) is implemented through the self-attention layer: it integrates the pixel-level feature representation of the reference image into the initial noise of the dehazing image, effectively guiding the diffusion process toward the dehazing effect.
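The abstract does not give implementation details, but the core idea — a self-attention step whose queries come from the hazy image's denoising pass while keys and values are taken from the reference image — can be sketched as follows. The function name `inject_reference_features`, the array shapes, and the use of plain NumPy are illustrative assumptions, not the authors' code.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def inject_reference_features(q_hazy, k_ref, v_ref):
    """Self-attention with pixel-level feature injection (sketch):
    queries come from the hazy image's pass, keys/values from the
    reference image, so the output pulls features from the reference.
    Shapes: (num_pixels, dim)."""
    d = q_hazy.shape[-1]
    logits = q_hazy @ k_ref.T / np.sqrt(d)   # (num_pixels, num_pixels)
    attn = softmax(logits, axis=-1)          # each row sums to 1
    return attn @ v_ref                      # reference features, re-mixed

# Toy example with made-up sizes: 16 pixel tokens, 8-dim features.
rng = np.random.default_rng(0)
q = rng.standard_normal((16, 8))   # hazy-image queries
k = rng.standard_normal((16, 8))   # reference-image keys
v = rng.standard_normal((16, 8))   # reference-image values
out = inject_reference_features(q, k, v)
print(out.shape)  # (16, 8)
```

In a real diffusion pipeline this swap would happen inside the denoising U-Net's self-attention blocks at selected timesteps; the toy version only shows the attention arithmetic.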
As a complement, the Structure Retention Model (SRM), incorporated into the cross-attention layers, performs dynamic feature enhancement through adaptive attention re-weighting, ensuring that key structural features are retained during restoration while reducing detail loss. We conducted comprehensive experiments on both real-world and synthetic datasets. The results demonstrate that HazeDiff surpasses state-of-the-art dehazing methods, achieving better scores on both no-reference (e.g., NIQE) and full-reference (e.g., PSNR) evaluation metrics, and shows stronger generalization ability and practicality: it can restore high-quality images with natural visual features and clear structural content from low-quality hazy inputs.
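As a hedged illustration only: one plausible reading of the SRM's "adaptive attention re-weighting" is cross-attention whose logits are boosted by a per-key structural saliency score (e.g., edge strength), so structure-rich regions keep more attention mass during restoration. The function name, the saliency term, and the `alpha` weight below are assumptions made for this sketch, not details from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def structure_reweighted_attention(q, k, v, structure_score, alpha=1.0):
    """Cross-attention with adaptive re-weighting (sketch): a per-key
    structural saliency score (assumed, e.g. edge strength) is added to
    the logits so structurally important pixels receive more weight."""
    d = q.shape[-1]
    logits = q @ k.T / np.sqrt(d) + alpha * structure_score[None, :]
    attn = softmax(logits, axis=-1)
    return attn @ v, attn

# Toy check: raising one key's saliency raises its attention mass.
rng = np.random.default_rng(1)
q = rng.standard_normal((4, 8))
k = rng.standard_normal((6, 8))
v = rng.standard_normal((6, 8))
flat = np.zeros(6)                     # no structural preference
boost = np.zeros(6)
boost[2] = 5.0                         # mark key #2 as strongly salient
_, a0 = structure_reweighted_attention(q, k, v, flat)
_, a1 = structure_reweighted_attention(q, k, v, boost)
print(a1[:, 2].mean() > a0[:, 2].mean())  # True
```

The design choice here is additive logit biasing, which keeps the re-weighting differentiable and leaves the attention rows normalized; the actual SRM may compute its structural signal differently.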

Suggested Citation

  • Xiaoxia Lin & Zhengao Li & Dawei Huang & Wancheng Feng & XinJun An & Lin Sun & Niuzhen Yu & Yan Li & Chunwei Leng, 2025. "Hazediff: A training-free diffusion-based image dehazing method with pixel-level feature injection," PLOS ONE, Public Library of Science, vol. 20(10), pages 1-23, October.
  • Handle: RePEc:plo:pone00:0329759
    DOI: 10.1371/journal.pone.0329759

    Download full text from publisher

    File URL: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0329759
    Download Restriction: no

    File URL: https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0329759&type=printable
    Download Restriction: no

    File URL: https://libkey.io/10.1371/journal.pone.0329759?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    More about this item

    Statistics

    Access and download statistics

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:plo:pone00:0329759. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: plosone (email available below). General contact details of provider: https://journals.plos.org/plosone/ .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.