Author
Listed:
- Seung Hwan Lee
(School of Electronic and Electrical Engineering, Kyungpook National University, 80 Daehakro, Buk-Gu, Daegu 41566, Republic of Korea)
- Sung Hak Lee
(School of Electronic and Electrical Engineering, Kyungpook National University, 80 Daehakro, Buk-Gu, Daegu 41566, Republic of Korea)
Abstract
High dynamic range (HDR) tone mapping techniques have been widely studied to effectively represent the broad dynamic range of real-world scenes. However, generating an HDR image from multiple low dynamic range (LDR) images captured at different exposure levels can introduce ghosting artifacts in dynamic scenes. Moreover, methods that estimate HDR information from a single LDR image often suffer from inherent accuracy limitations. To overcome these limitations, this study proposes a novel image processing technique that extends the dynamic range of a single LDR image. It achieves this by leveraging a Convolutional Neural Network (CNN) to generate a synthetic Near-Infrared (NIR) image—one that emulates a key characteristic of real NIR imagery, namely its lower susceptibility to diffraction, which preserves sharper outlines and clearer details. This synthetic NIR image is then fused with the original LDR image, which contains the color information, to create a tone-distributed HDR-like image. The synthetic NIR image is produced by a lightweight U-Net-based autoencoder, in which the encoder extracts features from the LDR image and the decoder synthesizes a NIR image that replicates the characteristics of a real one. To enhance feature fusion, a cardinality structure inspired by Extended-Efficient Layer Aggregation Networks (E-ELAN) in You Only Look Once Version 7 (YOLOv7) and a modified convolutional block attention module (CBAM) incorporating a difference map are applied. The loss function integrates a discriminator to enforce adversarial loss, while VGG, structural similarity index, and mean squared error losses contribute to overall image fidelity. Additionally, non-reference image quality assessment losses based on BRISQUE and NIQE are incorporated to further refine image quality. Experimental results demonstrate that the proposed method outperforms conventional HDR techniques in both qualitative and quantitative evaluations.
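The abstract describes a composite training objective combining adversarial, VGG, SSIM, MSE, and no-reference quality (BRISQUE/NIQE) terms. The sketch below illustrates the general shape of such a weighted multi-term loss in plain NumPy; the weights, the global single-window SSIM, and the placeholder scalars for the adversarial, VGG, and IQA terms are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def mse_loss(x, y):
    """Pixel-wise mean squared error between two images."""
    return float(np.mean((x - y) ** 2))

def ssim_loss(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Global (single-window) SSIM, returned as a loss (1 - SSIM).
    A simplification of the windowed SSIM typically used in practice."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    ssim = ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
    return float(1.0 - ssim)

def total_loss(fake_nir, real_nir, adv=0.0, vgg=0.0, iqa=0.0,
               w=(1.0, 1.0, 1.0, 1.0, 1.0)):
    """Weighted sum of the loss terms named in the abstract.
    adv (adversarial), vgg (perceptual), and iqa (BRISQUE/NIQE-style)
    are precomputed scalars here; the weights w are hypothetical."""
    terms = (adv, vgg,
             ssim_loss(fake_nir, real_nir),
             mse_loss(fake_nir, real_nir),
             iqa)
    return sum(wi * ti for wi, ti in zip(w, terms))
```

For identical images the SSIM and MSE terms vanish, so the total reduces to the weighted adversarial, perceptual, and quality terms alone.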
Suggested Citation
Seung Hwan Lee & Sung Hak Lee, 2025.
"Single-Exposure HDR Image Translation via Synthetic Wide-Band Characteristics Reflected Image Training,"
Mathematics, MDPI, vol. 13(16), pages 1-35, August.
Handle:
RePEc:gam:jmathe:v:13:y:2025:i:16:p:2644-:d:1726392
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jmathe:v:13:y:2025:i:16:p:2644-:d:1726392. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do so here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .
Please note that corrections may take a couple of weeks to filter through the various RePEc services.