IDEAS home Printed from https://ideas.repec.org/a/gam/jmathe/v13y2025i10p1544-d1651447.html

AIF: Infrared and Visible Image Fusion Based on Ascending–Descending Mechanism and Illumination Perception Subnetwork

Author

Listed:
  • Ying Liu

    (The School of Computer Science and Engineering, Northeastern University, Shenyang 110169, China
    Engineering Research Center of Security Technology of Complex Network System, Ministry of Education, Shenyang 110169, China)

  • Xinyue Mi

    (The School of Computer Science and Engineering, Northeastern University, Shenyang 110169, China)

  • Zhaofu Liu

    (The School of Computer Science and Engineering, Northeastern University, Shenyang 110169, China)

  • Yu Yao

    (The School of Computer Science and Engineering, Northeastern University, Shenyang 110169, China
    Engineering Research Center of Security Technology of Complex Network System, Ministry of Education, Shenyang 110169, China)

Abstract

Infrared and visible image fusion aims to generate a composite image that contains both the thermal radiation information of the infrared image and the texture details of the visible image. Such a composite image can be used to detect targets under various lighting conditions and offers high scene spatial resolution. However, existing image fusion algorithms rarely account for lighting conditions in the modeling process. This study presents a novel image fusion approach (AIF) that adaptively fuses infrared and visible images under various lighting conditions. Specifically, features are extracted from the infrared image and the visible image separately by the AdC feature extractor, and the two are fused adaptively under the guidance of an illumination perception subnetwork. The fusion model is trained in an unsupervised manner with a customized loss function. The AdC feature extractor organizes convolutional layers according to an ascending–descending feature extraction mechanism and combines them with cross-modal interactive differential modules to effectively extract hierarchical complementary and differential information. The illumination perception subnetwork estimates the scene lighting condition from the visible image, which determines the contribution weights of the visible and infrared images in the composite image. The customized loss function consists of illumination loss, gradient loss, and intensity loss; it is more targeted and effectively improves fusion quality for visible and infrared images under different lighting conditions. Ablation experiments demonstrate the effectiveness of the loss function. We compare our method with nine other methods on public datasets, including four traditional methods and five deep-learning-based methods. Qualitative and quantitative experiments show that our method performs better on indicators such as SD, and the fused images have more prominent contour information and richer detail.
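The abstract names the three loss terms but this page does not give their formulas. As a minimal sketch, here is how an illumination-weighted three-term fusion loss of this general kind is commonly formulated; the day/night weighting scheme, the finite-difference gradient operator, and the term weights below are illustrative assumptions, not the paper's exact definitions:

```python
import numpy as np

def grad_mag(img):
    # Finite-difference gradient magnitude (a simple stand-in for a Sobel operator).
    gx = np.abs(np.diff(img, axis=1, prepend=img[:, :1]))
    gy = np.abs(np.diff(img, axis=0, prepend=img[:1, :]))
    return gx + gy

def fusion_loss(fused, vis, ir, p_day, w_illum=1.0, w_grad=1.0, w_int=1.0):
    """Hedged sketch of an illumination-aware fusion loss.

    p_day is the illumination subnetwork's estimated probability that the
    scene is well lit; it sets the contribution weights of the two sources.
    """
    # Illumination loss: pull the fused image toward the visible source in
    # daylight and toward the infrared source in darkness.
    l_illum = (p_day * np.abs(fused - vis).mean()
               + (1.0 - p_day) * np.abs(fused - ir).mean())
    # Gradient loss: keep the stronger edge response from either source,
    # preserving texture detail.
    l_grad = np.abs(grad_mag(fused)
                    - np.maximum(grad_mag(vis), grad_mag(ir))).mean()
    # Intensity loss: keep the brighter (more salient) pixel from either
    # source, preserving thermal targets.
    l_int = np.abs(fused - np.maximum(vis, ir)).mean()
    return w_illum * l_illum + w_grad * l_grad + w_int * l_int
```

With this formulation the loss is zero only when the fused image simultaneously matches the illumination-weighted sources, the per-pixel maximum gradient, and the per-pixel maximum intensity, which is what drives the trade-off between texture and thermal saliency during unsupervised training.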

Suggested Citation

  • Ying Liu & Xinyue Mi & Zhaofu Liu & Yu Yao, 2025. "AIF: Infrared and Visible Image Fusion Based on Ascending–Descending Mechanism and Illumination Perception Subnetwork," Mathematics, MDPI, vol. 13(10), pages 1-23, May.
  • Handle: RePEc:gam:jmathe:v:13:y:2025:i:10:p:1544-:d:1651447

    Download full text from publisher

    File URL: https://www.mdpi.com/2227-7390/13/10/1544/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2227-7390/13/10/1544/
    Download Restriction: no

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jmathe:v:13:y:2025:i:10:p:1544-:d:1651447. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.