Author
Listed:
- Zhiqin Zhu
(State Key Laboratory of Power Transmission Equipment and System Security and New Technology, College of Automation, Chongqing University, Chongqing 400044, China)
- Guanqiu Qi
(School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, Tempe, AZ 85287, USA)
- Yi Chai
(State Key Laboratory of Power Transmission Equipment and System Security and New Technology, College of Automation, Chongqing University, Chongqing 400044, China)
- Yinong Chen
(School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, Tempe, AZ 85287, USA)
Abstract
Multi-focus image fusion is used in image processing to generate an all-in-focus image with a large depth of field (DOF) from a set of multi-focus source images. Different approaches in the spatial and transform domains have been used to fuse multi-focus images. As one of the most popular image processing methods, dictionary-learning-based sparse representation achieves great performance in multi-focus image fusion. Most existing dictionary-learning-based multi-focus image fusion methods use the whole source images directly for dictionary learning, which incurs a high error rate and a high computational cost in the dictionary learning process. This paper proposes a novel stochastic coordinate coding-based image fusion framework integrated with local density peaks clustering. The proposed multi-focus image fusion method consists of three steps. First, the source images are split into small image patches, and the patches are classified into a few groups by local density peaks clustering. Second, the grouped image patches are used for sub-dictionary learning by stochastic coordinate coding, and the trained sub-dictionaries are combined into one dictionary for sparse representation. Third, the simultaneous orthogonal matching pursuit (SOMP) algorithm is used to compute the sparse coefficients, which are fused following the max L1-norm rule. The fused coefficients are inversely transformed into an image using the learned dictionary. The results and analyses of comparison experiments demonstrate that the fused images of the proposed method have higher quality than those of existing state-of-the-art methods.
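To illustrate the patch-wise sparse-coding and max L1-norm fusion rule described in the abstract, the following minimal Python sketch fuses two registered grayscale source images. It is not the authors' implementation: it assumes a pre-learned dictionary `D` (which the paper obtains via stochastic coordinate coding on clustered patches), uses non-overlapping 8x8 patches, and substitutes a plain orthogonal matching pursuit for SOMP. The helper names (`extract_patches`, `omp`, `fuse_patches`) are hypothetical.

```python
import numpy as np

def extract_patches(img, size=8, stride=8):
    """Split a grayscale image into flattened non-overlapping patches (one per column)."""
    H, W = img.shape
    patches = []
    for i in range(0, H - size + 1, stride):
        for j in range(0, W - size + 1, stride):
            patches.append(img[i:i + size, j:j + size].ravel())
    return np.array(patches, dtype=float).T  # shape: (size*size, n_patches)

def omp(D, y, k):
    """Plain orthogonal matching pursuit: code y over dictionary D with at most k atoms.
    (Stands in for the SOMP step in the paper.)"""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    coeffs = np.zeros(0)
    for _ in range(k):
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x[support] = coeffs
    return x

def fuse_patches(D, patches_a, patches_b, k=4):
    """Fuse two multi-focus patch sets with the max L1-norm rule on sparse coefficients."""
    fused = np.zeros_like(patches_a)
    for n in range(patches_a.shape[1]):
        xa = omp(D, patches_a[:, n], k)
        xb = omp(D, patches_b[:, n], k)
        # Keep the coefficient vector with the larger L1 norm (assumed to be the
        # better-focused patch), then reconstruct the fused patch from the dictionary.
        x = xa if np.abs(xa).sum() >= np.abs(xb).sum() else xb
        fused[:, n] = D @ x
    return fused
```

In this sketch the fused patch columns would then be reshaped and tiled back into the output image; the paper's full pipeline additionally learns the sub-dictionaries from patch clusters found by local density peaks clustering before this fusion step.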
Suggested Citation
Zhiqin Zhu & Guanqiu Qi & Yi Chai & Yinong Chen, 2016.
"A Novel Multi-Focus Image Fusion Method Based on Stochastic Coordinate Coding and Local Density Peaks Clustering,"
Future Internet, MDPI, vol. 8(4), pages 1-18, November.
Handle:
RePEc:gam:jftint:v:8:y:2016:i:4:p:53-:d:82634
Citations
Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.
Cited by:
- Guanqiu Qi & Jinchuan Wang & Qiong Zhang & Fancheng Zeng & Zhiqin Zhu, 2017.
"An Integrated Dictionary-Learning Entropy-Based Medical Image Fusion Framework,"
Future Internet, MDPI, vol. 9(4), pages 1-25, October.
- Lingjun Liu & Zhonghua Xie & Cui Yang, 2017.
"A Novel Iterative Thresholding Algorithm Based on Plug-and-Play Priors for Compressive Sampling,"
Future Internet, MDPI, vol. 9(3), pages 1-10, June.
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jftint:v:8:y:2016:i:4:p:53-:d:82634. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .
Please note that corrections may take a couple of weeks to filter through the various RePEc services.