JianJun Zhu1 and JiangJiang Li1

1School of Electrical Engineering, Zhengzhou University of Science and Technology, Zhengzhou 450000, China


 

Received: December 12, 2019
Accepted: May 21, 2020
Publication Date: September 1, 2020

DOI: https://doi.org/10.6180/jase.202009_23(3).0021

ABSTRACT


Image fusion is an image processing technique that exploits the complementarity and redundancy of multiple images and fuses them through specific fusion rules to obtain an image with better visual quality. The fused image not only highlights the object information but also retains the texture details of the surrounding environment. To address the blurred edges, loss of detail, and reduced contrast and clarity produced by traditional multi-exposure image fusion methods, this paper proposes a multi-exposure image fusion method based on a multi-scale convolutional neural network (CNN) and the Laplace pyramid. The source images are first decomposed by the region Laplace pyramid. To preserve more detail and make the parameters adaptive, the convolutional neural network is modified and used to generate optimal weight maps that guide the fusion process. Finally, the fused image is generated by the inverse pyramid reconstruction. Experimental results show that, compared with other fusion algorithms, the proposed algorithm improves image contrast and retains the edge and detail information of the source images. Furthermore, the fusion results achieve better objective evaluation values.


Keywords: Multi-exposure image fusion, multi-scale CNN, Laplace pyramid, weight map
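
To make the pipeline described in the abstract concrete (Laplace pyramid decomposition, CNN-generated weight maps guiding the blend, inverse reconstruction), the Python sketch below assumes OpenCV and NumPy, uses a standard Laplacian pyramid built with pyrDown/pyrUp, and replaces the modified multi-scale CNN with a hypothetical predict_weight_maps stub based on a simple well-exposedness measure. It is an illustrative sketch of the general weight-map-guided pyramid fusion technique, not the authors' implementation or their region Laplace pyramid.

import cv2
import numpy as np

def gaussian_pyramid(img, levels):
    # Repeatedly downsample to build a Gaussian pyramid.
    pyr = [img.astype(np.float32)]
    for _ in range(levels - 1):
        pyr.append(cv2.pyrDown(pyr[-1]))
    return pyr

def laplacian_pyramid(img, levels):
    # Each level stores the detail lost between two Gaussian levels.
    gp = gaussian_pyramid(img, levels)
    lp = []
    for i in range(levels - 1):
        up = cv2.pyrUp(gp[i + 1], dstsize=(gp[i].shape[1], gp[i].shape[0]))
        lp.append(gp[i] - up)
    lp.append(gp[-1])  # keep the coarsest approximation as the top level
    return lp

def predict_weight_maps(images):
    # Hypothetical stand-in for the modified multi-scale CNN:
    # favour well-exposed pixels (intensity near mid-range).
    w = [np.exp(-((img.astype(np.float32) / 255.0 - 0.5) ** 2) / 0.08).mean(axis=2)
         for img in images]
    w = np.stack(w)                              # shape (N, H, W)
    w /= w.sum(axis=0, keepdims=True) + 1e-12    # normalise so maps sum to 1
    return list(w)

def fuse_exposures(images, levels=5):
    # Blend each Laplacian level of every exposure with the Gaussian
    # pyramid of its weight map, then collapse the fused pyramid.
    weights = predict_weight_maps(images)
    fused = None
    for img, w in zip(images, weights):
        lp = laplacian_pyramid(img, levels)
        wp = gaussian_pyramid(w, levels)
        contrib = [l * g[..., None] for l, g in zip(lp, wp)]
        fused = contrib if fused is None else [f + c for f, c in zip(fused, contrib)]
    out = fused[-1]
    for lvl in range(levels - 2, -1, -1):        # inverse pyramid reconstruction
        out = cv2.pyrUp(out, dstsize=(fused[lvl].shape[1], fused[lvl].shape[0])) + fused[lvl]
    return np.clip(out, 0, 255).astype(np.uint8)

# Usage (hypothetical file names): fuse a bracketed exposure sequence.
# imgs = [cv2.imread(p) for p in ("under.jpg", "normal.jpg", "over.jpg")]
# cv2.imwrite("fused.jpg", fuse_exposures(imgs))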


  22. [22]Zhi-Feng Xie, Yu-Chen Guo, Shu-Han Zhang, et al. “Multi-Exposure Motion Estimation Based on Deep Convolutional Networks”, Journal of Computer Science and Technology, 33, pp: 487-501 (2018). doi: https://doi.org/10.1007/s11390-018-1833-4