Optik 126 (2015) 4804–4807


A review of fusion methods of multi-spectral image

Luyi Bai ∗, Changming Xu, Cong Wang
School of Information Science & Engineering, Northeastern University, Shenyang 110819, China

Article history: Received 25 November 2014; Accepted 26 September 2015
Keywords: Multi-spectral image; Fusion; Remote sensing

Abstract

As an important branch of data fusion that takes images as its research objects, image fusion combines multiple images into a single, more accurate image by exploiting their redundant and complementary information. A multi-spectral image is a kind of remote sensing image, and multi-spectral image fusion combines the features of multi-spectral images into a more comprehensive and clearer image by exploiting spatiotemporal correlation and complementary information. Consequently, multi-spectral image fusion, currently a hot research issue, is an important means of processing remote sensing image information. The choice of fusion method is a central issue in multi-spectral image fusion, and the effective selection of an appropriate method is especially significant for improving image accuracy. With the development of remote sensing techniques, traditional image fusion methods have difficulty meeting accuracy requirements. Recently, fusion methods for multi-spectral images have attracted increasing attention and become a new research focus. In this paper, the characteristics of different multi-spectral image fusion methods, as well as research prospects, are analyzed. The paper provides a scientific reference for the development of multi-spectral image fusion techniques. © 2015 Elsevier GmbH. All rights reserved.

∗ Corresponding author. Tel.: +86 13513350400. E-mail address: [email protected] (L. Bai). http://dx.doi.org/10.1016/j.ijleo.2015.09.201 0030-4026/© 2015 Elsevier GmbH. All rights reserved.

1. Introduction

Data obtained from a single sensor is limited and can hardly meet the requirements of practical applications. At the same time, multi-sensor technologies have developed quickly, making the available information more varied [1,2]. Accordingly, the fusion of different kinds of data has attracted increasing attention [3–5]. As an important branch of data fusion that takes images as its research objects [6,7], image fusion combines multiple images into a single, more accurate image by exploiting redundant and complementary information [8]. A multi-spectral image is a kind of remote sensing image, and multi-spectral image fusion combines image features into a more comprehensive and clearer image by exploiting spatiotemporal correlation and complementary information [9]. Multi-spectral image fusion technology is mainly used in geology, agriculture, the military, etc. It can improve spatial resolution, reduce ambiguity, improve classification accuracy, and thereby enhance image quality [10]. According to the types of source images involved, fusion methods for multi-spectral images can be divided into three categories [11]: fusion of multi-band images, fusion of multi-spectral and panchromatic images, and fusion of hyper-spectral images. This paper provides a scientific reference for the development of multi-spectral image fusion techniques, covering the above three categories.

2. Fusion of multi-band images

Fusion of multi-band images is the process of generating or composing new images from multi-band images using a certain algorithm in a uniform geographic coordinate system [12]. In this section, several common fusion methods are introduced, followed by a discussion of their characteristics.

2.1. Low-pass contrast pyramid

The low-pass contrast pyramid is compatible with human visual characteristics [13]. Its advantage is that it preserves high-contrast and high-brightness information [14], which makes it applicable to the fusion of images of different resolutions.

2.2. Wavelet transformation

Wavelet transformation resamples the fusion images and decomposes them into sub-images of different resolutions. New high-frequency sub-images can be obtained by processing the high-frequency sub-images, and the fusion result is then computed by the inverse wavelet transformation [15]. The disadvantages of the approach are a domino effect and sensitivity to the wavelet decomposition order [16].

2.3. Contrast modulation

Contrast modulation modulates low-resolution gray images using clear gray images [17]. It is suitable for pairs of images, and the fusion effect is proportional to the difference in sensor spatial and gray-scale resolution [18].

2.4. Bayesian inference

Bayesian inference deletes erroneous information of low credibility by analyzing the compatibility of information, estimates the retained information, and then generates the optimal fusion result [19]. On this basis, multi-Bayesian classification inference has been proposed: it regards each sensor as a Bayesian classifier and then generates the optimal fusion result [19]. Its disadvantages are that uncertainty is not expressed well and that the calculation is complicated [20].

2.5. Parameter template

The parameter template method completes pattern recognition with complex correlation by comparing observed data against prior templates [21]. Parameter templates usually contain Boolean conditions, parameter tables, thresholds, weight coefficients, etc.

2.6. Clustering analysis

Clustering analysis groups similar predefined data [22]. It groups data into a classification table without relying on statistical theory [23]. Clustering analysis is very useful for interpreting properties and analyzing observed data, and is mainly used for target classification and recognition.

2.7. Artificial neural network

An artificial neural network emulates the biological information-processing methods of the nervous system [24]. It contains multiple units and applies a nonlinear transformation to the input data to map from data to properties (classification).
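The nonlinear data-to-property mapping just described can be illustrated with a minimal forward pass. The layer sizes, the tanh activation, and the softmax output below are illustrative assumptions for a sketch, not details taken from the paper or the cited works.

```python
import numpy as np

def mlp_forward(x, w1, b1, w2, b2):
    """One hidden layer: nonlinear transform of the input features,
    then a linear map to per-class scores, normalized by softmax."""
    h = np.tanh(x @ w1 + b1)                       # nonlinear hidden representation
    scores = h @ w2 + b2                           # raw class scores
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)       # class probabilities

rng = np.random.default_rng(0)
# 4 spectral bands per pixel, 8 hidden units, 3 hypothetical land-cover classes
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 3)), np.zeros(3)
pixels = rng.random((5, 4))                        # 5 pixels, 4-band feature vectors
probs = mlp_forward(pixels, w1, b1, w2, b2)
```

In a real fusion pipeline the weights would be learned from labeled pixels; here they are random, since only the shape of the data-to-property mapping is being shown.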
However, the theory of neural-network-based image fusion still has open problems, for example: how to combine neural networks with traditional classification methods [25], how to choose the number of layers and nodes of a network [26], how to choose the network model [27], and how to train the model [28].

2.8. Color space transformation

Color space transformation is a fusion method based on the IHS (Intensity, Hue, Saturation) model and on the processing of gray and color images [29]. According to application scope and purpose, color space models can be divided into two categories [30]: models oriented to hardware devices and models oriented to color processing applications. The RGB model is the most commonly used hardware-oriented model [31], and the IHS model is the most commonly used color-processing-oriented model [32].
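The intensity-substitution idea behind IHS-based fusion can be sketched as follows. This assumes the common additive "fast IHS" variant, in which the intensity is taken as the band mean and the panchromatic detail is injected by adding the difference to every band; the array shapes and data are toy assumptions.

```python
import numpy as np

def ihs_fuse(ms, pan):
    """Additive fast-IHS-style fusion: replace the intensity component
    of the multi-spectral image by the panchromatic band, implemented
    by adding the difference (pan - I) to every band."""
    ms = ms.astype(float)
    pan = pan.astype(float)
    intensity = ms.mean(axis=-1)       # I = mean of the bands
    delta = pan - intensity            # spatial detail from the pan band
    return ms + delta[..., None]       # inject into each band

rng = np.random.default_rng(1)
ms = rng.random((4, 4, 3))             # toy 4x4 multi-spectral image, 3 bands
pan = rng.random((4, 4))               # toy panchromatic band
fused = ihs_fuse(ms, pan)
```

By construction the band mean of the fused image equals the panchromatic band exactly, which is what "replacing the intensity component" means in this variant.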


3. Fusion of multi-spectral and panchromatic images

Fusion methods for multi-spectral and panchromatic images can be divided into two categories [33]: methods based on color space component replacement and methods based on multi-resolution analysis. In this section, several common fusion methods are introduced.

3.1. Fusion methods based on color space component replacement

Methods based on color components linearly separate the images of each band and replace one component; the final fused images are obtained by band restructuring [34].

3.1.1. IHS

IHS is one of the most typical color-component-replacement fusion methods for multi-spectral and panchromatic images [32,35,36]. It should be noted that fusion results obtained with IHS may exhibit more or less spectral distortion [36].

3.1.2. Brovey

Brovey is a fusion method usually used in ratio transformations for image enhancement [37]. Brovey not only simplifies the image color space conversion, but also keeps the spectral information of the original multi-spectral images [38]. However, if the spectral ranges of the original multi-spectral image and the panchromatic image differ greatly, it will cause color distortion of the spectral information in the fused images [39]. Brovey is mainly used when low-spatial-resolution multi-spectral images and high-spatial-resolution panchromatic images are similar. In addition, it should be ensured that the gray value range of the fused images after gray-space stretching equals that of the original multi-spectral images in their different bands [40,41].

3.1.3. PCA

PCA is a fusion method in which a multi-dimensional orthogonal linear transformation is carried out based on statistical properties [42]; it can transform highly correlated multi-spectral and panchromatic images into uncorrelated variables [43]. The disadvantage of PCA is that it distorts the spectral information of the images after the transformation of panchromatic and multi-spectral images [44].

3.2. Fusion methods based on multi-resolution analysis

Fusion methods based on multi-resolution analysis can be divided into methods based on pyramid transforms, methods based on wavelet transforms, and methods based on multi-scale geometric transforms [45,46].

3.2.1. Fusion methods based on pyramid transform

The Laplace pyramid transform performs multi-resolution analysis for image fusion through a Gaussian pyramid sequence and an interpolation sequence [47]. On this basis, Saleem et al. [48] propose an improved fusion method for multi-source images based on the contrast pyramid transform. However, it has structural disadvantages, such as poor extraction ability after multi-scale decomposition [49]. For this reason, Li et al. [50] propose an improved gradient pyramid multi-source image fusion method, which obtains the high-band coefficients by a gradient direction operator. Furthermore, Li et al. [51] improve Gaussian pyramid decomposition and propose a fusion method based on the selection of local neighborhood window feature values.
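A minimal Laplacian-pyramid fusion along the lines of [47] can be sketched as follows. The box-filter smoothing, nearest-neighbour expansion, and max-absolute selection rule are simplifying assumptions chosen for brevity; real implementations use Gaussian kernels and more elaborate selection rules.

```python
import numpy as np

def downsample(img):
    """Smooth and decimate by 2 (a box filter stands in for the Gaussian)."""
    h, w = img.shape
    return img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img, shape):
    """Nearest-neighbour expansion back to `shape`."""
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return up[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    """Each level stores the detail lost by one smooth-and-decimate step."""
    pyr, cur = [], img
    for _ in range(levels):
        small = downsample(cur)
        pyr.append(cur - upsample(small, cur.shape))
        cur = small
    pyr.append(cur)                    # coarsest approximation (base)
    return pyr

def fuse(img_a, img_b, levels=3):
    """Fuse detail levels by max-absolute selection, average the base."""
    pa = laplacian_pyramid(img_a, levels)
    pb = laplacian_pyramid(img_b, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(pa[:-1], pb[:-1])]
    fused.append((pa[-1] + pb[-1]) / 2)
    out = fused[-1]                    # reconstruct from coarse to fine
    for detail in reversed(fused[:-1]):
        out = upsample(out, detail.shape) + detail
    return out

rng = np.random.default_rng(2)
img_a = rng.random((8, 8))
img_b = rng.random((8, 8))
result = fuse(img_a, img_b)
```

A useful sanity check is that fusing an image with itself reconstructs the image exactly, since the pyramid decomposition is perfectly invertible.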


3.2.2. Fusion methods based on wavelet transform

Ranchin and Wald [52] first introduced the discrete wavelet transform (DWT) into multi-source remote sensing image fusion, after which the research area attracted increasing attention. Li et al. [53] make predictions on multi-spectral image fusion using the discrete wavelet transform. Wang et al. [54] enhance the fusion results of multi-spectral images using the wavelet transform in order to obtain comparatively complete information. Moreover, in [55], Prabhu et al. propose a fusion method for two-dimensional gray-scale multi-resolution images using the wavelet transform. Lu et al. [56] study the calculation of the low-frequency image coefficients and compare the results of different wavelets. Balakrishnan et al. [57] propose a fusion method for eddy current images using the discrete wavelet transform.

3.2.3. Fusion methods of multi-scale geometric transform

The wavelet transform has good time-frequency localization [57]. However, the point-shaped characteristics captured when processing one-dimensional signals cannot simply be extended to two-dimensional images [54]. Owing to the limited directionality of the separable wavelet frames generated by one-dimensional wavelet theory, point-shaped information cannot capture the line or surface singularities that dominate higher-dimensional functions [53]. To address these disadvantages of the wavelet transform in two-dimensional image processing [16], Multi-scale Geometric Analysis (MGA) was introduced for two-dimensional space. Its basic idea is to approximate singular curves using functions with geometric regularity and their coefficient expressions [58]. In [58], Yang et al. propose a new fusion algorithm for multi-modal medical images based on the contourlet transform. The work of Wang et al. [59] discusses the directionality of the wavelet transform and its limitations, and then summarizes state-of-the-art image coding methods based on MGA.
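A single-level DWT fusion in the spirit of [52,53] can be sketched with a hand-rolled Haar transform, which keeps the example self-contained. The fusion rules used here (averaging the approximation band, taking the larger-magnitude detail coefficients) are common illustrative choices, not the specific rules of the cited works.

```python
import numpy as np

def haar2d(img):
    """One level of a 2-D Haar transform: approximation + 3 detail bands."""
    a = (img[0::2] + img[1::2]) / 2          # rows: low-pass
    d = (img[0::2] - img[1::2]) / 2          # rows: high-pass
    ll = (a[:, 0::2] + a[:, 1::2]) / 2       # approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / 2       # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2       # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2       # diagonal detail
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d = np.empty_like(a)
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2], out[1::2] = a + d, a - d
    return out

def dwt_fuse(img_a, img_b):
    """Average the approximations, keep the larger-magnitude details."""
    ca, cb = haar2d(img_a), haar2d(img_b)
    ll = (ca[0] + cb[0]) / 2
    details = [np.where(np.abs(x) >= np.abs(y), x, y)
               for x, y in zip(ca[1:], cb[1:])]
    return ihaar2d(ll, *details)

rng = np.random.default_rng(3)
img = rng.random((8, 8))
same = dwt_fuse(img, img)    # fusing an image with itself is the identity
```

Practical systems typically use several decomposition levels and smoother wavelets (e.g. Daubechies families via a wavelet library); the single Haar level above only shows the decompose-fuse-reconstruct structure.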
Kaur and Singh [60] propose a novel multi-modality medical image fusion (MIF) method based on an improved contourlet transform (CNT) for spatially registered, multi-sensor, multi-resolution medical images. To sum up, the development of multi-scale geometric analysis tools makes up for the shortcomings of the wavelet transform in two-dimensional images to some extent, and it is also able to represent image characteristics sparsely [58,60].

4. Fusion of multi-spectral and hyper-spectral images

Hyper-spectral images have high spectral resolution; however, their spatial resolution is still lower than that of multi-spectral images. Accordingly, fusing these two kinds of information can provide researchers with results of both high spatial resolution and high spectral resolution. Bayer [61] develops a PC-transform-based algorithm for the fusion of hyper-spectral and multi-spectral images, taking advantage of a fusion method previously used in IHS-transform-based algorithms for tri-band and panchromatic images. Bissett and Kohler [62] seek to develop the technology to fuse high-spatial-resolution Multi-Spectral Imagery (MSI) with imagery of lower spatial but higher spectral resolution. The work of [63] uses yellow rust disease of winter wheat as a model system for testing the featured technologies; hyper-spectral reflection images of healthy and infected plants were taken with an imaging spectrograph under field circumstances and ambient lighting conditions. Pande et al. [64] compare three fusion algorithms (Principal Component Transformation, Color Normalized, and Gram-Schmidt Transformation) with the original hyper-spectral images.
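A PC-transform component-substitution fusion of the general kind Bayer [61] describes can be sketched as follows: project the hyper-spectral pixels onto their principal components, replace the first component with the sharper band, and project back. The mean/std matching of the sharp band and the toy array shapes are assumptions made for this sketch, not details of [61].

```python
import numpy as np

def pca_fuse(hyper, sharp):
    """Component-substitution sketch: swap the first principal component
    of the hyper-spectral cube for a (statistically matched) sharp band."""
    h, w, bands = hyper.shape
    flat = hyper.reshape(-1, bands).astype(float)
    mean = flat.mean(axis=0)
    centered = flat - mean
    # eigendecomposition of the band covariance matrix
    vals, vecs = np.linalg.eigh(np.cov(centered, rowvar=False))
    vecs = vecs[:, np.argsort(vals)[::-1]]     # sort PCs by variance, descending
    pcs = centered @ vecs                      # pixels in PC space
    # match the sharp band's mean/std to PC1, then substitute it
    s = sharp.reshape(-1).astype(float)
    s = (s - s.mean()) / (s.std() + 1e-12) * pcs[:, 0].std() + pcs[:, 0].mean()
    pcs[:, 0] = s
    fused = pcs @ vecs.T + mean                # back to band space
    return fused.reshape(h, w, bands)

rng = np.random.default_rng(4)
hyper = rng.random((6, 6, 4))   # toy 6x6 hyper-spectral cube, 4 bands
sharp = rng.random((6, 6))      # toy high-spatial-resolution band
fused = pca_fuse(hyper, sharp)
```

Because only the first component is replaced, and with matched mean, the per-band means of the cube are preserved, which is one reason this family of methods keeps gross spectral character while injecting spatial detail.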

5. Conclusions

Fusion of multi-spectral images is one of the most important technologies for processing remote sensing information. This paper briefly reviews fusion methods for multi-spectral images; nevertheless, such methods are not limited to those covered here. Although this research area has attracted increasing attention, it still needs further investigation.

Acknowledgements

This work was supported by the National Natural Science Foundation of China (61402087 and 61300195), the Fundamental Research Funds for the Central Universities (N130323006), the Scientific Research Funds of Hebei Education Department (QN2014339), the Natural Science Foundation of Hebei Province (F2015501049 and F2014501078), the General Project of Liaoning Province Department of Education Science Research (L2013099), and the Doctoral Funds Project of Northeastern University at Qinhuangdao (XNB201428).

References

[1] S.-l. Sun, Z.-l. Deng, Automatica 40 (6) (2004) 1017–1023.
[2] B. Khaleghi, A. Khamis, F.O. Karray, Inf. Fusion 14 (1) (2013) 28–44.
[3] J.-x. Zhang, Int. J. Image Data Fusion 1 (1) (2010) 5–24.
[4] G. Niu, B.S. Yang, M. Pecht, Reliab. Eng. Syst. Saf. 95 (7) (2010) 786–796.
[5] L. Jing, M. Zha, Y.-l. Guo, et al., Spectrosc. Spectral Anal. 31 (10) (2011) 2639–2642.
[6] C. Witharana, D.L. Civco, T.H. Meyer, ISPRS J. Photogram. Remote Sens. 87 (2014) 1–18.
[7] W. Dou, H.-q. Sun, Y.-h. Chen, Spectrosc. Spectral Anal. 31 (3) (2011) 746–752.
[8] G.-h. Qu, D.-l. Zhang, P.-f. Yan, Electron. Lett. 38 (7) (2002) 313–315.
[9] T. Achalakul, S. Taylor, Concurr. Comput. Pract. Exp. 13 (2001) 1063–1081.
[10] P. Prabhavathi, K.K. Ganesh, Int. J. Innov. Res. Comput. Commun. Eng. 2 (1) (2014) 3726–3729.
[11] D. Jiang, D.-f. Zhuang, Y.-h. Huang, et al., Image Fusion and Its Applications, Alcorn State University, USA, 2011, pp. 1–22.
[12] Y.-a. Zheng, J.-s. Song, W.-m. Zhou, et al., Acta Autom. Sin. 33 (4) (2007) 337–341.
[13] A. Toet, Pattern Recogn. Lett. 9 (4) (1989) 245–253.
[14] N. Wadhwa, M. Rubinstein, F. Durand, et al., ACM Trans. Graphics 32 (4) (2013) 1–9.
[15] H. Xu, T.-b. Jiang, J. Converg. Inf. Technol. 7 (18) (2012) 392–400.
[16] I. Mehra, N.K. Nishchal, Opt. Express 22 (5) (2014) 5474–5482.
[17] X.-z. Bai, F. Zhou, B. Xue, Image Vis. Comput. 29 (12) (2011) 829–839.
[18] S.-f. Yin, L.-c. Cao, Y.-s. Ling, et al., Infrared Phys. Technol. 53 (2) (2010) 146–150.
[19] B. Khaleghi, A. Khamis, F.O. Karray, et al., Inf. Fusion 14 (1) (2013) 28–30.
[20] S. Roussel, V. Bellon-Maurel, J.M. Roger, et al., Chemom. Intell. Lab. Syst. 65 (2) (2003) 209–219.
[21] D. Ruta, B. Gabrys, Comput. Inf. Syst. 7 (1) (2000) 1–10.
[22] D.M. Owen, C. Rentero, J. Rossy, et al., J. Biophoton. 3 (7) (2010) 446–454.
[23] X.-b. Liu, B.-b. Deng, L.-n. Shen, Appl. Mech. Mater. 328 (2013) 463–467.
[24] S.-t. Li, J.T. Kwok, Y.-n. Wang, Pattern Recogn. Lett. 23 (8) (2002) 985–997.
[25] S.-x. Ren, L. Gao, Chemom. Intell. Lab. Syst. 107 (2) (2011) 276–282.
[26] J.-q. Yu, H. Duan, Optik 124 (17) (2013) 3103–3111.
[27] N.F. Faouzi, H. Leung, A. Kurian, Inf. Fusion 12 (1) (2011) 4–10.
[28] L.-q. Pan, G. Zhang, K. Tu, et al., Eur. Food Res. Technol. 233 (3) (2011) 457–463.
[29] Z.-h. Li, Z.-l. Jing, X.-h. Yang, et al., Pattern Recogn. Lett. 26 (13) (2005) 2006–2014.
[30] L. Nanni, A. Lumini, Pattern Recogn. 42 (9) (2009) 1906–1913.
[31] J. Yang, C.-j. Liu, L. Zhang, Pattern Recogn. 43 (4) (2010) 1454–1466.
[32] T.-m. Tu, S.-c. Su, H. Shyu, et al., Inf. Fusion 2 (3) (2001) 177–186.
[33] Q.-x. Zhou, Z.-l. Jing, S.-z. Jiang, Remote Sens. Technol. Appl. 18 (1) (2003) 41–46.
[34] K. Amolins, Y. Zhang, P. Dare, ISPRS J. Photogram. Remote Sens. 62 (4) (2007) 249–263.
[35] S. Daneshvar, H. Ghassemian, Inf. Fusion 11 (2) (2010) 114–123.
[36] C.M. Chen, G.F. Hepner, R.R. Forster, ISPRS J. Photogram. Remote Sens. 58 (1–2) (2003) 19–30.
[37] T.-m. Tu, Y.-c. Lee, C.-p. Chang, et al., Opt. Eng. 44 (11) (2005) 1–10.
[38] K.G. Nikolakopoulos, Photogram. Eng. Remote Sens. 74 (5) (2008) 647–659.
[39] N.-y. Zhang, Q.-y. Wu, Image Process. 21 (1) (2006) 67–70.
[40] P. Selvarani, V. Vaithiyanathan, Res. J. Appl. Sci. Eng. Technol. 4 (19) (2012) 3623–3627.
[41] P. Selvarani, V. Vaithyanathan, Res. J. Appl. Sci. 7 (7) (2012) 334–339.
[42] S. Zebhi, M.R. Aghabozorgi Sahaf, M.T. Sadeghi, Signal Image Process. Int. J. (SIPIJ) 3 (4) (2012) 153–161.
[43] D. Ren, Y.-m. Liu, X.-d. Yang, et al., Intell. Autom. Soft Comput. 18 (8) (2012) 1165–1175.
[44] W. Liu, J. Huang, Y.-j. Zhao, Lect. Notes Comput. Sci. 4233 (2006) 481–488.
[45] H. Wang, Z.-l. Jing, J.-x. Li, Control Theory Appl. 21 (1) (2004) 145–151.
[46] L.-c. Jiao, S. Tan, Chin. J. Electron. 31 (12A) (2003) 1975–1981.
[47] W.-c. Wang, F.-l. Chang, J. Comput. 6 (12) (2011) 2559–2566.
[48] A. Saleem, A. Beghdadi, B. Boashash, EURASIP J. Image Video Process. 1 (10) (2012) 1–17.
[49] Y. Li, J. Global Res. Comput. Sci. 5 (7) (2014) 1–5.
[50] M.-j. Li, Y.-b. Dong, X.-l. Wang, Appl. Mech. Mater. 525 (2014) 715–718.
[51] M.-j. Li, Y.-b. Dong, X.-l. Wang, Appl. Mech. Mater. 860–863 (2014) 2855–2858.
[52] T. Ranchin, L. Wald, Int. J. Remote Sens. 14 (3) (1993) 615–619.
[53] H. Li, B.S. Manjunath, S. Mitra, Graph. Models Image Process. 57 (3) (1995) 235–245.
[54] H.-h. Wang, J.-x. Peng, W. Wu, Proc. Second Int. Conf. Mach. Learn. Cybern. (2003) 2557–2562.
[55] V. Prabhu, S. Mukhopadhyay, Recent Adv. Inf. Technol. (2012) 15–17.
[56] H.-m. Lu, L.-f. Zhang, S. Serikawa, Comput. Math. Appl. 64 (5) (2012) 996–1003.
[57] S. Balakrishnan, M. Cacciola, L. Udpa, et al., NDT E Int. 51 (2012) 51–57.
[58] L. Yang, B.L. Guo, W. Ni, Neurocomputing 72 (1–3) (2008) 203–211.
[59] X.-h. Wang, Q. Sun, C.-m. Song, et al., Comput. Res. Dev. 47 (2010) 1132–1143.
[60] R. Kaur, H. Singh, Int. J. Res. Eng. Appl. Sci. 2 (7) (2012) 11–20.
[61] C.J. Bayer, Report (2005) 1–29.
[62] W.P. Bissett, D.D.R. Kohler, Report (2007) 1–10.
[63] D. Moshou, C. Bravo, R. Oberti, et al., Real-Time Imaging 11 (2) (2005) 75–83.
[64] H. Pande, P.S. Tiwari, S. Dobhal, J. Indian Soc. Remote Sens. 37 (2009) 395–408.