Vistas in Astronomy, Vol. 41, No. 3, pp. 337-342, 1997.

INFORMATION FUSION IN REMOTE SENSING

Torfinn Taxt a, Anne H. Schistad Solberg b

a Section for Medical Image Analysis and Informatics, Department of Physiology, University of Bergen, Årstadveien 19, 5009 Bergen, Norway
b Norwegian Computing Center, P.O. Box 114 Blindern, 0314 Oslo, Norway

Abstract: A short review of data fusion in remote sensing with emphasis on statistically based data fusion methods is given. The introductory part defines data fusion and image registration, and provides a historical background and some general references. Multivariate data are the necessary basis for any data fusion algorithm. The possible levels of data fusion and the frequent occurrence of various types of multivariate data in remote sensing are discussed. Finally, the paper presents a number of statistically based data fusion methods and discusses data fusion in the Bayesian framework.

1. INTRODUCTION

In data fusion, the information of a specific scene acquired by two or more sensors, at the same time or at separate times, is combined to generate an interpretation of the scene not obtainable from a single sensor. Alternatively, data fusion is done to reduce the uncertainty associated with the data from individual sensors. Relaxing this operational definition slightly, the combination of information acquired by the same sensor at different times to improve the interpretation is also considered data fusion [1,6]. Often, data fusion is restricted to sensed data points arranged on rectangular grids so that images are formed. This image part of data fusion will be discussed here, although a few exceptions will be mentioned. When the sensors are image forming, the result of the data fusion process can be a restored image from one or more of the sensors. Alternatively, the interpretation result is a classified or segmented image based on the data from all the sensors used in the data fusion.

There is some confusion in the literature between image registration and data fusion. Image registration is the process of resampling two or more images so that the corresponding pixels in the resampled images represent data from exactly the same points of the imaged object. Thus, image registration is often a necessary preprocessing step for data fusion, but it is not a part of data fusion itself. The author of Ref. [5] gives some central references on image registration methods.

Research on model-based data fusion in remote sensing started around 1982 [22]. The goal of almost all this research has been to improve the classification of the earth's surface using images from satellites and airplanes.

So far, very few papers have been published on data fusion for enhancement or restoration of remote sensing images. The only exception is interferometry, which is a special kind of image reconstruction based on data fusion. During the last five years, interferometry based on synthetic aperture radar (SAR) images has become an important part of remote sensing [19]. The literature on data fusion in computer vision, machine intelligence and medical imaging is substantial [12,18,15,2,30,31], but will not be discussed here. For extensive reviews of data fusion in computer vision and robotics, the books by Abidi and Gonzalez [1] and by Clark and Yuille [6] are recommended.

2. ABSTRACTION LEVELS FOR DATA FUSION

Fusion can be performed at the signal, pixel, feature, or symbol level of representation. In signal-based fusion, signals from different sensors are combined to create a new signal with a better signal-to-noise ratio than the original signals [23]. Pixel-based fusion consists of merging information from different images on a pixel-by-pixel basis to improve the performance of image processing tasks such as segmentation [17]. Feature-based fusion consists of merging features extracted from different signals or images [3]. Symbol-level or decision-level fusion consists of merging information at a higher level of abstraction, e.g. when fusing information from sensors which are very dissimilar or which refer to different regions of the environment [13].

Algorithms for fusion of sensory information are classified into the categories of weakly coupled fusion and strongly coupled fusion [6]. In weakly coupled fusion, the different sensor modules are assumed to be independent, and the fusion consists of combining the outputs from each of the sensor modules. In strongly coupled fusion, the decision of one sensory module is affected by the output of another sensory module; in this case, the modules are no longer independent.
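As a rough illustration of the difference between pixel-level and weakly coupled decision-level fusion, the following minimal NumPy sketch (our own toy construction, not taken from any of the cited works; the images, class means and noise levels are invented) stacks two co-registered sensor images into one augmented measurement per pixel, and separately combines independently computed per-sensor class posteriors with a product rule.

import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical co-registered single-band images (64 x 64 pixels),
# e.g. an optical band and a SAR backscatter channel.
optical = rng.normal(0.5, 0.1, size=(64, 64))
radar = rng.normal(0.2, 0.05, size=(64, 64))

# Pixel-level fusion: stack the co-registered bands so that a single
# classifier sees one augmented measurement vector per pixel.
stacked = np.dstack([optical, radar])                    # shape (64, 64, 2)

def toy_posteriors(image, class_means, sigma):
    # Per-pixel class posteriors for one sensor alone, assuming equal
    # priors and a one-dimensional Gaussian model per class.
    diffs = image[..., None] - np.asarray(class_means)   # (H, W, K)
    likelihood = np.exp(-0.5 * (diffs / sigma) ** 2)
    return likelihood / likelihood.sum(axis=-1, keepdims=True)

# Decision-level, weakly coupled fusion: each sensor module works
# independently and only the per-class outputs are combined.
post_optical = toy_posteriors(optical, [0.4, 0.6], 0.1)
post_radar = toy_posteriors(radar, [0.15, 0.25], 0.05)

# Product rule, appropriate when the sensor errors are assumed independent.
fused = post_optical * post_radar
fused /= fused.sum(axis=-1, keepdims=True)
labels = fused.argmax(axis=-1)

print(stacked.shape, labels.shape, np.bincount(labels.ravel()))

In a strongly coupled scheme, by contrast, the posterior computation for one sensor would itself take the output of the other module as input rather than being combined only at the end.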

3. MULTIVARIATE DATA IN REMOTE SENSING

The variety of different sensors in remote sensing creates a number of possibilities for data fusion to provide better capabilities for restoration, reconstruction and interpretation of images. This is conveniently referred to as the “multi” concept, which includes multispectral, multipolarization, multitemporal and multisensor image analysis. By exploring more than one of these additional dimensions of imaging parameters, a more accurate reproduction or interpretation of the scene can be achieved. Also, imaging using multiple incidence angles can provide additional information [7].

3.1. The multispectral aspect

The measured backscatter values for an area vary with the wavelength band. A land-use category will give a different image signal depending on the frequency used, and by using different frequencies, a spectral signature which characterizes the land-use category can be found. Thus, multispectral imaging of an area provides additional information compared to only a single spectral band and is in many cases extremely helpful in the interpretation process. Multispectral optical sensors like Landsat Thematic Mapper and SPOT MS have demonstrated this effect in a substantial number of applications for more than two decades (see, e.g., Ref. [10]).

The use of imaging radars with more than one wavelength is still at an experimental stage. However, several airborne surveys have collected such radar data, and the improved classification performance obtained with these data has been demonstrated [8].

3.2. The multipolarization aspect

The multipolarization concept is related to microwave image data. The polarization of an electromagnetic wave refers to the orientation of the electric field during propagation. A review of the theory and features of polarization is given in [29]. During transmission, the polarization is often chosen to be parallel to the earth's surface, a situation referred to as horizontal polarization. Some polarization changes can occur due to scattering, and energy can be received as horizontally polarized and/or vertically polarized. The amount of observed polarization rotation can be a useful indicator of surface material [29]. Conventional imaging radars have operated with a single fixed polarization. The next generation of instruments will include imaging radar polarimeters, which are capable of imaging the earth's surface at any and all possible polarizations through antenna synthesis techniques. Several studies have demonstrated the increased performance of multipolarization imagery compared to single-polarization imagery (see, e.g., Ref. [24]).

3.3. The multitemporal aspect

The term multitemporal refers to the repeated imaging of an area over a time period. By analyzing an area through time, it is possible to develop interpretation techniques based on an object's temporal variations, and to discriminate different pattern classes accordingly. Optical satellites like Landsat TM and SPOT have, for several years, provided multitemporal imagery of the earth. ERS-1 and JERS-1 now provide multitemporal SAR images. Multitemporal imagery allows the study of how the backscatter of different areas varies with time, weather conditions, and seasons. It also allows the monitoring of processes that change over time. The principal advantage of multitemporal analysis is the increased amount of information for the study area. The information provided by a single image is, for certain applications, not sufficient to properly distinguish between the desired pattern classes. This limitation can be resolved by examining the pattern of temporal changes in the spectral measurements [25,26]. The use of multitemporal imagery has long been recognized as a valuable tool in the interpretation process. However, the effect of incorporating multitemporal imagery into an analysis scheme depends on the capabilities of computer-assisted image analysis tools. A multitemporal classifier will increase the computational requirements compared to a traditional classifier.

3.4. The multisensor aspect

With an increasing number of operational and experimental satellites, information about a phenomenon can be captured using different types of sensors. As mentioned in the Introduction, different sensors capture different characteristics of an object, and this information can be utilized in the interpretation process. Fusion of images from different sensors requires some additional preprocessing and poses certain difficulties which are not solved by traditional image classifiers. Each sensor has its own characteristics, and the captured image usually contains various artifacts which should be corrected or removed. The images also need to be geometrically corrected and co-registered. If the images do not have the same spatial resolution, they need to be resampled.
Also, the multisensor images are often not acquired at the same date. Thus, the multitemporal nature must also be accounted for in the interpretation.

3.5. Other sources of spatial data

For most regions on the earth, spatial information is available in the form of various kinds of maps, e.g. topography, ground cover, elevation, etc. Frequently, maps contain spatial information not obtainable from a single remotely sensed image. Such maps represent a valuable information resource and can be used to improve the interpretation of satellite images.

4. DATA FUSION METHODS

The main approaches to data fusion in the remote sensing literature are statistical methods [4], Dempster-Shafer theory [17], and neural networks [4]. To a certain extent, the performance of these three approaches has been compared on the same data sets [4,17]. Statistical methods have a general and well-understood theoretical foundation, contain a powerful set of modeling tools, and perform best in many comparative experiments. Only statistical methods are discussed below.

4.1. Statistically based methods

Statistical methods for fusion of remotely sensed data can be grouped into four categories: the augmented vector approach, stratification, probabilistic relaxation, and extended statistical fusion. In the simple augmented vector approach, data from the different sources are concatenated as if they were measurements from a single sensor. The fused data vector is then classified as an ordinary single-source measurement [16]; a minimal sketch of this approach is given at the end of this section. Stratification is often used to incorporate ancillary GIS data in the classification process. The GIS data are stratified into categories, and a spectral model for each of these categories is then used [9]. Richards et al. [22] extended the methods used for spatially contextual classification based on probabilistic relaxation to incorporate ancillary data. The methods based on extended statistical fusion [4,17] were derived by extending the concepts used for classification of multispectral images involving only one data source. Each data source is considered independently, and the classification results are fused using weighted linear combinations.

For most applications where multisource data are involved, it is not likely that all the images are acquired at the same time. When the temporal aspect is involved, the classification methodology must handle changes in the pattern classes between the image acquisitions. The methods mentioned above assume that no changes with respect to the pattern classes have occurred. A few studies have been concerned with classification in a time-varying environment. Swain [27] developed a cascade classifier based on multiple single-sensor observations of an object. Jeon and Landgrebe [14] developed a spatio-temporal classifier utilizing both the temporal and the spatial context of the image data, without treating the multisensor aspect explicitly. Middelkoop and Janssen [20] presented a knowledge-based classifier which used land-cover data from preceding years. Solberg et al. [26] used a Markov random field in a Bayesian approach to design a spatio-temporal classifier with explicit modeling of the multisensor aspect.

Contextual information from neighboring pixels normally improves the classification results compared to a pixel-by-pixel classification. Markov random fields provide a powerful methodological framework for modeling of spatial and temporal context [11,14,26].
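The following minimal sketch illustrates the augmented vector approach. It is our own illustration, not code from any of the cited works: the feature dimensions, class centres and noise levels are invented. Per-pixel features from two hypothetical sources are concatenated and then classified with an ordinary Gaussian maximum-likelihood classifier, exactly as if they came from one sensor.

import numpy as np

rng = np.random.default_rng(1)
n_classes, n_train = 3, 200

# Invented class centres: 4 optical bands and 2 SAR channels per pixel.
optical_means = rng.normal(size=(n_classes, 4))
sar_means = rng.normal(size=(n_classes, 2))

def simulate(means, n):
    # Draw n training pixels around each class centre.
    return np.concatenate([m + 0.3 * rng.standard_normal((n, m.size)) for m in means])

train_optical = simulate(optical_means, n_train)       # (600, 4)
train_sar = simulate(sar_means, n_train)                # (600, 2)
labels = np.repeat(np.arange(n_classes), n_train)

# Augmented vector approach: concatenate the sources and treat the result
# as a measurement from a single (virtual) sensor.
X = np.hstack([train_optical, train_sar])               # (600, 6)

# Ordinary Gaussian maximum-likelihood classifier fitted on the fused vectors.
means = np.array([X[labels == k].mean(axis=0) for k in range(n_classes)])
covs = np.array([np.cov(X[labels == k], rowvar=False) for k in range(n_classes)])
inv_covs = np.linalg.inv(covs)
logdets = np.linalg.slogdet(covs)[1]

def classify(x):
    # Minimise the Mahalanobis distance plus log-determinant term,
    # which is equivalent to maximising the Gaussian log-likelihood.
    diffs = x - means                                    # (K, 6)
    maha = np.einsum('ki,kij,kj->k', diffs, inv_covs, diffs)
    return int(np.argmin(maha + logdets))

predictions = np.array([classify(x) for x in X])
print("training accuracy:", (predictions == labels).mean())

Extended statistical fusion would instead run one such classifier per source and combine the per-source class probabilities, e.g. by a weighted linear combination, rather than concatenating the measurements.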

5. DATA FUSION IN A BAYESIAN FRAMEWORK

The modern foundation of Bayesian image restoration and classification was provided by Geman and Geman in 1984 [11]. Their stochastic relaxation model was based on a Markov random field following a Gibbs distribution, a stochastic sampling scheme, and the concept of a line process to avoid oversmoothing. This Bayesian approach to image restoration and classification has been extended by many authors. Of particular interest for data fusion are the extension to multispectral image segmentation by Wright [32], the parallel integration of visual modules by Poggio et al. [21], and the general frameworks given by Szeliski [28] and by Clark and Yuille [6]. During the last decade it has gradually become clear that Bayesian algorithms are more reliable and widely applicable than ad hoc algorithms. Bayesian models for data fusion can be built to be weakly coupled or strongly coupled. The coupling strength in the model should be determined by the degree of interaction which takes place between the sensors providing the data for fusion.
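To make the Bayesian formulation concrete, the sketch below is a toy construction of our own (invented class means, noise levels and smoothing parameter; it is not the model of any cited paper). Two co-registered sensors are fused by adding their per-pixel log-likelihoods under a conditional-independence assumption, and the posterior of a Potts-type Markov random field label model is then maximised with iterated conditional modes, a deterministic simplification of the stochastic relaxation of Geman and Geman [11].

import numpy as np

rng = np.random.default_rng(2)
H, W, K = 40, 40, 2
beta = 1.5                        # invented strength of the spatial (Potts) prior

# Invented ground truth: left half class 0, right half class 1.
truth = np.zeros((H, W), dtype=int)
truth[:, W // 2:] = 1

# Two co-registered, noisy sensors observing the same scene.
sensor1 = truth + 0.8 * rng.standard_normal((H, W))
sensor2 = 2.0 * truth + 1.2 * rng.standard_normal((H, W))

def log_likelihood(image, class_means, sigma):
    # Gaussian log-likelihood of each class at every pixel, shape (H, W, K).
    d = image[..., None] - np.asarray(class_means)
    return -0.5 * (d / sigma) ** 2

# Weakly coupled fusion: the sensors are assumed conditionally independent
# given the class label, so their log-likelihoods simply add.
ll = log_likelihood(sensor1, [0.0, 1.0], 0.8) + log_likelihood(sensor2, [0.0, 2.0], 1.2)

labels = ll.argmax(axis=-1)       # maximum-likelihood start, no spatial prior
for _ in range(5):                # ICM sweeps towards a local MAP solution
    for i in range(H):
        for j in range(W):
            score = ll[i, j].copy()
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < H and 0 <= nj < W:
                    # Potts prior: reward agreement with the 4-neighbours.
                    score += beta * (np.arange(K) == labels[ni, nj])
            labels[i, j] = score.argmax()

print("agreement with the true labels:", (labels == truth).mean())

A strongly coupled variant would let the likelihood model of one sensor depend on the current output of the other module, instead of simply adding independent log-likelihood terms.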

References

[1] M.A. Abidi, R.C. Gonzalez, Data Fusion in Robotics and Machine Intelligence (Academic Press, New York, 1992).
[2] R. Antony, Principles of Data Fusion Automation (Artech House, Boston, 1995).
[3] N. Ayache, O. Faugeras, Building, Registrating, and Fusing Noisy Visual Maps, Int. J. Robot. Res. 7 (1988) 45.
[4] J.A. Benediktsson, P.H. Swain, O.K. Ersoy, Neural Network Approaches Versus Statistical Methods in Classification of Multisource Remote Sensing Data, IEEE Trans. Geosci. Remote Sensing 28 (1990) 540.
[5] L.G. Brown, A Survey of Image Registration Techniques, ACM Computing Surveys 24 (1992) 325.
[6] J.J. Clark, A.L. Yuille, Data Fusion for Sensory Information Processing Systems (Kluwer Academic Publishers, Dordrecht, 1990).
[7] C. Elachi, Spaceborne Radar Remote Sensing: Applications and Techniques (IEEE Press, New York, 1988).
[8] A. Freeman, J. Villasenor, J.D. Klein, P. Hoogeboom, J. Groot, On the use of multifrequency and polarimetric radar backscatter features for classification of agricultural crops, Int. J. Remote Sensing 15 (1994) 1799.
[9] S.E. Franklin, Ancillary data input to satellite remote sensing of complex terrain phenomena, Computers and Geosciences 15 (1989) 799.
[10] K.S. Fu, D.A. Landgrebe, T.L. Philips, Information processing of remotely sensed agricultural data, Proc. IEEE 57 (1969) 639.
[11] S. Geman, D. Geman, Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images, IEEE Trans. Pattern Anal. Mach. Intell. 6 (1984) 721.
[12] D. Hall, Mathematical Techniques in Multisensor Data Fusion (Artech House, Boston, 1992).
[13] S.J. Henkind, M.C. Harrison, An Analysis of Four Uncertainty Calculi, IEEE Trans. Syst., Man Cybern. 18 (1988) 700.
[14] B. Jeon, D.A. Landgrebe, Classification with Spatio-Temporal Interpixel Class Dependency Contexts, IEEE Trans. Geosci. Remote Sensing 30 (1992) 663.
[15] L.A. Klein, Sensor and Data Fusion Concepts and Applications, SPIE Press Tutorial Texts TT14 (SPIE, Bellingham, WA, 1993).
[16] D.G. Leckie, Synergism of Synthetic Aperture Radar and Visible/Infrared Data for Forest Type Discrimination, Photogrammetric Engineering and Remote Sensing 56 (1990) 1237.
[17] T. Lee, J.A. Richards, P.H. Swain, Probabilistic and Evidential Approaches for Multisource Data Analysis, IEEE Trans. Geosci. Remote Sensing 25 (1987) 283.
[18] J. Llinas, E. Waltz, Multisensor Data Fusion (Artech House, Boston, 1990).
[19] D. Massonnet, K. Feigl, M. Rossi, F. Adragna, Radar interferometric mapping of deformation in the year after the Landers earthquake, Nature 369 (1994) 227.
[20] J. Middelkoop, L.L.F. Janssen, Implementation of temporal relationships in knowledge based classification of satellite images, Photogrammetric Engineering & Remote Sensing 57 (1991) 937.
[21] T. Poggio, E.B. Gamble, J.J. Little, Parallel integration of visual modules, Science 242 (1988) 436.
[22] J.A. Richards, D.A. Landgrebe, P.H. Swain, A means for utilizing ancillary information in multispectral classification, Remote Sensing of Environment 12 (1982) 463.
[23] J.M. Richardson, K.A. Marsh, Fusion of multisensor data, Int. J. Robot. Res. 7 (1988) 78.
[24] E. Rignot, R. Chellappa, Segmentation of polarimetric synthetic aperture radar data, IEEE Trans. Image Processing 1 (1992) 281.
[25] A.H. Schistad Solberg, A.K. Jain, T. Taxt, Multisource classification of remotely sensed data: Fusion of Landsat TM and SAR images, IEEE Trans. Geosci. Remote Sensing 32 (1994) 768.
[26] A.H. Schistad Solberg, T. Taxt, A.K. Jain, A Markov random field model for classification of multisource satellite imagery, IEEE Trans. Geosci. Remote Sensing 34 (1996) 100.
[27] P.H. Swain, Bayesian classification in a time-varying environment, IEEE Trans. Syst., Man Cybern. 8 (1978) 879.
[28] R. Szeliski, Bayesian Modeling of Uncertainty in Low-level Vision (Kluwer Academic Publishers, London, 1989).
[29] F.T. Ulaby, C. Elachi, Radar Polarimetry for Geoscience Applications (Artech House, Boston, 1990).
[30] P.K. Varshney, Distributed Detection and Data Fusion (Springer, Berlin, 1996).
[31] P.K. Varshney, Special Issue on Data Fusion, Proc. IEEE 85 (1997) 1.
[32] W.A. Wright, A Markov random field approach to data fusion and colour segmentation, Image Vision Comput. 7 (1989) 144.