Comparison and integration of spaceborne optical and radar data for mapping in Sudan



Terry Idol, Barry Haack, and Ron Mahabir*
Department of Geography and Geoinformation Science, George Mason University, Fairfax, VA, USA
(Received 16 October 2014; accepted 5 January 2015)

The purpose of this study was to determine how different procedures and data, such as multiple wavelengths of radar imagery and radar texture measures, independently and in combination with optical imagery, influence land-cover/use classification accuracies for a study site in Sudan. Radarsat-2 C-band and phased array L-band synthetic aperture radar (PALSAR) quad-polarized radar were registered with ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) optical data. Spectral signatures were obtained for multiple landscape features, classified using a maximum-likelihood decision rule, and thematic accuracies were obtained using separate validation data. There were surprising differences between the thematic accuracies of the two radar data sets: Radarsat-2 achieved only 51% accuracy and PALSAR 73%. In contrast, the optical ASTER overall accuracy was 81%. Combining the original radar and a variance texture measure increased the Radarsat-2 accuracy to 78% and the PALSAR accuracy to 80%, whereas the two original radar data sets together had an accuracy of 87%. Sensor fusion of optical and radar obtained an accuracy of 93%. Based on these results, the use of multiwavelength quad-polarized radar imagery, combined or integrated with optical imagery, has great potential for improving the accuracy of land-cover/use classifications. In tropical and high-latitude regions of the world, where persistent cloud cover hinders the use of optical satellite systems, land management programmes may find this research promising.

1. Introduction

Over the past several decades, spaceborne remote sensing has proved to be a highly useful technology for the collection of reliable land-surface data sets. This has primarily been accomplished by multispectral sensor systems, such as Landsat. The sensors in optical systems, such as the Landsat Thematic Mapper (TM), passively record the surface reflectance of the sun's energy in the visible and infrared spectral ranges. In contrast, synthetic aperture radar (SAR; radar: radio detection and ranging) is an active sensor that emits and receives wavelengths significantly longer than those detected by optical systems. These longer radar wavelengths can pass through atmospheric conditions, such as clouds, that would otherwise obstruct the wavelengths of traditional spaceborne optical and multispectral systems (Al-Tahir, Saeed, and Mahabir 2014; Henderson et al. 2002). Another important benefit of radar is that it is not dependent on the sun's energy, so it can operate at night. The operational characteristics of radar have enormous data-collecting potential for several geographic areas around the world, especially those often obscured by persistent cloudy conditions, such as tropical and high-latitude regions.

*Corresponding author. Email: rmahabir@gmu.edu

A prior constraint on the use of radar is that most data collected from spaceborne systems have been a single wavelength with a fixed polarization. Therefore, of the total surface-scattering information available, only one component is measured; any additional surface-scattering information contained within the returned radar signal is not captured (Dell'Acqua, Gamba, and Lisini 2003; Töyrä, Pietroniro, and Martz 2001). More recent systems, including the Japanese phased array L-band synthetic aperture radar (PALSAR), the Canadian Radarsat-2, the German TerraSAR-X, and the European Sentinel systems, collect information from multiple polarizations, which could potentially provide an immense amount of land-cover/use information for areas that previously had little to no data available (Sheoran and Haack 2013; Sawaya et al. 2010).

Polarization is important to remote-sensing scientists because each type of polarization provides a different type of information. For example, VV polarization provides good contrast between small-grain crops and broadleaf plants, whereas HH polarization provides greater information about soil conditions (Anys and He 1995). HV and VH provide information about total biomass and are complementary to VV and HH polarization (Campbell and Wynne 2012; McNairn and Brisco 2004). The contrast between vegetated and cleared areas is best seen with HV polarization (Smith 2012).

Texture is a measure of the roughness or smoothness of an image. Texture measures by themselves may not achieve good classification accuracies, but recent studies have shown that combining the original SAR image with texture measures can lead to improved mapping accuracies (Sim et al. 2014; Amarsaikhan et al. 2007; Lloyd et al. 2004; Herold, Haack, and Solomon 2004; Herold, Liu, and Clarke 2003; Dekker 2003; Anderson 1998).

The intent of this study was to compare original radar and radar-derived texture measures for land-cover/use classification with traditional optical or multispectral-based classifications, and to evaluate sensor integration or fusion. One of the interesting and unique components of the analysis was the opportunity to combine and classify radar images from two different portions of the electromagnetic spectrum, each in quad-polarization format.

2. Study data and site

The site selected for this analysis is Wad Madani, Sudan, in Northern Africa. Radar and optical images over the study site were used to create land-cover/use classifications. Radarsat-2 and PALSAR quad-polarization bands and derived texture measures were combined and classified, and accuracy assessments were performed. Optical imagery was collected by the ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) instrument on board the Terra satellite. The ASTER imagery had three spectral bands in the visible near-infrared region of the electromagnetic spectrum (bands 1, 2, and 3N (nadir looking)), each with a spatial resolution of 15 m.

Radarsat-2 was launched on 14 December 2007. It is the first commercial SAR satellite to acquire C-band quad-polarization imagery, and it offers a wide range of spatial resolutions (Canadian Space Agency 2008). A fine-resolution (8 m) quad-polarization image was obtained for this study. The Advanced Land Observing Satellite (ALOS) was launched on 24 January 2006. On board the ALOS platform is the PALSAR sensor, which uses L-band radar and is supported by the Japan Aerospace Exploration Agency (JAXA). The spatial resolution of the PALSAR data was 12.5 m (JAXA 2006).

The Radarsat-2 image for Wad Madani was collected on 6 June 2009 during the rainy season, which typically occurs from April to October. The ASTER image was captured on 4 March 2009, and the PALSAR data were collected on 12 May 2007. These differences in acquisition dates do create some concerns, but since the primary goal is a relative comparison of different processing methods and data combinations, those concerns should be consistent for all classifications, thus allowing valid comparisons. The pixels of the Radarsat-2, PALSAR, and ASTER images were all of different sizes. During the image-to-image registration, the pixels were resampled to 10 m using the nearest-neighbour algorithm, and the radiometric resolution of all data was consistently set at 8 bits for the classification (a minimal sketch of this preprocessing step follows Figure 1).

Figure 1 is a PALSAR composite image over the Wad Madani study area. The image is approximately 22 km × 22 km. The analysis was based on a subset of the overlap of all three data sets. Sudan's major geographic feature is the Nile River and its tributaries, which include the Blue Nile and the White Nile. The city of Wad Madani is nestled in a bend on the west bank of the Blue Nile River, approximately 160 km southeast of Sudan's capital city of Khartoum (Sawaya et al. 2010). In Figure 1, the major landscape features can be seen, including the Blue Nile, agriculture to the west, desert to the northeast, and the city of Wad Madani on the west side of the river.

Figure 1. PALSAR 12 May 2007 image (HH, VV, and HV BGR) of Wad Madani. Centre image coordinates 14.4° N, 33.5° E.
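The paper does not name the software used for the registration and resampling step. As an illustration only, the following Python sketch shows how a single band might be resampled to a common 10 m grid with nearest-neighbour sampling and rescaled to 8 bits; the function names, array sizes, and linear 8-bit rescaling are assumptions, not the authors' workflow.

```python
import numpy as np

def nearest_neighbour_resample(band, src_res, dst_res=10.0):
    """Resample a 2-D band to dst_res metres by picking the nearest source pixel."""
    rows, cols = band.shape
    out_rows = int(round(rows * src_res / dst_res))
    out_cols = int(round(cols * src_res / dst_res))
    # Source pixel index for each output pixel (floor mapping).
    r_idx = np.minimum((np.arange(out_rows) * dst_res / src_res).astype(int), rows - 1)
    c_idx = np.minimum((np.arange(out_cols) * dst_res / src_res).astype(int), cols - 1)
    return band[np.ix_(r_idx, c_idx)]

def to_8bit(band):
    """Linearly rescale a band to 0-255 (8-bit radiometric resolution)."""
    lo, hi = float(band.min()), float(band.max())
    return np.uint8(np.round(255.0 * (band - lo) / max(hi - lo, 1e-9)))

# Hypothetical example: an 8 m Radarsat-2 HH band (~22 km scene) onto the 10 m grid.
hh_8m = np.random.rand(2750, 2750).astype(np.float32)  # stand-in for real data
hh_10m = to_8bit(nearest_neighbour_resample(hh_8m, src_res=8.0))
```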

The following land-cover/use features were classified using the Anderson et al. (1976) classification system: dense urban, fallow agriculture and/or bare ground, sparse natural trees, water, and irrigated agriculture. One of the issues with this study area was the variety of crops at different stages of their growth cycle, which required careful selection of calibration and validation sites. The fallow agricultural fields and the extensive areas of bare ground, particularly to the east of the Blue Nile, could not be distinguished in either the optical or radar images and were thus combined. The urban class is very concentrated in Wad Madani and in smaller but dense villages, and there are limited areas of sparse forest, primarily near the river. The Blue Nile here is extremely narrow, fluctuating between 280 and 460 m in width. This narrow waterbody could cause issues when using larger pixels for classification or with some window-derived values. Moreover, the classes used in this study are generalized and limited in number; however, for a comparison of methods and data, they were considered sufficient. At a future research stage, based on results from this study, more detailed classes might be incorporated.

3. Methodology

The land-cover/use classification consisted of three components. First, the calibration sites for the classification were identified. Second, the classifications were generated. Finally, the thematic accuracy of the classifications was determined using separate validation sites. Calibration and validation areas of interest (AOIs) were collected as AOI polygons. These polygons were determined using knowledge of the area, visual inspection of the various remote-sensing data, and finer spatial resolution imagery from Google Earth. The calibration AOIs identified the spectral characteristics of each of the classification categories. The validation AOIs were employed to determine the thematic accuracy of the land-cover/use classifications. For both calibration and validation, two to four AOIs were selected for each class.

There is extensive remote-sensing literature on the various issues relating to accuracy assessment, including sample type (pixel or polygon), sample size, sample selection, and statistical evaluation (Foody 2002; Smits, Dellepiane, and Schowengerdt 1999). Generally, pixels selected randomly by strata or class are preferred. The primary research focus in this study was on the relative thematic accuracies, of individual classes and overall, for various sensor types, derived values, and combinations of data, not on the accuracy of the map products. This study employed validation AOIs that may not provide the best accuracies but, in the opinion of the authors, are appropriate for relative evaluations of different data types and combinations of data, the focus of this study.

The maximum-likelihood decision rule was applied for the classifications (a minimal sketch of this decision rule appears at the end of this section). As with the literature on different approaches to validation, there are different methods of signature extraction, signature evaluation, and decision rules for classification. Maximum likelihood is very standard and, for a comparison of data and data integrations, provides useful comparisons. Moreover, because maximum likelihood assumes that classes are multivariate normal in distribution (Richards and Jia 2005), special care was taken to ensure that pure end members of classes were selected during the extraction of calibration and validation sites. Both sets of AOIs were kept separate during the classification process and were therefore exclusive in use throughout. Other decision rules, such as support vector machines, may provide higher accuracies but are not likely to change the relative results. The following section presents the results of the various classifications, beginning with the independent ASTER and radar images and then progressing to value-added texture evaluations and data combinations.
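The Gaussian maximum-likelihood rule assigns each pixel to the class whose multivariate normal model gives the highest log-likelihood. A minimal numpy sketch is given below for illustration; it assumes class signatures (a mean vector and covariance matrix per class) have already been extracted from the calibration AOIs, and all variable names are hypothetical.

```python
import numpy as np

def maximum_likelihood_classify(pixels, means, covs):
    """Assign each pixel vector to the class with the highest Gaussian log-likelihood.

    pixels: (n_pixels, n_bands) array
    means:  list of (n_bands,) class mean vectors
    covs:   list of (n_bands, n_bands) class covariance matrices
    """
    scores = []
    for mu, cov in zip(means, covs):
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        d = pixels - mu
        # Log-likelihood up to a constant: -0.5 * (log|C| + d^T C^-1 d)
        maha = np.einsum('ij,jk,ik->i', d, inv, d)
        scores.append(-0.5 * (logdet + maha))
    return np.argmax(np.stack(scores, axis=1), axis=1)

# Hypothetical two-class, two-band example (e.g. bare soil vs. sparse trees).
means = [np.array([60.0, 80.0]), np.array([160.0, 190.0])]
covs = [np.eye(2) * 400.0, np.eye(2) * 2500.0]
pixels = np.array([[70.0, 85.0], [150.0, 200.0]])
print(maximum_likelihood_classify(pixels, means, covs))  # -> [0 1]
```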

4. Results

4.1. ASTER classification

The analysis of optical imagery to perform land-cover/use classification is not the goal of this research. However, the classification results obtained using the optical imagery provide a baseline against which the radar results can be compared. Table 1 lists the results of the ASTER imagery analysis. The Wad Madani optical land-cover/use classification results are good for most classes, ranging from 55% to 99% in producer's accuracy and from 65% to 100% in user's accuracy. The overall accuracy for the ASTER imagery is 81%.

The greatest classification confusion for the Wad Madani ASTER imagery was with sparse trees. Significant geographic areas comprising sparse trees were misclassified as both agriculture and urban; the sparse trees class had errors of omission and commission (producer's and user's accuracies) with both of these other classes. The confusion with agriculture is understandable, as both generally contain green vegetation. The confusion with urban may be caused by some trees in the urban landscape, and also by the sparse-tree areas containing bare soil similar in spectral response to rooftops. The ASTER image was taken during the dry season, so the plants were not as well developed as they would be during the rainy season; this condition might also explain the confusion between sparse trees and agriculture. There were some user misclassifications of the urban class with both bare soil and sparse trees. This could be anticipated, as the urban area contains some open areas and some plants. In addition, the urban structures often use indigenous materials, such as clay and bricks, which are spectrally similar to bare soil, part of the sparse forest landscape. Nevertheless, the classification results for Wad Madani from the ASTER imagery are very reasonable.

Table 1. Error matrix for ASTER, Wad Madani (rows: classified; columns: reference).

                          Water    Bare soil  Sparse trees  Agriculture  Urban    User's accuracy (%)
Water                     16,531   0          0             0            0        100.0
Bare soil                 0        10,883     87            0            1623     86.4
Sparse trees              101      0          9861          3926         1379     64.6
Agriculture               2        0          4698          14,575       0        75.6
Urban                     0        1247       3368          296          17,048   77.6
Producer's accuracy (%)   99.4     89.7       54.7          77.5         85.0     80.5
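The producer's, user's, and overall accuracies reported in Tables 1-8 follow the standard error-matrix definitions: correct pixels divided by column, row, and grand totals, respectively. As an illustration, the short numpy sketch below computes them from a confusion matrix whose rows are classified labels and columns are reference labels, using the Table 1 values; the printed results reproduce the accuracies in that table.

```python
import numpy as np

# Error matrix from Table 1: rows = classified, columns = reference.
# Class order: water, bare soil, sparse trees, agriculture, urban.
m = np.array([
    [16531,     0,    0,     0,     0],
    [    0, 10883,   87,     0,  1623],
    [  101,     0, 9861,  3926,  1379],
    [    2,     0, 4698, 14575,     0],
    [    0,  1247, 3368,   296, 17048],
], dtype=float)

users_accuracy = 100 * np.diag(m) / m.sum(axis=1)      # correct / row total
producers_accuracy = 100 * np.diag(m) / m.sum(axis=0)  # correct / column total
overall_accuracy = 100 * np.diag(m).sum() / m.sum()

print(np.round(users_accuracy, 1))      # [100.   86.4  64.6  75.6  77.6]
print(np.round(producers_accuracy, 1))  # [ 99.4  89.7  54.7  77.5  85. ]
print(round(overall_accuracy, 1))       # 80.5
```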

4.2. Radar analysis

One of the ongoing issues with radar is the necessity or appropriateness of removing, or at least reducing, speckle (Maghsoudi, Collins, and Leckie 2012; Bouchemakh et al. 2008; Lu et al. 1996). The amount of speckle varies between radar data sets as a function of the level of vendor preprocessing and of whether the radar acquisition was single- or multilook. Single-look radar will typically have more speckle than multilook radar, since multilook processing divides the SAR returns in the pass into different images that are then averaged into a single image. In this study, a comparison was made between the spectral signatures and thematic classifications of original radar and radar despeckled at both 3 × 3 and 5 × 5 windows using the Lee-Sigma algorithm (a simplified sketch of this filter follows Table 2).

The statistical values of the spectral signatures of the different land-cover/use classes for the despeckled PALSAR image are listed in Table 2. Only values for the HH and HV polarizations are included, since the results for HH and VV, and for HV and VH, were very similar. These statistical values can indicate how well the different classes are statistically separated, which in turn provides insight into how accurate the classifications might be. As would be anticipated, the larger window sizes have lower standard deviations. This was also observed for both HH and HV polarizations of the Radarsat-2 despeckled image (results not shown) when moving from a 3 × 3 to a 5 × 5 window.

The high mean digital number (DN) value for the water class in the PALSAR image is unusual, and close examination of the imagery does not explain it. Sparse natural trees display a low, although mixed (as indicated by the high standard deviation), radar return. The forest areas provide higher mean DN values than bare soil, which suggests that there will be little confusion between those two classes. It is interesting that water and trees are very similar in HH but different in HV. No such unexpected results were found for the Radarsat-2 values.

Table 2. Spectral signatures of the Wad Madani despeckled PALSAR image, example AOI statistics in digital numbers (DNs). X is the class mean and σ the standard deviation.

                                 3 × 3 Window        5 × 5 Window
Land-cover/use class              HH       HV         HH       HV
Water                 X          139.8     93.7      139.7     93.3
                      σ           23.9     31.0       20.5     29.2
                      Minimum     77.0     40.0       90.0     42.0
                      Maximum    255.0    180.0      238.0    154.0
Bare soil             X           67.2     79.6       67.0     79.5
                      σ           23.8     18.4       22.0     16.3
                      Minimum     19.0     31.0       23.0     40.0
                      Maximum    210.0    165.0      189.0    150.0
Sparse natural trees  X          160.5    192.0      160.2    197.8
                      σ           46.3     56.3       43.8     54.4
                      Minimum     59.0     72.0       63.0     75.0
                      Maximum    255.0    255.0      255.0    255.0
Agriculture           X          157.2    102.0      157.0    101.9
                      σ           33.0     22.3       30.0     20.1
                      Minimum     57.0     26.0       62.0     30.0
                      Maximum    255.0    193.0      245.0    188.0
Urban                 X          241.6    254.7      241.5    254.6
                      σ           22.4      2.8       19.8      2.2
                      Minimum     93.0    203.0       93.0    219.0
                      Maximum    255.0    255.0      255.0    255.0
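The study uses the Lee-Sigma despeckling filter as implemented in its image-processing software. The sketch below is a deliberately simplified variant for illustration only: each pixel is replaced by the mean of window pixels lying within an assumed two-sigma range of the centre value under a multiplicative speckle model, with a fixed coefficient of variation instead of the look-based sigma estimate a full implementation would use.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def lee_sigma_despeckle(img, window=5, cv=0.25):
    """Simplified Lee-Sigma despeckle: mean of window pixels within +/- 2*cv
    of the centre value (cv is an assumed speckle coefficient of variation)."""
    pad = window // 2
    padded = np.pad(img.astype(float), pad, mode='reflect')
    win = sliding_window_view(padded, (window, window))   # (rows, cols, w, w)
    centre = img.astype(float)[..., None, None]
    lo, hi = centre * (1 - 2 * cv), centre * (1 + 2 * cv)
    mask = (win >= lo) & (win <= hi)
    counts = mask.sum(axis=(-2, -1))
    sums = np.where(mask, win, 0.0).sum(axis=(-2, -1))
    # If no neighbours fall inside the sigma range, keep the original value.
    return np.where(counts > 0, sums / np.maximum(counts, 1), img)
```

With window=5 this corresponds to the 5 × 5 despeckling window used in the study; window=3 gives the 3 × 3 case.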

Analysis of the PALSAR imagery's spectral signature values suggests that it may be difficult to differentiate between the sparse natural trees and agriculture land covers: both classes have very similar values in the HH band, although the HV band shows much greater differences. The urban areas have a high mean spectral signature value in both bands of the PALSAR image, which suggests that the urban class will have little likelihood of confusion with other classes.

Table 3 shows the error matrices for the Wad Madani Radarsat-2 and PALSAR imagery despeckled with a 5 × 5 window. As would be expected, particularly given the use of polygons for accuracy assessment and because despeckling is essentially a smoothing filter, the larger window size produced higher overall thematic accuracies. These differences, however, were relatively small. The original Radarsat-2 accuracy was 51%, which increased to 58% with the 5 × 5 filter, whereas the PALSAR accuracy increased from 73% to 79%. Radar despeckled with a 5 × 5 window was therefore used in this study.

As shown in Table 3, there was a great deal of confusion between the water and bare soil classes in the Radarsat-2 image. The producer's accuracy for water was very good at 93%, but the user's accuracy was low at 60%. Given the small width of the Blue Nile, the larger window size may have influenced these results. This confusion was also evident in the extremely poor producer's accuracy for bare soil, 20%, likely because water and bare soil both act as specular reflectors with similarly low backscatter. In contrast, the PALSAR bare soil producer's accuracy was very high at 98%. The sparse trees producer's accuracy was low for both Radarsat-2 and PALSAR. In the Radarsat-2 image, sparse trees were confused with bare soil, agriculture, and urban; in the PALSAR imagery, sparse trees were chiefly confused with the urban class.

Table 3. Error matrices for the Wad Madani classification using a despeckled 5 × 5 window (rows: classified; columns: reference).

Radarsat-2:
                          Water    Bare soil  Sparse trees  Agriculture  Urban    User's accuracy (%)
Water                     15,486   9558       203           572          125      59.7
Bare soil                 1145     2409       1266          2621         632      29.8
Sparse trees              1        18         9434          3100         4179     56.4
Agriculture               2        145        5390          11,777       4636     53.7
Urban                     0        0          1721          727          10,478   81.1
Producer's accuracy (%)   93.1     19.9       52.4          62.7         52.3     57.9

PALSAR:
                          Water    Bare soil  Sparse trees  Agriculture  Urban    User's accuracy (%)
Water                     15,036   38         276           3627         0        82.1
Bare soil                 259      11,910     63            5150         0        84.4
Sparse trees              2        16         12,569        428          1594     84.8
Agriculture               1322     166        1174          9592         0        73.2
Urban                     0        0          3932          0            18,456   82.4
Producer's accuracy (%)   90.5     98.2       69.8          51.0         92.0     78.9

As was expected, the Radarsat-2 agriculture producer's accuracy was low at 63%, with a great deal of confusion between agriculture, bare soil, and sparse trees. The producer's accuracy for the agriculture class in the PALSAR imagery was also low, at 51%. Most of the confusion in the PALSAR agriculture classification was with the water and bare soil classes. The spectral signature mean values for bare soil and agriculture were very different. Confusion between agriculture and sparse trees was expected but proved minimal. The spectral signature values for agriculture and water were similar, so the confusion between those two classes was not unexpected. Considering that the urban signatures in the PALSAR image were well separated from all the other classes, the high 92% producer's accuracy was expected. However, it was anticipated that the Radarsat-2 image would yield better urban producer's accuracy than was actually achieved; this expectation was based on the urban spectral signatures in the HH and HV bands being well separated from those of the other classes. The Radarsat-2 urban producer's accuracy was only 52%.

4.3. Texture analysis

Remote-sensing data are a compilation of both the brightness value of each pixel (spectral) and the arrangement of the pixels (spatial). This spatial information can be extracted from the pixels as textural information (Cervone and Haack 2012; Champion et al. 2008; Chen, Stow, and Gong 2004; Kurosu et al. 1999). Traditional digital image classification methodologies are based purely on the spectral characteristics of the data, thus ignoring any spatial information in the data collected (Maillard 2003). Areas such as residential or urban are more easily distinguished by their spatial characteristics (Nyoungui, Tonye, and Akono 2002; Solberg and Anil 1997). Ignoring the full complement of data collected creates challenges for the accurate classification of land-cover/use classes. The analysis of texture was therefore an important component of this study.

The use of radar texture measures in land-cover/use classification has produced varied results. In some literature, texture layers have yielded better classification results than the original radar images (Haack, Solomon, and Herold 2002; Kiema 2002); in other literature, the classification results from a texture measure layer were not as good as those from the original radar image. Often, combining original radar and derived texture assists in improving classifications, at least for some classes (Herold, Haack, and Solomon 2004).

Two types of analysis were performed using texture layers. First, variance texture measures were extracted at four different window sizes for each band of the original, not despeckled, Radarsat-2 and PALSAR data (see the variance-texture sketch below). The use of variance texture was guided by previous work, which determined this measure to be suitable for mapping land cover/use from radar imagery (Herold, Haack, and Solomon 2004; Haack and Bechdol 2000). Also, as suggested by Ulaby et al. (1986), several texture measures extracted from the grey-level co-occurrence matrix are correlated. This is exemplified in the work of Marceau et al. (1990), who found that only 7% and 3% of variance was explained by texture measure and quantization level, respectively, with the remaining 90% explained by window size. Because this study focuses on the parameter of most importance, scale, as suggested by Marceau et al. (1990), the extraction of all texture features was limited to the variance measure. The window sizes evaluated were 5 × 5, 9 × 9, 13 × 13, and 17 × 17. Larger windows have given higher accuracies in earlier studies (Tadesse and Falconer 2014), but research has also shown that window sizes larger than 13 × 13 often give diminishing returns (Villiger 2008). Second, each of the despeckled radar images (5 × 5 window) was combined (layer stacked) with the best texture measure generated from that specific image.
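Variance texture for a w × w window is simply the local second moment about the local mean, E[x²] − (E[x])². As an illustration only, the sketch below computes it with scipy's uniform (moving-average) filter for the four window sizes evaluated in the study; the function name and the choice of scipy.ndimage are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def variance_texture(band, window):
    """Local variance texture over a window x window neighbourhood."""
    b = band.astype(float)
    mean = uniform_filter(b, size=window)
    mean_sq = uniform_filter(b * b, size=window)
    return np.maximum(mean_sq - mean * mean, 0.0)  # clip tiny negatives from rounding

# Hypothetical usage: derive texture layers at the four window sizes studied.
band = np.random.rand(512, 512).astype(np.float32)  # stand-in for an original SAR band
textures = {w: variance_texture(band, w) for w in (5, 9, 13, 17)}
```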

The best texture measure was determined as the window size with the highest overall classification accuracy. The combined image was classified and the results were analysed.

Error matrices for the best texture measures created from the original Wad Madani Radarsat-2 and PALSAR images are shown in Tables 4 and 5. Each of the four texture windows for the Radarsat-2 image produced a land-cover/use classification accuracy superior to the classification of the original image; that is, the overall classification result for each derived texture measure exceeded 58%. The overall accuracies increased with window size, from 60% for the 5 × 5 window to 78% for the 17 × 17 window; the latter results are detailed in Table 4.

Conversely, none of the texture measures generated from the PALSAR image produced land-cover/use classification results as good as the classification of the original despeckled image, with all overall accuracies at or below 55% (Table 5). The best texture measure overall (55%) used the largest window, and accuracy decreased to 41% for the 5 × 5 window. These results by window size were not surprising, as texture acts as a spatial filter, and when using AOIs for validation, filtering would generally increase thematic accuracies.

Table 4. Wad Madani error matrix of the Radarsat-2 variance texture measure, 17 × 17 (rows: classified; columns: reference).

                          Water    Bare soil  Sparse trees  Agriculture  Urban    User's accuracy (%)
Water                     15,481   7595       0             0            0        67.1
Bare soil                 1072     4493       29            2029         0        58.9
Sparse trees              31       0          17,075        5749         1497     70.1
Agriculture               50       42         523           10,989       32       94.4
Urban                     0        0          387           30           18,521   97.8
Producer's accuracy (%)   93.1     37.0       94.8          58.5         92.4     77.7

Table 5. Wad Madani error matrix of the PALSAR variance texture measure, 17 × 17 (rows: classified; columns: reference).

                          Water    Bare soil  Sparse trees  Agriculture  Urban    User's accuracy (%)
Water                     5883     131        2469          3086         1731     43.1
Bare soil                 2602     11,239     999           9447         975      54.9
Sparse trees              641      42         8670          481          785      71.4
Agriculture               6737     228        3675          5254         994      27.8
Urban                     756      490        2201          529          15,565   78.9
Producer's accuracy (%)   35.4     92.7       48.1          28.0         77.6     55.4

Note, however, that the original radar classifications used despeckled data, whereas the texture analyses were not despeckled. Classified images of Wad Madani using the best texture measure are shown in Figure 2.

Closer examination of the Radarsat-2 data shows that as the texture measure window size increased, there was very good improvement in the producer's accuracies of the urban and sparse tree classes. However, the water class decreased very slightly in producer's accuracy as the window size increased, most likely because the larger windows include some non-water areas. The agriculture class showed a small increase in producer's accuracy with increasing window size. Compared to the results of the Radarsat-2 texture measures, the PALSAR despeckled 5 × 5 image had an overall classification result of 79%, whereas the 17 × 17 texture measure generated from the original PALSAR image achieved 55%, a decrease of 24%. These results are surprising, but the high accuracy of the original PALSAR data allows few opportunities for improvement.

It is interesting to note that where Radarsat-2 had difficulty classifying bare soil properly, with a producer's accuracy of 37%, it did well classifying water, with an accuracy of 93%. The PALSAR producer's accuracies showed the opposite trend: PALSAR did very well classifying bare soil, with its highest producer's accuracy of 93%, and had a corresponding water accuracy of only 35%. This may well correspond to the way Radarsat-2's C-band wavelength interacts with water and bare soil compared with PALSAR's L-band wavelength.

In most cases, the larger texture window sizes achieved better results than the smaller window sizes. Additionally, it was interesting to note that the best classification accuracy improvements were seen in the urban class. In a few cases, increases in areas classified as urban caused a decrease in overall classification results. The Radarsat-2 texture measures gave better classification results than the despeckled original images. The PALSAR texture measures provided either very slight improvements or much worse classification results from a land-cover/use class perspective. It appears that the L-band does not perform as well as the C-band when classifying land cover/use using a texture measure.

Figure 2. Classifications over Wad Madani. (a) Classification using the Radarsat-2 texture measure, 17 × 17. (b) Classification using the PALSAR texture measure, 17 × 17 (water, blue; agriculture, light green; bare soil, grey; sparse trees, dark green; urban, red). Approximate scene width 15 km.

Both 5 × 5 despeckled original radar images were combined (layer stacked) with the best texture measure generated for that specific image. For example, the Radarsat-2 5 × 5 despeckled image was combined with the best texture measure created from that image. The combined image was then classified, and the results were analysed and compared with the classification results of the original despeckled image alone. Table 6 provides the results of these combinations for both radar sensors.

The land-cover/use classifications of both Wad Madani original despeckled images combined with the best texture measure images showed substantial increases in overall accuracy compared to the despeckled-only radar classifications. Combining the Radarsat-2 despeckled image with its best texture measure, the 17 × 17 window, resulted in an overall accuracy of 78%, an increase of 20% over the despeckled-only classification of 58%. The overall classification accuracy of the PALSAR original despeckled image combined with its best texture measure, also the 17 × 17 window, improved slightly compared to the original despeckled image, from 79% to 80%.

Analysis of individual classes showed that the water classification accuracies were already high, and adding texture did little to raise them. The addition of texture measures to the original imagery greatly enhanced the producer's and user's accuracies of the urban and sparse tree classes; for example, for the Radarsat-2 image, the urban producer's accuracy improved by 40%. The improvements in the PALSAR image were lower: the urban producer's accuracy was unchanged compared to the original image, and the sparse trees user's accuracy increased by only 8%.

Table 6. Error matrices of the Wad Madani original despeckled imagery combined with the best texture measure (rows: classified; columns: reference).

Radarsat-2 original and texture 17 × 17:
                          Water    Bare soil  Sparse trees  Agriculture  Urban    User's accuracy (%)
Water                     15,296   7465       0             0            0        67.2
Bare soil                 1291     4639       37            1599         0        61.3
Sparse trees              26       5          16,977        5701         1480     70.2
Agriculture               21       21         569           11,476       39       94.6
Urban                     0        0          431           21           18,531   97.6
Producer's accuracy (%)   92.0     38.2       94.2          61.1         92.4     78.2

PALSAR original and texture 17 × 17:
                          Water    Bare soil  Sparse trees  Agriculture  Urban    User's accuracy (%)
Water                     14,490   117        23            2241         0        85.9
Bare soil                 138      11,743     0             5994         0        65.7
Sparse trees              128      49         14,010        533          1547     86.1
Agriculture               1863     221        1307          10,029       26       74.6
Urban                     0        0          2674          0            18,450   87.3
Producer's accuracy (%)   87.2     96.8       77.8          53.4         92.1     80.3

The addition of texture helped differentiate the bare soil class from water and sparse trees in the Radarsat-2 imagery. In the original Radarsat-2 image alone, the producer's accuracy for bare soil was 19%; when texture was added, this increased to 38%, which is still low.

4.4. Combining multiple-wavelength radar images

This section explores the relatively new opportunity of combining and classifying radar images from two different portions of the electromagnetic spectrum. The PALSAR sensor collects data in the L-band, whereas Radarsat-2 acquires data in the C-band. As the two satellites collect data at different wavelengths, it was anticipated that combining the two images would increase the information available and thus improve the classification results. All of the images used in this analysis were despeckled with a 5 × 5 window (the layer stacking itself is sketched after Table 7).

Combining radar images from two different portions of the electromagnetic spectrum provided improvements compared to a single image (Table 7). The best accuracy achieved with a single Wad Madani radar image was 78%, using the PALSAR image. When the Wad Madani Radarsat-2 image was layer stacked with the PALSAR image and classified, the overall accuracy increased to 87%, an improvement of 9%. Most confusion between individual classes in the combined Radarsat-2 and PALSAR classification occurred between agriculture and bare soil, which was not expected. The producer's accuracy for the sparse trees class did improve slightly in the Radarsat-2 and PALSAR combination. This improvement would be expected, as more foliage during the rainy season can improve the texture and radar returns, helping differentiate sparse trees from the other classes. Overall, however, every class had very good results with this classification.

Table 7. Error matrix of the Wad Madani original Radarsat-2 and PALSAR despeckled combined imagery (rows: classified; columns: reference).

                          Water    Bare soil  Sparse trees  Agriculture  Urban    User's accuracy (%)
Water                     15,993   92         8             382          0        97.1
Bare soil                 171      11,974     1             2914         0        79.5
Sparse trees              20       13         13,810        1382         1493     82.6
Agriculture               435      51         638           14,119       0        92.6
Urban                     0        0          3557          0            18,557   83.9
Producer's accuracy (%)   96.2     98.7       76.7          75.1         92.6     87.0
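Layer stacking here simply concatenates the co-registered bands into one multiband array that the classifier treats as a single feature space. A minimal numpy sketch, with hypothetical band arrays standing in for the real data, of how the C- and L-band quad-polarized bands might be stacked before classification:

```python
import numpy as np

# Hypothetical co-registered 10 m bands (rows x cols), all 8-bit after preprocessing.
rows, cols = 2200, 2200
radarsat2 = {p: np.zeros((rows, cols), np.uint8) for p in ('HH', 'HV', 'VH', 'VV')}
palsar = {p: np.zeros((rows, cols), np.uint8) for p in ('HH', 'HV', 'VH', 'VV')}

# Stack C-band and L-band quad-pol bands into one (rows, cols, 8) feature image.
stack = np.dstack([radarsat2[p] for p in ('HH', 'HV', 'VH', 'VV')] +
                  [palsar[p] for p in ('HH', 'HV', 'VH', 'VV')])

# Flatten to (n_pixels, n_bands) for a per-pixel classifier such as the
# maximum-likelihood rule sketched in Section 3.
pixels = stack.reshape(-1, stack.shape[-1]).astype(float)
```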

4.5. Combining optical and radar images

This final analysis examined whether combining the radar imagery, and the texture measures generated from radar, with the ASTER multispectral image could improve overall classification results. All three ASTER bands were layer stacked and used in the analysis. The use of multiple radar wavelengths in combination with ASTER imagery for land-cover/use classification is relatively unique, as prior work has used only one radar wavelength (Amarsaikhan et al. 2012; Santos and Messina 2008).

For the Wad Madani area, the best classification results among the original despeckled images were achieved using the PALSAR scene; the PALSAR image was therefore combined with the ASTER optical image for classification. Next, the Wad Madani Radarsat-2 texture measure with a window size of 17 × 17, which gave the best overall accuracy among the single-texture measures, was combined with the ASTER image, yielding another error matrix. Finally, the best texture measure (the Radarsat-2 17 × 17 texture) and the best original despeckled radar (the PALSAR image) were layer stacked with the ASTER image.

Table 8 provides the error matrix for the best of these layer combinations, the ASTER and PALSAR combination, at 93%. The other sensor-fusion results had similar overall accuracies and minor class-by-class variations: the ASTER and Radarsat texture combination had an overall accuracy of 92%, as did the ASTER, PALSAR, and Radarsat texture combination. When the PALSAR image was added to the ASTER optical image, the overall accuracy increased to 93%, relative to 80% for the ASTER electro-optical image alone. The largest increase in producer's accuracy occurred with the sparse trees class, which performed very poorly in the ASTER-only classification, with a producer's accuracy of 55%. When the ASTER, PALSAR, and Radarsat-2 texture measure images were combined, the sparse trees producer's accuracy rose to a very high 98%, an increase of 43%. In general, when the radar imagery was added to the ASTER image, the overall accuracy improved; in the case of Wad Madani, the overall accuracy increased substantially, by 11-13%.

Table 8. Error matrix for the best Wad Madani optical, SAR, and texture combination (ASTER and PALSAR) (rows: classified; columns: reference).

                          Water    Bare soil  Sparse trees  Agriculture  Urban    User's accuracy (%)
Water                     15,661   0          0             0            0        100.0
Bare soil                 0        12,126     74            37           0        99.1
Sparse trees              330      4          16,131        2128         636      83.9
Agriculture               628      0          971           16,632       0        91.2
Urban                     0        0          838           0            19,414   95.9
Producer's accuracy (%)   94.2     100.0      89.5          88.5         96.8     93.4

5. Discussion and conclusions

Use of radar in land-cover/use applications continues to increase, driven in part by widespread online data availability. With the increase in the quantity of available radar imagery, it is important to understand both the strengths and the weaknesses of using radar for land-cover/use classifications. Table 9 lists the overall thematic accuracies for the various sensors, derived texture values, and data combinations in this study. As noted previously, there are some individual class variations in accuracies that are also important, and overall accuracy should not be the sole evaluation measure.

Table 9. Summary by data type of overall accuracies.

Data combination              Overall accuracy (%)
ASTER                         80.5
Radarsat (despeckled)         57.9
PALSAR (despeckled)           78.9
Radarsat variance texture     77.7
PALSAR variance texture       55.4
Radarsat and texture          78.2
PALSAR and texture            80.3
Radarsat and PALSAR           87.0
ASTER and PALSAR              93.4

The results for the classifications using the ASTER imagery alone were excellent (81%), thereby reinforcing the use of optical imagery as a valued resource for land-cover/use classification. If optical imagery could be collected regardless of weather conditions and at either day or night, an argument could be made that radar data would have much more limited use. However, in many parts of the world, such as the tropics and high latitudes, it is difficult to collect optical imagery. Therefore, as more radar imagery becomes available, it will be used more frequently to examine those parts of the world where optical imagery is unavailable. There are of course other potential applications of radar beyond land cover/use, including biomass estimation (Kurvonen, Pulliainen, and Hallikainen 1999; Luckman et al. 1997) and deformation measurement via interferometric approaches (Rosen et al. 1996; Massonnet, Briole, and Arnaud 1995).

Even when optical imagery is available, radar imagery can help improve classification results. Such efforts have not been restricted to land-cover/use applications; they include geology (Ricchetti 2001; Yesou et al. 1993), floods (Wang, Koopmans, and Pohl 1995), and the identification of coal fire-affected areas (Prakash et al. 2001). In general, as reported in this study, when the radar imagery was added to the ASTER optical image, the overall accuracy improved, and for the Wad Madani area the overall accuracy increased substantially (93%). Similar accuracy increases, compared to individual optical or radar data, were found by Laurin et al. (2013) investigating land cover in West Africa. Using images collected from the Landsat TM and the Advanced Visible and Near-Infrared Radiometer type-two optical sensors, Laurin et al. (2013) reported accuracies of 95.6% and 97.5% for the two sensors, respectively. Likewise, Forkuor et al. (2014) reported radar contributions in the range of 10-15% when radar was integrated with optical imagery for crop mapping in northwestern Benin, West Africa. These results are not surprising given the complementary nature of the two sets of data: optical imagery provides the chemical, physical, and biological characteristics of target objects, whereas radar data are associated with shape, texture, structure, and dielectric properties (Pereira et al. 2013). However, at least in the aforementioned studies, dual-polarization radar was investigated, compared with the quad-polarized data used in the present study. Nonetheless, both the present study and others show the increased value added by the combined use of optical and radar data for land-cover/use applications.

In the radar analyses using a texture measure, in most cases the larger window sizes achieved better results than the smaller window sizes; the 17 × 17 window size provided the best results. Additionally, it was interesting to note that the best classification accuracy improvements for the original radar imagery were seen in the urban class.

The Radarsat-2 texture measures resulted in better classifications than the despeckled original images. It appears that the PALSAR L-band does not perform as well as the Radarsat-2 C-band when generating classifications using a variance texture measure. Conversely, Lu et al. (2011) found the opposite relationship between the two sensors; however, those results would have been influenced by the different fusion method used and the subset of land-cover/use classes examined in that particular study. These differences highlight the need for increased replication of scientific approaches over different geographic areas to allow more objective comparisons. Moreover, as reported in the present study, the classification results of the combined original radar and texture images showed substantial increases compared to the overall accuracies of the despeckled-only radar classifications.

This study also explored the relatively new opportunity of combining and classifying radar images from two different portions of the electromagnetic spectrum. Previous studies, such as Liao, Huang, and Guo (2004), have examined the fusion of multiple C-band images with relatively good results. With the combination of different wavelengths, the expectation is that higher land-cover/use classification accuracies will result, and this continues to be an area of increasing interest to the remote-sensing community. In line with other similar studies (Evans et al. 2010; Amarsaikhan et al. 2007), the combination of radar images consistently provided improvements over the use of a single radar image. These findings therefore support multiwavelength radar imagery as having considerable potential for land-cover/use classification (87% for the two despeckled radar wavelengths combined).

The final portion of this research determined whether the combination of radar imagery, and texture measures generated from radar imagery, with the ASTER images could improve overall classification results. When the radar imagery was added to the ASTER image, the overall accuracy generally improved; in the case of the Wad Madani site, the overall accuracy increased considerably, by 11-13%. Based on the results of this research, radar land-cover/use classification accuracy can in some situations almost equal or perhaps surpass that of optical imagery. This study shows great promise that areas of the world that have remained largely unseen due to cloud cover can now be mapped.

There will be several new areas of research, given the new radar sensors now being deployed. The Sentinel satellite missions from the European Space Agency, starting with the launch of Sentinel-1 on 3 April 2014, are a good example of the trend towards the increased provision of free, global-coverage radar imagery. Sentinel-1 is equipped with single polarization (VV or HH) for the Wave mode and selectable dual polarization (VV + VH or HH + HV) for all other modes. Furthermore, with spatial resolutions of 5 × 5 m, 5 × 20 m, 5 × 20 m, and 25 × 100 m for the strip map, interferometric wide, wave, and extra-wide swath viewing modes, respectively, it is expected that this data source will be widely used for land-cover/use mapping. Overall, the results of this study support the increased use of, and greater research into, radar for land-cover/use mapping.

In the future, several other areas are to be investigated, extending the present research. Of particular interest is the investigation of multitemporal radar. Several studies, including those of Chust, Ducrot, and Pretus (2004), Shao et al. (2001), Le Hegarat-Mascle et al. (2000), and Pierce et al. (1998), have examined this area, showing substantial benefits for the discrimination of vegetation, especially vegetation with distinct phenological cycles. Other areas to be investigated include the use of more detailed land-cover/use classifications, comparison of other texture measures such as those proposed by Haralick, Shanmugam, and Dinstein (1973), the use of other data fusion methods such as principal component analysis, and the investigation of other classification algorithms, such as neural network, decision tree, support vector machine, object-based, sub-pixel-based, and contextual algorithms.

These are not new areas of research, as reported in the works of Pereira et al. (2013), Li et al. (2012), Qi et al. (2010), and Gao and Ban (2009). However, in order for the field of radar remote sensing as it applies to land-cover/use mapping to mature fully, increasingly more work needs to be carried out in these areas so that both meaningful discussion and validation of research findings can be obtained.

Disclosure statement

No potential conflict of interest was reported by the authors.

Funding

The authors would like to thank the following organizations for providing and/or funding the imagery used and for supporting this research. Radarsat-2 images were provided by the Canadian Space Agency under project 3126 of the Science and Operational Application Research for Radarsat-2 program. The Alaska Satellite Facility, under sponsorship from NASA, provided the PALSAR imagery. The NASA Land Processes Distributed Active Archive Center at the USGS/Earth Resources Observation and Science (EROS) Center provided the ASTER imagery. Finally, additional support was provided through grants received from the Department of Geography and Geoinformation Science at George Mason University.

References

Al-Tahir, R., I. Saeed, and R. Mahabir. 2014. "Application of Remote Sensing and GIS Technologies in Flood Risk Management." In Flooding and Climate Change: Sectorial Impacts and Adaptation Strategies for the Caribbean Region, edited by D. D. Chadee, J. M. Sutherland, and J. B. Agard, 137–150. Hauppauge, NY: Nova Publishers.

Amarsaikhan, D., M. Ganzorig, P. Ache, and H. Blotevogel. 2007. "The Integrated Use of Optical and InSAR Data for Urban Land-Cover Mapping." International Journal of Remote Sensing 28 (6): 1161–1171. doi:10.1080/01431160600784267.

Amarsaikhan, D., M. Saandar, M. Ganzorig, H. H. Blotevogel, E. Egshiglen, R. Gantuyal, B. Nergui, and D. Enkhjargal. 2012. "Comparison of Multisource Image Fusion Methods and Land Cover Classification." International Journal of Remote Sensing 33 (8): 2532–2550. doi:10.1080/01431161.2011.616552.

Anderson, C. 1998. "Texture Measures in SIR-C Images." In Geoscience and Remote Sensing Symposium Proceedings, IGARSS '98, 1998 IEEE International 3: 1717–1719. doi:10.1109/IGARSS.1998.692452.

Anderson, J. R., E. E. Hardy, J. T. Roach, and R. E. Witmer. 1976. A Land Use and Land Cover Classification System for Use with Remote Sensor Data. US Geological Survey Professional Paper 964. Washington, DC: US Government Printing Office.

Anys, H., and D. He. 1995. "Evaluation of Textural and Multipolarization RADAR Features for Crop Classification." IEEE Transactions on Geoscience and Remote Sensing 33 (5): 1170–1181. doi:10.1109/36.469481.

Bouchemakh, L., Y. Smara, S. Boutarfa, and Z. Hamadache. 2008. "A Comparative Study of Speckle Filtering in Polarimetric RADAR SAR Images." In Information and Communication Technologies: From Theory to Applications, ICTTA 2008, 3rd International Conference, 1–6. doi:10.1109/ICTTA.2008.4530040.

Campbell, J., and R. Wynne. 2012. Introduction to Remote Sensing. 5th ed., 626 pp. New York: Guilford Press.

Canadian Space Agency. 2008. "Radarsat-1." Accessed 2008. http://www.space.gc.ca/asc/eng/satellites/radarsat1/default.asp

Cervone, G., and B. Haack. 2012. "Supervised Machine Learning of Fused RADAR and Optical Data for Land Cover Classification." Journal of Applied Remote Sensing 6 (1): 063597. doi:10.1117/1.JRS.6.063597.

Champion, I., P. Dubois-Fernandez, D. Guyon, and M. Cottrel. 2008. "RADAR Image Texture as a Function of Forest Stand Age." International Journal of Remote Sensing 29 (6): 1795–1800. doi:10.1080/01431160701730128.

Chen, D., D. Stow, and P. Gong. 2004. "Examining the Effect of Spatial Resolution and Texture Window Size on Classification Accuracy: An Urban Environment Case." International Journal of Remote Sensing 25 (11): 2177–2192. doi:10.1080/01431160310001618464.

Chust, G., D. Ducrot, and J. L. Pretus. 2004. "Land Cover Discrimination Potential of Radar Multitemporal Series and Optical Multispectral Images in a Mediterranean Cultural Landscape." International Journal of Remote Sensing 25 (17): 3513–3528. doi:10.1080/0143116032000160480.

Dekker, R. J. 2003. "Texture Analysis and Classification of ERS SAR Images for Map Updating of Urban Areas in the Netherlands." IEEE Transactions on Geoscience and Remote Sensing 41 (9): 1950–1958. doi:10.1109/TGRS.2003.814628.

Dell'Acqua, F., P. Gamba, and G. Lisini. 2003. "Improvements to Urban Area Characterization Using Multitemporal and Multiangle SAR Images." IEEE Transactions on Geoscience and Remote Sensing 41 (9): 1996–2004. doi:10.1109/TGRS.2003.814631.

Evans, T. L., M. Costa, K. Telmer, and T. S. F. Silva. 2010. "Using ALOS/PALSAR and RADARSAT-2 to Map Land Cover and Seasonal Inundation in the Brazilian Pantanal." IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 3 (4): 560–575. doi:10.1109/JSTARS.2010.2089042.

Foody, G. M. 2002. "Status of Land Cover Classification Accuracy Assessment." Remote Sensing of Environment 80 (1): 185–201. doi:10.1016/S0034-4257(01)00295-4.

Forkuor, G., C. Conrad, M. Thiel, T. Ullmann, and E. Zoungrana. 2014. "Integration of Optical and Synthetic Aperture Radar Imagery for Improving Crop Mapping in Northwestern Benin, West Africa." Remote Sensing 6 (7): 6472–6499. doi:10.3390/rs6076472.

Gao, L., and Y. Ban. 2009. "Multitemporal RADARSAT-2 Polarimetric SAR Data for Urban Land-Cover Mapping." In The Sixth International Symposium on Digital Earth, 78410N, September 9–12. Beijing: International Society for Optics and Photonics. doi:10.1117/12.873218.

Haack, B., and M. Bechdol. 2000. "Integrating Multisensor Data and RADAR Texture Measures for Land Cover Mapping." Computers & Geosciences 26 (4): 411–421. doi:10.1016/S0098-3004(99)00121-1.

Haack, B. N., E. Solomon, and N. D. Herold. 2002. "RADAR and Optical Data Sensor Integration for Land Extraction." In Pecora 15 Conference Proceedings, Denver, CO, November 10–15.

Haralick, R. M., K. Shanmugam, and I. H. Dinstein. 1973. "Textural Features for Image Classification." IEEE Transactions on Systems, Man, and Cybernetics 3: 610–621. doi:10.1109/TSMC.1973.4309314.

Henderson, F., R. Chasan, J. Portolese, and T. Hart Jr. 2002. "Evaluation of SAR-Optical Imagery Synthesis Techniques in a Complex Coastal Ecosystem." Photogrammetric Engineering and Remote Sensing 68 (8): 839–846.

Herold, M., X. Liu, and K. Clarke. 2003. "Spatial Metrics and Image Texture for Mapping Urban Land Use." Photogrammetric Engineering and Remote Sensing 69 (9): 991–1001. doi:10.14358/PERS.69.9.991.

Herold, N., B. Haack, and E. Solomon. 2004. "An Evaluation of RADAR Texture for Land Use/Cover Extraction in Varied Landscapes." International Journal of Applied Earth Observation and Geoinformation 5 (2): 113–128. doi:10.1016/j.jag.2004.01.005.

JAXA (Japan Aerospace Exploration Agency). 2006. "Image Data Acquired by the PALSAR Onboard the 'Daichi'." Accessed February 2008. http://www.jaxa.jp/press/2006/02/20060217_daichi_e.html

Kiema, J. B. K. 2002. "Texture Analysis and Data Fusion in the Extraction of Topographic Objects from Satellite Imagery." International Journal of Remote Sensing 23 (4): 767–776. doi:10.1080/01431160010026005.

Kurosu, T., S. Uratsuka, H. Maeno, and T. Kozu. 1999. "Texture Statistics for Classification of Land Use with Multitemporal JERS-1 SAR Single Look Imagery." IEEE Transactions on Geoscience and Remote Sensing 37 (1): 227–235. doi:10.1109/36.739157.

Kurvonen, L., J. Pulliainen, and M. Hallikainen. 1999. "Retrieval of Biomass in Boreal Forests from Multitemporal ERS-1 and JERS-1 SAR Images." IEEE Transactions on Geoscience and Remote Sensing 37 (1): 198–205. doi:10.1109/36.739154.

Laurin, G. V., V. Liesenberg, Q. Chen, L. Guerriero, F. Del Frate, A. Bartolini, D. Coomes, B. Wilebore, J. Lindsell, and R. Valentini. 2013. "Optical and SAR Sensor Synergies for Forest and Land Cover Mapping in a Tropical Site in West Africa." International Journal of Applied Earth Observation and Geoinformation 21: 7–16. doi:10.1016/j.jag.2012.08.002.

Le Hegarat-Mascle, S., A. Quesney, D. Vidal-Madjar, O. Taconet, M. Normand, and C. Loumagne. 2000. "Land Cover Discrimination from Multitemporal ERS Images and Multispectral Landsat Images: A Study Case in an Agricultural Area in France." International Journal of Remote Sensing 21 (3): 435–456. doi:10.1080/014311600210678.

Li, G., D. Lu, E. Moran, L. Dutra, and M. Batistella. 2012. "A Comparative Analysis of ALOS PALSAR L-Band and RADARSAT-2 C-Band Data for Land-Cover Classification in a Tropical Moist Region." ISPRS Journal of Photogrammetry and Remote Sensing 70: 26–38. doi:10.1016/j.isprsjprs.2012.03.010.

Liao, J., H. Huang, and H. Guo. 2004. "Multitemporal and Dual-Polarization SAR Data for Detection of Urban Areas." In Proceedings of the 2004 Envisat and ERS Symposium, Salzburg, September 6–10, 5 pp.

Lloyd, C., S. Berberoglu, P. Curran, and P. Atkinson. 2004. "A Comparison of Texture Measures for the Per-Field Classification of Mediterranean Land Cover." International Journal of Remote Sensing 25 (19): 3943–3965. doi:10.1080/0143116042000192321.

Lu, D., G. Li, E. Moran, L. Dutra, and M. Batistella. 2011. "A Comparison of Multisensor Integration Methods for Land Cover Classification in the Brazilian Amazon." GIScience & Remote Sensing 48 (3): 345–370. doi:10.2747/1548-1603.48.3.345.

Lu, Y. H., S. Y. Tan, T. S. Yeo, W. E. Ng, I. Lim, and C. B. Zhang. 1996. "Adaptive Filtering Algorithms for SAR Speckle Reduction." In IGARSS '96, 1996 International Geoscience and Remote Sensing Symposium 1: 67–69. doi:10.1109/IGARSS.1996.516246.

Luckman, A. J., A. C. Frery, C. C. F. Yanasse, and G. B. Groom. 1997. "Texture in Airborne SAR Imagery of Tropical Forest and Its Relationship to Forest Regeneration Stage." International Journal of Remote Sensing 18 (6): 1333–1349. doi:10.1080/014311697218458.

Maghsoudi, Y., M. Collins, and D. Leckie. 2012. "Speckle Reduction for the Forest Mapping Analysis of Multi-Temporal Radarsat-1 Images." International Journal of Remote Sensing 33 (5): 1349–1359. doi:10.1080/01431161.2011.568530.

Maillard, P. 2003. "Comparing Texture Analysis Methods through Classification." Photogrammetric Engineering and Remote Sensing 69 (4): 357–367. doi:10.14358/PERS.69.4.357.

Marceau, D. J., P. J. Howarth, J. M. Dubois, and D. J. Gratton. 1990. "Evaluation of the Grey-Level Co-Occurrence Matrix Method for Land-Cover Classification Using SPOT Imagery." IEEE Transactions on Geoscience and Remote Sensing 28 (4): 513–519. doi:10.1109/TGRS.1990.572937.

Massonnet, D., P. Briole, and A. Arnaud. 1995. "Deflation of Mount Etna Monitored by Spaceborne Radar Interferometry." Nature 375 (6532): 567–570. doi:10.1038/375567a0.

McNairn, H., and B. Brisco. 2004. "The Application of C-Band Polarimetric SAR for Agriculture: A Review." Canadian Journal of Remote Sensing 30: 525–542. doi:10.5589/m03-069.

Nyoungui, A., E. Tonye, and A. Akono. 2002. "Evaluation of Speckle Filtering and Texture Analysis Methods for Land Cover Classification from SAR Images." International Journal of Remote Sensing 23 (9): 1895–1925. doi:10.1080/01431160110036157.

Pereira, L. O., C. C. Freitas, S. J. S. Sant'Anna, D. Lu, and E. F. Moran. 2013. "Optical and Radar Data Integration for Land Use and Land Cover Mapping in the Brazilian Amazon." GIScience & Remote Sensing 50 (3): 301–321. doi:10.1080/15481603.2013.805589.

Pierce, L. E., K. M. Bergen, M. C. Dobson, and F. T. Ulaby. 1998. "Multitemporal Land-Cover Classification Using SIR-C/X-SAR Imagery." Remote Sensing of Environment 64 (1): 20–33. doi:10.1016/S0034-4257(97)00165-X.

Prakash, A., E. J. Fielding, R. Gens, J. L. Van Genderen, and D. L. Evans. 2001. "Data Fusion for Investigating Land Subsidence and Coal Fire Hazards in a Coal Mining Area." International Journal of Remote Sensing 22 (6): 921–932. doi:10.1080/014311601300074441.

Qi, Z., A. G. Yeh, X. Li, and Z. Lin. 2010. "Land Use and Land Cover Classification Using RADARSAT-2 Polarimetric SAR Image." In Proceedings of the ISPRS Technical Commission VII Symposium: 100 Years ISPRS Advancing Remote Sensing Science, Vienna, July 5–7, Vol. 38, 198–203.

21. VII Symposium: 100 Years ISPRS Advancing Remote Sensing Science, Vienna July 5–7 Vol. 38, 198–203. Ricchetti, E. 2001. “Visible? Infrared and Radar Imagery Fusion for Geological Application: A New Approach Using DEM and Sun-Illumination Model.” International Journal of Remote Sensing 22 (11): 2219–2230. doi:10.1080/713860801. Richards, J. A., and X. Jia. 2005. Remote Sensing and Digital Image Analysis. 1st ed., 194–199. Berlin: Springer. Rosen, P. A., S. Hensley, H. A. Zebker, F. H. Webb, and E. J. Fielding. 1996. “Surface Deformation and Coherence Measurements of Kilauea Volcano, Hawaii, from SIR-C Radar Interferometry.” Journal of Geophysical Research 101 (E10): 23109–23125. doi:10.1029/96JE01459. Santos, C., and J. Messina. 2008. “Multi-Sensor Data Fusion for Modeling African Palm in the Ecuadorian Amazon.” Photogrammetric Engineering and Remote Sensing 74 (6): 711–723. doi:10.14358/PERS.74.6.711. Sawaya, S., B. Haack, T. Idol, and A. Sheoran. 2010. “Land Use/Cover Mapping with Quad- Polarization RADAR and Derived Texture Measures Near Wad Madani, Sudan.” GIScience & Remote Sensing 47 (3): 398–411. doi:10.2747/1548-1603.47.3.398. Shao, Y., X. Fan, H. Liu, J. Xiao, S. Ross, B. Brisco, R. Brown, and G. Staples. 2001. “Rice Monitoring and Production Estimation Using Multitemporal RADARSAT.” Remote Sensing of Environment 76 (3): 310–325. doi:10.1016/S0034-4257(00)00212-1. Sheoran, A., and B. Haack. 2013. “Classification of California Agriculture using Quad Polarization Radar Data and Landsat Thematic Mapper Data.” GIScience and Remote Sensing 50 (1): 50–63. doi:10.1080/15481603.2013.778555. Sim, C. K., K. Abdullah, M. Z. MatJafri, and H. S. Lim. 2014. “Comparative Performance of ALOS PALSAR Polarization Bands and Its Combination with ALOS AVNIR-2 Data for Land Cover Classification.” In IOP Conference Series: Earth and Environmental Science, Sarawak: IOP. vol. 18 (1), 7 pp. doi:10.1088/1755-1315/18/1/012012. Smith, R. B. 2012. Interpreting Digital Radar Images with Tntmips, 20. Lincoln, NE: MicroImages. Smits, P. C., S. G. Dellepiane, and R. A. Schowengerdt. 1999. “Quality Assessment of Image Classification Algorithms for Land-Cover Mapping: A Review and a Proposal for a Cost-Based Approach.” International Journal of Remote Sensing 20 (8): 1461–1486. doi:10.1080/ 014311699212560. Solberg, A. H. S., and K. J. Anil. 1997. “Texture Fusion and Feature Selection Applied to SAR Imagery.” IEEE Transactions on Geosciences and Remote Sensing 10 (6): 989–1003. doi:10.1109/36.563288. Tadesse, H. K., and A. Falconer. 2014. “Land Cover Classification and Analysis Using Radar and Landsat Data in North Central Ethiopia.” In Proceedings of the ASPRS Annual Conference, Louisville, KY, March 23–28, 12 pp. Töyrä, J., A. Pietroniro, and L. Martz. 2001. “Multisensor Hydrologic Assessment of a Freshwater Wetland.” Remote Sensing of Environment 75: 162–173. doi:10.1016/S0034-4257(00)00164-4. Ulaby, F. T., F. Kouyate, B. Brisco, and T. H. L. Williams. 1986. “Textural Infornation in SAR Images.” IEEE Transactions on Geoscience and Remote Sensing GE-24 (2): 235–245. doi:10.1109/TGRS.1986.289643. Villiger, E. 2008. “Radar and Multispectral Image Fusion Options for Improved Land Cover Classification.” PhD diss., George Mason University. Wang, Y., B. N. Koopmans, and C. Pohl. 1995. “The 1995 Flood in the Netherlands Monitored from Space-a Multisensor Approach.” In Proceedings of the second ERS Applications Workshop, London, De
