UI - Disertasi Membership

A methodological framework for multi-date and multi-sensor remotely-sensed image interpretation based on a uniform classification scheme = Metodologi interpretasi citra Inderaja multitemporal dan multisensor berdasarkan klasifikasi uniform

Aniati Murni Arymurthy; M. Barmawi, promotor ([Publisher not identified], 1997)

Abstract

This dissertation describes the synergistic use of multi-temporal and multi-sensor (optical and radar) remote sensing data for improving our understanding of land-cover structural phenomena. A tropical country like Indonesia has high cloud coverage throughout the year, with a maximum during the rainy season, and hence the availability of cloud-free optical images is minimal. To address this problem, radar images have been intensively introduced. Radar images are cloud-free, but their use is hampered by speckle noise, topographic distortions, and the lack of a suitable radar image classification system.
In many cases, the use of an optical or radar image alone is not sufficient. Therefore, the main objectives of this research are: (i) to develop a framework for multi-date and multi-sensor (optical and radar) image classification; (ii) to solve the cloud cover problem in optical images; and (iii) to obtain a more consistent image classification using multi-date and multi-sensor images. We have proposed a framework for multi-date and multi-sensor image classification based on a uniform image classification scheme. The term uniform means that the same procedure can be used to classify optical or radar images, low-level mosaicked or fused images, and single- or multiple-feature images.
To be able to conduct a multi-temporal and multi-sensor analysis, we have unified the optical and radar image classification procedures after finding that both optical and radar images consist of homogeneous and textured regions. A region is considered homogeneous if the local variance of its gray-level distribution is relatively low, and textured if the local variance is high. We used a multivariate Gaussian distribution to model the homogeneous part and a multinomial distribution to model the gray-level co-occurrences of the textured part, and applied a multiple classifier system to improve the classification accuracy.
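As an illustration of this uniform scheme, the sketch below splits an image into homogeneous and textured regions by thresholding the local gray-level variance and then scores each part with the corresponding model (a multivariate Gaussian for homogeneous regions, a multinomial over gray-level co-occurrence counts for textured regions). The 7x7 window, the threshold, and the helper names are assumptions made for illustration, not the dissertation's exact implementation.

```python
# Minimal sketch of the uniform classification idea; window size, threshold and
# helper names are illustrative assumptions, not the dissertation's exact code.
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(img, win=7):
    """Local variance of the gray-level distribution in a win x win window."""
    img = img.astype(float)
    mean = uniform_filter(img, win)
    return uniform_filter(img ** 2, win) - mean ** 2

def homogeneous_mask(img, threshold, win=7):
    """True where the local variance is low (homogeneous), False where textured."""
    return local_variance(img, win) <= threshold

def gaussian_log_likelihood(x, mean, cov):
    """Multivariate Gaussian model for feature vectors in homogeneous regions."""
    d = np.asarray(x, float) - mean
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ np.linalg.solve(cov, d) + logdet + d.size * np.log(2 * np.pi))

def multinomial_log_likelihood(cooc_counts, class_cooc_probs):
    """Multinomial model of gray-level co-occurrence counts in textured regions."""
    return float(np.sum(cooc_counts * np.log(class_cooc_probs + 1e-12)))
```

Each pixel (or block) is routed to the Gaussian classifier if its neighbourhood is homogeneous and to the co-occurrence classifier otherwise; the two outputs together form the multiple classifier system.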
The main advantages of the uniform classification scheme are as follows. First, we can tune the homogeneous-textured threshold parameter to obtain an optimal result by allowing the classifier to work as a single (conventional) or multiple classifier system; the classifier can then achieve better, or at least the same, classification accuracy as the conventional one. Second, we can use either single-band or multi-band input images, which makes it possible to classify a radar image based on multi-model texture feature images or to classify multi-spectral optical images. Third, we can use the same procedure to classify any input image. Compared to the conventional classifiers, the multiple classifier system improved the classification accuracy by 0% to 20% for radar images and by 0% to 2% for optical images.
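The threshold tuning mentioned in the first advantage can be read as a simple search over candidate values: a threshold above the maximum local variance marks every region homogeneous, so the scheme reduces to the single (conventional) classifier, while lower values activate the multiple-classifier path. The sketch below assumes a hypothetical classify_uniform(image, threshold) function and reference labels; neither is an API from the dissertation.

```python
# Hypothetical tuning loop: classify_uniform and reference_labels are assumed
# placeholders, not interfaces defined in the dissertation.
import numpy as np

def tune_threshold(image, reference_labels, classify_uniform, candidates):
    """Return the homogeneous-textured threshold giving the best accuracy."""
    best_t, best_acc = None, -1.0
    for t in candidates:
        labels = classify_uniform(image, threshold=t)
        acc = float(np.mean(labels == reference_labels))
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc
```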
The proposed framework incorporates image mosaicking and data fusion at the low-level stage (before the classification process) as well as at the high-level stage (after the classification process). For cloud cover removal, image mosaicking at the low-level stage is usually done using multi-temporal optical images, whereas mosaicking at the high-level stage is applied to the classified optical and radar images. To obtain a cloud-free image, we have modified the existing Soofi and Smith algorithm, which uses multi-temporal optical images, into an algorithm that uses multi-sensor images. In the high-level data fusion, we have also incorporated a mechanism for cloud cover removal by omitting the information from the optical sensor and using only the information from the radar sensor. In a case study in our experiment, cloud cover removal and image classification using low-level image mosaicking, high-level image mosaicking, and high-level data fusion gave 80.2%, 76.2%, and 80.5% classification accuracy, respectively.
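A rough sketch of the cloud-removal idea follows: cloud-covered optical pixels are filled from a co-registered radar image at the low level, or their class labels are taken from the classified radar image at the high level. The simple brightness-threshold cloud mask and the linear rescaling are assumptions made for illustration; they are not the modified Soofi and Smith algorithm itself.

```python
# Illustrative cloud-cover removal; the brightness-threshold cloud mask and the
# linear rescaling are assumptions, not the dissertation's algorithm.
import numpy as np

def cloud_mask(optical, brightness_threshold=200):
    """Very crude cloud mask: bright optical pixels are treated as cloud."""
    return optical >= brightness_threshold

def low_level_mosaic(optical, radar, mask):
    """Fill cloud pixels in the optical image with linearly rescaled radar values."""
    radar_rescaled = np.interp(radar, (radar.min(), radar.max()),
                               (optical.min(), optical.max()))
    return np.where(mask, radar_rescaled, optical)

def high_level_mosaic(optical_labels, radar_labels, mask):
    """Take the radar classification wherever the optical pixel is cloud-covered."""
    return np.where(mask, radar_labels, optical_labels)
```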
The high-level data fusion combines the decisions from several input images to obtain a consensus classified image. We have applied both the maximum joint posterior probability and the highest rank method as decision combination functions. We have utilized two existing data fusion methods and have proposed an alternative data fusion method based on the compound conditional risk. According to the experimental results, the decision combination function based on the maximum joint posterior probability favors the optical feature image, while the highest rank method favors the radar feature image. The preference for the maximum joint posterior probability results in the domination of optical features in the fusion result, and the classification accuracy of the fused image can be, on average, 8.5% better than that of the individual radar classified image.
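The two decision-combination functions can be sketched for per-pixel posteriors of shape (n_sensors, n_classes). The product form of the joint posterior assumes conditional independence between sensors, which is a simplification made for this example rather than the dissertation's exact formulation.

```python
# Sketch of the two decision-combination functions; the independence assumption
# behind the product of posteriors is an illustrative simplification.
import numpy as np

def max_joint_posterior(posteriors):
    """Pick the class maximizing the joint posterior (product over sensors)."""
    return int(np.argmax(np.prod(posteriors, axis=0)))

def highest_rank(posteriors):
    """Rank classes per sensor (0 = best) and pick the class with the best rank overall."""
    ranks = np.argsort(np.argsort(-posteriors, axis=1), axis=1)
    return int(np.argmin(ranks.min(axis=0)))

# Example: two sensors (optical, radar) and three classes
p = np.array([[0.6, 0.3, 0.1],   # optical posteriors
              [0.2, 0.5, 0.3]])  # radar posteriors
print(max_joint_posterior(p), highest_rank(p))   # -> 1 0
```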

Digital File: 1

Shelf
D 235a.pdf (login required)

 Metadata

Call Number : D235
Main Entry - Personal Name :
Added Entry - Personal Name :
Added Entry - Corporate Body :
Subject :
Publication : [Place of publication not identified]: [Publisher not identified], 1997
Study Program :
Language : eng
Cataloging Source :
Content Type :
Media Type :
Carrier Type :
Physical Description :
Abridged Manuscript :
Owning Institution : Universitas Indonesia
Location : UI Library (Perpustakaan UI), 3rd Floor
  • Availability
  • Reviews
Call Number   Barcode No.   Availability
D235          D235          AVAILABLE
Reviews:
No reviews for this collection: 74502