Title: Uncertainty in GIS analysis for landslide hazard assessment
Subject: Hong Kong Polytechnic University -- Dissertations
Geographic information systems
Landslide hazard analysis
Department of Land Surveying and Geo-Informatics
Pages: 190 leaves : ill. ; 30 cm
Abstract: Landslide hazard assessment can be carried out at a range of scales and using a range of methods. GIS is commonly used as a component of studies that apply statistical or deterministic methods to assess landslide susceptibility or hazard on a regional basis. The uncertainty in the GIS analyses for such studies was assessed by "auditing" a case study, the Natural Terrain Landslide Study Landslide Susceptibility Map (LSM), a bivariate statistical analysis of the susceptibility of Hong Kong natural terrain to failure by rainfall-triggered, shallow debris avalanches. Auditing was carried out using standard techniques of uncertainty analysis, including review of input data quality (metadata), error models for each dataset and models of error propagation. Bivariate statistical methods (such as the case study) assume that landslide susceptibility (a probability density function for the occurrence of landslides under certain triggering conditions) can be accounted for by the combination of (an unknown number of) physical parameters describing the environment. The method attempts to correlate a variety of parameter classes with a sample of landslide events. Model uncertainties include the assumptions that susceptibility can be discretized into physical parameters and that relationships with parameters are temporally invariant over the time span of the inventory. Implementation of bivariate statistical models allows discretion in choosing parameters (in contrast to the deterministic approach, in which a specific model or models are used). In the case study the choice of parameters is considered to be subjective. The choice of data structure, resolution and level of aggregation for the representation of each parameter introduced further uncertainty (bias, correlation). Parameter and inventory error are the most obvious sources of uncertainty. In the case study, geology and slope gradient were the only two parameters selected.
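The bivariate statistical approach described above can be sketched as a frequency-ratio calculation: the landslide density within each parameter class is compared with the density over the whole study area. This is a minimal illustration only; the class names and counts below are hypothetical and are not taken from the LSM itself.

```python
# Hypothetical sketch of a bivariate statistical susceptibility score.
# A frequency ratio > 1 means a parameter class hosts more landslides
# per unit area than the study-area average; counts are illustrative.

def frequency_ratio(slides_in_class, cells_in_class,
                    slides_total, cells_total):
    """Landslide density of a class relative to the overall density."""
    class_density = slides_in_class / cells_in_class
    overall_density = slides_total / cells_total
    return class_density / overall_density

# Illustrative terrain cells and mapped landslides per geology class.
classes = {
    "volcanic":  {"cells": 40_000, "slides": 320},
    "granitic":  {"cells": 50_000, "slides": 150},
    "colluvium": {"cells": 10_000, "slides": 130},
}
cells_total = sum(c["cells"] for c in classes.values())
slides_total = sum(c["slides"] for c in classes.values())

for name, c in classes.items():
    fr = frequency_ratio(c["slides"], c["cells"], slides_total, cells_total)
    print(f"{name}: FR = {fr:.2f}")
```

Note that the score for a class depends entirely on how the classes are drawn, which is why the choice of parameters, resolution and aggregation introduces the bias and correlation effects the abstract describes.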
The landslide inventory itself was based on aerial photograph interpretation of highly varied temporal resolution and was known to include a high level of misclassification (i.e., it includes features that are not landslides). A moderate level of misclassification may be tolerable if the misclassifications are largely random. More serious is the positional uncertainty of feature locations, which adversely influences classification against the geology and slope gradient parameters. Uncertainties in the geology and slope gradient parameters were found to be dominated by artefacts of their cartographic lineage. In its original application domain model, the geology information is relatively rich, although highly interpreted, and is documented on a variety of media, including a geological map. Loss of detail in conceptual and logical modelling is marked, with significant consequences for the LSM analysis. These uncertainties were compounded by the LSM implementation of the data, which combined two dissimilar sub-themes (mimicking a cartographic limitation) and assumed parameter class homogeneity where none is implied in the original data. Uncertainty of geology is documented by classification error matrices. Slope gradient is a scale-dependent characteristic, and its uncertainty can be determined by defining an appropriate resolution as a datum and identifying a suitable source of test data (1:1,000 scale maps, human interpretation). The actual slope gradient models used in the analysis (a TIN model based on 1:20,000 scale contours) performed very badly against these optimum data. The error distribution was approximately normal (RMSE c. 12 degrees, bias c. 3.5 degrees). Models of error propagation for spatial analysis of categorical data are generally poorly understood (compared with error propagation in raster data), though some effects are well documented, such as the "modifiable areal unit problem" (MAUP).
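The reported error figures make the over-classification problem concrete: with an approximately normal error of RMSE c. 12 degrees and bias c. 3.5 degrees, the standard deviation (sqrt(RMSE² − bias²) ≈ 11.5 degrees) is more than twice the width of a 5-degree slope class. The sketch below, using these figures but an otherwise hypothetical classification, estimates how often a cell lands in its correct 5-degree class.

```python
# Minimal sketch, assuming the normal error model reported in the study
# (RMSE c. 12 deg, bias c. 3.5 deg) and 5-degree slope subdivisions.
import math
import random

RMSE, BIAS = 12.0, 3.5
SD = math.sqrt(RMSE**2 - BIAS**2)    # ~11.5 degrees

def slope_class(gradient_deg, width=5):
    """Assign a gradient to a 5-degree class (clamped to 0-89.9 deg)."""
    return int(max(0.0, min(gradient_deg, 89.9)) // width)

random.seed(0)
true_slope = 30.0                    # an illustrative true gradient
trials = 100_000
same_class = sum(
    slope_class(true_slope) == slope_class(true_slope + random.gauss(BIAS, SD))
    for _ in range(trials)
)
print(f"correct 5-degree class in {same_class / trials:.1%} of trials")
```

Under these assumptions the measured gradient falls in the correct class only a small fraction of the time, which is consistent with the abstract's finding that the 5-degree subdivisions produced errors rather than meaningful classifications.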
In the case study both the geology and slope gradient themes were aggregated without regard to MAUP effects on bias and correlation. Multiple and multi-dimensional uncertainty in the input parameters for the LSM confounded attempts to fully model the propagation of errors and thus describe output uncertainty in the LSM quantitatively. However, by isolating the effect of error in the slope gradient input, it was possible to track error propagation from this source. It was found that the overlay logic comprised both logical AND and logical OR operators, and that the resulting error fell between the absolute slope gradient error and that of "nominally perfect" data. Most significantly, it was found that the over-classification of the slope gradient parameter (i.e., into 5-degree subdivisions of slope), in combination with the specific spatial overlay procedure in the LSM analysis, resulted in errors rather than meaningful classifications. Analysts rely heavily on data producers' documentation of the limitations of spatial data. However, for the LSM input, the data producers' metadata was found to be wholly inadequate for the purpose of identifying sources of analytical uncertainty. Details of lineage critical to the analysis were not described (and in some cases are still not known). Furthermore, fitness-for-use was not established prior to analysis; rather, a passive "best-in-the-time-available" approach to data selection was used. In addition to limitations of the data themselves, problems arose from error-insensitive data handling, caused by an unquestioning and over-optimistic attitude to the methodology. Establishing fitness-for-use requires active exploration and testing of the data in the context of the spatial functions and operations used in the analysis. Models of error propagation are difficult to construct for categorical data, especially for these environmental coverages, where classifications of natural systems are subject to interpretation and scale-dependency.
Despite research results spanning more than a decade, commercial GIS do not provide useful tools for analysing uncertainty and the propagation of error. Simple tools for stochastic modelling of error by multiple simulations (sensitivity analysis) are needed as standard functionality. Full uncertainty analysis will probably have to wait until data are held in structures that explicitly accommodate several dimensions of uncertainty (e.g., fuzzy sets), and this will require a significant policy shift in organisations that acquire and manage spatial data.
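The "stochastic modelling of error by multiple simulations" called for above can be sketched in a few lines: perturb an uncertain input with its error model many times and observe the spread in a derived categorical output. The threshold and error parameters below are illustrative assumptions, not the LSM's actual procedure.

```python
# Monte Carlo sensitivity sketch: a crisp susceptibility flag becomes a
# proportion of perturbed realisations in which the flag is raised.
# Error model (bias 3.5 deg, sd ~11.5 deg) follows the reported figures;
# the 30-degree threshold is a hypothetical classification rule.
import random

def susceptible(slope_deg, threshold=30.0):
    """Toy rule: terrain steeper than the threshold is flagged."""
    return slope_deg >= threshold

def simulate(true_slope, bias=3.5, sd=11.5, runs=10_000, seed=1):
    """Fraction of perturbed realisations in which the cell is flagged."""
    rng = random.Random(seed)
    hits = sum(susceptible(true_slope + rng.gauss(bias, sd))
               for _ in range(runs))
    return hits / runs

# A cell just below the threshold is flagged in roughly half the runs,
# so the crisp classification hides substantial uncertainty.
print(f"true slope 28 deg -> flagged in {simulate(28.0):.0%} of runs")
```

The output of such a simulation is a probability rather than a crisp class, which is one route toward the structures that "explicitly accommodate several dimensions of uncertainty" mentioned above.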