|Author:||Chan, Pak-kei Patrick|
|Title:||Localized generalization error bound for multiple classifier system|
|Subject:||Hong Kong Polytechnic University -- Dissertations.
Neural networks (Computer science)|
|Department:||Department of Computing|
|Pages:||xviii, 155 p. : ill. ; 30 cm.|
|Abstract:||The objective of this thesis is to study the Localized Generalization Error Model (L-GEM) for Multiple Classifier Systems (MCSs). The L-GEM for a single classifier was proposed by Yeung (2007). It rests on the observation that a classifier trained on a given set of learning samples cannot reasonably be expected to recognize unseen samples that differ greatly from the training set. The L-GEM provides an upper bound on the mean square error (MSE) of unseen samples lying in a neighborhood of each training sample. One significant application of the L-GEM is as an objective function for base classifier training. The assumption in the initial version of the L-GEM that a hidden neuron uses the same width for all input dimensions is now relaxed. The parameters of an RBF network are selected by minimizing its localized generalization error bound, and the characteristics of the proposed objective function are compared with those of regularization methods. For the problem of weight selection, RBF networks trained by minimizing the proposed objective function consistently outperform RBF networks trained by techniques such as Training Error Minimization, Tikhonov Regularization, Weight Decay, and Locality Regularization. The proposed objective function is equally effective when three parameters are selected simultaneously: center, width, and weight. RBF networks trained by minimizing the proposed objective function yield better testing accuracies than those trained to minimize training error only. A new dynamic fusion method for constructing an MCS based on the L-GEM is also proposed. This L-GEM based Fusion Method (LFM) uses the L-GEM to estimate the local competence of the base classifiers in an MCS. Unlike current dynamic classifier selection methods, the LFM estimates the performance of the base classifiers using not only the training samples but also points in their local neighborhood regions.
Experimental results show that an MCS using the LFM consistently performs well in terms of both testing classification accuracy and time complexity. The LFM is also compared experimentally with twenty-one current dynamic fusion methods, and the results show that it yields better testing accuracies than the other methods. The L-GEM has been extended from a single classifier system to an MCS, named L-GEMMCS. L-GEMMCS is closely related to the existing error model, the "Bias Ambiguity Decomposition". L-GEMMCS consists of four terms: base classifier training error, diversity of training error, base classifier sensitivity, and diversity of sensitivity. The two diversity terms, diversity of training error and diversity of sensitivity, are new concepts that can be used to characterize the interactions among the base classifiers in an MCS. The meanings of these four terms and the relationships among them are analyzed and discussed. L-GEMMCS is shown to be useful for evaluating the generalization ability of an MCS, and it can serve as a selection criterion for choosing the best set of classifiers for constructing an MCS from a pool of diverse base classifiers.|
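To make the abstract's central idea concrete, the following is a minimal Monte-Carlo sketch of computing an L-GEM-style bound for an RBF network: the training MSE is combined with a stochastic-sensitivity term that measures how much the network output changes for perturbations drawn from the Q-neighborhood of each training sample. The bound form `(sqrt(R_emp) + sqrt(E[(Δy)^2]) + A)^2 + eps`, the function names, and the uniform perturbation scheme are illustrative assumptions, not the thesis's exact formulation.

```python
import numpy as np

def rbf_output(X, centers, widths, weights):
    # RBF network output: weighted sum of Gaussian basis functions,
    # one width per hidden neuron (shapes: X (n,d), centers (m,d),
    # widths (m,), weights (m,))
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # (n, m)
    return np.exp(-d2 / (2.0 * widths ** 2)) @ weights          # (n,)

def lgem_bound(X, y, centers, widths, weights,
               Q=0.1, n_mc=50, A=1.0, eps=0.0, seed=0):
    """Illustrative localized generalization error bound of the form
    (sqrt(R_emp) + sqrt(E[(dy)^2]) + A)^2 + eps, where E[(dy)^2] is
    estimated by Monte-Carlo perturbation within the Q-neighborhood.
    This is a sketch under assumed constants A and eps."""
    rng = np.random.default_rng(seed)
    out = rbf_output(X, centers, widths, weights)
    r_emp = np.mean((out - y) ** 2)  # empirical training MSE
    # stochastic sensitivity: mean squared output change for inputs
    # perturbed uniformly within [-Q, Q] along each dimension
    dy2 = 0.0
    for _ in range(n_mc):
        dx = rng.uniform(-Q, Q, size=X.shape)
        dy2 += np.mean((rbf_output(X + dx, centers, widths, weights) - out) ** 2)
    e_sens = dy2 / n_mc
    return (np.sqrt(r_emp) + np.sqrt(e_sens) + A) ** 2 + eps
```

In the spirit of the thesis, such a bound would be minimized over the network parameters (weights, or centers, widths, and weights jointly) in place of the training error alone; since the bound grows with the sensitivity term, minimizing it discourages solutions whose outputs fluctuate sharply near the training samples.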
|Rights:||All rights reserved|