Localized generalization error model and its applications to supervised pattern classification problems

Pao Yue-kong Library Electronic Theses Database


Author: Ng, Wing-yin
Title: Localized generalization error model and its applications to supervised pattern classification problems
Degree: Ph.D.
Year: 2006
Subject: Hong Kong Polytechnic University -- Dissertations
Neural networks (Computer science)
Artificial intelligence
Genetic algorithms
Machine learning
Department: Dept. of Computing
Pages: 195 p. : ill. ; 30 cm
Language: English
InnoPac Record: http://library.polyu.edu.hk/record=b2069687
URI: http://theses.lib.polyu.edu.hk/handle/200/2414
Abstract: The objective of this thesis is to investigate the localized generalization error of a classifier trained for supervised pattern classification problems. It is motivated by the straightforward idea that one should not expect a classifier to correctly recognize unseen samples that are totally different from the training samples. Therefore, a localized generalization error model (L-GEM) is proposed to give an upper bound on the generalization error for unseen samples located within neighborhoods of the training samples. The L-GEM is applied to address three fundamental issues in supervised pattern classification: architecture selection for a neural network, feature selection and active learning. For the architecture selection problem, one can use the L-GEM to select the architecture with the largest neighborhoods around the training samples, subject to a predefined generalization error bound (the Maximal Coverage Classification problem with Selected Generalization error bound, MC2SG). A number of application problems in civil engineering, computer network security and image classification were solved using Radial Basis Function Neural Networks (RBFNNs) trained by MC2SG. The L-GEM can also be used as a feature selection/reduction criterion: features are reduced iteratively, each time removing the feature that affects the generalization error the least. The problem of active learning is addressed by doing the opposite, i.e., each time selecting the training sample that yields the largest L-GEM value for the trained classifier. Since its derivation is based on the stochastic sensitivity measure of a classifier, the L-GEM is applicable to any classifier for which a stochastic sensitivity measure can be defined, e.g. RBFNNs, multilayer perceptron neural networks and support vector machines. This thesis presents the L-GEM for an RBFNN, and a pilot study on the extension of the L-GEM to other classifiers, including multiple classifier systems, is discussed briefly.
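The abstract describes bounding the generalization error within a neighborhood of the training samples via the classifier's stochastic sensitivity. The sketch below is a hypothetical Monte Carlo illustration of that idea for a toy Gaussian RBF model, not the thesis's actual derivation; all names (`rbf_predict`, `stochastic_sensitivity`, the additive combination of training error and sensitivity) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy RBFNN-style predictor (setup is illustrative, not from the thesis):
# f(x) = sum_j w_j * exp(-||x - c_j||^2 / (2 s^2))
centers = rng.normal(size=(5, 2))
weights = rng.normal(size=5)
width = 1.0

def rbf_predict(X):
    # Gaussian RBF activations of every sample against every center
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width ** 2)) @ weights

def stochastic_sensitivity(X, q, n_mc=200):
    """Monte Carlo estimate of E[(f(x + dx) - f(x))^2], with dx drawn
    uniformly from the q-neighborhood of each training sample."""
    base = rbf_predict(X)
    diffs = []
    for _ in range(n_mc):
        dx = rng.uniform(-q, q, size=X.shape)
        diffs.append((rbf_predict(X + dx) - base) ** 2)
    return float(np.mean(diffs))

X_train = rng.normal(size=(30, 2))
y_train = rbf_predict(X_train) + 0.01 * rng.normal(size=30)

train_mse = float(np.mean((rbf_predict(X_train) - y_train) ** 2))
sens = stochastic_sensitivity(X_train, q=0.1)

# Illustrative localized error estimate: training error plus a
# sensitivity term that grows with the neighborhood size q.
localized_error = train_mse + sens
```

Under this kind of estimate, enlarging the neighborhood size q increases the sensitivity term, which is what makes a trade-off such as MC2SG (largest neighborhoods subject to an error bound) well posed.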

Files in this item

Files Size Format
b20696875.pdf 2.633Mb PDF
