Author:  Zeng, Xiaoqin 
Title:  Output sensitivity of MLPs derived from statistical expectation 
Degree:  Ph.D. 
Year:  2002 
Subject:  Neural networks (Computer science) -- Hong Kong Polytechnic University -- Dissertations 
Department:  Dept. of Computing 
Pages:  viii, 98 leaves : ill. ; 30 cm 
Language:  English 
InnoPac Record:  http://library.polyu.edu.hk/record=b1616594 
URI:  http://theses.lib.polyu.edu.hk/handle/200/2104 
Abstract:  The sensitivity of a neural network's output to perturbations of its parameters is an important issue in the design and implementation of neural networks. What are the effects of parameter perturbation on the output of a neural network? How does one measure the degree of a neural network's response to parameter perturbation? The objective of this dissertation is to analyse and quantify the sensitivity of the most popular and general feedforward neural network, the Multilayer Perceptron (MLP), to input and weight perturbations. Based on the structural features of the MLP, a bottom-up approach is followed to study its sensitivity. The sensitivity of each neuron is computed in order from the first layer to the last; the results of the neurons in a layer are then collected to form the sensitivity of that layer; finally, the sensitivity of the output layer is defined as the sensitivity of the entire network. Sensitivity is defined as the mathematical expectation of the output deviation due to input and weight deviations, taken over all input and weight values in a given continuous interval. An analytical expression, a function of the input and weight deviations, is derived approximately for the sensitivity of a single neuron. Two algorithms are then presented to compute the sensitivity of an entire network. By analysing the derived analytical formula and executing one of the given algorithms, some significant observations on the behaviour of the MLP under input and weight perturbations are obtained, which can serve as guidelines for the design of an MLP. As intuitively expected, the sensitivity increases with the magnitude of the input and weight perturbations, but the increase has an upper bound determined by the structural configuration of the MLP, namely the number of neurons per layer and the number of layers. There exists an optimal value for the number of neurons in a layer, which yields the highest sensitivity value. 
The sensitivity decreases as the number of layers increases, and the decrease almost levels off when that number becomes large. Similarly, a quantified sensitivity measure to input deviation is developed for a specific MLP with fixed weights and thus a fixed network architecture. Based on the derived analytical expressions, two algorithms are given for computing the sensitivity of a single neuron and the sensitivity of an entire network. The sensitivity measure is a useful means of evaluating a network's performance, such as its error-tolerance and generalization capabilities. The application of the sensitivity analysis to hardware design, and of the sensitivity measure to the selection of weights for a more robust MLP, are discussed. 
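The abstract defines sensitivity as the expectation of the output deviation caused by input and weight deviations, taken over an input and weight interval. As a hedged illustration of that definition (not the thesis's analytical derivation or either of its algorithms), the expectation can be estimated numerically by Monte Carlo sampling for a small MLP; all function names, layer sizes, and perturbation magnitudes below are assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, weights):
    """Forward pass through a simple MLP with tanh activations.
    `weights` is a list of weight matrices, one per layer."""
    a = x
    for W in weights:
        a = np.tanh(W @ a)
    return a

def sensitivity(weights, n_in, delta_x, delta_w, samples=2000):
    """Monte Carlo estimate of E[ |y(x + dx, W + dW) - y(x, W)| ],
    with inputs x drawn uniformly from [-1, 1]^n_in and deviations
    dx, dW drawn uniformly from the given perturbation intervals."""
    deviations = []
    for _ in range(samples):
        x = rng.uniform(-1.0, 1.0, n_in)
        dx = rng.uniform(-delta_x, delta_x, n_in)
        perturbed = [W + rng.uniform(-delta_w, delta_w, W.shape)
                     for W in weights]
        y0 = mlp_forward(x, weights)
        y1 = mlp_forward(x + dx, perturbed)
        deviations.append(np.abs(y1 - y0).mean())
    return float(np.mean(deviations))

# Example: a 4-6-1 MLP (hypothetical architecture for illustration).
weights = [rng.standard_normal((6, 4)), rng.standard_normal((1, 6))]
s_small = sensitivity(weights, n_in=4, delta_x=0.01, delta_w=0.01)
s_large = sensitivity(weights, n_in=4, delta_x=0.05, delta_w=0.05)
```

Consistent with the abstract's observation, the estimate grows with the perturbation magnitude (`s_large > s_small`); the thesis's contribution is to obtain this quantity analytically, layer by layer, rather than by sampling.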
Files:  b16165949.pdf  3.927 MB  PDF 