Author: Zeng, Xiaoqin
Title: Output sensitivity of MLPs derived from statistical expectation
Degree: Ph.D.
Year: 2002
Subject: Neural networks (Computer science)
Hong Kong Polytechnic University -- Dissertations
Department: Department of Computing
Pages: viii, 98 leaves : ill. ; 30 cm
Language: English
Abstract: The sensitivity of a neural network's output to perturbations of its parameters is an important issue in the design and implementation of neural networks. What are the effects of parameter perturbation on the output of a neural network? How does one measure the degree of a network's response to parameter perturbation? The objective of this dissertation is to analyse and quantify the sensitivity of the most popular and general feedforward neural network, the Multilayer Perceptron (MLP), to input and weight perturbations. Based on the structural features of the MLP, a bottom-up approach is followed: the sensitivity of each neuron is computed in order from the first layer to the last, the results for the neurons in a layer are then collected to form the sensitivity of that layer, and finally the sensitivity of the output layer is taken as the sensitivity of the entire network. Sensitivity is defined as the mathematical expectation of the output deviation due to input and weight deviations, taken over all input and weight values in a given continuous interval. An approximate analytical expression for the sensitivity of a single neuron is derived as a function of the input and weight deviations, and two algorithms are then presented to compute the sensitivity of an entire network. By analyzing the derived formula and executing one of the given algorithms, some significant observations on the behavior of the MLP under input and weight perturbations are discovered, which can serve as guidelines for the design of an MLP. As intuitively expected, the sensitivity increases with the input and weight perturbations, but the increase has an upper bound determined by the structural configuration of the MLP, namely the number of neurons per layer and the number of layers. There exists an optimal number of neurons in a layer that yields the highest sensitivity value.
The sensitivity decreases as the number of layers increases, and the decrease almost levels off when the number of layers becomes large. Similarly, a quantified sensitivity measure to input deviation is developed for a specific MLP with fixed weights and thus a fixed network architecture. Based on the derived analytical expressions, two algorithms are given for computing the sensitivity of a single neuron and of an entire network. The sensitivity measure is a useful means of evaluating a network's performance, such as its error-tolerance and generalization capabilities. Applications of the sensitivity analysis to hardware design, and of the sensitivity measure to the selection of weights for a more robust MLP, are discussed.
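The definition at the heart of the abstract, sensitivity as the expectation of the output deviation when inputs and weights are perturbed within a continuous interval, can be illustrated with a small Monte Carlo sketch. This is not the dissertation's analytical derivation or either of its two algorithms; the network shape, the uniform sampling intervals, the omission of biases, and the sample count below are all illustrative assumptions.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(weights, x):
    """Forward pass through an MLP.

    weights: list of layers; each layer is a list of per-neuron
    weight vectors (biases omitted in this sketch).
    """
    a = x
    for layer in weights:
        a = [sigmoid(sum(w_i * a_i for w_i, a_i in zip(w, a)))
             for w in layer]
    return a

def sensitivity(weights, n_inputs, delta, samples=2000, seed=0):
    """Monte Carlo estimate of the expected absolute output deviation
    when every input and every weight is independently perturbed by a
    value drawn uniformly from [-delta, delta]; inputs themselves are
    drawn uniformly from [0, 1]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        x = [rng.uniform(0.0, 1.0) for _ in range(n_inputs)]
        dx = [rng.uniform(-delta, delta) for _ in range(n_inputs)]
        pw = [[[w_i + rng.uniform(-delta, delta) for w_i in w]
               for w in layer] for layer in weights]
        y0 = forward(weights, x)
        y1 = forward(pw, [a + d for a, d in zip(x, dx)])
        total += sum(abs(b - a) for a, b in zip(y0, y1)) / len(y0)
    return total / samples
```

Estimating the sensitivity of a hypothetical 3-2-1 network at two perturbation magnitudes shows the qualitative behaviour the abstract reports: the estimate grows with the perturbation size but, because the sigmoid outputs lie in (0, 1), the deviation is bounded regardless of how large the perturbation becomes.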
Rights: All rights reserved
Access: open access

Files in This Item:
File: b16165949.pdf
Description: For All Users
Size: 3.84 MB
Format: Adobe PDF

Copyright Undertaking

As a bona fide Library user, I declare that:

  1. I will abide by the rules and legal ordinances governing copyright regarding the use of the Database.
  2. I will use the Database for the purpose of my research or private study only and not for circulation or further reproduction or any other purpose.
  3. I agree to indemnify and hold the University harmless from and against any loss, damage, cost, liability or expenses arising from copyright infringement or unauthorized usage.

By downloading any item(s) listed above, you acknowledge that you have read and understood the copyright undertaking as stated above, and agree to be bound by all of its terms.

