Author:  Chang, Wingfai 
Title:  A study of second order learning algorithms for recurrent neural networks 
Degree:  M.Sc. 
Year:  1995 
Subject:  Neural networks (Computer science); Algorithms; Mathematical optimization; Hong Kong Polytechnic University -- Dissertations 
Department:  Multidisciplinary Studies 
Pages:  vi, 73 leaves : ill. ; 30 cm 
Language:  English 
InnoPac Record:  http://library.polyu.edu.hk/record=b1205015 
URI:  http://theses.lib.polyu.edu.hk/handle/200/85 
Abstract:  Backpropagation (BP) of error gradients has proven its usefulness in training feedforward neural networks to tackle a large number of classification and function-mapping problems. However, this method exhibits several serious problems during training. The user is required to select three arbitrary parameters: the learning rate, the momentum, and the number of hidden nodes, and an unfortunate choice can cause slow convergence. Moreover, the network can become trapped in a local minimum of the error function, arriving at an unacceptable solution when a much better one exists. Finally, the large number of learning iterations needed to adjust the weights optimally is prohibitive for online applications. Numerical optimization theory offers a rich and robust set of techniques which can be applied to improve the learning rate of neural networks. These techniques use not only the local gradient of the function but also its second derivative: the first derivative of the error measures the slope of the error surface at a point, while the second derivative measures the curvature of the error surface at the same point. This information is very useful in determining the optimal update direction. Among the variety of second order methods, the conjugate gradient method is commonly used in BP networks because of its speed and simplicity. It has been shown that a much shorter training time is required when a feedforward network is trained using second order methods. Recurrent networks, which include feedback loops (connections by which a node's prior output influences its subsequent output), are capable of processing temporal patterns, accepting sequences as inputs and producing them as outputs. Recurrent networks can be trained with backpropagation; however, such training requires a great deal of computation and memory, and encounters the same problems as in feedforward networks. 
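The interplay of slope and curvature described above can be sketched with a minimal conjugate gradient routine. This is an illustrative toy, not the dissertation's implementation: it minimizes an assumed quadratic objective f(x) = 0.5 x^T A x, where the Hessian A supplies the curvature used in the exact line search and the Fletcher-Reeves rule combines successive gradients into conjugate search directions.

```python
# Minimal Fletcher-Reeves conjugate gradient on a toy quadratic
# f(x) = 0.5 * x^T A x, whose gradient is g = A x and Hessian is A.
# The quadratic and its Hessian are illustrative assumptions.
def conjugate_gradient_quadratic(x, A, iters):
    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

    def dot(a, b):
        return sum(ai * bi for ai, bi in zip(a, b))

    g = matvec(A, x)            # first derivative: slope of the error surface
    d = [-gi for gi in g]       # initial search direction: steepest descent
    for _ in range(iters):
        Ad = matvec(A, d)
        # Exact line search along d uses curvature d^T A d (second derivative).
        alpha = -dot(g, d) / dot(d, Ad)
        x = [xi + alpha * di for xi, di in zip(x, d)]
        g_new = matvec(A, x)
        # Fletcher-Reeves coefficient makes the next direction conjugate.
        beta = dot(g_new, g_new) / dot(g, g)
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x
```

On an n-dimensional quadratic, conjugate gradient reaches the minimum in at most n steps, which is the source of the speed advantage the abstract attributes to second order methods.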
In this dissertation, the conjugate gradient method is applied to train a fully connected recurrent neural network. The result is then compared with another learning algorithm, the real-time recurrent learning (RTRL) algorithm, which uses only the first derivative of the error and is often used to train recurrent neural networks. The recurrent network implemented with these two learning algorithms is used to simulate a linear system (a second order Butterworth low-pass filter). In addition, the recurrent network is applied to a speaker recognition problem. This application shows that the recurrent network is able to learn temporally encoded sequences and to decide whether or not a speech sample corresponds to a particular speaker. 
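The first-derivative RTRL baseline mentioned above can be sketched for the smallest possible case, a single tanh unit with a self-feedback weight. This is a hedged illustration under assumed teacher weights and learning rate, not the dissertation's network: the sensitivities p_w and p_u carry dy/dw and dy/du forward through time, so each online weight update uses only gradient information.

```python
import math

# RTRL for one recurrent tanh unit: y(t) = tanh(w*y(t-1) + u*x(t)).
# Teacher weights, input signal, and learning rate are illustrative assumptions.
def run_rtrl(steps=4000, lr=0.05):
    w_t, u_t = 0.5, 1.0          # assumed teacher generating the target signal
    w, u = 0.1, 0.1              # student weights, learned online
    y_teach = y = 0.0
    p_w = p_u = 0.0              # sensitivities dy/dw and dy/du
    losses = []
    for t in range(steps):
        x = math.sin(0.3 * t)
        y_teach = math.tanh(w_t * y_teach + u_t * x)  # target
        y_prev = y
        y = math.tanh(w * y_prev + u * x)
        e = y - y_teach
        losses.append(0.5 * e * e)
        # RTRL recursions: propagate sensitivities forward in time.
        s = 1.0 - y * y                               # tanh'(activation)
        p_w, p_u = s * (y_prev + w * p_w), s * (x + w * p_u)
        # Online first-derivative update, the trait compared against
        # conjugate gradient in the dissertation.
        w -= lr * e * p_w
        u -= lr * e * p_u
    return losses
```

Because every step updates and stores a sensitivity per weight, RTRL's cost grows quickly with network size, which motivates comparing it against second order training.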
Files:  b12050155.pdf (2.377 MB, PDF) 