|Financial time series forecasting by neural network using conjugate gradient method
|Stock price forecasting
Neural networks (Computer science)
Conjugate gradient methods -- Data processing
Hong Kong Polytechnic University -- Dissertations
Department of Computing
|[v], 67 leaves : ill. ; 30 cm
|The three-layer Multilayer Perceptron (MLP) neural network is used for short-term stock price time series forecasting. The prediction is based on a Technical Analysis point of view: a number of technical indicators over an n-day period form the network input, and the network output is the change of the stock price on the (n+1)-th day. To overcome the difficulty of choosing the network parameters (learning rate, momentum rate) and the slow convergence of the basic Backpropagation learning algorithm, the Conjugate Gradient method with a restart procedure is used for network training. Random weight initialization with a uniform distribution is a popular technique for network initialization; different schemes have been proposed, but they are experimental and problem-dependent. In this project, the possibility of weight initialization using Multiple Linear Regression is discussed. So that the nonlinear network can be fitted to the linear regression model during the initial weight calculation, a network linearization method is adopted. Daily trade data of companies listed on the Shanghai Stock Exchange are used for network learning and validation, comparing the Conjugate Gradient method with regression weight initialization against the Backpropagation method with a momentum term and random weight initialization. The performance of these methods is compared and evaluated. The results indicate that the network can model the stock price time series satisfactorily. The proposed Conjugate Gradient method with linear regression weight initialization requires significantly fewer iterations than Backpropagation. In addition, the sensitivity of both Conjugate Gradient and Backpropagation learning to regression initialization has been investigated: Conjugate Gradient learning works better with regression initialization than with uniform random initialization, requiring fewer iterations, whereas a similar conclusion cannot be drawn for Backpropagation learning.
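The two ideas in the abstract (training an MLP with a Conjugate Gradient optimizer rather than plain backpropagation, and seeding the weights from a linear regression fit via a linearized network) can be sketched as follows. This is a minimal illustration, not the thesis code: the toy data, network sizes, and the particular linearization scheme (tanh(z) ≈ z for small z, so the end-to-end weights W1 @ W2 should match an ordinary least-squares fit) are all assumptions made for the example.

```python
# Sketch: one-hidden-layer MLP regression trained with SciPy's nonlinear
# Conjugate Gradient optimizer, with an optional regression-based weight
# initialization. Toy data stands in for the technical-indicator inputs.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy "technical indicator" inputs (200 samples x 3 indicators) and a
# synthetic nonlinear next-day price-change target.
X = rng.standard_normal((200, 3))
y = np.tanh(X @ np.array([0.5, -0.3, 0.8]))

n_in, n_hid = 3, 5  # assumed small network sizes for the demo

def unpack(theta):
    """Split the flat parameter vector into layer weights and biases."""
    i = 0
    W1 = theta[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = theta[i:i + n_hid]; i += n_hid
    W2 = theta[i:i + n_hid]; i += n_hid
    b2 = theta[i]
    return W1, b1, W2, b2

def loss(theta):
    W1, b1, W2, b2 = unpack(theta)
    h = np.tanh(X @ W1 + b1)          # hidden layer activations
    pred = h @ W2 + b2                # single linear output unit
    return np.mean((pred - y) ** 2)   # mean squared error

# Baseline: uniform random initialization, Conjugate Gradient training.
theta0 = rng.uniform(-0.5, 0.5, n_in * n_hid + n_hid + n_hid + 1)
res = minimize(loss, theta0, method="CG")

# Regression-based initialization (loose sketch of the thesis' idea):
# for small pre-activations tanh(z) ≈ z, so the network is approximately
# linear and its end-to-end weights W1 @ W2 should match an OLS fit.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
W1_init = np.outer(beta, np.full(n_hid, 1.0 / n_hid))  # W1 @ ones == beta
W2_init = np.ones(n_hid)
theta_reg = np.concatenate([W1_init.ravel(), np.zeros(n_hid),
                            W2_init, [0.0]])
res_reg = minimize(loss, theta_reg, method="CG")

print(f"random init:     MSE {loss(theta0):.4f} -> {res.fun:.4f}")
print(f"regression init: MSE {loss(theta_reg):.4f} -> {res_reg.fun:.4f}")
```

With a sensible linearization, the regression-initialized run typically starts from a much lower loss, which is consistent with the abstract's finding that Conjugate Gradient learning converges in fewer iterations from a regression start.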
|All rights reserved
Files in This Item:
|For All Users (off-campus access for PolyU Staff & Students only)