Full metadata record
|Department:||Department of Electronic and Information Engineering||en_US|
|Title:||A reading assistant system for the visually impaired||en_US|
|Abstract:||Access to textual information is of great importance in daily life: text appears everywhere in many forms, such as newspapers, mail, product instructions, restaurant menus, business cards, and road signs. There are about 170 million blind and visually impaired people around the world. Because of their vision problems, they must rely on others to get by in a society driven by textual information. Text recognition in natural scenes has received growing attention in recent years because of the potential of camera-based image acquisition and the wide availability of digital cameras, and more and more innovative applications have been developed to meet the needs of the visually impaired to access textual information. The objective of this project is to develop a reading assistant system that provides blind or visually impaired people with access to printed English text in natural scenes. The device developed consists of a USB camera and a laptop computer. Images of natural scenes are captured by the USB camera and converted into digital form. The image processing and text recognition software is developed with VC++ 6.0 and the Matlab platform and runs on the laptop computer. After image analysis and text recognition, the computer drives a speech synthesis engine to read the text aloud in real time. Image analysis and text recognition are performed by a set of algorithms that can be divided into text region extraction, image pre-processing, image skew and slant correction, character segmentation, and character recognition. Locating the text region is the first priority; a Gabor filter is used to detect and locate text regions, which improves the accuracy and reliability of the text extraction algorithm.
Because of the low quality of original images from natural scenes, several essential image processing algorithms are optimized, such as binarization, noise reduction, and skew correction. In our system, adaptive thresholding, a median filter, the Hough transform, and the projection algorithm are utilized. For character recognition, the method is based on an artificial neural network trained with the back-propagation algorithm. In this dissertation, the system structure and the processing flow chart are discussed. In addition, to evaluate the proposed algorithms, test images are selected from the public database of the International Conference on Document Analysis and Recognition (ICDAR 2003). Experimental results show that the proposed methods provide effective and reliable performance. The system developed makes its users (the blind or visually impaired) aware of textual information that would otherwise be unavailable to them. Our efforts contribute to improving the quality of life of people with disabilities through emerging assistive technologies.||en_US|
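The pre-processing steps named in the abstract (adaptive thresholding for binarization, median filtering for noise reduction, and projection-based skew estimation) can be sketched roughly as follows. This is an illustrative pure-NumPy stand-in, not the dissertation's actual VC++/Matlab implementation; all function names and parameter values are assumptions, and the projection-variance skew estimator stands in for the Hough/projection method the thesis describes.

```python
import numpy as np

def adaptive_threshold(img, block=15, offset=10):
    """Mark a pixel as text (1) if it is darker than its local mean minus an offset.

    The local mean over a block x block window is computed with an
    integral image (2-D cumulative sum) so the cost is independent of block size.
    """
    pad = block // 2
    padded = np.pad(img.astype(np.float64), pad, mode="edge")
    ii = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    ii = np.pad(ii, ((1, 0), (1, 0)))          # leading zero row/column
    h, w = img.shape
    window_sum = (ii[block:block + h, block:block + w]
                  - ii[:h, block:block + w]
                  - ii[block:block + h, :w]
                  + ii[:h, :w])
    local_mean = window_sum / (block * block)
    return (img < local_mean - offset).astype(np.uint8)

def median_filter3(img):
    """3x3 median filter: removes isolated salt-and-pepper noise pixels."""
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    stack = np.stack([pad[i:i + h, j:j + w] for i in range(3) for j in range(3)])
    return np.median(stack, axis=0).astype(img.dtype)

def estimate_skew(binary, angles=np.arange(-10, 10.5, 0.5)):
    """Estimate document skew from a binary image.

    For each candidate angle, rotate the foreground pixel coordinates and
    score the horizontal projection profile by its variance; the sharpest
    profile (text rows piling into few bins) wins.
    """
    ys, xs = np.nonzero(binary)
    best_angle, best_score = 0.0, -1.0
    for a in angles:
        t = np.deg2rad(a)
        rot_y = (xs * np.sin(t) + ys * np.cos(t)).astype(int)
        profile = np.bincount(rot_y - rot_y.min())
        score = profile.var()
        if score > best_score:
            best_angle, best_score = a, score
    return best_angle
```

In this sketch the threshold adapts to local illumination, which matters for natural-scene photos where a single global threshold fails; the variance-of-projection criterion is one common way to realize the projection algorithm the abstract mentions.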
|Pages:||xiii, 107 leaves : ill. ; 30 cm.||en_US|
|Subject:||Hong Kong Polytechnic University -- Dissertations.||en_US|
|Subject:||Reading devices for people with disabilities.||en_US|
|Subject:||People with visual disabilities.||en_US|
|Subject:||Blind, Apparatus for the.||en_US|
|Subject:||Assistive computer technology.||en_US|
Files in This Item:
|b23030653.pdf||For All Users (off-campus access for PolyU Staff & Students only)||21.71 MB||Adobe PDF||View/Open|