Full metadata record
DC Field: Value (Language)
dc.contributor: Department of Electronic and Information Engineering (en_US)
dc.contributor.advisor: Siu, Wan-chi (EIE) (en_US)
dc.contributor.advisor: Lun, P. K. Daniel (EIE) (en_US)
dc.creator: Wang, Liwen
dc.identifier.uri: https://theses.lib.polyu.edu.hk/handle/200/13033
dc.language: English (en_US)
dc.publisher: Hong Kong Polytechnic University (en_US)
dc.rights: All rights reserved (en_US)
dc.title: Deep learning technologies for scene enlightening and road safety (en_US)
dcterms.abstract: Vision is the most important sense for human beings. Similarly, image and video signals are among the most common ways for machines to perceive the external world. Benefiting from the growth of computational power, both conventional and deep-learning-based machine learning methods have drawn great attention in the computer vision field. These learning-based methods make it possible to exploit captured digital visual signals automatically. However, under insufficient illumination a camera often cannot produce pleasing pictures, which limits the usefulness of the captured visual signal. This thesis focuses on image enhancement in low-light conditions and on scene analysis, with an emphasis on object recognition. (en_US)
dcterms.abstract: Low-light image enhancement is a challenging task. To address it, we regard low-light enhancement as a residual learning problem, that is, estimating the residual between low-light and normal-light images. We propose a novel Deep Lightening Network (DLN) that consists of several Lightening Back-Projection (LBP) blocks. The LBP blocks perform lightening and darkening processes iteratively to learn the residual for the normal-light estimate. To utilize local and global features effectively, we also propose a Feature Aggregation (FA) block that adaptively fuses the outputs of the different LBP blocks. Experiments show that the proposed method is very effective. (en_US)
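
To make the residual-learning formulation in the abstract above concrete, the following PyTorch-style sketch shows one plausible reading of it: stacked back-projection blocks that lighten, project back, and correct with the back-projection error, followed by a simple fusion step and a residual connection to the input. The channel width, kernel sizes, activations, and the 1x1-convolution stand-in for the Feature Aggregation block are illustrative assumptions, not the exact DLN architecture reported in the thesis.

# A minimal sketch of the residual-learning idea, assuming an LBP-style block design.
import torch
import torch.nn as nn

class LighteningBackProjection(nn.Module):
    """One back-projection step: lighten, darken back, and correct with the residual error."""
    def __init__(self, channels=64):
        super().__init__()
        self.lighten = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.darken = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.correct = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))

    def forward(self, x):
        lit = self.lighten(x)               # estimate a lightened feature map
        back = self.darken(lit)             # project it back toward the low-light domain
        residual = self.correct(x - back)   # correct using the back-projection error
        return lit + residual

class DeepLighteningNet(nn.Module):
    """Stacks several LBP-style blocks, fuses their outputs, and predicts a residual image."""
    def __init__(self, channels=64, num_blocks=3):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.blocks = nn.ModuleList([LighteningBackProjection(channels) for _ in range(num_blocks)])
        # Simple stand-in for feature aggregation: a 1x1 conv over the concatenated block outputs.
        self.aggregate = nn.Conv2d(channels * num_blocks, channels, 1)
        self.tail = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, low):
        feat = self.head(low)
        outs = []
        for block in self.blocks:
            feat = block(feat)
            outs.append(feat)
        fused = self.aggregate(torch.cat(outs, dim=1))
        residual = self.tail(fused)
        return low + residual               # normal-light estimate = low-light input + learned residual

if __name__ == "__main__":
    net = DeepLighteningNet()
    dummy = torch.rand(1, 3, 64, 64)        # a toy low-light image
    print(net(dummy).shape)                 # torch.Size([1, 3, 64, 64])

In this reading, the network never predicts the bright image directly; it predicts only the correction added to the low-light input, which keeps the learning target small and centred.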
dcterms.abstract: Because the raw signal captured through the Color Filter Array (CFA) contains the richest digital information, we further extend our method to operate on the camera sensor's raw signal and produce standard RGB (sRGB) images with more pleasing visual quality. This provides night-time surveillance systems and autonomous cars with better input signals. Because the initial raw signal is noisy in dark environments, a denoising operator is proposed to remove the influence of the noise, and a weighting operation is introduced to adjust the compensation adaptively. Experimental results demonstrate that the proposed method is very effective. (en_US)
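
The raw-domain extension can be pictured as a small pre-processing step applied to the mosaiced sensor signal before demosaicing: denoise first, then brighten with a weight that depends on how dark each pixel is. The NumPy sketch below is a schematic of that idea only; the Gaussian denoiser, the global gain rule, and the brightness-dependent weight map are illustrative placeholders, not the denoising operator or weighting operation proposed in the thesis.

# A schematic sketch: denoise the dark raw mosaic, then apply adaptively weighted compensation.
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_raw(bayer, target_mean=0.4, sigma=1.0):
    """bayer: HxW raw mosaic in [0, 1] captured under low light (toy placeholder pipeline)."""
    denoised = gaussian_filter(bayer, sigma=sigma)     # suppress sensor noise before amplification
    gain = target_mean / max(denoised.mean(), 1e-6)    # global brightening factor
    weight = 1.0 - denoised                            # compensate dark pixels more than bright ones
    compensated = denoised * (1.0 + weight * (gain - 1.0))
    return np.clip(compensated, 0.0, 1.0)              # still a mosaic; demosaic to sRGB afterwards

if __name__ == "__main__":
    raw = np.random.rand(8, 8) * 0.1                   # a toy, very dark raw patch
    print(enhance_raw(raw).mean())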
dcterms.abstract: Besides image processing tasks, we also explored automatic video analysis methods, especially for road videos. Measuring the distance between two points on the road is extremely useful for autonomous driving, intelligent traffic systems, etc. To give road cameras real-world perception, we propose a novel scheme that automatically builds the connection between pixel measurements and real-world distances. In our design, each pixel is assigned a weight unit consisting of two orthogonal weights on the scene. An efficient Distance Estimation Network (DEN) is then proposed to generate two maps of the distance weights on the road. Based on prior knowledge of road videos, we propose a set of constraints to assist the training of the DEN. The approach is novel in that it does not require the user to enter any parameter into the system explicitly. We have evaluated the proposed method in different scenes. Numerical results show that the proposed method can estimate distances on the road automatically and correctly, and this approach is being used in one of our recent consultancy projects. (en_US)
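
One way to read the two-weight design is that each pixel carries a pair of metres-per-pixel factors for the two orthogonal road directions, so a path through the image can be converted into a real-world length by accumulating those factors. The sketch below illustrates that conversion with hand-made constant maps; the map values, the straight-line sampling rule, and the helper name pixel_distance are assumptions for illustration, and in the proposed scheme the maps would be produced by the DEN.

# A small sketch: turn two per-pixel metres-per-pixel maps into a real-world distance.
import numpy as np

def pixel_distance(weight_u, weight_v, p0, p1, samples=200):
    """Accumulate metres along the straight image path from p0 to p1.

    weight_u, weight_v: HxW maps of metres per pixel along the two orthogonal road
    directions. p0, p1: (row, col) image coordinates.
    """
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    step = (p1 - p0) / samples                                 # pixel displacement per segment
    mids = p0 + (np.arange(samples)[:, None] + 0.5) * step     # midpoint of each segment
    rows = np.clip(mids[:, 0].round().astype(int), 0, weight_u.shape[0] - 1)
    cols = np.clip(mids[:, 1].round().astype(int), 0, weight_u.shape[1] - 1)
    du = abs(step[0]) * weight_u[rows, cols]                   # metres along direction u per segment
    dv = abs(step[1]) * weight_v[rows, cols]                   # metres along direction v per segment
    return float(np.hypot(du, dv).sum())

if __name__ == "__main__":
    h, w = 240, 320
    wu = np.full((h, w), 0.05)                                 # toy map: 5 cm per pixel vertically
    wv = np.full((h, w), 0.02)                                 # toy map: 2 cm per pixel horizontally
    print(pixel_distance(wu, wv, (50, 60), (200, 260)))        # about 8.5 metres for this toy setup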
dcterms.extent: 237 pages : color illustrations (en_US)
dcterms.isPartOf: PolyU Electronic Theses (en_US)
dcterms.issued: 2022 (en_US)
dcterms.educationalLevel: Ph.D. (en_US)
dcterms.educationalLevel: All Doctorate (en_US)
dcterms.accessRights: open access (en_US)

Files in This Item:
File: 7466.pdf
Description: For All Users
Size: 8.97 MB
Format: Adobe PDF


Please use this identifier to cite or link to this item: https://theses.lib.polyu.edu.hk/handle/200/13033