Full metadata record
DC Field | Value | Language
dc.contributor | Department of Electronic and Information Engineering | en_US
dc.contributor.advisor | Chan, Yui-lam (EIE) | en_US
dc.creator | Cao, Yue | -
dc.identifier.uri | https://theses.lib.polyu.edu.hk/handle/200/11187 | -
dc.language | English | en_US
dc.publisher | Hong Kong Polytechnic University | en_US
dc.rights | All rights reserved | en_US
dc.title | Quality enhancement for compressed screen content video using post processing algorithms with deep learning | en_US
dcterms.abstract | Video quality enhancement has made great strides in recent years. Early approaches relied on traditional in-loop filters, such as deblocking and sample adaptive offset (SAO), to suppress compression artifacts; more recently, deep learning methods have been increasingly adopted. Example deep learning models for video quality enhancement include the ten-layer Deep CNN-based Auto Decoder (DCAD) and the twenty-layer Denoising Convolutional Neural Network (DnCNN). At present, most video quality enhancement models are trained on natural sequences. With the rise of video conferencing and remote work, however, the proportion of screen content video is increasing. Because screen content video differs significantly from natural video, its distinct characteristics have drawn considerable research attention when quality enhancement algorithms are applied. To broaden the range of quality enhancement application scenarios, this thesis first builds a dataset of screen content video and evaluates, on that dataset, models originally designed for natural sequences. Among them, DCAD and DnCNN perform best, achieving average BD-rate gains of 2.71% and 3.41%, respectively, under HM16.20-SCM8.8. In screen content video, artifacts tend to appear at the boundaries of certain coding blocks, and the artifacts induced by the new Screen Content Coding (SCC) tools differ markedly from those in natural video sequences. Based on this observation, a dual-input model is proposed specifically for screen content video. Among the mask types compared, the mixed mask, which combines CU boundary information with block coding mode information, performs best: it further reduces BD-rate by 0.40% and 0.36%, to 3.81% and 3.07%, for DnCNN and DCAD, respectively. | en_US
dcterms.extent | x, 63 pages : color illustrations | en_US
dcterms.isPartOf | PolyU Electronic Theses | en_US
dcterms.issued | 2021 | en_US
dcterms.educationalLevel | M.Sc. | en_US
dcterms.educationalLevel | All Master | en_US
dcterms.LCSH | Digital video | en_US
dcterms.LCSH | Image processing -- Digital techniques | en_US
dcterms.LCSH | Imaging systems -- Image quality | en_US
dcterms.LCSH | Hong Kong Polytechnic University -- Dissertations | en_US
dcterms.accessRights | restricted access | en_US
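
The mixed mask described in the abstract (CU boundary information plus block coding mode information) can be illustrated with a minimal sketch. Everything below is an assumption for illustration only — numpy, the `build_mixed_mask` helper, the `(y, x, h, w, mode)` CU tuple format, and the mode labels are not taken from the thesis:

```python
import numpy as np

def build_mixed_mask(height, width, cu_list):
    """Build a two-channel mask for a decoded frame.

    Channel 0 marks the one-pixel border of each coding unit (CU),
    where screen-content artifacts tend to appear; channel 1 fills
    each block with a normalized coding-mode label. `cu_list` is a
    hypothetical list of (y, x, h, w, mode) tuples, with mode an
    integer label (e.g. 1 = intra, 2 = IBC, 3 = palette).
    """
    mask = np.zeros((2, height, width), dtype=np.float32)
    for y, x, h, w, mode in cu_list:
        # Channel 0: CU boundary map.
        mask[0, y:y + h, x] = 1.0
        mask[0, y:y + h, x + w - 1] = 1.0
        mask[0, y, x:x + w] = 1.0
        mask[0, y + h - 1, x:x + w] = 1.0
        # Channel 1: coding-mode map, normalized to [0, 1].
        mask[1, y:y + h, x:x + w] = mode / 3.0
    return mask

# Two 8x8 CUs side by side in an 8x16 frame: intra (1) and IBC (2).
mask = build_mixed_mask(8, 16, [(0, 0, 8, 8, 1), (0, 8, 8, 8, 2)])
```

Under this reading, the two-channel array would be stacked alongside the decoded frame to form the second input of the dual-input enhancement network.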

Files in This Item:
File | Description | Size | Format
5664.pdf | For All Users (off-campus access for PolyU Staff & Students only) | 2.31 MB | Adobe PDF


Copyright Undertaking

As a bona fide Library user, I declare that:

  1. I will abide by the rules and legal ordinances governing copyright regarding the use of the Database.
  2. I will use the Database for the purpose of my research or private study only and not for circulation or further reproduction or any other purpose.
  3. I agree to indemnify and hold the University harmless from and against any loss, damage, cost, liability or expenses arising from copyright infringement or unauthorized usage.

By downloading any item(s) listed above, you acknowledge that you have read and understood the copyright undertaking as stated above, and agree to be bound by all of its terms.

