Author: Cao, Yue
Title: Quality enhancement for compressed screen content video using post processing algorithms with deep learning
Advisors: Chan, Yui-lam (EIE)
Degree: M.Sc.
Year: 2021
Subject: Digital video
Image processing -- Digital techniques
Imaging systems -- Image quality
Hong Kong Polytechnic University -- Dissertations
Department: Department of Electronic and Information Engineering
Pages: x, 63 pages : color illustrations
Language: English
Abstract: In the past few years, the field of video quality enhancement has made great strides. Traditional in-loop filters such as deblocking and sample adaptive offset (SAO) were initially used to reduce compression artifacts, and deep learning methods are now increasingly applied to enhance video quality. Example deep learning models for video quality enhancement include the ten-layer Deep Convolutional Neural Network-based Auto Decoder (DCAD) and the twenty-layer Denoising Convolutional Neural Network (DnCNN). At present, most video quality enhancement models are trained on natural video sequences. With the rise of scenarios such as video conferencing and remote work, the proportion of screen content video is increasing. Since screen content video differs significantly from natural video, researchers have paid considerable attention to the characteristics of screen content video when applying quality enhancement algorithms. To cover a wider range of application scenarios, this thesis first constructs a dataset of screen content video and evaluates, on that dataset, the models originally developed for enhancing natural sequences. Among them, the DCAD and DnCNN models achieve the best results, with average BD-rate gains of 2.71% and 3.41%, respectively, over HM16.20-SCM8.8. Artifacts in screen content video tend to appear at the boundaries of certain coding blocks, and the artifacts induced by the new Screen Content Coding (SCC) tools differ markedly from those in natural video sequences. Based on this observation, a dual-input model is proposed in this work specifically for screen content video. Comparing different types of masks, the mixed mask, which combines CU boundary information and block coding mode information, performs best: the BD-rate gain further improves by 0.40% and 0.36%, to 3.81% and 3.07%, for DnCNN and DCAD, respectively.
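The BD-rate figures quoted in the abstract follow the standard Bjontegaard metric: each codec's rate-distortion points are fitted with a third-order polynomial of log-bitrate as a function of PSNR, and the fitted curves are integrated over the overlapping PSNR range to obtain the average bitrate difference at equal quality. A minimal sketch of that computation is below; the RD points are illustrative placeholders, not measurements from the thesis.

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Bjontegaard delta rate: average bitrate difference (%) between two
    rate-distortion curves at equal quality. A negative value means the
    test codec saves bitrate relative to the anchor."""
    # Fit log-rate as a cubic polynomial of PSNR for each curve.
    p_anchor = np.polyfit(psnr_anchor, np.log(rate_anchor), 3)
    p_test = np.polyfit(psnr_test, np.log(rate_test), 3)
    # Integrate both fits over the overlapping PSNR interval.
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    int_anchor = np.polyval(np.polyint(p_anchor), hi) - np.polyval(np.polyint(p_anchor), lo)
    int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
    # Average log-rate difference, converted back to a percentage.
    avg_diff = (int_test - int_anchor) / (hi - lo)
    return (np.exp(avg_diff) - 1) * 100

# Illustrative RD points for four QPs (anchor vs. enhanced output):
anchor_rate = [1000, 1600, 2500, 4000]
anchor_psnr = [34.0, 36.0, 38.0, 40.0]
test_rate = [950, 1520, 2400, 3850]
test_psnr = [34.2, 36.2, 38.1, 40.1]
print(round(bd_rate(anchor_rate, anchor_psnr, test_rate, test_psnr), 2))
```

With the test curve offering slightly lower rates at slightly higher PSNR, the function returns a negative percentage, i.e. a BD-rate saving of the kind reported for DCAD and DnCNN above.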
Rights: All rights reserved
Access: restricted access

Files in This Item:
File: 5664.pdf
Description: For All Users (off-campus access for PolyU Staff & Students only)
Size: 2.31 MB
Format: Adobe PDF

Copyright Undertaking

As a bona fide Library user, I declare that:

  1. I will abide by the rules and legal ordinances governing copyright regarding the use of the Database.
  2. I will use the Database for the purpose of my research or private study only and not for circulation or further reproduction or any other purpose.
  3. I agree to indemnify and hold the University harmless from and against any loss, damage, cost, liability or expenses arising from copyright infringement or unauthorized usage.

By downloading any item(s) listed above, you acknowledge that you have read and understood the copyright undertaking as stated above, and agree to be bound by all of its terms.

