Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor | Department of Electronic and Information Engineering | en_US |
dc.contributor.advisor | Chi, Zheru (EIE) | - |
dc.creator | Peng, Huayi | - |
dc.identifier.uri | https://theses.lib.polyu.edu.hk/handle/200/9572 | - |
dc.language | English | en_US |
dc.publisher | Hong Kong Polytechnic University | - |
dc.rights | All rights reserved | en_US |
dc.title | Transfer Chinese ink wash style into arbitrary images | en_US |
dcterms.abstract | The aim of this dissertation is to provide a methodology for transferring the Chinese ink wash style into arbitrary images. Neural style transfer has been a popular topic since 2015, with researchers applying neural networks to achieve image style transfer. However, existing neural style transfer approaches do not produce satisfying results when transferring Chinese ink wash styles. The reason is that Chinese ink wash paintings have a unique characteristic called "Liu-bai" (deliberate blank space), which emphasizes the main subject of the scene and suppresses irrelevant details. Existing algorithms and models do not take this factor into consideration, so the generated images cannot rival freehand Chinese ink wash paintings in beauty and feeling. To overcome this problem, we trained a deep semantic image segmentation network and employed saliency maps to preprocess the input images. A mask is generated to remove unwanted details. The masked content image and the Chinese ink wash painting are then fed into a convolutional autoencoder, which consists of a VGG-based encoder and a trained decoder. The content image and the Chinese ink wash painting are encoded into feature maps, which are concatenated and passed to the decoder. The decoder decodes the feature maps into an image that carries both the content information of the input content image and the style information of the Chinese ink wash painting. We show that, with the help of a semantic image segmentation network and a saliency map, the unwanted details of the content image can be easily removed, and the resulting image can be generated by a convolutional autoencoder with the "Liu-bai" feature. Moreover, we introduce two enhancement methodologies to further improve image quality. | en_US |
dcterms.extent | xiii, 45 pages : color illustrations | en_US |
dcterms.isPartOf | PolyU Electronic Theses | en_US |
dcterms.issued | 2018 | en_US |
dcterms.educationalLevel | M.Sc. | en_US |
dcterms.educationalLevel | All Master | en_US |
dcterms.LCSH | Hong Kong Polytechnic University -- Dissertations | en_US |
dcterms.LCSH | Image processing -- Digital techniques | en_US |
dcterms.LCSH | Ink painting, Chinese | en_US |
dcterms.accessRights | restricted access | en_US |
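The preprocessing and fusion steps the abstract describes, masking out low-saliency regions of the content image and concatenating encoder feature maps before decoding, can be sketched as follows. This is a minimal NumPy illustration under stated assumptions: the function names, array shapes, and the simple saliency threshold are hypothetical, not the thesis's actual implementation, and the encoder/decoder networks themselves are omitted.

```python
import numpy as np

def apply_saliency_mask(content, saliency, threshold=0.5):
    """Blank out low-saliency regions (an approximation of "Liu-bai"):
    keep pixels where the saliency map exceeds the threshold."""
    mask = (saliency > threshold).astype(content.dtype)
    # Broadcast the single-channel mask across the RGB channels.
    return content * mask[..., None]

def fuse_features(content_feats, style_feats):
    """Concatenate the encoder's content and style feature maps along
    the channel axis, forming the input the decoder would receive."""
    return np.concatenate([content_feats, style_feats], axis=-1)

# Toy example: a 4x4 RGB "content image" and a saliency map that is
# high only in the central 2x2 region.
content = np.ones((4, 4, 3))
saliency = np.zeros((4, 4))
saliency[1:3, 1:3] = 0.9
masked = apply_saliency_mask(content, saliency)

# Feature fusion on placeholder 2x2 maps with 64 channels each.
fused = fuse_features(np.zeros((2, 2, 64)), np.zeros((2, 2, 64)))
```

In this sketch only the central salient region of the content image survives the mask, and the fused feature tensor has twice the channel count of either input, which is what a decoder trained on concatenated content and style features would consume.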
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
991022144624803411.pdf | For All Users (off-campus access for PolyU Staff & Students only) | 9.76 MB | Adobe PDF | View/Open |