Author: Ding, Yueming
Title: Personalising 3D digital human face models for fashion
Advisors: Mok, P. Y. (SFT)
Degree: Ph.D.
Year: 2024
Department: School of Fashion and Textiles
Pages: xxv, 221 pages : color illustrations
Language: English
Abstract: The fashion industry continues to develop at a very rapid pace: fashion is an indispensable part of human life and plays an important role in the world's economy. In China, the apparel market surged to a substantial US$318.10 billion in 2023, making it the world's second-largest market. In the digital era, personalised services are increasingly important, particularly in fashion-related applications. Personalised products and services are those that fit individual users well, and each user has a unique facial appearance. There is an undeniably strong link between human faces and fashion, and ongoing research efforts continue to connect the two.
By thoroughly reviewing face-related fashion research and applications, this study introduces a novel 3D face shape reconstruction method that combines facial contour features, extracted by a deep neural network, with hard-blended edge features. These facial features, including landmarks and hard-blended edges, drive the 3D face reconstruction, achieving high accuracy and robustness on both synthetic and in-the-wild facial image datasets.
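The thesis does not publish its algorithm in this record, but the general technique it builds on, fitting a 3D morphable face model to detected 2D landmarks, can be sketched in a few lines. The sketch below uses synthetic data, a toy orthographic camera, and a random shape basis (all assumptions, not the author's model); the shape coefficients that best explain the observed landmarks then follow from a linear least-squares solve.

```python
# Illustrative sketch (not the thesis method): landmark-guided 3DMM fitting.
# A 3D face shape is modelled as mean + basis @ coeffs; under a fixed
# orthographic projection, the shape coefficients that best explain a set
# of observed 2D landmarks have a closed-form least-squares solution.
import numpy as np

rng = np.random.default_rng(0)
n_lmk, n_coeff = 68, 10                           # 68 landmarks, 10 shape components (toy sizes)

mean_shape = rng.normal(size=(n_lmk, 3))          # synthetic mean landmark positions
basis = rng.normal(size=(n_lmk * 3, n_coeff))     # synthetic shape basis (columns = components)
P = np.array([[1.0, 0.0, 0.0],                    # orthographic projection: drop the z axis
              [0.0, 1.0, 0.0]])

def project(shape3d):
    """Project Nx3 landmark positions to Nx2 image coordinates."""
    return shape3d @ P.T

# Simulate observed 2D landmarks from known ground-truth coefficients.
true_coeffs = rng.normal(size=n_coeff)
shape3d = mean_shape + (basis @ true_coeffs).reshape(n_lmk, 3)
observed2d = project(shape3d)

# Build the linear system A @ coeffs = b, where column k of A is the 2D
# displacement pattern produced by shape component k.
A = np.vstack([project(basis[:, k].reshape(n_lmk, 3)).ravel()
               for k in range(n_coeff)]).T
b = (observed2d - project(mean_shape)).ravel()
est_coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
# est_coeffs recovers the simulated ground-truth coefficients
```

In practice the basis comes from a learned statistical face model and the projection and pose are estimated jointly; the contour-edge features described in the abstract add further constraints beyond sparse landmarks.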
Beyond accurate 3D face shape reconstruction, the study proposes a new facial appearance completion method based on a diffusion model that generates high-resolution, richly detailed facial texture maps from single-view face images. The completed texture maps are of high fidelity even under large poses or occlusions, and the trained diffusion model achieves high-quality rendering that preserves important details of the input faces (e.g. wrinkles or moles).
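One common way diffusion models complete a partially observed image, as in RePaint-style inpainting, is to re-inject a freshly noised copy of the known pixels at every reverse step, so the model only invents content in the masked (occluded) region. The toy numpy sketch below shows that masking mechanism; the `denoise_step` function is a stand-in for a trained denoiser, and nothing here is claimed to be the thesis's actual algorithm.

```python
# Minimal sketch of masked diffusion inpainting (assumption: a
# RePaint-style loop; the trained denoiser is replaced by a stand-in).
import numpy as np

rng = np.random.default_rng(0)
T = 50
betas = np.linspace(1e-4, 0.02, T)       # standard linear noise schedule
alphas_bar = np.cumprod(1.0 - betas)

texture = rng.uniform(-1, 1, size=(8, 8))   # toy "observed" UV texture map
mask = np.ones((8, 8))
mask[:, 4:] = 0.0                           # right half occluded (unknown)

def denoise_step(x, t):
    """Stand-in for a trained denoiser's reverse step (hypothetical)."""
    return x * np.sqrt(1.0 - betas[t]) + rng.normal(scale=np.sqrt(betas[t]), size=x.shape)

x = rng.normal(size=texture.shape)          # start generation from pure noise
for t in reversed(range(T)):
    # Noise the observed pixels forward to the current timestep t...
    known = (np.sqrt(alphas_bar[t]) * texture
             + np.sqrt(1.0 - alphas_bar[t]) * rng.normal(size=texture.shape))
    # ...and keep the model's sample only where the texture is unknown.
    x = mask * known + (1.0 - mask) * x
    x = denoise_step(x, t)

# Final composite: observed pixels are kept exactly, the occluded half
# is filled by the (here, toy) generative process.
x = mask * texture + (1.0 - mask) * x
```

With a real trained denoiser, the filled region inherits skin tone and detail statistics consistent with the visible pixels, which is what enables high-fidelity completion under large poses or occlusions.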
Furthermore, this study develops a topology-free approach for personalising full-body fashion avatars: face reconstructions from single-view images are first transferred to full-head models, and then from full-head models to complete human body models with consistent textures. The proposed method is inexpensive, eliminates the need for specialised equipment, and is applicable to head and body models of different topologies.
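A transfer is "topology-free" when it needs no shared vertex ordering or mesh connectivity between source and target. One simple way to achieve that, shown in the hedged sketch below (toy random meshes, not the thesis's algorithm), is nearest-neighbour correspondence in 3D space: each target vertex copies the colour of its closest source vertex.

```python
# Hedged sketch (not the thesis algorithm): topology-free per-vertex
# colour transfer via nearest-neighbour correspondence in 3D space,
# which works between meshes with different vertex counts and orderings.
import numpy as np

rng = np.random.default_rng(1)
src_verts = rng.uniform(size=(500, 3))   # source head mesh vertices (toy)
src_cols = rng.uniform(size=(500, 3))    # RGB colour per source vertex
tgt_verts = rng.uniform(size=(300, 3))   # target mesh with a different topology

# For each target vertex, find the closest source vertex (brute-force
# squared distances; a k-d tree would be used at realistic mesh sizes).
d2 = ((tgt_verts[:, None, :] - src_verts[None, :, :]) ** 2).sum(axis=-1)
nearest = d2.argmin(axis=1)              # index of closest source vertex
tgt_cols = src_cols[nearest]             # transferred per-vertex colours
```

Real pipelines typically transfer in a shared UV or canonical space after aligning the meshes, and blend seams at the head-body boundary to keep textures consistent, but the correspondence step above is the core idea that removes the shared-topology requirement.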
The integrated approach can be applied to virtual try-on systems, preserving users' individual identities and improving the realism of the try-on experience. It allows a more accurate representation of how clothing and fashion items would appear on each individual, enabling users to make informed decisions and fostering a more satisfying and engaging virtual shopping experience.
Rights: All rights reserved
Access: open access

Files in This Item:
File: 8619.pdf | Description: For All Users | Size: 10.58 MB | Format: Adobe PDF


Copyright Undertaking

As a bona fide Library user, I declare that:

  1. I will abide by the rules and legal ordinances governing copyright regarding the use of the Database.
  2. I will use the Database for the purpose of my research or private study only and not for circulation or further reproduction or any other purpose.
  3. I agree to indemnify and hold the University harmless from and against any loss, damage, cost, liability or expenses arising from copyright infringement or unauthorized usage.

By downloading any item(s) listed above, you acknowledge that you have read and understood the copyright undertaking as stated above, and agree to be bound by all of its terms.


Please use this identifier to cite or link to this item: https://theses.lib.polyu.edu.hk/handle/200/14164