Full metadata record
DC Field | Value | Language
dc.contributor | School of Fashion and Textiles | en_US
dc.contributor.advisor | Mok, P. Y. Tracy (SFT) | en_US
dc.contributor.advisor | Fan, Jintu (SFT) | en_US
dc.creator | He, Honghong | -
dc.identifier.uri | https://theses.lib.polyu.edu.hk/handle/200/14304 | -
dc.language | English | en_US
dc.publisher | Hong Kong Polytechnic University | en_US
dc.rights | All rights reserved | en_US
dc.title | Intelligent body-aware 2D virtual try-on and tileable texture generations for 3D visual try-on applications | en_US
dcterms.abstract | The worldwide reach of e-commerce enables individuals and businesses to buy and sell products online with ease, anytime and anywhere. Nevertheless, specific consumer concerns still hinder online purchases of fashion, largely because clothing products, unlike other typical products, are soft and depend on wearers’ body forms. More specifically, these concerns include clothing fit, shape, and size, compatibility with other fashion items, how well the clothing matches the wearer’s skin tone, and so forth. They are reflected in the sky-high return rates for clothing products purchased online, and high returns generate substantial carbon emissions that harm the environment. To improve the customer experience while promoting sustainable development of the fashion industry, virtual try-on (VTON) technology offers a viable solution to these problems. | en_US
dcterms.abstract | This thesis explores VTON methods for fashion applications, allowing users to preview garments before making a purchase decision. VTON methods can be broadly classified into 2D-VTON and 3D-VTON methods. On one hand, 2D-VTON methods are image-based virtual try-on schemes that use deep generative models to visualize garments on human images. Existing 2D-VTON methods learn pixel-wise cross-domain transformations to synthesize fashion images, yet they fail to consider the relationship between garment size and user body shape. A good VTON should not only provide an excellent visual representation but also help individuals evaluate the fit, shape, and size of garments on their bodies. With this motivation, a shape-faithful virtual try-on method based on body-size-dependent fashion landmark transformation (called BSLT-VTON) is developed in this study. The proposed BSLT-VTON transfers wearable fit and wearing position between potential wearers and related garments through two key developments: a fashion landmark localization model (Model i) and a body-aware garment deformation model (Model ii). Given the unavailability of massive fashion data covering special body shapes, it is difficult to learn the relationship between different human body shapes and clothing sizes from images. Therefore, a fashion landmark localization model (Model i) is proposed based on pose-aware segmentation to obtain fashion masks; these masks are combined with human joints from a pose estimation method, and both serve as inputs to locate the key functional points on the clothing. With Model i, the relationship between different human body shapes and the functional areas of clothing is obtained. Next, a body-aware clothing deformation model (Model ii) is developed to deform and scale clothing according to different body shapes. It adopts a two-step, small-increment clothing deformation strategy to prevent excessive deformation of clothing during fitting. The effectiveness of the proposed method has been demonstrated in terms of the impact of clothing height and length (i.e., overall fit) on people of different body shapes, heights, and proportions. Even for complex texture patterns, the proposed BSLT-VTON provides pleasing visual effects, better than other state-of-the-art methods. | en_US
dcterms.abstract | On the other hand, 3D-VTON methods offer users a try-on experience based on 3D garment models. In this study, a high-quality texture generation method (Model iii) and a new cloth digitization pipeline (Model iv) are proposed for 3D VTON. Most existing work on deep-learning-based 3D VTON focuses on reconstructing 3D garments from images, while the texture, or texture quality, of 3D garments is largely ignored, resulting in 3D garments whose low-quality textures are unsuitable for real try-on fashion applications. The proposed texture reconstruction method combines the latest advances in deep texture synthesis, adversarial neural networks, and latent space manipulation, enabling the synthesis of high-quality, tileable textures for 3D garments, ready for Augmented Reality 3D-VTON fashion applications. | en_US
dcterms.abstract | In summary, this study proposes two new VTON methods, covering both 2D and 3D approaches. Extensive experiments have shown that, compared with state-of-the-art 2D-VTON methods, the new 2D-VTON method (BSLT-VTON), which integrates Models i and ii, delivers realistic try-on visualizations in terms of clothing patterns, textures, and colors, while allowing users to judge the realistic try-on effect of the clothing on their bodies in terms of fit and shape. Compared with existing 3D-VTON solutions, the new 3D-VTON method, namely the cloth digitization pipeline (Model iv), can easily and flexibly map the high-quality, tileable textures generated by Model iii onto 3D garments. Beyond textures, tileable albedo and normal maps are generated simultaneously for more realistic 3D visual try-on through 3D garment rendering. | en_US
dcterms.extent | xviii, 211 pages : color illustrations | en_US
dcterms.isPartOf | PolyU Electronic Theses | en_US
dcterms.issued | 2025 | en_US
dcterms.educationalLevel | Ph.D. | en_US
dcterms.educationalLevel | All Doctorate | en_US
dcterms.accessRights | open access | en_US

Files in This Item:
File | Description | Size | Format
8770.pdf | For All Users | 4.1 MB | Adobe PDF


