Full metadata record
DC Field | Value | Language
dc.contributor | School of Fashion and Textiles | en_US
dc.contributor.advisor | Wong, Calvin (SFT) | en_US
dc.creator | Zhu, Shumin | -
dc.identifier.uri | https://theses.lib.polyu.edu.hk/handle/200/14170 | -
dc.language | English | en_US
dc.publisher | Hong Kong Polytechnic University | en_US
dc.rights | All rights reserved | en_US
dc.title | Aesthetic-aware intelligent fashion avatar | en_US
dcterms.abstract | With the global digital fashion market expanding rapidly, integrating human creativity and aesthetics into intelligent design systems capable of generating novel, trend-aligned designs remains a key challenge, despite progress in machine learning and generative models. Existing work is limited by: (1) recognition methods that ignore complex hidden relationships between fashion attributes; (2) datasets unsuitable for attribute editing (high complexity, low resolution, and little accessible full-body data); (3) GAN inversion techniques that lose information for rare or fine-grained attributes; (4) latent space manipulation methods that are either confined to predefined attributes or limited by imperfect attribute disentanglement; (5) no support for full-body sketch-to-real-product-image translation. | en_US
dcterms.abstract | To address these gaps, this study develops aesthetically perceptive intelligent systems for fashion design assistance, with five key contributions. (1) sRA-Net: a structured relationship-aware network that leverages multiple hidden attribute relationships to enhance fashion attribute recognition. (2) AFED: a dataset of 830K high-quality sketch and product fashion images for arbitrary fashion attribute editing. (3) Twin-Net: a GAN inversion framework that balances inversion fidelity and editability for high-fidelity fashion image inversion and subsequent attribute editing. (4) PairPCA: a few-shot latent manipulation method built on a pretrained GAN inversion framework for accurate fashion attribute editing. (5) FSRI: a system that converts full-body fashion sketches into real product images. | en_US
dcterms.abstract | These solutions advance fine-grained recognition, high-fidelity reconstruction, accurate attribute editing, and fashion sketch-to-product-image translation, with AFED providing a robust dataset foundation. The work aims to accelerate garment design, reduce designer workload, and expand creativity in digital fashion. | en_US
dcterms.extent | xiv, 126 pages : color illustrations | en_US
dcterms.isPartOf | PolyU Electronic Theses | en_US
dcterms.issued | 2025 | en_US
dcterms.educationalLevel | Ph.D. | en_US
dcterms.educationalLevel | All Doctorate | en_US
dcterms.accessRights | open access | en_US

Files in This Item:
File | Description | Size | Format
8625.pdf | For All Users | 26.45 MB | Adobe PDF


Copyright Undertaking

As a bona fide Library user, I declare that:

  1. I will abide by the rules and legal ordinances governing copyright regarding the use of the Database.
  2. I will use the Database for the purpose of my research or private study only and not for circulation or further reproduction or any other purpose.
  3. I agree to indemnify and hold the University harmless from and against any loss, damage, cost, liability or expenses arising from copyright infringement or unauthorized usage.

By downloading any item(s) listed above, you acknowledge that you have read and understood the copyright undertaking stated above, and agree to be bound by all of its terms.


Please use this identifier to cite or link to this item: https://theses.lib.polyu.edu.hk/handle/200/14170