Full metadata record
DC Field | Value | Language
dc.contributor | Institute of Textiles and Clothing | en_US
dc.contributor.advisor | Mok, P. Y. Tracy (ITC) | -
dc.creator | Zhu, Shuaiyin | -
dc.identifier.uri | https://theses.lib.polyu.edu.hk/handle/200/9139 | -
dc.language | English | en_US
dc.publisher | Hong Kong Polytechnic University | -
dc.rights | All rights reserved | en_US
dc.title | Efficient and robust photo-based methods for precise shape and pose modeling of human subjects | en_US
dcterms.abstract | Accurate modeling of human subjects of diverse shapes and sizes in arbitrary poses is vitally important in many research applications, for example the development of fashion products, anthropometric studies, and computer graphics. Different methods, including scan-based, image-based and example-based approaches, have been developed over the years. However, for customizing an individual subject's shape, these methods have known limitations. Scan-based methods require expensive scanners, and subjects must be scanned in special clothing at specific locations. Image-based reconstructive methods suffer from uncontrollable 3D shape errors due to oversimplified 2D-to-3D approximation. Example-based reconstructive methods generate models with a realistic appearance, but the size accuracy of the resulting models is questionable; they may not model the local shape characteristics of individuals well, and the output models often have an 'average' shape. This project proposes new and efficient methods for modeling individuals of customized sizes and shapes in arbitrary dynamic poses. The size measurements and shapes of the resulting models must be accurate enough to fulfill the specific requirements of the clothing industry for fashion applications. In addition to accurate shape modeling, methods are developed to deform the customized models into various poses in real time. A total of five methods/systems are developed in this study to realize automatic shape modeling and dynamic pose deformation. | en_US
dcterms.abstract | The first method is Automatic Shape Customization of Human subjects in tight-fitting clothing ('ASCHt'). ASCHt is a complete automatic pipeline that extracts body shape features from input images and customizes 3D human models. The inputs of ASCHt are two orthogonal-view photographs of the subject, and the output is a customized model of the photographed subject with precise size measurements. ASCHt requires the subject to be photographed in tight-fitting clothing. The second method, 'ASCHa', removes this restriction on clothing type and realizes automatic shape customization for human subjects in arbitrary clothing, whether tight-fitting, normal-fitting or even loose-fitting. ASCHa incorporates an intelligent algorithm that predicts the under-the-clothes body profiles of subjects from input images in which the body profiles are covered; the subject's 3D body model is then customized according to the predicted profiles. The third method, 'ASCHp', realizes automatic shape customization of human subjects based on cutting-edge human parsing technology, improving the robustness, efficiency and accuracy of shape modeling of individuals. All three methods are comprehensively evaluated by experiments. It is shown that the proposed methods can customize 3D models of individuals from two input images; the output models have accurate size and shape details, and their size accuracy is comparable to that of scanned models. The fourth development of this study is a system that deploys the above shape modeling methods on a client-server architecture. The shape modeling methods are implemented on the server, which serves requests from different clients such as mobile apps, websites and standalone systems; this architecture has been demonstrated in a mobile-server application. The fifth method developed in this study is for pose modeling and is called rapid automatic pose deformation (RAPD). It deforms human models of various body shapes into a series of dynamic poses. RAPD incorporates a new skeleton embedding algorithm that quickly embeds a skeleton into any customized model. With the skeleton information, customized models can be deformed into different poses based on given motion data. To correct the skin surface deformation errors of this rigid deformation, RAPD learns pose-induced non-rigid surface deformation from a dataset of registered scan models in diverse poses. By integrating RAPD with the shape modeling method ASCHp, an individual's body shape model can be deformed into various dynamic poses in real time. The proposed shape and pose modeling methods for human subjects can provide competitive advantages to the fashion industry. They allow a customized model to be created completely automatically within seconds. These customized models support efficient product development, enabling seamless collaboration between design houses and offshore manufacturing facilities. In addition, the customized models can be rapidly deformed into various poses with a realistic appearance, enabling more comprehensive fit evaluation in the development of high-performance clothing such as sportswear and functional garments. Moreover, the output models can be applied in online stores, allowing customers to visualize try-on effects before purchase. They also ease the difficulties of taking body measurements, helping customers with size selection in online clothing purchases. Finally, the technology can be applied to niche markets such as bespoke clothing, and to other domains such as medicine and fitness. | en_US
dcterms.extent | xx, 228 pages : color illustrations | en_US
dcterms.isPartOf | PolyU Electronic Theses | en_US
dcterms.issued | 2017 | en_US
dcterms.educationalLevel | Ph.D. | en_US
dcterms.educationalLevel | All Doctorate | en_US
dcterms.LCSH | Hong Kong Polytechnic University -- Dissertations | en_US
dcterms.LCSH | Computer animation | en_US
dcterms.LCSH | Human body -- Computer simulation | en_US
dcterms.LCSH | Fashion design -- Data processing | en_US
dcterms.accessRights | open access | en_US

Files in This Item:
File | Description | Size | Format
991021965754903411.pdf | For All Users | 7.75 MB | Adobe PDF

Please use this identifier to cite or link to this item: https://theses.lib.polyu.edu.hk/handle/200/9139