Author: Zhu, Shuaiyin
Title: Efficient and robust photo-based methods for precise shape and pose modeling of human subjects
Advisors: Mok, P. Y. Tracy (ITC)
Degree: Ph.D.
Year: 2017
Subject: Hong Kong Polytechnic University -- Dissertations
Computer animation
Human body -- Computer simulation
Fashion design -- Data processing
Department: Institute of Textiles and Clothing
Pages: xx, 228 pages : color illustrations
Language: English
Abstract: Accurate modeling of human subjects of diverse shapes and sizes in arbitrary poses is vitally important in many research applications, for example the development of fashion products, anthropometric studies, and/or computer graphics applications. Different methods, including scan-based, image-based and example-based, have been developed over the years. However, for the customization of an individual subject's shape, these methods have known limitations. For example, scan-based methods require expensive scanners, and subjects must be scanned in special clothing at specific locations. Image-based reconstructive methods suffer uncontrollable 3D shape errors due to oversimplified 2D-to-3D approximation. Although example-based reconstructive methods generate models with a realistic appearance, the size accuracy of the resulting models is questionable; they may not model the local shape characteristics of individuals well, and the output models often have an 'average' shape. This project proposes new and efficient methods for modeling individuals of customized sizes and shapes in arbitrary dynamic poses. The size measurements and shapes of the resulting models must be accurate enough to fulfill the specific requirements of the clothing industry for fashion applications. In addition to accurate shape modeling, methods are developed to deform the customized models into various poses in real time. A total of five methods/systems are developed in this study to realize automatic shape modeling and dynamic pose deformation.
The first method is automatic shape customization of human subjects in tight-fitting clothing, called 'ASCHt'. ASCHt presents a complete automatic pipeline for extracting body shape features from input images and customizing 3D human models. The inputs of ASCHt are two orthogonal-view photographs of the subject, and the output is a customized model of the photographed subject with precise size measurements. ASCHt requires subjects to be photographed in tight-fitting clothing. The second method, named 'ASCHa', dispenses with such restrictions on clothing types and realizes automatic shape customization for human subjects in arbitrary clothing, including tight-fitting, normal-fitting or even loose-fitting clothing. ASCHa incorporates an intelligent algorithm that predicts the under-the-clothes body profiles of subjects from input images in which the body profiles are covered; the subject's 3D body model is then customized according to the predicted profiles. The third method, 'ASCHp', realizes automatic shape customization of human subjects based on cutting-edge human parsing technology, improving the robustness, efficiency and accuracy of shape modeling of individuals. All three methods are comprehensively evaluated by experiments. It is shown that the proposed methods can customize 3D models for individuals based on two input images; the output models have accurate size and shape details, and their size accuracy is comparable to that of scanned models.
The fourth development of this study is a system that deploys the above shape modeling methods on a client-server architecture. The shape modeling methods are implemented on the server end, which serves requests from different clients such as mobile apps, websites and standalone systems. We have demonstrated this architecture in a mobile-server application. The fifth method developed in this study is for pose modeling, and it is called rapid automatic pose deformation (RAPD). It deforms human models of various body shapes into a series of dynamic poses. RAPD incorporates a new skeleton embedding algorithm that quickly embeds a skeleton into any customized model. With the skeleton information, customized models can be deformed into different poses based on given motion data (a minimal skinning sketch illustrating this step is given after the abstract). To correct the skin surface errors introduced by this rigid deformation, RAPD trains pose-induced non-rigid surface deformation on a dataset of registered scan models in diverse poses. By integrating RAPD with the shape modeling method ASCHp, an individual's body shape model can be deformed into various dynamic poses in real time.
The proposed shape and pose modeling methods can provide competitive advantages to the fashion industry. They allow a customized model to be created completely automatically within seconds. These customized models can support the fashion industry in efficient product development, enabling seamless collaboration among design houses and off-shore manufacturing facilities. In addition, the customized models can be rapidly deformed into various poses with a realistic appearance, enabling more comprehensive fit evaluation in the development of high-performance clothing such as sportswear and/or functional garments. Moreover, the output models can also be applied in online stores, allowing customers to visualize try-on effects before purchase. They also ease the difficulties of taking body measurements, helping customers with size selection in online clothing purchases.
In addition, the technology can be applied in niche markets such as the bespoke market, and/or in other domains such as medicine and fitness.
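
The skeleton-driven deformation step of RAPD can be pictured with standard linear blend skinning: each vertex is transformed by every bone's rigid transform, and the results are blended by per-vertex skinning weights. The sketch below is a minimal illustration only, not the thesis's implementation; the abstract does not specify RAPD's exact skinning scheme or its learned non-rigid correction, and the function, variable names and example data here are hypothetical.

import numpy as np

def linear_blend_skinning(vertices, weights, bone_transforms):
    # vertices:        (V, 3) rest-pose vertex positions
    # weights:         (V, B) per-vertex skinning weights, each row sums to 1
    # bone_transforms: (B, 4, 4) homogeneous rigid transforms per bone (from motion data)
    # returns:         (V, 3) deformed vertex positions
    v_h = np.hstack([vertices, np.ones((len(vertices), 1))])   # homogeneous coordinates, (V, 4)
    per_bone = np.einsum('bij,vj->bvi', bone_transforms, v_h)  # each vertex moved by every bone, (B, V, 4)
    blended = np.einsum('vb,bvi->vi', weights, per_bone)       # blend per-bone results by weights, (V, 4)
    return blended[:, :3]

# Hypothetical example: one vertex influenced equally by two bones,
# one bone at rest and one rotated 90 degrees about the z-axis.
vertices = np.array([[0.0, 1.0, 0.0]])
weights = np.array([[0.5, 0.5]])
rest = np.eye(4)
rot_z_90 = np.array([[0.0, -1.0, 0.0, 0.0],
                     [1.0,  0.0, 0.0, 0.0],
                     [0.0,  0.0, 1.0, 0.0],
                     [0.0,  0.0, 0.0, 1.0]])
print(linear_blend_skinning(vertices, weights, np.stack([rest, rot_z_90])))  # [[-0.5  0.5  0. ]]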
Rights: All rights reserved
Access: open access

Files in This Item:
991021965754903411.pdf (For All Users, 7.75 MB, Adobe PDF)


Please use this identifier to cite or link to this item: https://theses.lib.polyu.edu.hk/handle/200/9139