Author: | Chan, Tak-chiu |
Title: | Model based coding |
Degree: | M.Sc. |
Year: | 1998 |
Subject: | Image processing ; Pattern perception ; Coding theory ; Facial expression ; Hong Kong Polytechnic University -- Dissertations |
Department: | Multi-disciplinary Studies ; Department of Electronic Engineering |
Pages: | vii, 89 leaves : ill. ; 30 cm |
Language: | English |
Abstract: | The initial conception of a model-based analysis-and-synthesis image coding system is first reviewed and discussed in this thesis. A construction method for a three-dimensional (3-D) facial model, including synthesis methods for facial expressions, is then presented. Using edge pixel counting to exploit edge information, the positions of facial features such as the eyes, nose and mouth can be estimated. An input image is first analyzed, and an output image is then synthesized using the 3-D model. Very low bit-rate image transmission can be realized because the encoder sends only the required analysis parameters. Output images can be reconstructed without the noise corruption that reduces naturalness, because the decoder synthesizes images from a similar 3-D model. To construct a 3-D model of a person's face, a 3-D wire-frame face model is used. A full-face image is then projected onto this wire-frame model according to the detected feature positions and a 3-D affine transformation. For motion estimation, a modified four-step search algorithm is applied to generate global motion parameters together with a motion flow map. The global motion parameters provide the information needed to rotate the 3-D framework about the X, Y and Z axes. Facial expression, on the other hand, is synthesized by simulating the related muscular actions so as to minimize the error against the motion flow map in the sense of energy. For synthesizing the output images, the texture is updated through a simple geometric equation, with an error frame for compensation. Finally, the maximum transmission bit rates are estimated. The author tried more than one method at each step; the main concerns were to widen the coverage of the study and to see whether the methods could be modified or even combined. In preprocessing, edge forcing is combined with directed-edge extraction: horizontal and vertical edge images are obtained instead of a single traditional edge image, which minimizes the error in integral projection.
In other words, any pixel that does not belong to a given direction is excluded from that direction's projection. In addition, image differencing is applied to eliminate the background, which is assumed static. However, analog-to-digital quantization error makes it difficult to set a universal threshold to distinguish motion from error. Therefore, head-center estimation and restricted integral projection are proposed; the idea is to confine the projection to the face area. For the three-dimensional face model, the original 18 muscles were refined and 4 more muscles were added; actuating different combinations of muscles can animate most facial expressions. For motion estimation, the author modifies the four-step search (4SS) algorithm proposed by Lai-Man Po and Wing-Chung Ma [31]. This algorithm is applied twice, to obtain head motion parameters (HMP) and facial expression parameters (FEP). Furthermore, a bias due to search priority was resolved. |
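The directional edge images and integral projection described in the abstract can be sketched as follows. This is not the thesis's code; it is a minimal numpy illustration in which simple first-order differences stand in for the directed-edge extraction, and the edge magnitudes are summed along one axis to produce the projection profiles used to localize features such as the eyes and mouth.

```python
import numpy as np

def edge_profiles(gray):
    """Directional edge maps via first differences (a stand-in for the
    thesis's directed-edge extraction), followed by integral projection:
    horizontal edges are summed along rows to locate feature rows
    (eyes, mouth), vertical edges along columns to locate face sides."""
    g = gray.astype(float)
    h_edges = np.abs(np.diff(g, axis=0))  # responds to horizontal features
    v_edges = np.abs(np.diff(g, axis=1))  # responds to vertical features
    row_profile = h_edges.sum(axis=1)     # peaks at rows with strong horizontal edges
    col_profile = v_edges.sum(axis=0)     # peaks at columns with strong vertical edges
    return row_profile, col_profile
```

Restricting the sums to a window around the estimated head center, as the thesis proposes, would amount to slicing `h_edges` and `v_edges` before the projection so that background pixels never enter the profiles.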
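For the motion-estimation step, a plain (unmodified) four-step search in the spirit of Po and Ma [31] can be sketched like this; the thesis's modification and the search-priority bias fix are not reproduced here. Block size, search pattern, and the SAD cost are the usual textbook choices, assumed rather than taken from the thesis.

```python
import numpy as np

def sad(ref, cur, bx, by, dx, dy, bs=8):
    """Sum of absolute differences between the current block at (bx, by)
    and the reference block displaced by (dx, dy); out-of-bounds
    candidates are rejected with an infinite cost."""
    h, w = ref.shape
    x, y = bx + dx, by + dy
    if x < 0 or y < 0 or x + bs > w or y + bs > h:
        return np.inf
    return np.abs(cur[by:by+bs, bx:bx+bs].astype(float)
                  - ref[y:y+bs, x:x+bs].astype(float)).sum()

def four_step_search(ref, cur, bx, by, bs=8):
    """Four-step search: scan a 3x3 pattern with step 2, re-centre on the
    best point; drop to step 1 when the centre stays best or on the
    final (fourth) step."""
    cx, cy, step = 0, 0, 2
    for it in range(4):
        if it == 3:
            step = 1                      # final refinement step
        best = sad(ref, cur, bx, by, cx, cy, bs)
        bdx, bdy = cx, cy
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                c = sad(ref, cur, bx, by, cx + dx, cy + dy, bs)
                if c < best:
                    best, bdx, bdy = c, cx + dx, cy + dy
        if (bdx, bdy) == (cx, cy) and it < 3:
            step = 1                      # centre is best: shrink early
        cx, cy = bdx, bdy
    return cx, cy                         # motion vector for this block
```

Run over all blocks, the resulting vectors form the motion flow map; applying the search once to the whole head region and once to the residual motion would correspond to the thesis's two passes for HMP and FEP.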
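Rotating the 3-D framework about the X, Y and Z axes from the global motion parameters can be illustrated with standard Euler rotation matrices; the composition order `Rz @ Ry @ Rx` below is an assumption, not taken from the thesis.

```python
import numpy as np

def rotate_wireframe(vertices, rx, ry, rz):
    """Rotate wire-frame vertices (an N x 3 array) by angles rx, ry, rz
    (radians) about the X, Y and Z axes using Euler rotation matrices."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    # Compose as Z then Y then X applied to each vertex (row vectors).
    return vertices @ (Rz @ Ry @ Rx).T
```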
Rights: | All rights reserved |
Access: | restricted access |
Files in This Item:
File | Description | Size | Format
---|---|---|---
b14418861.pdf | For All Users (off-campus access for PolyU Staff & Students only) | 4.05 MB | Adobe PDF
Please use this identifier to cite or link to this item:
https://theses.lib.polyu.edu.hk/handle/200/2042