|Title:||Analysis and motion estimation strategies for frame and video object coding|
|Subject:||Hong Kong Polytechnic University -- Dissertations|
MPEG (Video coding standard)
Digital video -- Standards
|Department:||Department of Electronic and Information Engineering|
|Pages:||xvi, 193 leaves : ill. ; 30 cm|
|Abstract:||Block-based motion estimation is widely used to exploit temporal redundancies in arbitrarily shaped video objects; it is computationally the most demanding part of the MPEG-4 standard. One of the main differences between MPEG-4 video and previously standardized video coding schemes is its support for arbitrarily shaped video objects, for which the numerous existing fast motion estimation algorithms are not suitable. Conventional fast motion estimation algorithms work well for opaque macroblocks, but not for boundary macroblocks, whose error surfaces contain a large number of local minima. In view of this, we propose a fast search algorithm that incorporates the binary alpha-plane to predict the motion vectors of boundary macroblocks accurately. Moreover, these accurate motion vectors can be used to develop a novel priority search algorithm, an efficient search strategy for the remaining opaque macroblocks. Experimental results show that our approach has low computational complexity and gives a significant improvement in the accuracy of motion-compensated video object planes compared with conventional algorithms such as the diamond search. Numerically, a speed-up of about 27 times over the full search algorithm is obtained on our tested VOs. Although many fast search algorithms achieve low computational load and acceptable encoding quality, it is often desirable to obtain search results identical to those of the conventional full search algorithm; for instance, high-quality digital video products and object-tracking applications need to estimate motion accurately. To develop a fast full search algorithm, we make use of our observation that, for natural video sequences, pixel matching errors of similar magnitude tend to appear in clusters. Based on this observation, we propose an adaptive partial distortion search algorithm.
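The partial-distortion idea underlying such algorithms can be illustrated with a minimal sketch: the SAD of a candidate block is accumulated a row at a time and the candidate is abandoned as soon as it can no longer beat the best distortion found so far, so the selected motion vector is identical to that of full search. The function names, the row-wise checking granularity, and the `(dx, dy)` vector convention below are illustrative assumptions, not the thesis's adaptive scheme itself.

```python
def partial_sad(cur, ref, dx, dy, best_so_far):
    """SAD with partial-distortion early termination (illustrative sketch).

    The distortion is accumulated row by row; the candidate is abandoned
    once the running total reaches best_so_far, since it can no longer win.
    """
    total = 0
    for y, row in enumerate(cur):
        for x, pixel in enumerate(row):
            total += abs(pixel - ref[y + dy][x + dx])
        if total >= best_so_far:  # early exit: candidate cannot improve
            return total
    return total


def full_search_pds(cur, ref, search_range):
    """Exhaustive search over +/- search_range, accelerated by partial_sad.

    Returns ((mvx, mvy), sad) with the same result as plain full search.
    """
    best, best_mv = float("inf"), (0, 0)
    for dy in range(2 * search_range + 1):
        for dx in range(2 * search_range + 1):
            d = partial_sad(cur, ref, dx, dy, best)
            if d < best:
                best, best_mv = d, (dx - search_range, dy - search_range)
    return best_mv, best
```

Because the early exit only discards candidates that provably cannot beat the current best, the motion vector is bit-exact with full search; only the operation count changes.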
The algorithm significantly improves the computational efficiency of the original partial distortion search. In terms of the number of operations, our experimental results show that the algorithm outperforms the other tested algorithms, achieving a speed-up of 3 to 9 times over the Full Search Algorithm (FSA). In terms of real-time measurement, our algorithm speeds up the search by about 3.38 times on average compared with the FSA, which is again better than the other tested algorithms, including the Successive Elimination Algorithm, for encoding sequences with high motion activity and arbitrarily shaped video objects. The Discrete Cosine Transform (DCT) is widely used in modern video compression standards, including ISO MPEG-4, to achieve high compression efficiency. The DCT-domain scheme works very well for intraframe coding. However, block-based motion compensation typically results in a peaky distribution of prediction errors, which scatters the DCT coefficients and makes DCT coding inefficient. This disadvantage motivates us to study the spatial distribution of the prediction errors produced by full-search motion estimation and other fast search algorithms. As a result, we propose a Mixed Spatial-DCT-based Coding Scheme for the prediction errors. The scheme divides the prediction errors in a block into two components: one is coded in the spatial domain with arithmetic coding, while the other is coded with the traditional DCT method. The coding scheme improves the rate-distortion performance of traditional DCT-based coding for high-quality video applications and is especially suitable for arbitrarily shaped video objects and video sequences with moderate to high motion activity.
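The two-component split can be sketched as follows: isolated large-magnitude ("peaky") errors are separated out for spatial-domain entropy coding, and the smooth remainder goes to the DCT. The threshold-based separation, the function names, and the 1-D orthonormal DCT-II here are illustrative assumptions; the thesis's actual partitioning rule is not specified in the abstract.

```python
import math


def dct_ii(x):
    """Orthonormal 1-D DCT-II (pure-Python sketch, O(N^2))."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[i] * math.cos(math.pi * (i + 0.5) * k / n) for i in range(n))
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(scale * s)
    return out


def split_block(errors, threshold):
    """Split residuals into a sparse peak component and a smooth remainder.

    Peaks (|e| > threshold) would be coded in the spatial domain with an
    arithmetic coder; the remainder is DCT-coded.  The threshold is an
    illustrative parameter, not a value from the thesis.
    """
    peaks = [e if abs(e) > threshold else 0 for e in errors]
    remainder = [e - p for e, p in zip(errors, peaks)]
    return peaks, remainder
```

Removing the outliers before the transform is what restores DCT efficiency: the orthonormal DCT preserves energy (Parseval), so a remainder without peaks concentrates its energy in few coefficients instead of scattering it.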
To find a possibly optimal coding system, a signal-source model is needed that is sufficiently accurate to reflect the characteristics of practical signals. The first-order Markov process has proved a successful model for intraframe coding. For motion-compensated error signals, however, the situation is very different: the motion-compensated prediction (MCP) errors have been observed to be space-dependent, so the assumption of wide-sense stationarity (WSS) is not valid and a simple Markov model is inaccurate for them. Hence, we derive a covariance model analytically from the first-order Markov model by making use of these space-dependent characteristics, obtaining an approximate, separable autocorrelation model for the block-based motion-compensated difference signal. Experimental results show that the proposed model accurately reflects the autocorrelation characteristics of practical prediction errors. Furthermore, the model provides useful insights for the analytical design of coding systems and makes possible the investigation of various video signal decomposition algorithms, a fruitful direction for further research.|
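For reference, the first-order Markov (AR(1)) model that works for intraframe signals has the classical separable autocorrelation (symbols here are the standard textbook ones, not notation taken from the thesis):

```latex
R_s(m, n) \;=\; \sigma_s^{2}\,\rho_h^{\lvert m\rvert}\,\rho_v^{\lvert n\rvert},
\qquad 0 < \rho_h,\,\rho_v < 1,
```

where $\sigma_s^{2}$ is the signal variance and $\rho_h$, $\rho_v$ are the horizontal and vertical correlation coefficients. The point of the paragraph above is that MCP errors violate the stationarity behind this form: their second-order statistics depend on position within the block, so $R$ cannot depend on the lags $(m, n)$ alone. The thesis's derived model keeps a separable product structure, roughly $R_d \approx R_{d,h} \cdot R_{d,v}$, but with space-dependent factors; its exact closed form is not given in the abstract.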
Files in This Item:
|b18099531.pdf||For PolyU Staff & Students||2.69 MB||Adobe PDF||View/Open|
|8424.pdf||For All Users (Non-printable)||2.71 MB||Adobe PDF||View/Open|