Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor | Department of Computing | en_US |
dc.contributor.advisor | Zhang, Lei (COMP) | en_US |
dc.creator | Li, Minghan | - |
dc.identifier.uri | https://theses.lib.polyu.edu.hk/handle/200/13243 | - |
dc.language | English | en_US |
dc.publisher | Hong Kong Polytechnic University | en_US |
dc.rights | All rights reserved | en_US |
dc.title | Exploring spatiotemporal consistency and unified frameworks for video segmentation | en_US |
dcterms.abstract | Video segmentation (VS) divides a video into segments, enabling applications like video understanding, region-guided video generation, interactive video editing, and augmented reality. This thesis presents four studies exploring three essential aspects in the video segmentation field: spatiotemporal consistency, weak supervision, and unified frameworks. The first three studies improve video instance segmentation (VIS) in various scenarios and reduce annotation costs for segmentation datasets. The fourth study introduces a unified model that handles all six video segmentation tasks concurrently, boosting generalization capabilities. | en_US |
dcterms.abstract | In Chapter 1, we introduce common image and video segmentation tasks, review typical methods, and discuss the contributions and organization of this thesis. In Chapter 2, we introduce spatiotemporal feature consistency into one-stage CNN-based segmentation methods by incorporating spatial feature calibration and temporal feature fusion modules. In Chapter 3, we propose to mine discriminative object embeddings by leveraging inter-frame object query association for transformer-based methods. This significantly improves instance segmentation performance on challenging videos, such as those with similar-looking objects, complex object trajectories, and mutually occluded objects. In Chapter 4, to reduce the cost of pixel-wise mask annotation for video segmentation datasets, we adapt state-of-the-art pixel-supervised VIS models to a box-supervised VIS baseline and design a box-center guided spatiotemporal pairwise affinity loss to promote better spatial and temporal consistency. In Chapter 5, we tackle the challenge of unifying all six video segmentation tasks within a single transformer-based framework, resulting in a video segmentation framework with greater generalization capability. The proposed unified framework strikes a commendable balance between performance and universality on more than 10 challenging video segmentation benchmarks, covering generic segmentation, language-guided segmentation, and visual prompt-guided segmentation. | en_US |
dcterms.abstract | In summary, this thesis contributes to the advancement of video segmentation techniques by addressing specific challenges and proposing novel approaches that enhance performance, reduce annotation effort, and establish a unified framework for tackling multiple video segmentation tasks. | en_US |
dcterms.extent | xxi, 163 pages : color illustrations | en_US |
dcterms.isPartOf | PolyU Electronic Theses | en_US |
dcterms.issued | 2024 | en_US |
dcterms.educationalLevel | Ph.D. | en_US |
dcterms.educationalLevel | All Doctorate | en_US |
dcterms.LCSH | Image processing -- Digital techniques | en_US |
dcterms.LCSH | Digital video | en_US |
dcterms.LCSH | Hong Kong Polytechnic University -- Dissertations | en_US |
dcterms.accessRights | open access | en_US |