Full metadata record
DC Field: Value [Language]
dc.contributor: Department of Computing [en_US]
dc.contributor.advisor: Zhang, Lei (COMP) [en_US]
dc.creator: Li, Minghan
dc.identifier.uri: https://theses.lib.polyu.edu.hk/handle/200/13243
dc.language: English [en_US]
dc.publisher: Hong Kong Polytechnic University [en_US]
dc.rights: All rights reserved [en_US]
dc.title: Exploring spatiotemporal consistency and unified frameworks for video segmentation [en_US]
dcterms.abstract: Video segmentation (VS) divides a video into segments, enabling applications such as video understanding, region-guided video generation, interactive video editing, and augmented reality. This thesis presents four studies exploring three essential aspects of the video segmentation field: spatiotemporal consistency, weak supervision, and unified frameworks. The first three studies improve video instance segmentation (VIS) in various scenarios and reduce annotation costs for segmentation datasets. The fourth study introduces a unified model that handles all six video segmentation tasks concurrently, boosting generalization capabilities. [en_US]
dcterms.abstract: In Chapter 1, we introduce common image and video segmentation tasks, review representative methods, and discuss the contributions and organization of this thesis. In Chapter 2, we introduce spatiotemporal feature consistency into one-stage CNN-based segmentation methods by incorporating spatial feature calibration and temporal feature fusion modules. In Chapter 3, we propose to mine discriminative object embeddings by leveraging inter-frame object query association for transformer-based methods, which significantly improves instance segmentation performance on challenging videos, such as those with similar-looking objects, complex object trajectories, and mutual occlusion. In Chapter 4, to reduce the cost of pixel-wise mask annotation for video segmentation datasets, we adapt state-of-the-art pixel-supervised VIS models to a box-supervised VIS baseline and design a box-center guided spatiotemporal pairwise affinity loss to promote better spatial and temporal consistency. In Chapter 5, we tackle the challenge of unifying all six video segmentation tasks within a single transformer-based framework, yielding a video segmentation framework with greater generalization capabilities. The proposed unified framework achieves a commendable balance between performance and universality on more than 10 challenging video segmentation benchmarks, covering generic segmentation, language-guided segmentation, and visual prompt-guided segmentation. [en_US]
dcterms.abstract: In summary, this thesis contributes to the advancement of video segmentation techniques by addressing specific challenges and proposing novel approaches that enhance performance, reduce annotation effort, and establish a unified framework for tackling multiple video segmentation tasks. [en_US]
dcterms.extent: xxi, 163 pages : color illustrations [en_US]
dcterms.isPartOf: PolyU Electronic Theses [en_US]
dcterms.issued: 2024 [en_US]
dcterms.educationalLevel: Ph.D. [en_US]
dcterms.educationalLevel: All Doctorate [en_US]
dcterms.LCSH: Image processing -- Digital techniques [en_US]
dcterms.LCSH: Digital video [en_US]
dcterms.LCSH: Hong Kong Polytechnic University -- Dissertations [en_US]
dcterms.accessRights: open access [en_US]

Files in This Item:
File: 7698.pdf | Description: For All Users | Size: 46.95 MB | Format: Adobe PDF


Copyright Undertaking

As a bona fide Library user, I declare that:

  1. I will abide by the rules and legal ordinances governing copyright regarding the use of the Database.
  2. I will use the Database for the purpose of my research or private study only and not for circulation or further reproduction or any other purpose.
  3. I agree to indemnify and hold the University harmless from and against any loss, damage, cost, liability or expenses arising from copyright infringement or unauthorized usage.

By downloading any item(s) listed above, you acknowledge that you have read and understood the copyright undertaking as stated above, and agree to be bound by all of its terms.


Please use this identifier to cite or link to this item: https://theses.lib.polyu.edu.hk/handle/200/13243