Author: Li, Minghan
Title: Exploring spatiotemporal consistency and unified frameworks for video segmentation
Advisors: Zhang, Lei (COMP)
Degree: Ph.D.
Year: 2024
Subject: Image processing -- Digital techniques
Digital video
Hong Kong Polytechnic University -- Dissertations
Department: Department of Computing
Pages: xxi, 163 pages : color illustrations
Language: English
Abstract: Video segmentation (VS) divides a video into segments, enabling applications such as video understanding, region-guided video generation, interactive video editing, and augmented reality. This thesis presents four studies exploring three essential aspects of video segmentation: spatiotemporal consistency, weak supervision, and unified frameworks. The first three studies improve video instance segmentation (VIS) in various scenarios and reduce annotation costs for segmentation datasets. The fourth study introduces a unified model that handles all six video segmentation tasks concurrently, boosting generalization capability.
In Chapter 1, we introduce common image and video segmentation tasks, review representative methods, and discuss the contributions and organization of this thesis. In Chapter 2, we introduce spatiotemporal feature consistency into one-stage CNN-based segmentation methods by incorporating spatial feature calibration and temporal feature fusion modules. In Chapter 3, we propose to mine discriminative object embeddings by leveraging inter-frame object query association for transformer-based methods. This significantly improves instance segmentation performance on challenging videos featuring similar-looking objects, complex object trajectories, and mutual occlusion. In Chapter 4, to reduce the cost of pixel-wise mask annotation for video segmentation datasets, we adapt state-of-the-art pixel-supervised VIS models to a box-supervised VIS baseline and design a box-center guided spatiotemporal pairwise affinity loss to promote better spatial and temporal consistency. In Chapter 5, we tackle the challenge of unifying all six video segmentation tasks within a single transformer-based framework, resulting in a video segmentation model with greater generalization capability. The proposed unified framework strikes a commendable balance between performance and universality on more than 10 challenging video segmentation benchmarks, spanning generic segmentation, language-guided segmentation, and visual prompt-guided segmentation.
In summary, this thesis contributes to the advancement of video segmentation by addressing specific challenges and proposing novel approaches that enhance performance, reduce annotation effort, and establish a unified framework for tackling multiple video segmentation tasks.
Rights: All rights reserved
Access: open access

Files in This Item:
File: 7698.pdf | Description: For All Users | Size: 46.95 MB | Format: Adobe PDF




Please use this identifier to cite or link to this item: https://theses.lib.polyu.edu.hk/handle/200/13243