Author: Yeung, Ching Chi
Title: Deep learning for vision-based defect inspection
Advisors: Lam, Kin-man (EEE)
Loo, Ka-hong (EEE)
Degree: Ph.D.
Year: 2024
Subject: Deep learning (Machine learning)
Engineering inspection -- Automation
Computer vision -- Industrial applications
Steel -- Surfaces -- Defects
Image segmentation
Pavements -- Cracking
Hong Kong Polytechnic University -- Dissertations
Department: Department of Electrical and Electronic Engineering
Pages: xxvi, 124 pages : color illustrations
Language: English
Abstract: Vision-based defect inspection is an essential quality control task in various industries. With the development of deep learning, deep learning-based visual defect inspection methods have achieved remarkable performance. However, existing deep learning-based models face three main challenges arising from their specific application requirements: inspection efficiency, precise localization and classification, and generalization ability. This thesis therefore investigates deep learning-based models to address these challenges in three specific applications of vision-based defect inspection: steel surface defect detection, defect semantic segmentation, and pavement crack detection.
In this thesis, we study the efficient design of steel surface defect detection models. We propose a fused-attention network (FANet) to balance the trade-off between accuracy and speed. This model applies an attention mechanism to a single balanced feature map to improve accuracy while maintaining detection speed. Moreover, it introduces a feature fusion module and an attention module to handle defects with multiple scales and shape variations.
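Although the abstract gives no implementation details, the idea of fusing multi-level features into one balanced map and applying attention only once (to keep inference fast) can be sketched in PyTorch. Everything below, including module names, shapes, and the fusion scheme, is an illustrative assumption rather than the thesis's actual FANet code:

    # Hypothetical sketch: attention applied once to a single balanced map.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ChannelSpatialAttention(nn.Module):
        """Lightweight attention run once on the fused feature map."""
        def __init__(self, channels, reduction=16):
            super().__init__()
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )
            self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

        def forward(self, x):
            b, c, _, _ = x.shape
            w = self.fc(x.mean(dim=(2, 3))).view(b, c, 1, 1)   # channel weights
            x = x * w
            s = torch.cat([x.mean(1, keepdim=True),
                           x.amax(1, keepdim=True)], dim=1)
            return x * torch.sigmoid(self.spatial(s))           # spatial weights

    def balanced_fuse(features):
        """Resize multi-level features to one resolution and average them,
        so attention runs on one balanced map instead of every level."""
        target = features[0].shape[-2:]
        resized = [F.interpolate(f, size=target, mode="bilinear",
                                 align_corners=False) for f in features]
        return torch.stack(resized).mean(0)

    if __name__ == "__main__":
        levels = [torch.randn(1, 256, s, s) for s in (64, 32, 16)]
        out = ChannelSpatialAttention(256)(balanced_fuse(levels))
        print(out.shape)  # torch.Size([1, 256, 64, 64])

Running attention on one fused map rather than on every pyramid level is what would keep the accuracy gain from eroding detection speed.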
Furthermore, we investigate the model design to boost the localization and classification performance for defect semantic segmentation. We propose an attentive boundary-aware transformer framework, namely ABFormer, to precisely segment different types of defects. This framework introduces a feature fusion scheme that splits and fuses the boundary and context features with two different attention modules, allowing each module to focus on a different aspect of learning. In addition, the two attention modules capture the spatial and channel interdependencies of the features, respectively, to address the intraclass difference and interclass indiscrimination problems.
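A minimal sketch of the split-and-fuse idea, again with hypothetical module names: the feature map is split channel-wise into a boundary stream refined by spatial self-attention and a context stream refined by channel self-attention, then the two streams are fused. This is an assumption-laden illustration, not the published ABFormer:

    # Hypothetical sketch: split features, refine each half with a
    # different attention module, then fuse.
    import torch
    import torch.nn as nn

    class SpatialAttention(nn.Module):
        """Captures spatial interdependencies (boundary stream)."""
        def __init__(self, channels):
            super().__init__()
            self.q = nn.Conv2d(channels, channels // 8, 1)
            self.k = nn.Conv2d(channels, channels // 8, 1)
            self.v = nn.Conv2d(channels, channels, 1)

        def forward(self, x):
            b, c, h, w = x.shape
            q = self.q(x).flatten(2).transpose(1, 2)   # (b, hw, c//8)
            k = self.k(x).flatten(2)                   # (b, c//8, hw)
            attn = torch.softmax(q @ k, dim=-1)        # (b, hw, hw)
            v = self.v(x).flatten(2).transpose(1, 2)   # (b, hw, c)
            return x + (attn @ v).transpose(1, 2).view(b, c, h, w)

    class ChannelAttention(nn.Module):
        """Captures channel interdependencies (context stream)."""
        def forward(self, x):
            b, c, h, w = x.shape
            f = x.flatten(2)                                      # (b, c, hw)
            attn = torch.softmax(f @ f.transpose(1, 2), dim=-1)   # (b, c, c)
            return x + (attn @ f).view(b, c, h, w)

    class SplitFuse(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.boundary = SpatialAttention(channels // 2)
            self.context = ChannelAttention()
            self.fuse = nn.Conv2d(channels, channels, 1)

        def forward(self, x):
            a, b = x.chunk(2, dim=1)   # channel-wise split into two streams
            return self.fuse(torch.cat([self.boundary(a), self.context(b)], 1))

    if __name__ == "__main__":
        print(SplitFuse(64)(torch.randn(1, 64, 32, 32)).shape)

Giving each attention module its own stream is one way to realize "facilitating different learning aspects": spatial attention sharpens boundaries, while channel attention disambiguates defect categories.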
Finally, we focus on improving the generalization ability of pavement crack detection models. We propose a contrastive decoupling network (CDNet) to effectively detect cracks in both seen and unseen domains. This framework separately extracts global and local features with contrastive learning to produce generalized and discriminative representations. In addition, it introduces a semantic enhancement module, a detail refinement module, and a feature aggregation scheme to tackle diverse cracks with complex backgrounds in input images.
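The decoupling-plus-contrast idea can likewise be sketched: a global branch pooled to a vector and trained with an InfoNCE-style contrastive loss for domain-generalized representations, and a local branch kept at full resolution for crack detail. All names and the particular loss are assumptions, not the thesis's actual CDNet:

    # Hypothetical sketch: decouple global and local features; train the
    # global branch contrastively on two augmented views of each image.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Decoupler(nn.Module):
        def __init__(self, in_ch=3, dim=128):
            super().__init__()
            # global branch: large strides, pooled to a single vector
            self.global_enc = nn.Sequential(
                nn.Conv2d(in_ch, 64, 7, stride=4, padding=3),
                nn.ReLU(inplace=True),
                nn.Conv2d(64, dim, 3, stride=2, padding=1),
                nn.AdaptiveAvgPool2d(1),
            )
            # local branch: small kernels, full resolution, for crack detail
            self.local_enc = nn.Sequential(
                nn.Conv2d(in_ch, 64, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(64, dim, 3, padding=1),
            )
            self.proj = nn.Linear(dim, dim)  # projection head for contrast

        def forward(self, x):
            g = self.global_enc(x).flatten(1)       # (b, dim) global code
            z = F.normalize(self.proj(g), dim=1)    # contrastive embedding
            return z, self.local_enc(x)             # plus dense local map

    def info_nce(z1, z2, tau=0.1):
        """Pull two views of the same image together; push apart the
        other images in the batch."""
        logits = z1 @ z2.t() / tau                  # (b, b) similarities
        labels = torch.arange(z1.size(0))
        return F.cross_entropy(logits, labels)

    if __name__ == "__main__":
        net = Decoupler()
        v1, v2 = torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64)
        (z1, _), (z2, _) = net(v1), net(v2)
        print(info_nce(z1, z2).item())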
The vision-based defect inspection models proposed in this thesis are evaluated against other state-of-the-art methods on different defect inspection datasets. Experimental results validate that our models achieve promising performance. These models have great potential to advance deep learning-based methods for various applications of vision-based defect inspection.
Rights: All rights reserved
Access: open access

Files in This Item:
File: 7718.pdf (For All Users), 7.78 MB, Adobe PDF



Please use this identifier to cite or link to this item: https://theses.lib.polyu.edu.hk/handle/200/13272