Author: | Wang, Zilong |
Title: | Intelligent fire identification and quantification driven by computer vision |
Advisors: | Huang, Xinyan (BEEE) ; Usmani, Asif Sohail (BEEE) |
Degree: | Ph.D. |
Year: | 2023 |
Subject: | Fire -- Data processing ; Computer vision ; Artificial intelligence ; Hong Kong Polytechnic University -- Dissertations |
Department: | Department of Building Environment and Energy Engineering |
Pages: | x, 106 pages : color illustrations |
Language: | English |
Abstract: | Fires, as catastrophic events, pose significant threats to human life, property, and the environment. The identification and quantification of fires play a crucial role in mitigating their destructive impact. In this work, a novel image-based framework for the identification and quantification of fires is devised to enhance the acquisition of pertinent fire-related data. The framework leverages smoke and flame imagery as primary indicators, and extensive image databases have been constructed to train artificial intelligence (AI) models for tasks such as fire detection, fire segmentation, and heat release rate (HRR) estimation. The outcomes of this study demonstrate the efficacy of the AI models in extracting a wealth of valuable fire-related information, including fire localization, flame height, fire area, and heat release rate. The exploration of this framework serves as a cornerstone in safeguarding human lives and assets, augmenting public safety measures, and advancing sustainable fire management practices. This thesis follows a manuscript-style format: the first chapter introduces the research background and motivation; each subsequent chapter is an independent paper that has been published in or submitted to a journal; and the final chapter summarizes the overall outcomes and outlines potential avenues for future research. Chapter 1 introduces the research background and motivation. Fire is among the most common accidents in daily life, whether it occurs inside a building or in an open area. During a real fire, firefighters can only judge or guess the fire development from experience, for example from the size and colour of the smoke plume.
The lack of accurate fire information can lead to misjudgement of fire scenarios and critical events (e.g., flashover and backdraft), delays in firefighting and rescue, and many injuries and fatalities. To ensure firefighters' safety and support decision-making, real-time fire identification and quantification are needed for a better understanding of fire development. Chapter 2 explores the real-time HRR quantification of transient fire scenarios using external smoke images and deep learning algorithms. A large database of 1,845 simulated compartment fire scenarios is formed. Three input parameters (constant fire heat release rate, opening size, and soot yield) are paired with the external smoke images, which are then used to train a Convolutional Neural Network (CNN) model. Results show that, trained on either front-view or side-view smoke images, the AI method can accurately identify the transient fire heat release rate inside the building, even without knowing the burning fuels, with an error of no more than 20%. This work demonstrates that deep learning algorithms can be trained with simulated smoke images to determine hidden fire information in real time, and it shows great potential for smart firefighting applications. Chapter 3 quantifies the real-time fire heat release rate using fire scene images and deep learning algorithms. A large database of 112 fire tests from the NIST Fire Calorimetry Database is formed, and 69,662 fire scene images labelled with their transient heat release rate are adopted to train the deep learning model. Fire tests conducted in the lab environment and real fire events are used to validate and demonstrate the reliability of the trained model. Results show that, regardless of the fire sources, background, light conditions, and camera settings, the proposed AI-image fire calorimetry method can accurately identify the transient fire heat release rate using only fire scene images.
This work demonstrates that deep learning algorithms can provide an alternative method to measure the fire HRR when traditional calorimetric methods cannot be used, which shows great potential for smart firefighting applications. Chapter 4 proposes explainable deep learning methods for flame-image-based fire calorimetry, both to quantify the intensity of fire and to explain the mechanism of image-based fire calorimetry. First, a pre-trained fire segmentation model is adopted to generate a flame image database in different formats, i.e., (1) original RGB, (2) background-free RGB, (3) background-free grey, and (4) background-free binary, to filter out the impacts of other factors such as background, colour, and brightness. Then, the synthetic database is used for training and testing the fire-calorimetry AI model. Results show that the primary factor affecting the fire calorimetry is the size of the fire area, and that other factors have little effect. Improving the accuracy of flame segmentation plays a key role in reducing the quantification error of vision-based fire calorimetry from about 30% to less than 20%. Finally, the Gradient-weighted Class Activation Mapping (Grad-CAM) method is adopted to visualize the contribution of each pixel in identifying fire images. This research deepens the understanding of this new vision-based fire calorimetry and guides future AI applications in fire monitoring, digital twins, and smart firefighting. Chapter 5 delivers an automatic distance measurement and fire calorimetry method based on a binocular camera. First, a well-calibrated binocular camera is used to capture the fire scenario, and the captured images are fed into a pre-trained fire detection model to localize the fire source in the image. Then, the centre of the fire root is chosen as the reference point for calculating the distance of the camera from the fire source.
This distance is used to rescale the captured images to match the input scale of the fire calorimetry model, which then identifies the fire heat release rate. Results show that the binocular stereo vision system can accurately measure the distance of the camera from the fire source and the heat release rate, with a relative error of less than 20% and an R2 of 0.73. This method provides real-time distance measurement and fire calorimetry in mobile scenarios, opening the potential for wider applications in real fire scenarios. Chapter 6 develops the Fire Vigilance Pocket Edition application (FV Pocket), which is designed to enable automatic fire monitoring and calorimetry using computer vision and deep learning techniques for real-time fire surveillance. The application comprises four main functions: fire detection, fire segmentation, fire measurement, and fire calorimetry. Fire detection is performed by YOLOv5, which localizes the fire source in the image. The detected fire area is then input into the Swin-Unet model to separate the flame from the background, enabling the real-time display of the fire area. Additionally, image-based fire measurement techniques are used to determine the flame height and the flame area according to the estimated reference scales, which also facilitates the rescaling of raw images. Finally, the rescaled images are fed into a pre-trained fire calorimetry model to identify the heat release rate of the fire. The models used in FV Pocket, their design, and their main features are discussed, and the application is demonstrated using real fire tests. The potential uses and limitations of FV Pocket are also addressed. Chapter 7 summarizes the overall outcomes of image-based fire identification and quantification at the current stage. Based on the present findings, the challenges that researchers need to overcome are discussed. |
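Chapter 4's Grad-CAM visualization follows the standard recipe: channel weights are obtained by global-average-pooling the gradients of the target score, and the class activation map is a ReLU of the gradient-weighted sum of the convolutional activations. The following is a minimal NumPy sketch of that recipe only; the shapes, values, and function name are illustrative assumptions, not the thesis's actual model.

```python
import numpy as np

def grad_cam(activations: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Toy Grad-CAM: activations and gradients both have shape (C, H, W)."""
    # Channel weights: global-average-pool the gradients over H and W.
    weights = gradients.mean(axis=(1, 2))             # (C,)
    # Gradient-weighted combination of activation channels, then ReLU.
    cam = np.tensordot(weights, activations, axes=1)  # (H, W)
    cam = np.maximum(cam, 0.0)
    # Normalize to [0, 1] for visualization (skip if the map is all zeros).
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Illustrative inputs standing in for a conv layer of the calorimetry model.
rng = np.random.default_rng(0)
A = rng.random((8, 7, 7))   # fake conv-layer activations
G = rng.random((8, 7, 7))   # fake gradients of the HRR output w.r.t. A
heatmap = grad_cam(A, G)    # (7, 7) per-pixel contribution map in [0, 1]
```

In the thesis's use case, such a heatmap is overlaid on the flame image to show which pixels drive the HRR prediction; the finding that the fire area dominates corresponds to high activation concentrated on flame pixels.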
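The distance measurement in Chapter 5 rests on the standard rectified-stereo relation Z = f·B/d, where f is the focal length in pixels, B the baseline between the two cameras, and d the disparity of the reference point (here, the centre of the fire root) between the left and right images. A minimal sketch of that relation, with illustrative numbers rather than the thesis's calibration values:

```python
# Stereo depth from disparity, assuming a rectified, calibrated binocular
# rig under the pinhole model. All names and numbers are illustrative.

def stereo_distance(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Distance Z in metres from the standard relation Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: f = 1000 px, baseline = 0.12 m, and the fire-root centre shifts
# by 24 px between the left and right images.
z = stereo_distance(1000.0, 0.12, 24.0)  # -> 5.0 m
```

In practice the calibration step (intrinsics, rectification) is what makes f and d usable in this formula; the thesis reports the resulting camera-to-fire distances feeding the rescaling stage of the calorimetry model.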
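The fire-measurement step in FV Pocket (Chapter 6) converts a binary flame mask from the segmentation model into physical flame height and area using an estimated reference scale. A minimal sketch of that conversion, assuming a metres-per-pixel scale; the mask, scale value, and function name are illustrative, not taken from the application:

```python
import numpy as np

def flame_metrics(mask: np.ndarray, metres_per_px: float):
    """Flame height (m) and area (m^2) from a binary mask and a pixel scale."""
    rows = np.any(mask, axis=1)          # which image rows contain flame
    if not rows.any():
        return 0.0, 0.0
    top = np.argmax(rows)                # first flame row (flame tip)
    bottom = len(rows) - 1 - np.argmax(rows[::-1])  # last flame row (fire root)
    height_m = (bottom - top + 1) * metres_per_px
    area_m2 = mask.sum() * metres_per_px ** 2       # pixel count -> area
    return height_m, area_m2

# Toy 6x4 mask: flame occupies rows 1..4 in a two-column band.
mask = np.zeros((6, 4), dtype=bool)
mask[1:5, 1:3] = True
h, a = flame_metrics(mask, 0.05)  # assumed scale: 5 cm per pixel
# h -> 4 rows * 0.05 = 0.20 m; a -> 8 px * 0.0025 = 0.02 m^2
```

The same scale estimate is what allows FV Pocket to rescale the raw image before passing it to the pre-trained calorimetry model.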
Rights: | All rights reserved |
Access: | open access |
Please use this identifier to cite or link to this item:
https://theses.lib.polyu.edu.hk/handle/200/12725