Full metadata record
DC Field | Value | Language
dc.contributor | Department of Electrical and Electronic Engineering | en_US
dc.creator | Dong, Runze | -
dc.identifier.uri | https://theses.lib.polyu.edu.hk/handle/200/13895 | -
dc.language | English | en_US
dc.publisher | Hong Kong Polytechnic University | en_US
dc.rights | All rights reserved | en_US
dc.title | Deep learning models for temporal action detection | en_US
dcterms.abstract | In this study, we investigate the impact of various attention mechanisms within the End-to-End Temporal Action Detection with TRansformer (ETadTR) model to improve the performance and training efficiency of human action recognition. We focus on employing different attention mechanisms in the encoding layers, comparing the original Multi-Scale Deformable Attention (MSDA) with the Slide Attention, Sparse Attention, and Local Attention mechanisms. | en_US
dcterms.abstract | Our experiments reveal that the choice and optimization of attention mechanisms significantly affect the model's accuracy and computational efficiency. The Slide Attention mechanism uses a single sliding window instead of the multi-scale adjustable windows of MSDA to reduce computational complexity, but its accuracy is low. After identifying the weakness of Slide Attention in computing attention over long sequences, we adopted the Sparse Attention mechanism. Experiments show that among the three modes of Sparse Attention, the 'local' mode achieves the best results while improving computational efficiency on long sequences. Finally, we found that replacing MSDA with Local Attention in the encoding layers, while retaining MSDA in the decoding layers, yields optimal performance under specific configurations. With three encoding layers and the four default decoding layers, the Local Attention model outperforms the original MSDA model, achieving higher accuracy and a training speed of 1,597.27 frames per second, compared to 1,423.39 frames per second for the original model. | en_US
dcterms.abstract | These findings underscore the importance of modular and targeted attention-mechanism selection in optimizing temporal action detection models. By focusing on efficient encoding-layer configurations, we balanced high accuracy with reduced computational complexity, making significant progress in improving model efficiency. This approach offers valuable insights for future research and practical applications, particularly in areas such as surveillance systems, sports analytics, and human-computer interaction, where rapid and accurate action recognition is critical. | en_US
dcterms.abstract | In summary, our study demonstrates that carefully tailored attention mechanisms can greatly enhance the performance and efficiency of temporal action detection models. The results validate the potential of the Local Attention mechanism to streamline computation and improve training speed while maintaining robust accuracy, offering a promising direction for future advances in human action recognition. | en_US
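The local (sliding-window) attention discussed in the abstract restricts each query position to keys within a fixed temporal window, which is what reduces the cost on long sequences. The following is an illustrative single-head sketch of that idea, not the thesis code; the function name, the `window` parameter, and the toy shapes are assumptions for demonstration.

```python
import numpy as np

def local_attention(q, k, v, window: int = 3):
    """Single-head local (sliding-window) attention: each query position
    attends only to keys within `window` positions on either side.
    Illustrative sketch; not the thesis implementation."""
    t, d = q.shape
    scores = q @ k.T / np.sqrt(d)                      # (t, t) scaled dot products
    # Mask out key positions outside the sliding window.
    idx = np.arange(t)
    mask = np.abs(idx[:, None] - idx[None, :]) > window
    scores[mask] = -np.inf
    # Row-wise softmax over the unmasked keys.
    scores -= scores.max(axis=1, keepdims=True)
    w = np.exp(scores)
    w /= w.sum(axis=1, keepdims=True)
    return w @ v                                       # (t, d) attended values

# Toy usage: a sequence of 8 time steps with 4-dim features (self-attention).
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4))
out = local_attention(x, x, x, window=2)
print(out.shape)  # (8, 4)
```

With `window=2`, each of the 8 positions mixes at most 5 neighbouring time steps, so the per-row cost no longer grows with the full sequence length; global attention corresponds to `window >= t - 1`.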
dcterms.extent | vii, 44 pages : color illustrations | en_US
dcterms.isPartOf | PolyU Electronic Theses | en_US
dcterms.issued | 2024 | en_US
dcterms.educationalLevel | M.Sc. | en_US
dcterms.educationalLevel | All Master | en_US
dcterms.accessRights | restricted access | en_US

Files in This Item:
File | Description | Size | Format
8303.pdf | For All Users (off-campus access for PolyU Staff & Students only) | 2.09 MB | Adobe PDF




Please use this identifier to cite or link to this item: https://theses.lib.polyu.edu.hk/handle/200/13895