Full metadata record
DC Field | Value | Language
dc.contributor | Department of Computing | en_US
dc.contributor.advisor | Guo, Song (COMP) | en_US
dc.creator | Luo, Boyuan | -
dc.identifier.uri | https://theses.lib.polyu.edu.hk/handle/200/11389 | -
dc.language | English | en_US
dc.publisher | Hong Kong Polytechnic University | en_US
dc.rights | All rights reserved | en_US
dc.title | Int8-based fully quantization-aware training | en_US
dcterms.abstract | Deep neural networks have shown great success in handling various real-world applications, but huge computational overhead is required to drive the resource-hungry training procedure. As model complexity increases, limited computational capacity becomes the performance bottleneck of modern learning tasks, especially when dealing with a great number of tensor-level arithmetic operations. Recently, quantizing numbers into low-precision data formats has emerged as a promising research direction to address this challenge. However, most existing methods focus on post-training quantization and ultra-low-bit neural networks, where the computational primitives cannot be fully utilized. A natural way to make a quantization algorithm hardware-friendly is to exploit 8-bit fixed-point instructions, which demand fewer resources than conventional 32-bit floating-point operations. This property motivates us to propose a novel INT8-based Fully Quantization-Aware Training (FQAT) algorithm, which quantizes tensors in both the forward and backward passes, including weights, activations, and gradients. FQAT can be deployed in a wide range of scenarios, from powerful NVIDIA GPUs and Intel CPUs to resource-constrained devices such as the Raspberry Pi development board. Between the non-uniform and uniform quantization schemes, I choose uniform linear quantization to match the limited on-device computational capacity. Besides, I jointly design batch normalization and range clipping by fusing these operations into a single function, named Clipping Batch Normalization. I implement the proposed algorithms on commodity deep learning frameworks based on Python and NumPy; this implementation is the first to enable on-device training from scratch. Experimental results show that FQAT can effectively handle training tasks on resource-constrained embedded devices on the MNIST and CIFAR-10 datasets. (A NumPy sketch of the quantization scheme follows this record.) | en_US
dcterms.extent | [32] pages : color illustrations | en_US
dcterms.isPartOf | PolyU Electronic Theses | en_US
dcterms.issued | 2021 | en_US
dcterms.educationalLevel | M.Sc. | en_US
dcterms.educationalLevel | All Master | en_US
dcterms.LCSH | Neural networks (Computer science) | en_US
dcterms.LCSH | Machine learning | en_US
dcterms.LCSH | Hong Kong Polytechnic University -- Dissertations | en_US
dcterms.accessRights | restricted access | en_US
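
The abstract describes a uniform linear INT8 scheme together with a fused Clipping Batch Normalization operator. The sketch below, in NumPy (the thesis's stated implementation stack), shows what such a scheme could look like; the function names, the symmetric per-tensor max-abs scale calibration, and the symmetric clipping bound are illustrative assumptions rather than the thesis's actual code.

    import numpy as np

    def calibrate_scale(x, num_bits=8):
        # Illustrative per-tensor calibration (assumption, not from the
        # thesis): map the largest magnitude in x onto the widest
        # positive signed code, e.g. 127 for INT8.
        qmax = 2 ** (num_bits - 1) - 1
        return max(np.max(np.abs(x)), 1e-8) / qmax

    def quantize_int8(x, scale):
        # Uniform linear quantization: round to the nearest code and
        # saturate to the signed 8-bit range [-128, 127].
        q = np.round(x / scale)
        return np.clip(q, -128, 127).astype(np.int8)

    def dequantize(q, scale):
        # Map INT8 codes back to the real domain.
        return q.astype(np.float32) * scale

    def clipping_batchnorm(x, gamma, beta, clip_val, eps=1e-5):
        # Hypothetical fusion of batch normalization and range clipping
        # into a single function, in the spirit of the thesis's Clipping
        # Batch Normalization; the exact formulation is not given here.
        mu = x.mean(axis=0)
        var = x.var(axis=0)
        y = gamma * (x - mu) / np.sqrt(var + eps) + beta
        return np.clip(y, -clip_val, clip_val)

    # Tiny demonstration: quantize a weight tensor and inspect the error.
    rng = np.random.default_rng(0)
    w = rng.standard_normal((4, 4)).astype(np.float32)
    s = calibrate_scale(w)
    w_q = quantize_int8(w, s)
    print("max abs quantization error:", np.abs(w - dequantize(w_q, s)).max())

Under such a scheme, the bulk tensor arithmetic in both the forward and backward passes can run on INT8 operands via 8-bit fixed-point instructions, with the floating-point scale applied only once per tensor, which is what makes it cheaper than 32-bit floating-point arithmetic.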

Files in This Item:
File | Description | Size | Format
5827.pdf | For All Users (off-campus access for PolyU Staff & Students only) | 909.91 kB | Adobe PDF


Copyright Undertaking

As a bona fide Library user, I declare that:

  1. I will abide by the rules and legal ordinances governing copyright regarding the use of the Database.
  2. I will use the Database for the purpose of my research or private study only and not for circulation or further reproduction or any other purpose.
  3. I agree to indemnify and hold the University harmless from and against any loss, damage, cost, liability or expenses arising from copyright infringement or unauthorized usage.

By downloading any item(s) listed above, you acknowledge that you have read and understood the copyright undertaking as stated above, and agree to be bound by all of its terms.


Please use this identifier to cite or link to this item: https://theses.lib.polyu.edu.hk/handle/200/11389