Author: Zheng, Huadi
Title: Towards privacy protection in the era of adversarial machine learning: attack and defense
Advisors: Hu, Haibo (EIE)
Degree: Ph.D.
Year: 2021
Subject: Machine learning
Computer security
Privacy, Right of
Hong Kong Polytechnic University -- Dissertations
Department: Department of Electronic and Information Engineering
Pages: xix, 108 pages : color illustrations
Language: English
Abstract: In recent years, the pervasive application of machine learning has encouraged a growing number of artificial intelligence services, such as voice assistants, facial recognition, autonomous driving, word suggestion, and security diagnostics. Essentially, machine learning enables a computer system to learn the underlying patterns from data and represent them in a model, which is then integrated into designated software for assessing new input. While machine learning has significantly reshaped the modern paradigm of system development, it has also stirred up extensive social debate over privacy and confidentiality concerns. In particular, adversarial machine learning has grown in importance as researchers discover that a trained model can be deceived, extracted, inverted, or applied in malicious inference. Nevertheless, to date, the understanding of these privacy risks and the countermeasures against them remains limited. To unveil privacy challenges and tackle potential vulnerabilities, I focus my PhD study on the emerging attacks and defenses in the context of adversarial machine learning. Adversarial machine learning originally referred to the manipulation of model behavior by supplying deceptive samples. With the rapid development of alternative attacks such as model extraction and membership inference, it has spread into a broader domain: the corruption of functionality and confidentiality in the adoption of machine learning, where new threats appear not only at the decision stage but across the entire machine learning pipeline. The works described in this thesis are divided into three parts, in a top-down order of attack surfaces, from model prediction to data collection. In the first part, I present a novel mechanism for preventing the extraction of a private decision boundary from machine learning services.
The proposal obfuscates the output of a classifier under a guarantee of boundary differential privacy, so that fine-grained queries designed to infer the boundary have their accuracy sufficiently diminished in the critical zone, hampering any attempt to delineate a clear inter-class border. In the second part, I present MISSILE, a side-channel attack system that uses machine learning inference to identify sensitive indoor locations within a given premises. Spyware can stealthily collect the required sensory data from typical inertial sensors, such as the accelerometer, gyroscope, and magnetometer. In the third part, I turn to the very source of data collection and study the future of privacy-preserving data collection with an empirical evaluation of local differential privacy and federated learning under a designated task. Finally, I summarize the insights revealed in this study and discuss possible directions for privacy protection with adversarial machine learning in mind.
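The local differential privacy evaluation mentioned in the third part is not detailed in this record, but the canonical building block of LDP data collection is Warner's randomized response: each user perturbs their own answer before it leaves the device, and the collector debiases the aggregate. A minimal sketch (function names, parameters, and the binary-attribute setting are illustrative assumptions, not taken from the thesis):

```python
import math
import random


def randomized_response(bit: int, epsilon: float, rng: random.Random) -> int:
    """Report the true bit with probability e^eps / (e^eps + 1); otherwise flip it.

    This satisfies epsilon-local differential privacy for a single binary attribute.
    """
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if rng.random() < p else 1 - bit


def estimate_frequency(reports: list, epsilon: float) -> float:
    """Unbiased estimate of the true fraction of 1s from noisy reports.

    Inverts the expected bias: E[observed] = p*f + (1 - p)*(1 - f).
    """
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed + p - 1) / (2 * p - 1)


if __name__ == "__main__":
    rng = random.Random(0)
    true_bits = [1] * 300 + [0] * 700  # true frequency of 1s is 0.30
    eps = 1.0
    reports = [randomized_response(b, eps, rng) for b in true_bits]
    print(f"estimated frequency: {estimate_frequency(reports, eps):.2f}")
```

Smaller epsilon flips more bits (stronger privacy, noisier estimates); the collector never sees any individual's true value, only the debiased population statistic.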
Rights: All rights reserved
Access: open access

Files in This Item:
File: 5843.pdf
Description: For All Users
Size: 3.79 MB
Format: Adobe PDF

Copyright Undertaking

As a bona fide Library user, I declare that:

  1. I will abide by the rules and legal ordinances governing copyright regarding the use of the Database.
  2. I will use the Database for the purpose of my research or private study only and not for circulation or further reproduction or any other purpose.
  3. I agree to indemnify and hold the University harmless from and against any loss, damage, cost, liability or expenses arising from copyright infringement or unauthorized usage.

By downloading any item(s) listed above, you acknowledge that you have read and understood the copyright undertaking as stated above, and agree to be bound by all of its terms.

