Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | Department of Electrical and Electronic Engineering | en_US |
| dc.creator | Xiao, Yaxin | - |
| dc.identifier.uri | https://theses.lib.polyu.edu.hk/handle/200/14093 | - |
| dc.language | English | en_US |
| dc.publisher | Hong Kong Polytechnic University | en_US |
| dc.rights | All rights reserved | en_US |
| dc.title | A study on learning-based model extraction attacks and defense methods | en_US |
| dcterms.abstract | The widespread adoption of Machine Learning as a Service (MLaaS) has exposed cloud-deployed black-box models to growing security risks, particularly from model extraction attacks (MEAs). In these attacks, adversaries exploit prediction interfaces to replicate proprietary models, enabling follow-on privacy breaches or adversarial attacks based on insights from the extracted model. Motivated by this risk of intellectual property (IP) theft, this thesis first systematically investigates MEA risks and then explores defense strategies from multiple perspectives. | en_US |
| dcterms.abstract | While existing MEA research focuses on query optimization to maximize attack success, two critical attack amplifiers remain underexplored: (1) the mutual reinforcement between model theft and training-data privacy leakage, and (2) the impact of initial bootstrapping on extraction performance. This work reveals that model extraction attacks and membership inference attacks, which aim to identify a model's training data, can strengthen each other through an iterative process. Furthermore, we show that optimized initial parameters and more compatible model architectures enable MEAs to replicate models at the neuron level. This strategy not only boosts the performance of model extraction attacks but also escalates their severity, because it supplies downstream attacks with substitute models of neuron-level precision. | en_US |
| dcterms.abstract | To counter the threats of MEAs, we propose two defense strategies. The first is a proactive method that leverages a model's hard-to-replicate properties to reduce its extractability, preventing MEAs from producing high-fidelity extracted models. The second is a passive forensic approach, a black-box model watermark (CFW) that embeds ownership signals into the extracted models. Compared with existing watermarking methods, CFW not only transfers more effectively to extracted models but also resists adaptive removal attacks. By uncovering key mechanisms that amplify model extraction attacks and introducing effective countermeasures, this study offers strong protection for MLaaS platforms against intellectual property theft. | en_US |
| dcterms.extent | xviii, 123 pages : color illustrations | en_US |
| dcterms.isPartOf | PolyU Electronic Theses | en_US |
| dcterms.issued | 2025 | en_US |
| dcterms.educationalLevel | Ph.D. | en_US |
| dcterms.educationalLevel | All Doctorate | en_US |
| dcterms.accessRights | open access | en_US |
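
The abstracts above summarize learning-based model extraction at a high level. Purely as an illustration (not the thesis's actual method), the following is a minimal sketch of the generic attack loop it describes: query the victim's black-box prediction API on an unlabeled pool, then fit a substitute model to the returned soft labels. All names here (`VictimAPI`, `extract`, the toy architectures) are hypothetical stand-ins.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VictimAPI:
    """Hypothetical stand-in for an MLaaS prediction endpoint: the attacker
    sees only this query interface, never the weights behind it."""

    def __init__(self, model: nn.Module):
        self._model = model.eval()

    @torch.no_grad()
    def query(self, x: torch.Tensor) -> torch.Tensor:
        # Many MLaaS APIs return soft labels (class probabilities).
        return F.softmax(self._model(x), dim=1)

def extract(victim: VictimAPI, substitute: nn.Module,
            query_pool: torch.Tensor, epochs: int = 10,
            batch_size: int = 64) -> nn.Module:
    """Fit the substitute to imitate the victim's input-output behaviour."""
    opt = torch.optim.Adam(substitute.parameters(), lr=1e-3)
    soft_labels = victim.query(query_pool)  # one round of black-box queries
    for _ in range(epochs):
        perm = torch.randperm(query_pool.size(0))
        for i in range(0, query_pool.size(0), batch_size):
            idx = perm[i:i + batch_size]
            opt.zero_grad()
            log_probs = F.log_softmax(substitute(query_pool[idx]), dim=1)
            # KL divergence pulls the substitute toward the victim's soft labels.
            loss = F.kl_div(log_probs, soft_labels[idx], reduction="batchmean")
            loss.backward()
            opt.step()
    return substitute

if __name__ == "__main__":
    torch.manual_seed(0)
    make_net = lambda: nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 5))
    victim = VictimAPI(make_net())   # proprietary model behind the API
    substitute = make_net()          # attacker's locally trained copy
    pool = torch.randn(2048, 20)     # attacker's unlabeled query set
    extract(victim, substitute, pool)
```

In practice, extraction quality hinges on the query budget and on how well the substitute's initialization and architecture match the victim's, which is exactly the bootstrapping effect the second abstract highlights.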
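The watermark defense described in the third abstract is a black-box forensic check. The thesis's CFW design is not detailed in this record, so the sketch below shows only the generic trigger-set verification scheme such watermarks typically rely on: the owner keeps a secret set of trigger inputs with predetermined labels, and a suspect model that reproduces those labels at a high rate is flagged as derived from the watermarked model. `verify_ownership` and the threshold value are illustrative assumptions.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def verify_ownership(suspect: nn.Module, triggers: torch.Tensor,
                     expected_labels: torch.Tensor,
                     threshold: float = 0.9) -> bool:
    """Black-box ownership check: flag the suspect model as derived from
    the watermarked one if it reproduces the secret trigger labels."""
    suspect.eval()
    preds = suspect(triggers).argmax(dim=1)
    match_rate = (preds == expected_labels).float().mean().item()
    # An independently trained model should agree only at chance level.
    return match_rate >= threshold
```

For such a watermark to serve as extraction forensics, as the abstract requires of CFW, the trigger behaviour must survive in a substitute trained only on the victim's ordinary query responses; that transferability is what separates extraction-robust watermarks from ordinary backdoor-style ones.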
Copyright Undertaking
As a bona fide Library user, I declare that:
- I will abide by the rules and legal ordinances governing copyright regarding the use of the Database.
- I will use the Database for the purpose of my research or private study only and not for circulation or further reproduction or any other purpose.
- I agree to indemnify and hold the University harmless from and against any loss, damage, cost, liability or expenses arising from copyright infringement or unauthorized usage.
By downloading any item(s) listed above, you acknowledge that you have read and understood the copyright undertaking as stated above, and agree to be bound by all of its terms.
Please use this identifier to cite or link to this item: https://theses.lib.polyu.edu.hk/handle/200/14093

