Full metadata record
dc.contributor: Department of Electrical and Electronic Engineering (en_US)
dc.creator: Xiao, Yaxin
dc.identifier.uri: https://theses.lib.polyu.edu.hk/handle/200/14093
dc.language: English (en_US)
dc.publisher: Hong Kong Polytechnic University (en_US)
dc.rights: All rights reserved (en_US)
dc.title: A study on learning-based model extraction attacks and defense methods (en_US)
dcterms.abstract: The widespread adoption of Machine Learning as a Service (MLaaS) has exposed cloud-deployed black-box models to growing security risks, particularly from model extraction attacks (MEAs). In these attacks, adversaries exploit prediction interfaces to replicate proprietary models, subsequently enabling secondary privacy breaches or adversarial attacks through extracted model insights. Driven by the intellectual property (IP) theft crisis, this thesis first systematically investigates MEA risks and then explores defense strategies from multiple perspectives. (en_US)
dcterms.abstract: While existing MEA research focuses on query optimization to maximize attack success, two critical attack amplifiers remain underexplored: (1) the mutual reinforcement between model theft and training data privacy leakage, and (2) the impact of initial bootstrapping on extraction performance. This work reveals that model extraction and membership inference attacks, which aim to identify a model's training data, can strengthen each other through an iterative process. Furthermore, we show that optimized initial parameters and more compatible model architectures enable MEAs to replicate models at the neuron level. This strategy not only boosts the performance of model extraction attacks but also raises their severity, because it provides substitute models with neuron-level precision for downstream attacks. (en_US)
dcterms.abstract: To counter the threat of MEAs, we propose two defense strategies. The first is a proactive method that leverages the model's hard-to-replicate properties to reduce its extractability, preventing MEAs from producing high-fidelity extracted models. The second is a passive forensic approach based on a black-box model watermark, CFW, which embeds ownership signals into extracted models. Compared with existing watermarking methods, CFW not only transfers more effectively to extracted models but also resists adaptive removal attacks. By uncovering key mechanisms that amplify model extraction attacks and introducing effective countermeasures, this study offers strong protection for MLaaS platforms against intellectual property theft. (en_US)
dcterms.extent: xviii, 123 pages : color illustrations (en_US)
dcterms.isPartOf: PolyU Electronic Theses (en_US)
dcterms.issued: 2025 (en_US)
dcterms.educationalLevel: Ph.D. (en_US)
dcterms.educationalLevel: All Doctorate (en_US)
dcterms.accessRights: open access (en_US)
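
The abstract describes the core MEA workflow at a high level: an adversary repeatedly queries the victim model's prediction interface and trains a substitute model on the returned outputs. Purely for illustration, a minimal sketch of that query-and-train loop in Python/PyTorch follows; the victim here is a dummy stand-in, and all names (victim_api, SubstituteNet, query_budget) are hypothetical, not the thesis's actual implementation.

    # Hypothetical sketch of a query-based model extraction attack (MEA).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SubstituteNet(nn.Module):
        # Small stand-in architecture for the attacker's substitute model.
        def __init__(self, num_classes=10):
            super().__init__()
            self.net = nn.Sequential(
                nn.Flatten(),
                nn.Linear(28 * 28, 256), nn.ReLU(),
                nn.Linear(256, num_classes),
            )

        def forward(self, x):
            return self.net(x)

    def victim_api(x):
        # Placeholder for the MLaaS prediction interface: the attacker only
        # sees output probabilities, never weights or gradients.
        with torch.no_grad():
            return F.softmax(torch.randn(x.size(0), 10), dim=1)

    def extract(query_budget=1000, batch_size=50):
        substitute = SubstituteNet()
        opt = torch.optim.Adam(substitute.parameters(), lr=1e-3)
        for _ in range(query_budget // batch_size):
            queries = torch.rand(batch_size, 1, 28, 28)  # attacker-chosen inputs
            soft_labels = victim_api(queries)            # black-box query
            opt.zero_grad()
            # Match the substitute's predictions to the victim's soft labels.
            loss = F.kl_div(F.log_softmax(substitute(queries), dim=1),
                            soft_labels, reduction="batchmean")
            loss.backward()
            opt.step()
        return substitute

A real attacker would replace the random queries with optimized or synthesized inputs, which is the query-optimization aspect the abstract mentions; the thesis's bootstrapping findings suggest the initial parameters and architecture chosen for SubstituteNet also shape extraction quality.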
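The passive defense in the abstract relies on black-box watermark verification: the owner checks whether a suspect model reproduces secret trigger-set behavior. Since CFW's construction is not detailed in this record, the following is only a generic trigger-set agreement check under assumed names (verify_ownership, trigger_inputs, trigger_labels), not CFW itself.

    # Generic (assumed) black-box ownership verification via a trigger set.
    import torch

    def verify_ownership(suspect_model, trigger_inputs, trigger_labels,
                         threshold=0.8):
        # Ownership is claimed when the suspect model reproduces the secret
        # trigger labels more often than the decision threshold.
        with torch.no_grad():
            preds = suspect_model(trigger_inputs).argmax(dim=1)
        agreement = (preds == trigger_labels).float().mean().item()
        return agreement >= threshold

For example, running verify_ownership on a model produced by the extraction sketch above would test whether an embedded watermark survived the extraction process, which is the transferability property the abstract claims for CFW.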

Files in This Item:
File       Description     Size      Format
8547.pdf   For All Users   3.83 MB   Adobe PDF


Copyright Undertaking

As a bona fide Library user, I declare that:

  1. I will abide by the rules and legal ordinances governing copyright regarding the use of the Database.
  2. I will use the Database for the purpose of my research or private study only and not for circulation or further reproduction or any other purpose.
  3. I agree to indemnify and hold the University harmless from and against any loss, damage, cost, liability or expenses arising from copyright infringement or unauthorized usage.

By downloading any item(s) listed above, you acknowledge that you have read and understood the copyright undertaking as stated above, and agree to be bound by all of its terms.


Please use this identifier to cite or link to this item: https://theses.lib.polyu.edu.hk/handle/200/14093