Full metadata record
DC Field                    Value                                                        Language
dc.contributor              Department of Computing                                      en_US
dc.contributor.advisor      Zhou, Kai (COMP)                                             en_US
dc.creator                  Han, Yuwei                                                   -
dc.identifier.uri           https://theses.lib.polyu.edu.hk/handle/200/13295             -
dc.language                 English                                                      en_US
dc.publisher                Hong Kong Polytechnic University                             en_US
dc.rights                   All rights reserved                                          en_US
dc.title                    Cost aware poisoning attack against graph neural networks    en_US
dcterms.abstract            Graph Neural Networks (GNNs) have achieved remarkable success in various tasks such as node classification, link prediction, and anomaly detection. However, these applications are vulnerable to adversarial attacks, especially poisoning attacks, where the attacker can modify the graph’s structure and features at the model training stage to degrade the model’s performance. Despite the existence of such attacks, the efficient utilization of the attacker’s budget, in terms of the number and type of modifications allowed, remains an open challenge. This thesis aims to address this challenge by developing cost-aware poisoning attack strategies against GNNs that maximize the degradation of the model’s performance while adhering to a constrained attack budget.  en_US
dcterms.abstract            We begin by identifying the key factors that contribute to the effectiveness of poisoning attacks on GNNs, focusing on the strategic modification of graph structure. We then propose a set of novel attack methodologies designed to exploit these factors efficiently, ensuring that each modification contributes significantly to the overall impact on the GNN’s performance. Our approaches are validated through extensive empirical evaluations on standard benchmarks for node classification, link prediction, and anomaly detection tasks, demonstrating their superiority over existing attack strategies in terms of cost-effectiveness and impact.  en_US
dcterms.abstract            Building on our empirical findings, we formalize the problem of cost-aware adversarial attacks on GNNs, deriving theoretical bounds on the minimum number of modifications required to achieve a desired level of performance degradation. This formalization not only provides a theoretical foundation for our empirical strategies but also offers insights into the inherent vulnerabilities of GNNs to poisoning attacks.  en_US
dcterms.abstract            In summary, this thesis contributes to the field of adversarial machine learning by introducing a comprehensive framework for cost-aware poisoning attacks against GNNs. Our work not only advances the understanding of GNN vulnerabilities but also provides practical tools and theoretical insights to guide the development of more robust GNN models in the face of poisoning threats.  en_US
dcterms.extent              x, 76 pages : color illustrations                           en_US
dcterms.isPartOf            PolyU Electronic Theses                                     en_US
dcterms.issued              2024                                                        en_US
dcterms.educationalLevel    M.Phil.                                                     en_US
dcterms.educationalLevel    All Master                                                  en_US
dcterms.LCSH                Neural networks (Computer science)                          en_US
dcterms.LCSH                Artificial intelligence                                     en_US
dcterms.LCSH                Computer security                                           en_US
dcterms.LCSH                Hong Kong Polytechnic University -- Dissertations           en_US
dcterms.accessRights        open access                                                 en_US
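
The abstract above describes budget-constrained, structure-based poisoning: each graph modification carries a cost, and the attacker seeks maximum performance degradation within a fixed budget. Below is a minimal, self-contained Python sketch of that general idea, assuming a greedy damage-per-cost selection rule and a toy degree-based damage heuristic. It is not the thesis's algorithm (which this record does not detail), and every function name and parameter in it is hypothetical.

    import numpy as np

    def attack_score(adj, u, v):
        # Toy stand-in for the damage caused by flipping edge (u, v):
        # flips touching low-degree nodes tend to perturb message passing
        # more. Real attacks differentiate a surrogate GNN loss instead.
        deg = adj.sum(axis=1)
        return 1.0 / (1.0 + min(deg[u], deg[v]))

    def greedy_cost_aware_poisoning(adj, costs, budget):
        # Greedily flip the edge with the best damage-per-cost ratio until
        # no affordable candidate remains; costs[u, v] prices flipping (u, v).
        adj = adj.copy()
        n = adj.shape[0]
        candidates = {(u, v) for u in range(n) for v in range(u + 1, n)}
        spent, flips = 0.0, []
        while True:
            affordable = [(attack_score(adj, u, v) / costs[u, v], u, v)
                          for (u, v) in candidates
                          if spent + costs[u, v] <= budget]
            if not affordable:
                break
            _, u, v = max(affordable)                # best damage per unit cost
            adj[u, v] = adj[v, u] = 1.0 - adj[u, v]  # flip: add or remove edge
            spent += costs[u, v]
            flips.append((u, v))
            candidates.discard((u, v))
        return adj, flips

    # Tiny demo: random 8-node graph, unit cost per modification, budget of 3.
    rng = np.random.default_rng(0)
    A = np.triu((rng.random((8, 8)) < 0.3).astype(float), 1)
    A = A + A.T                                      # symmetric, no self-loops
    A_poisoned, flips = greedy_cost_aware_poisoning(A, np.ones((8, 8)), 3.0)
    print("flipped edges:", flips)

A knapsack-style greedy loop like this is a natural baseline for budget-constrained attacks; published methods (e.g., metagradient-based attacks) typically replace the toy heuristic with gradients through a surrogate GNN.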

Files in This Item:
File        Description      Size       Format
7742.pdf    For All Users    2.36 MB    Adobe PDF



Please use this identifier to cite or link to this item: https://theses.lib.polyu.edu.hk/handle/200/13295