Author: | Han, Yuwei
Title: | Cost aware poisoning attack against graph neural networks |
Advisors: | Zhou, Kai (COMP) |
Degree: | M.Phil. |
Year: | 2024 |
Subject: | Neural networks (Computer science) ; Artificial intelligence ; Computer security ; Hong Kong Polytechnic University -- Dissertations
Department: | Department of Computing |
Pages: | x, 76 pages : color illustrations |
Language: | English |
Abstract: | Graph Neural Networks (GNNs) have achieved remarkable success in tasks such as node classification, link prediction, and anomaly detection. However, these applications are vulnerable to adversarial attacks, especially poisoning attacks, in which the attacker modifies the graph's structure and features at training time to degrade the model's performance. Although such attacks are well documented, the efficient use of the attacker's budget (the number and type of modifications allowed) remains an open challenge. This thesis addresses this challenge by developing cost-aware poisoning attack strategies against GNNs that maximize the degradation of the model's performance while adhering to a constrained attack budget. We begin by identifying the key factors that make poisoning attacks on GNNs effective, focusing on the strategic modification of graph structure. We then propose a set of novel attack methodologies designed to exploit these factors efficiently, ensuring that each modification contributes significantly to the overall impact on the GNN's performance. Our approaches are validated through extensive empirical evaluations on standard benchmarks for node classification, link prediction, and anomaly detection, demonstrating their superiority over existing attack strategies in cost-effectiveness and impact. Building on these empirical findings, we formalize the problem of cost-aware adversarial attacks on GNNs and derive theoretical bounds on the minimum number of modifications required to achieve a desired level of performance degradation. This formalization provides a theoretical foundation for our empirical strategies and offers insight into the inherent vulnerabilities of GNNs to poisoning attacks. In summary, this thesis introduces a comprehensive framework for cost-aware poisoning attacks against GNNs, advancing the understanding of GNN vulnerabilities and providing practical tools and theoretical insights toward GNN models that are more robust to poisoning threats. (An illustrative sketch of the budget-constrained setting is given after this record.)
Rights: | All rights reserved |
Access: | open access |
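The record above summarizes the setting without detailing the thesis's attack methodologies, so the following is only a minimal, hypothetical sketch of a budget-constrained structure-poisoning attack: a generic greedy search that repeatedly flips the single edge that most increases the training loss of a linearized two-layer GCN surrogate, until a hard budget of flips is spent. The synthetic graph, the fixed surrogate weights W, and every identifier in the code are illustrative assumptions, not the author's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny synthetic stand-in for a node-classification dataset (all illustrative).
n, d, c = 12, 8, 3                            # nodes, feature dim, classes
A = (rng.random((n, n)) < 0.2).astype(float)
A = np.triu(A, 1)
A = A + A.T                                   # undirected, no self-loops
X = rng.standard_normal((n, d))               # node features
y = rng.integers(0, c, size=n)                # node labels
W = 0.1 * rng.standard_normal((d, c))         # fixed surrogate weights (assumption)


def normalized_adj(adj):
    """Symmetric normalization: D^{-1/2} (A + I) D^{-1/2}."""
    a = adj + np.eye(len(adj))
    d_inv_sqrt = 1.0 / np.sqrt(a.sum(axis=1))
    return a * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]


def surrogate_loss(adj):
    """Cross-entropy of a linearized two-layer GCN: softmax(A_hat^2 X W)."""
    a_hat = normalized_adj(adj)
    logits = a_hat @ a_hat @ X @ W
    logits -= logits.max(axis=1, keepdims=True)
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    return -np.log(probs[np.arange(n), y] + 1e-12).mean()


budget = 4                                    # hard cap on structural edits
A_pois = A.copy()
for _ in range(budget):
    best_flip, best_loss = None, surrogate_loss(A_pois)
    for i in range(n):                        # score every candidate edge flip
        for j in range(i + 1, n):
            trial = A_pois.copy()
            trial[i, j] = trial[j, i] = 1.0 - trial[i, j]
            loss = surrogate_loss(trial)
            if loss > best_loss:              # keep the most damaging flip
                best_flip, best_loss = (i, j), loss
    if best_flip is None:                     # no flip raises the loss further
        break
    i, j = best_flip
    A_pois[i, j] = A_pois[j, i] = 1.0 - A_pois[i, j]
    print(f"flip edge ({i}, {j}); surrogate loss -> {best_loss:.4f}")
```

Exhaustive scoring costs one surrogate evaluation per candidate edge and is only workable on toy graphs; scalable attacks typically rank flips by gradients or meta-gradients of the loss with respect to the adjacency matrix instead. Note also that this sketch charges every flip a unit cost, whereas a cost-aware attack in the abstract's sense would weight modifications by their number and type.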
Please use this identifier to cite or link to this item:
https://theses.lib.polyu.edu.hk/handle/200/13295