Author: | Lai, Yuni |
Title: | Trustworthy graph learning : adversarial attack, empirical defense, and certified robustness |
Advisors: | Zhou, Kai (COMP) |
Degree: | Ph.D. |
Year: | 2025 |
Subject: | Graph theory -- Data processing ; Neural networks (Computer science) ; Machine learning -- Mathematical models ; Hong Kong Polytechnic University -- Dissertations |
Department: | Department of Computing |
Pages: | xvi, 186 pages : color illustrations |
Language: | English |
Abstract: | With the prevalence of graph data on the Internet, graph learning models, particularly Graph Neural Networks (GNNs), have emerged as powerful tools for various applications, including social network analysis, recommender systems, and anomaly detection. These applications often play critical roles in ensuring system security and reliability, making the trustworthiness of graph models paramount. However, recent studies have revealed the vulnerability of graph learning models to adversarial attacks, highlighting the urgent need to enhance their robustness. This thesis investigates the vulnerability and robustness of graph learning models from three key perspectives: adversarial attacks, empirical defenses, and certifiable defenses. First, we introduce a coupled-space attack to investigate the vulnerability of the random-walk-based anomaly detection (RWAD) model. A unique characteristic of RWAD is that it can operate on both pre-existing and feature-derived graphs, which presents two potential attack surfaces: graph space and feature space. Our proposed coupled-space attacks are the first to investigate the interplay between graph-space and feature-space attacks. We prove the NP-hardness of attacking RWAD and propose efficient strategies to solve the bi-level optimization problem associated with the attacks. Second, we propose a powerful MetaC attack against both GNN-based and matrix-factorization-based recommender systems. Leveraging insights from our vulnerability analysis, we design a robust recommender system with an empirical defense, named the PDR system. GraphRfi, a state-of-the-art robust GNN-based recommender system, was proposed to mitigate the effects of injected fake users. Unfortunately, we demonstrate that GraphRfi remains vulnerable to strong attacks like MetaC due to the supervised nature of its fraudster detection component, whose required clean labels are hard to obtain in practice.
We therefore design an adjustable fraudster detection module that explicitly accounts for label uncertainty. This module can serve as a plug-in that is easily integrated into different models, resulting in the PDR system. Third, while empirical and certified robustness techniques have been developed to defend against graph modification attacks (GMAs), certified robustness against graph injection attacks (GIAs) remains largely unexplored. To bridge this gap, we introduce the node-aware bi-smoothing framework, the first certifiably robust approach for general node classification tasks against GIAs. Specifically, it randomly deletes nodes and edges of the graph and aggregates predictions over a large number of such randomized inputs into a smoothed prediction. Through rigorous theoretical analysis, we establish the certifiable conditions of our smoothing scheme. Finally, we present the first collective certificate against GIAs, significantly improving certified performance compared to existing sample-wise certificates. Our collective certificate certifies a set of target nodes simultaneously, overcoming the limitations of previous approaches. We formulate the problem as a binary-integer quadratically constrained linear program (BQ-CLP) and develop a customized linearization technique to relax it into an efficiently solvable linear program (LP). Through extensive evaluations, we demonstrate the effectiveness of our proposed adversarial attacks and defense techniques, paving the way for more robust and trustworthy graph learning models in real-world applications. |
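The smoothing idea in the abstract (randomly deleting nodes and edges, then taking a majority vote over the randomized copies) can be illustrated with a minimal sketch. This is not the thesis's implementation: the function names, the deletion probabilities `p_node` and `p_edge`, the dictionary graph representation, and the toy base classifier are all illustrative assumptions.

```python
import random

def smooth_predict(graph, classifier, target,
                   p_node=0.3, p_edge=0.3, n_samples=200, seed=0):
    """Majority-vote prediction for `target` over randomly sparsified graphs.

    graph:      dict mapping node id -> set of neighbor ids (undirected).
    classifier: base classifier, (subgraph, target) -> class label.
    p_node:     probability of deleting each non-target node.
    p_edge:     probability of deleting each surviving edge.
    """
    rng = random.Random(seed)
    votes = {}
    for _ in range(n_samples):
        # Randomly delete nodes (the target node is always kept here).
        kept = {v for v in graph if v == target or rng.random() > p_node}
        sub = {v: set() for v in kept}
        # Randomly delete edges among the surviving nodes.
        for v in kept:
            for u in graph[v]:
                if u in kept and u > v and rng.random() > p_edge:
                    sub[v].add(u)
                    sub[u].add(v)
        label = classifier(sub, target)
        votes[label] = votes.get(label, 0) + 1
    # The smoothed prediction is the most frequent base prediction.
    return max(votes, key=votes.get)
```

A certificate would additionally bound how many votes an attacker's injected nodes and edges can flip; the sketch only shows the smoothing step itself.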
Rights: | All rights reserved |
Access: | open access |
Please use this identifier to cite or link to this item:
https://theses.lib.polyu.edu.hk/handle/200/13739