Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor | Department of Computing | en_US |
dc.contributor.advisor | Liu, Yan (COMP) | en_US |
dc.contributor.advisor | Guo, Song (COMP) | en_US |
dc.creator | Liu, Yi | - |
dc.identifier.uri | https://theses.lib.polyu.edu.hk/handle/200/13240 | - |
dc.language | English | en_US |
dc.publisher | Hong Kong Polytechnic University | en_US |
dc.rights | All rights reserved | en_US |
dc.title | Optimizations on resource constrained federated learning with diverse clients | en_US |
dcterms.abstract | Federated Learning (FL) stands as an innovative paradigm that revolutionizes collaborative model training, safeguarding the privacy of sensitive data. In the conventional FL setup, clients independently refine their models using their data, sharing model parameters or updates rather than exposing raw data to the FL server. The server then consolidates these local models and distributes the refined model to all participants for further training iterations until the desired accuracy is achieved. Although FL offers substantial benefits in terms of efficient communication and privacy preservation, it grapples with various challenges in resource-constrained environments, which can be categorized into three types: limited reliability, limited computational power, and limited communication bandwidth. Collectively, these challenges pose constraints on the sustainability, efficiency and compatibility of FL. This thesis aims to investigate effective strategies for mitigating the aforementioned resource-constrained challenges and propose innovative solutions to enhance the training quality of FL. | en_US |
dcterms.abstract | First, we develop a robustness-aware incentive scheme to mitigate the challenge of limited reliability. Because data owners and their equipment differ, the costs and computing resources for participating in federated learning training vary. Our main focus is on encouraging heterogeneous users to participate in this complex federated learning environment over the long term while at the same time reducing training time, improving training effectiveness, and preventing malicious users from interfering. Our strategy decomposes this complex task into three key subtasks: total pricing determination to maximize long-term utility; bonus distribution to maximize short-term utility; and edge node selection to expel malicious and lazy nodes. We employ a three-layer Hierarchical Reinforcement Learning (HRL) approach to concurrently learn optimal policies for these subtasks. At the outer layer, we consider systematic criteria and model accuracy, ensuring the sustainability of federated learning by accommodating varying client characteristics. The middle layer focuses on time efficiency to prevent resource waste, while the inner layer aims to enhance performance by distinguishing honest from dishonest edge nodes through iterative interactions. | en_US |
dcterms.abstract | Second, we customize client model architectures to alleviate the computational strain stemming from limited resources. This strategy aims to pinpoint the architecture best aligned with the distribution of each client's local data, tailoring architectural complexity to remain manageable for the client while still achieving high performance. To attain this balance, we employ a model architecture search algorithm that fosters collaborative exploration among clients to discover appropriate architectures. Specifically, our approach transforms conventional centralized Neural Architecture Search (NAS) into a distributed framework known as FedLAS. Within FedLAS, differentiable architecture fine-tuning through gradient-descent optimization enables each client to obtain a model that best fits its specific requirements. Additionally, to effectively aggregate knowledge from a diverse array of neural architectures, we introduce a knowledge distillation-based training framework that strikes a judicious balance between model generalization and personalization in federated learning. | en_US |
dcterms.abstract | Third, to address the challenge of limited communication bandwidth, we develop a factorization-based aggregation algorithm that significantly reduces communication costs in federated learning, particularly in scenarios where clients possess heterogeneous models. Rather than exchanging model parameters or feature mappings, as conventional methods do, our approach hinges on the efficient exchange of feature correlation information. This strategy minimizes disparities among local representation learning processes, thereby promoting collaboration among diverse clients. To implement this approach, we employ a factorization-based technique to extract a cross-feature relation matrix from local representations; this matrix serves as a knowledge intermediary during the aggregation phase. Our framework, referred to as FedFoA, distinguishes itself by being communication-efficient, model-agnostic, and privacy-preserving. Furthermore, it integrates seamlessly with contemporary federated self-supervised learning methods, ensuring compatibility with state-of-the-art approaches in the field. | en_US |
dcterms.abstract | In summary, the primary aim of this thesis is to devise effective and resilient methodologies tailored for resource-constrained FL, with particular emphasis on addressing limited reliability, limited computational power, and limited communication bandwidth in the FL domain. These efforts are directed towards enhancing the reliability and efficiency of the FL framework, thereby enriching its applicability and effectiveness in real-world scenarios characterized by limited resources. | en_US |
dcterms.extent | xxiv, 150 pages : color illustrations | en_US |
dcterms.isPartOf | PolyU Electronic Theses | en_US |
dcterms.issued | 2024 | en_US |
dcterms.educationalLevel | Ph.D. | en_US |
dcterms.educationalLevel | All Doctorate | en_US |
dcterms.LCSH | Federated learning (Machine learning) | en_US |
dcterms.LCSH | Machine learning | en_US |
dcterms.LCSH | Hong Kong Polytechnic University -- Dissertations | en_US |
dcterms.accessRights | open access | en_US |
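The factorization-based exchange summarized in the FedFoA abstract above can be illustrated with a minimal sketch. Assuming each client holds a local representation matrix of shape samples × features, a QR-style factorization yields a small upper-triangular cross-feature relation matrix that could be shared in place of raw features or full model parameters. All function names and the averaging step here are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

def cross_feature_relation(H):
    """Hypothetical client-side step: factor a local representation
    matrix H (n samples x d features) as H = Q @ R and return R,
    the d x d cross-feature relation matrix exchanged instead of
    raw features (d*d values rather than n*d)."""
    Q, R = np.linalg.qr(H)  # reduced QR: Q is n x d, R is d x d upper-triangular
    return R

def aggregate_relations(relations):
    """Hypothetical server-side step: average the clients' relation
    matrices to form a shared target for local calibration."""
    return sum(relations) / len(relations)

# Toy example: three clients, each with 32 samples of 8-dimensional features.
rng = np.random.default_rng(0)
clients = [rng.normal(size=(32, 8)) for _ in range(3)]
R_global = aggregate_relations([cross_feature_relation(H) for H in clients])
print(R_global.shape)  # (8, 8): communication is independent of sample count
```

Note the communication saving this sketch is meant to convey: each client transmits only a d × d matrix, independent of its number of samples and of its (possibly heterogeneous) model architecture.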