Author: | Wang, Ruoheng |
Title: | Real-time regulation for power system operational security enhancement using deep reinforcement learning |
Advisors: | Bu, Siqi (EEE) ; Chung, Edward (EEE) |
Degree: | Ph.D. |
Year: | 2024 |
Subject: | Electric power systems -- Safety measures ; Electric power systems -- Reliability ; Machine learning ; Hong Kong Polytechnic University -- Dissertations |
Department: | Department of Electrical and Electronic Engineering |
Pages: | 164 pages : color illustrations |
Language: | English |
Abstract: | The secure and economic operation of power systems bears on the livelihood of families, industrial and commercial activities, and many other aspects of human society. With the inevitable growth of renewable energy integration and rapid transportation electrification, power systems may face considerable operational challenges in the future. The uncertainty of renewable power generation and of electric vehicle charging, compounded by human factors, can affect critical power system dynamics such as frequency and voltage, leading to poor power quality, unexpected damage to power equipment, excessive power losses, or even power outages. To prepare for such a future, real-time regulation of power systems is widely investigated to improve operational performance. By dispatching various power control equipment, real-time regulation can optimize power flows to mitigate excessive frequency and voltage deviations, reduce power costs, and avoid extreme operating conditions. With the increasing uncertainty and complexity of systems, it is urgent to develop capable system-level coordination methods that adaptively control multiple, diverse, and flexible power devices so as to reduce the risk of incidents and improve the efficiency of system operation. Since system models are becoming more expensive to build or estimate in real time, data-driven methods are a promising solution to system-level coordination tasks, thanks to the model-free, learning-based nature of deep reinforcement learning (DRL). Hence, this thesis investigates data-driven solutions to enhance power system operational performance across various operation scenarios. Specifically, this thesis develops DRL-based methods for both short-term and long-term real-time regulation problems. In the short term, the joint regulation of voltage and frequency dynamics by remotely dispatching numerous distributed energy resources (DERs) via multi-agent DRL is studied to facilitate coordination between distribution and transmission systems in tackling emergencies. In the long term, real-time voltage regulation problems are investigated in three aspects: 1) a DRL method is proposed to demonstrate that the secure and economic performance of systems can be further enhanced by coordinating two common voltage regulation techniques, i.e., volt-VAR control and dynamic network reconfiguration (DNR); 2) considering the popularization of electric vehicles, a multi-timescale, multi-agent DRL method is designed to address two-timescale voltage dynamics in a distribution system affected by high-power charging from a flash-charging-enabled public transit system; 3) a sequential-masking DRL algorithm is proposed to fully address the complex discrete action space of DNR, demonstrating that it can further enhance voltage regulation performance. The data-driven real-time regulation methods developed in this thesis are tested and validated through comprehensive simulations on IEEE standard test feeders, demonstrating their effectiveness in improving operational dynamics (e.g., frequency and voltage deviations from nominal values, and oscillations) and reducing various operation costs (e.g., power losses and device operation costs). These methods help power systems accommodate more renewable energy and electric vehicles, thereby advancing the modernization and carbon neutrality of power systems. |
Rights: | All rights reserved |
Access: | open access |
Please use this identifier to cite or link to this item:
https://theses.lib.polyu.edu.hk/handle/200/13265