| Author: | Hu, Ze |
| Title: | Reinforcement learning for multi-scale demand side energy management |
| Advisors: | Bu, Siqi (EEE); Chan, K. W. Kevin (EEE) |
| Degree: | Ph.D. |
| Year: | 2025 |
| Department: | Department of Electrical and Electronic Engineering |
| Pages: | xiv, 137 pages : color illustrations |
| Language: | English |
| Abstract: | The global energy system is undergoing a significant transition driven by climate change and global warming, largely resulting from substantial carbon emissions associated with expanding industrial production. This transition is characterized by a shift from fossil-fuel-based thermal generation toward renewable energy sources on the supply side and from centralized large-scale generation toward decentralized and distributed generation on the demand side. Consequently, there is a growing emphasis on unlocking demand-side flexibility to provide dispatchable resources for a range of grid services. In this context, this thesis investigates optimal decision-making problems, particularly focusing on energy management faced by entities on the demand side within the distribution network. In operational energy management problems, demand-side entities typically aim to minimize their energy costs by strategically adjusting load profiles and managing energy devices subject to operational constraints. The diversity and distributed nature of demand-side entities (including individual buildings, energy communities, and retail electricity markets with responsive consumers) present unique challenges in energy management that require tailored solutions rather than a universal approach. In other words, optimization of energy management at different scales emphasizes different issues: individual consumers face uncertainty in energy prices and distributed generation; community systems grapple with complexity arising from diverse energy consumption profiles and non-convex network constraints involving multiple energy types; collective participation in the retail electricity market (REM) involves strategic interactions under dynamic pricing schemes. Therefore, energy management strategies adapted to scenarios at different scales need to be developed individually. 
For this reason, this thesis specifically addresses demand-side energy management problems at multiple scales to provide adaptive, scenario-specific solutions, ultimately contributing to the broader goals of energy transition and carbon emission mitigation. Meanwhile, machine learning (ML) has become a useful and reliable technique for multiple uses, e.g., forecasting, anomaly detection, and decision-making. As one of the most popular categories of ML techniques, reinforcement learning (RL) has been gaining much attention as a decision-making tool for multiple scenarios in power systems. RL enables an algorithm, acting as a smart agent, to learn from interactions with a Markovian environment by "trial and error". Given the inherent uncertainties in electricity prices, energy demands, and distributed generation, these operational decision-making problems can naturally be formulated as stochastic processes and modeled as Markov decision processes (MDPs), making RL particularly suitable for automating energy management decisions on the demand side. For multi-scale demand-side operation problems, RL can be implemented as a smart energy management system to optimize energy consumption decisions automatically, reducing the need for sophisticated manual calculation to lower energy costs. To make the most of RL techniques in demand-side energy management problems, this thesis thus develops novel RL algorithms specifically tailored to address multi-scale, scenario-specific objectives within demand-side decision-making contexts. Specifically, this thesis advances the state of the art by developing three novel RL algorithms tailored to different scales and scenarios of demand-side energy management. At the individual building level, a forecast-enhanced RL approach is proposed to optimally dispatch integrated energy devices based on predictive models of loads, renewable generation, and prices, achieving cost reduction while satisfying multi-energy demands. 
At the community level, a safe RL method is introduced that incorporates the Lagrangian method into the RL algorithm to reduce network constraint violations within integrated community energy systems (ICES), significantly improving operational safety. In the retail electricity market scenario, interactions between consumers and the utility are modeled as a dynamic Stackelberg game, and a novel multi-agent RL (MARL) algorithm is developed to estimate the multiple equilibria of this game, providing possible market outcomes in the REM. Finally, the three novel RL algorithms are validated on real-world datasets and shown to outperform baseline approaches. The numerical results of this thesis underscore the transformative potential of RL techniques to empower energy consumers as active and efficient participants within modern energy distribution systems. |
| Rights: | All rights reserved |
| Access: | open access |
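The abstract notes that demand-side energy management under price and demand uncertainty can be modeled as an MDP and solved with RL. As an illustration only (this is not the thesis's algorithm, and all names and parameter values here are hypothetical), the sketch below applies tabular Q-learning to a toy battery-dispatch MDP: the state is (battery state of charge, price level), actions charge, idle, or discharge the battery, and the reward is the negative cost of energy drawn from the grid.

```python
# Hypothetical sketch of the MDP formulation described in the abstract:
# a toy battery dispatch problem solved with tabular Q-learning.
import random

random.seed(0)

PRICES = [0.1, 0.3]   # low / high electricity price ($/kWh), assumed values
ACTIONS = [-1, 0, 1]  # discharge, idle, charge (kWh per step)
CAPACITY = 3          # battery capacity in kWh
DEMAND = 1            # fixed load of 1 kWh per step

# State: (state_of_charge, price_level). Q-table over state-action pairs.
Q = {(soc, p): [0.0] * len(ACTIONS)
     for soc in range(CAPACITY + 1) for p in range(len(PRICES))}

alpha, gamma, eps = 0.1, 0.95, 0.2  # learning rate, discount, exploration

def step(soc, price_idx, a_idx):
    """Apply an action; return (next_state, reward = negative energy cost)."""
    next_soc = min(max(soc + ACTIONS[a_idx], 0), CAPACITY)
    grid_energy = DEMAND + (next_soc - soc)    # load plus net battery flow
    cost = grid_energy * PRICES[price_idx]
    next_price = random.randrange(len(PRICES)) # i.i.d. price uncertainty
    return (next_soc, next_price), -cost

state = (0, 0)
for _ in range(20000):
    # Epsilon-greedy action selection.
    a = (random.randrange(len(ACTIONS)) if random.random() < eps
         else max(range(len(ACTIONS)), key=lambda i: Q[state][i]))
    next_state, reward = step(*state, a)
    # Standard Q-learning temporal-difference update.
    Q[state][a] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][a])
    state = next_state

# Greedy policy extracted from the learned Q-table.
greedy = {s: ACTIONS[max(range(len(ACTIONS)), key=lambda i: Q[s][i])]
          for s in Q}
```

The thesis's actual methods (forecast-enhanced RL, Lagrangian-based safe RL, and MARL for Stackelberg equilibria) are substantially richer; this fragment only illustrates how price uncertainty and device dispatch fit the state-action-reward structure of an MDP.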
Please use this identifier to cite or link to this item:
https://theses.lib.polyu.edu.hk/handle/200/14095

