Curriculum-based Reinforcement Learning for Distribution System Critical Load Restoration

This paper focuses on the critical load restoration problem in distribution systems following major outages. To provide fast online response and optimal sequential decision-making support, a reinforcement learning (RL) based approach is proposed to optimize the restoration. Due to the large policy search space, renewable uncertainty, and nonlinearity inherent in a complex grid control problem, directly applying RL algorithms to train a satisfactory policy requires extensive tuning. To address this challenge, this paper leverages the curriculum learning (CL) technique to design a training curriculum involving a simpler stepping-stone problem that guides the RL agent to solve the original hard problem in a progressive and more effective manner. We demonstrate that, compared with direct learning, CL facilitates controller training and achieves better performance. To study realistic scenarios in which the renewable forecasts used for decision-making are imperfect, the experiments compare the trained RL controllers against two model predictive controllers (MPCs) using renewable forecasts with different error levels and examine how these controllers hedge against the uncertainty. Results show that RL controllers are less susceptible to forecast errors than the baseline MPCs and can provide a more reliable restoration process.
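To illustrate the two-stage curriculum idea described above, the following is a minimal sketch, not the authors' implementation. It assumes stable-baselines3 with PPO as the RL algorithm and a hypothetical environment factory make_restoration_env(); the specific difficulty settings and timestep budgets are placeholders, not values from the paper.

from stable_baselines3 import PPO

# Stage 1: train on the simpler stepping-stone problem
# (e.g., reduced renewable uncertainty or horizon -- an assumption,
# the paper's actual curriculum design may differ).
easy_env = make_restoration_env(difficulty="easy")   # hypothetical factory
model = PPO("MlpPolicy", easy_env, verbose=1)
model.learn(total_timesteps=200_000)

# Stage 2: continue training the same policy on the original hard
# restoration problem, reusing the weights learned in Stage 1
# instead of starting from a random initialization.
hard_env = make_restoration_env(difficulty="hard")   # hypothetical factory
model.set_env(hard_env)
model.learn(total_timesteps=500_000, reset_num_timesteps=False)

The key design choice is that the stepping-stone stage only initializes the policy; all evaluation happens on the original hard problem, so the curriculum changes how the search space is explored rather than what problem is solved.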
