Double Deep Q-learning Based Real-Time Optimization Strategy for Microgrids

The uncertainties from distributed energy resources (DERs) bring significant challenges to the real-time operation of microgrids. In addition, due to the nonlinear AC power flow constraints and the nonlinearity of the battery storage model, the optimization of the microgrid is a mixed-integer nonlinear programming (MINLP) problem. Solving this kind of stochastic nonlinear optimization problem is challenging. To address the challenge, this paper proposes a deep reinforcement learning (DRL) based optimization strategy for the real-time operation of the microgrid. Specifically, we construct a detailed operation model for the microgrid and formulate the real-time optimization problem as a Markov Decision Process (MDP). Then, a double deep Q network (DDQN) based architecture is designed to solve the MINLP problem. The proposed approach can learn a near-optimal strategy from historical data alone. The effectiveness of the proposed algorithm is validated through simulations on a 10-bus microgrid system and a modified IEEE 69-bus microgrid system. The numerical results demonstrate that the proposed approach outperforms several existing methods.
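To make the DDQN mechanism concrete, the sketch below shows the double-Q target computation such an architecture relies on: the online network selects the greedy next action while a periodically synced target network evaluates it. The network sizes, the discretized action set (e.g. battery dispatch levels), and the hyperparameters are illustrative assumptions, not the paper's reported configuration.

```python
# Minimal DDQN update sketch (PyTorch). State/action dimensions, reward shaping,
# and hyperparameters are illustrative assumptions, not the paper's settings.
import torch
import torch.nn as nn

class QNet(nn.Module):
    """Small MLP mapping a microgrid state to Q-values over discrete actions."""
    def __init__(self, state_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, s):
        return self.net(s)

state_dim, n_actions, gamma = 8, 11, 0.99   # hypothetical sizes, e.g. 11 dispatch levels
online = QNet(state_dim, n_actions)
target = QNet(state_dim, n_actions)
target.load_state_dict(online.state_dict())
opt = torch.optim.Adam(online.parameters(), lr=1e-3)

def ddqn_update(batch):
    """One double-DQN gradient step on a replay batch (s, a, r, s2, done)."""
    s, a, r, s2, done = batch
    with torch.no_grad():
        # Online network selects the greedy next action ...
        next_a = online(s2).argmax(dim=1, keepdim=True)
        # ... while the target network evaluates it (the "double" decoupling
        # that reduces the overestimation bias of vanilla DQN).
        next_q = target(s2).gather(1, next_a).squeeze(1)
        y = r + gamma * (1.0 - done) * next_q
    q = online(s).gather(1, a.unsqueeze(1)).squeeze(1)
    loss = nn.functional.mse_loss(q, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

In a full training loop, the target network's weights would be copied from the online network every few hundred steps, and actions would be chosen epsilon-greedily from the online Q-values while transitions are stored in a replay buffer.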
