District Cooling System Control for Providing Operating Reserve based on Safe Deep Reinforcement Learning

21 Dec 2021 · Peipei Yu, Hongxun Hui, Hongcai Zhang, Ge Chen, Yonghua Song

Heating, ventilation, and air conditioning (HVAC) systems have been well proven capable of providing operating reserve for power systems. As a large-capacity and energy-efficient type of HVAC system (up to 100 MW), the district cooling system (DCS) is emerging in modern cities and has great potential to be regulated as a flexible load. However, strategically controlling a DCS to provide flexibility is challenging, because one DCS serves multiple buildings with complex thermal dynamics and uncertain cooling demands. Improper control may cause significant thermal discomfort and even deteriorate the power system's operation security. To address these issues, we propose a model-free control strategy based on deep reinforcement learning (DRL) that requires neither an accurate system model nor the distribution of uncertainties. To avoid damaging "trial and error" actions that may violate the system's operation security during training, we further combine a safety layer with the DRL agent to guarantee satisfaction of critical constraints, forming a safe-DRL scheme. Moreover, after providing operating reserve, the DCS increases its power to recover all buildings' temperatures back to their set values, which may cause an instantaneous peak-power rebound and impose a secondary impact on the power system. Therefore, we design a self-adaptive reward function within the proposed safe-DRL scheme to effectively constrain this peak power. Numerical studies based on a realistic DCS demonstrate the effectiveness of the proposed methods.
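
The paper itself provides no code. As a rough illustration of the safety-layer idea mentioned in the abstract (projecting the DRL policy's proposed action onto a set that satisfies a critical operating constraint), the sketch below assumes a single hypothetical linearized power-cap constraint of the form c + gᵀa ≤ c_max; the function names, constraint form, and numbers are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def safe_layer(action, g, c, c_max):
    """Project a proposed control action onto the safe set defined by a
    single linearized constraint c + g^T a <= c_max (hypothetical form).

    Returns the closest action (in the Euclidean sense) that satisfies
    the constraint; if the proposed action is already safe, it is
    returned unchanged.
    """
    violation = c + g @ action - c_max
    if violation <= 0.0:
        return action  # already safe, no correction needed
    # Closed-form Euclidean projection for a single linear constraint
    return action - (violation / (g @ g)) * g

# Illustrative usage with made-up numbers: power setpoints for three chillers
proposed = np.array([0.8, 0.6, 0.9])   # actions suggested by the DRL policy
g = np.array([1.0, 1.0, 1.0])          # sensitivity of total power to each action
c, c_max = 0.0, 2.0                    # constraint offset and power cap (hypothetical)
safe_action = safe_layer(proposed, g, c, c_max)
print(safe_action)                     # [0.7, 0.5, 0.8]; total power now equals the cap
```

The same correction can be applied at every environment step during training, so exploratory actions never violate the modeled constraint; how the paper constructs and learns the actual constraint model is not described in this abstract.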
