PhysQ: A Physics Informed Reinforcement Learning Framework for Building Control

21 Nov 2022  ·  Gargya Gokhale, Bert Claessens, Chris Develder ·

Large-scale integration of intermittent renewable energy sources calls for substantial demand-side flexibility. Given that the built environment accounts for approximately 40% of total energy consumption in the EU, unlocking its flexibility is a key step in the energy transition. This paper focuses specifically on energy flexibility in residential buildings, leveraging their intrinsic thermal mass. Building on recent developments in the field of data-driven control, we propose PhysQ, a physics-informed reinforcement learning framework for building control that forms a step in bridging the gap between conventional model-based control and data-intensive control based on reinforcement learning. Through our experiments, we show that the proposed PhysQ framework can learn high-quality control policies that outperform both a business-as-usual controller and a rudimentary model predictive controller. Our experiments indicate cost savings of about 9% compared to a business-as-usual controller. Further, we show that PhysQ efficiently leverages prior physics knowledge to learn such policies using fewer training samples than conventional reinforcement learning approaches, making PhysQ a scalable alternative for use in residential buildings. Additionally, the PhysQ control policy utilizes building state representations that are intuitive and based on conventional building models, which leads to better interpretability of the learnt policy than other data-driven controllers offer.
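To make the core idea concrete, the sketch below pairs a first-order RC (resistance-capacitance) thermal model, the kind of conventional building model the abstract alludes to, with plain tabular Q-learning that trades off electricity cost against thermal comfort. This is an illustrative sketch only, not the authors' PhysQ implementation: all parameter values, the toy price signal, and the reward weighting are hypothetical assumptions.

```python
import numpy as np

# Hypothetical building and tariff parameters (NOT from the paper).
R = 2.0       # thermal resistance between indoors and outdoors [K/kW]
C = 5.0       # lumped thermal capacitance (building thermal mass) [kWh/K]
DT = 0.25     # control time step [h]
P_HEAT = 8.0  # electric heater power [kW]
T_OUT = 5.0   # outdoor temperature [deg C]
T_SET = 20.0  # comfort setpoint [deg C]
PRICES = [0.1, 0.3]  # toy alternating low/high electricity price [EUR/kWh]

def rc_step(T, heat_on):
    """One step of the RC model: dT/dt = ((T_out - T)/R + P*u) / C."""
    return T + DT / C * ((T_OUT - T) / R + P_HEAT * heat_on)

def reward(T, heat_on, price):
    """Negative of energy cost plus a quadratic comfort penalty below setpoint."""
    cost = price * P_HEAT * heat_on * DT
    discomfort = max(0.0, T_SET - T) ** 2
    return -(cost + 0.1 * discomfort)

def t_bin(T):
    """Discretise the physics-based state (indoor temperature) into 20 bins."""
    return int(np.clip((T - 14.0) / 0.5, 0, 19))

rng = np.random.default_rng(0)
Q = np.zeros((20, 2, 2))  # [temperature bin, price index, action: off/on]

for episode in range(400):
    T = rng.uniform(14.0, 23.0)       # random initial indoor temperature
    for t in range(50):
        p = t % 2                     # toy price signal: low/high alternation
        s = t_bin(T)
        a = int(rng.integers(2)) if rng.random() < 0.2 else int(Q[s, p].argmax())
        T = rc_step(T, a)
        r = reward(T, a, PRICES[p])
        s2, p2 = t_bin(T), (t + 1) % 2
        Q[s, p, a] += 0.1 * (r + 0.95 * Q[s2, p2].max() - Q[s, p, a])

# Greedy rollout: the learned policy should keep the house near the setpoint
# instead of letting it cool towards the outdoor temperature.
T = 18.0
for t in range(100):
    T = rc_step(T, int(Q[t_bin(T), t % 2].argmax()))
print(round(T, 1))
```

Because the Q-function is indexed by a physically meaningful state (the building's lumped thermal-mass temperature), the learned policy can be inspected bin by bin, which is the interpretability benefit the abstract claims for model-based state representations.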
