Online Residential Demand Response via Contextual Multi-Armed Bandits

7 Mar 2020 · Xin Chen, Yutong Nie, Na Li

Residential loads have great potential to enhance the efficiency and reliability of electricity systems via demand response (DR) programs. One major challenge in residential DR is handling unknown and uncertain customer behaviors. Previous works use learning techniques to predict customer DR behaviors but generally neglect the influence of time-varying environmental factors, which may lead to inaccurate prediction and inefficient load adjustment. In this paper, we consider the residential DR problem in which the load service entity (LSE) aims to select an optimal subset of customers to maximize the expected load reduction under a financial budget. To learn the uncertain customer behaviors under environmental influence, we formulate residential DR as a contextual multi-armed bandit (MAB) problem and propose an online learning and selection (OLS) algorithm based on Thompson sampling to solve it. The algorithm takes contextual information into consideration and is applicable to complicated DR settings. Numerical simulations demonstrate the learning effectiveness of the proposed algorithm.
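The abstract does not spell out the OLS algorithm, but the formulation it describes — each customer is an arm, a time-varying context influences the response, and the LSE selects a budget-limited subset per round via Thompson sampling — can be sketched with a standard linear-Gaussian Thompson sampler. The problem sizes, the linear reward model, and the "select exactly K customers" budget below are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not from the paper): 6 customers, 3-dim environmental
# context, budget allows selecting K = 2 customers per round.
N, D, K = 6, 3, 2
theta_true = rng.normal(size=(N, D))   # unknown per-customer response models

# Per-customer Bayesian linear regression with a standard-normal prior:
# posterior over theta_i is N(B_i^{-1} f_i, B_i^{-1}).
B = np.stack([np.eye(D)] * N)
f = np.zeros((N, D))

def ts_round(context, noise=0.1):
    """One DR round: sample models, select top-K customers, observe, update."""
    # Thompson sampling: draw a plausible model for each customer from its posterior.
    sampled = np.array([
        rng.multivariate_normal(np.linalg.solve(B[i], f[i]), np.linalg.inv(B[i]))
        for i in range(N)
    ])
    est = sampled @ context            # sampled expected load reductions
    chosen = np.argsort(est)[-K:]      # budget: pick the K best under the samples
    for i in chosen:                   # observe noisy reductions, update posteriors
        reward = theta_true[i] @ context + noise * rng.normal()
        B[i] += np.outer(context, context)
        f[i] += reward * context
    return set(chosen)

hits = 0
for t in range(400):
    x = rng.normal(size=D)                           # time-varying context
    best = set(np.argsort(theta_true @ x)[-K:])      # oracle-optimal subset
    if ts_round(x) == best:
        hits += 1
print(f"optimal-subset rate: {hits / 400:.2f}")
```

Because the context changes every round, the optimal subset also changes, which is exactly why a context-free bandit would underperform here; the contextual sampler learns each customer's response to the environment rather than a single average reduction.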
