Recurrent Transform Learning

11 Dec 2019 · Megha Gupta, Angshul Majumdar

The objective of this work is to improve the accuracy of building-level demand forecasting, a more challenging task than grid-level forecasting. To this end, we develop a new technique called recurrent transform learning (RTL). Two versions are proposed. The first (RTL) is unsupervised; it is used as a feature extraction tool whose features are then fed into a regression model. The second formulation embeds regression into the RTL framework, leading to regressing recurrent transform learning (R2TL). Forecasting experiments have been carried out on three popular publicly available datasets. Both proposed techniques yield results superior to state-of-the-art methods such as the long short-term memory network, the echo state network, and sparse coding regression.
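The abstract does not give the RTL formulation, so the following is only a minimal illustrative sketch of the unsupervised variant, assuming an objective of the form min over T, R, Z of ||TX - Z||_F^2 + mu * sum_t ||z_t - R z_{t-1}||_2^2 (the log-det and sparsity penalties of standard transform learning are omitted), solved by simple alternating least squares, with ridge regression standing in for the downstream forecaster. All function names, hyperparameters, and update rules here are assumptions for illustration, not the authors' algorithm.

```python
# Hypothetical sketch of unsupervised RTL features followed by ridge regression.
# Assumed objective:  min_{T,R,Z} ||T X - Z||_F^2 + mu * sum_t ||z_t - R z_{t-1}||_2^2
# The paper's actual updates (and any log-det / sparsity terms) may differ.
import numpy as np


def rtl_features(X, n_atoms=16, mu=0.5, n_iter=30, eps=1e-6, seed=0):
    """X: (n_features, n_timesteps) matrix of lagged demand readings."""
    rng = np.random.default_rng(seed)
    d, n = X.shape
    T = rng.standard_normal((n_atoms, d))  # analysis transform
    R = np.eye(n_atoms)                    # recurrent transform on the codes
    Z = T @ X                              # initial codes
    for _ in range(n_iter):
        # Z update: sequential pass; each z_t trades off fitting T x_t against
        # staying close to R z_{t-1} (coupling to z_{t+1} is ignored for brevity).
        Z_new = np.empty_like(Z)
        Z_new[:, 0] = T @ X[:, 0]
        for t in range(1, n):
            Z_new[:, t] = (T @ X[:, t] + mu * (R @ Z_new[:, t - 1])) / (1.0 + mu)
        Z = Z_new
        # T update: regularised least squares for Z ~ T X
        T = Z @ X.T @ np.linalg.inv(X @ X.T + eps * np.eye(d))
        # R update: regularised least squares for z_t ~ R z_{t-1}
        Zp, Zc = Z[:, :-1], Z[:, 1:]
        R = Zc @ Zp.T @ np.linalg.inv(Zp @ Zp.T + eps * np.eye(n_atoms))
    return Z, T, R


def ridge_forecast(Z, y, lam=1e-2):
    """Fit a ridge regressor from RTL codes Z (k, n) to demand targets y (n,)."""
    k = Z.shape[0]
    return np.linalg.solve(Z @ Z.T + lam * np.eye(k), Z @ y)


if __name__ == "__main__":
    # Toy usage: a lagged window of a synthetic load series as input features.
    rng = np.random.default_rng(1)
    series = np.sin(np.linspace(0, 20, 500)) + 0.1 * rng.standard_normal(500)
    lags = 24
    X = np.stack([series[i:i + lags] for i in range(len(series) - lags - 1)], axis=1)
    y = series[lags + 1:]
    Z, T, R = rtl_features(X)
    w = ridge_forecast(Z, y)
    print("in-sample RMSE:", np.sqrt(np.mean((w @ Z - y) ** 2)))
```

In this sketch, forecasting with the unsupervised variant is a two-stage pipeline (codes first, regression second); the R2TL variant described in the abstract would instead fold the regression loss into the same objective so the codes are learned jointly with the forecaster.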
