Symmetry-Augmented Representation for Time Series

1 Jan 2021  ·  Amine Mohamed Aboussalah, Chi-Guhn Lee

We examine the hypothesis that the concept of symmetry augmentation is fundamentally linked to learning. Our focus in this study is on augmenting the symmetry embedded in 1-dimensional time series (1D-TS). Motivated by the duality between 1D-TS and networks, we augment the symmetry by converting 1D-TS into three 2-dimensional representations that capture temporal correlation (Gramian Angular Field, GAF), transition dynamics (Markov Transition Field, MTF), and recurrent events (Recurrence Plot, RP). This conversion does not require a priori knowledge of the types of symmetries hidden in the 1D-TS. We then exploit the equivariance property of convolutional neural networks (CNNs) to learn the hidden symmetries in the augmented 2-dimensional data. We show that such a conversion can only increase the amount of symmetry, which may lead to more efficient learning. Specifically, we prove that a direct-sum-based augmentation never decreases the amount of symmetry. We also attempt to measure the amount of symmetry in the original 1D-TS and in the augmented representations using the notion of persistent homology; the measurements quantitatively reveal the increase in symmetry after augmentation. We present empirical studies confirming our findings in two cases: reinforcement learning for financial portfolio management and classification on the CBF data set. Both cases demonstrate significant improvements in learning efficiency.
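
The core augmentation step described above (encoding a 1D-TS as GAF, MTF, and RP images and stacking them as CNN input channels) can be sketched as follows. This is a minimal illustration, assuming the third-party pyts library for the image encodings; pyts is not mentioned in the paper, and the parameter choices are illustrative rather than those used by the authors.

    import numpy as np
    from pyts.image import GramianAngularField, MarkovTransitionField, RecurrencePlot

    def to_2d_channels(x):
        """Encode a batch of 1D time series, shape (n_samples, n_timestamps),
        as stacked GAF / MTF / RP images, shape (n_samples, 3, n_timestamps, n_timestamps)."""
        gaf = GramianAngularField(method="summation")  # temporal correlation
        mtf = MarkovTransitionField(n_bins=8)          # transition dynamics
        rp = RecurrencePlot()                          # recurrent events
        return np.stack(
            [gaf.fit_transform(x), mtf.fit_transform(x), rp.fit_transform(x)],
            axis=1,
        )

    # Example: 100 random walks of length 64 -> a (100, 3, 64, 64) array,
    # ready to be fed to an ordinary 2D CNN as a 3-channel image batch.
    x = np.cumsum(np.random.randn(100, 64), axis=1)
    images = to_2d_channels(x)
    print(images.shape)  # (100, 3, 64, 64)

Stacking the three encodings as channels of a single image is what lets a standard CNN, with its translation-equivariant convolutions, operate on all three views of the series at once.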
