Continual Learning in Human Activity Recognition: An Empirical Analysis of Regularization

6 Jul 2020  ·  Saurav Jha, Martin Schiemer, Juan Ye

Given that continual learning techniques for deep neural networks have largely focused on the domain of computer vision, there is a need to identify which of these techniques generalize well to other tasks such as human activity recognition (HAR). As recent methods are mostly composed of loss regularization terms and memory replay, we provide a constituent-wise analysis of several prominent task-incremental learning techniques on HAR datasets. We find that most regularization approaches have little substantial effect, and we offer an intuition for when they fail. We therefore argue that the development of continual learning algorithms should be motivated by a more diverse range of task domains.
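To make the regularization constituent concrete, the following is a minimal sketch of the quadratic penalty shared by EWC-style methods: parameters are pulled toward the values learned on previous tasks, weighted by a per-parameter importance estimate (e.g., the diagonal Fisher information). The function name, the `lam` strength, and the toy values are illustrative, not taken from the paper.

```python
def ewc_penalty(params, old_params, importance, lam=100.0):
    """EWC-style quadratic regularization term.

    params      -- current model parameters (flat list of floats)
    old_params  -- parameters saved after training on the previous task
    importance  -- per-parameter importance weights (e.g., diagonal Fisher)
    lam         -- regularization strength (illustrative value)
    """
    return 0.5 * lam * sum(
        f * (p - q) ** 2
        for p, q, f in zip(params, old_params, importance)
    )

# Toy usage: drifting on an "important" parameter costs far more
# than drifting the same amount on an "unimportant" one.
old = [1.0, -0.5, 2.0]
imp = [0.9, 0.1, 0.5]          # important, unimportant, moderate
drift_important = [1.5, -0.5, 2.0]
drift_unimportant = [1.0, 0.0, 2.0]
assert ewc_penalty(drift_important, old, imp) > ewc_penalty(drift_unimportant, old, imp)
```

This penalty is added to the task loss during training on a new task; the paper's constituent-wise analysis asks how much such terms actually contribute on HAR data compared with memory replay.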


Datasets


Introduced in the Paper:

Human Activity Recognition

Used in the Paper:

MNIST, Permuted MNIST

