Learning to Adapt to Semantic Shift

29 Sep 2021  ·  Ryan Y Benmalek, Sabhya Chhabria, Pedro O. Pinheiro, Claire Cardie, Serge Belongie

Machine learning systems are typically trained and tested on the same distribution of data. However, in the real world, models and agents must adapt to data distributions that change over time. Previous work in computer vision has proposed using image corruptions to model this change. In contrast, we propose studying models under a setting more similar to what an agent might encounter in the real world. In this setting, models must adapt online, without labels, to a test distribution whose semantics change. We define two types of semantic distribution shift, one or both of which can occur: "static shift", where the test set contains labels unseen at train time, and "continual shift", where the distribution of labels changes throughout the test phase. Using a dataset that contains both class and attribute labels for image instances, we generate shifts by changing the joint distribution of class and attribute labels. We compare against previously proposed methods for distribution adaptation that optimize a fixed self-supervised criterion at test time or a meta-learning criterion at train time. Surprisingly, these provide little improvement in this more difficult setting, and some even underperform a static model that does not change its parameters at test time. For this setting, we introduce two models that "learn to adapt", one via recurrence and one via learned Hebbian update rules. These models outperform both previous work and static models under both static and continual semantic shifts, suggesting that "learning to adapt" is a useful capability for models and agents in a changing world.
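The abstract does not specify the form of the learned Hebbian update rules, so the following is only a minimal sketch of the general idea: a model whose weights are adjusted online, without labels, by an outer-product Hebbian rule as a drifting test stream arrives. The learning rate `eta` and decay `lam` are placeholders for quantities such a method would meta-learn at train time; the drifting Gaussian inputs are a toy stand-in for continual semantic shift.

```python
import numpy as np

rng = np.random.default_rng(0)


def hebbian_step(W, x, eta=0.01, lam=0.001):
    """One unsupervised Hebbian-style update: strengthen connections
    between co-active inputs and outputs, with a small weight decay.
    This is an illustrative rule, not the paper's learned rule."""
    y = np.tanh(W @ x)                          # model response to a test input
    W = (1 - lam) * W + eta * np.outer(y, x)    # Hebbian outer-product update
    return W, y


# Simulate an unlabeled test stream whose input statistics drift over time,
# a crude proxy for a label distribution that changes during the test phase.
W = rng.normal(scale=0.1, size=(4, 8))
for t in range(100):
    drift = t / 100.0
    x = rng.normal(loc=drift, size=8)  # stream statistics shift gradually
    W, _ = hebbian_step(W, x)

print(W.shape)  # the weights keep their shape while adapting online
```

In a "learning to adapt" method, the update rule itself (here hard-coded) would be parameterized and trained so that applying it at test time improves performance under shift.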

