Neural Network Extrapolations with G-invariances from a Single Environment

ICLR 2021 · S Chandra Mouli, Bruno Ribeiro

Despite, or maybe because of, their astonishing capacity to fit data, neural networks are widely believed to be unable to extrapolate beyond the training data distribution. This work shows that, for extrapolations based on transformation groups, a model's inability to extrapolate is unrelated to its capacity. Rather, the shortcoming is inherited from a classical statistical learning hypothesis: Examples not explicitly observed, even with infinitely many training examples, cannot be likely outcomes in the learner's model. To endow neural networks with the ability to extrapolate over group transformations, we introduce a learning framework guided by a new learning hypothesis: Any invariance to transformation groups is mandatory even without evidence, unless the learner deems it inconsistent with the training data. Unlike existing invariance-driven methods for counterfactual inference, this framework allows extrapolations from a single environment. Finally, we introduce sequence and image extrapolation tasks that validate our framework and showcase the shortcomings of traditional approaches.
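To make the hypothesis concrete, the sketch below illustrates one possible reading, not the paper's actual algorithm: each candidate group invariance is enforced by default (here via orbit averaging of a base predictor) and is discarded only if the learner finds it inconsistent with the training data, judged here by a hypothetical training-risk tolerance `tol`. The candidate groups (`cyclic_shifts`, `reversal`) and all function names are illustrative assumptions.

```python
import numpy as np

def cyclic_shifts(x):
    # Orbit of x under the cyclic group (all rotations of the sequence).
    return [np.roll(x, k) for k in range(len(x))]

def reversal(x):
    # Orbit of x under the order-2 reflection group.
    return [x, x[::-1]]

def symmetrize(predict, group, x):
    # G-invariant predictor: average the base predictor over the orbit of x.
    return np.mean([predict(t) for t in group(x)])

def risk(predict, X, y):
    # Empirical squared-error risk of a scalar predictor.
    return float(np.mean([(predict(x) - yi) ** 2 for x, yi in zip(X, y)]))

def consistent_invariances(predict, groups, X, y, tol=1e-3):
    """Keep every candidate group whose enforced invariance does not raise
    training risk by more than `tol` (an illustrative consistency test)."""
    base = risk(predict, X, y)
    kept = []
    for name, group in groups.items():
        sym_risk = risk(lambda x, g=group: symmetrize(predict, g, x), X, y)
        if sym_risk <= base + tol:
            kept.append(name)
    return kept

# Toy usage: the target depends only on the sum of the sequence, so it is
# invariant to both cyclic shifts and reversal; both groups should be kept.
rng = np.random.default_rng(0)
X = [rng.normal(size=5) for _ in range(100)]
y = [x.sum() for x in X]
predict = lambda x: x.sum()  # stand-in for a trained network
groups = {"cyclic": cyclic_shifts, "reversal": reversal}
print(consistent_invariances(predict, groups, X, y))  # -> ['cyclic', 'reversal']
```

In the paper's setting the retained invariances would be built into the network itself; the orbit averaging above merely stands in for that construction.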
