Environment Inference for Invariant Learning

14 Oct 2020 · Elliot Creager, Jörn-Henrik Jacobsen, Richard Zemel

Learning models that gracefully handle distribution shifts is central to research on domain generalization, robust optimization, and fairness. A promising formulation is domain-invariant learning, which frames the key issue as identifying which features are domain-specific and which are domain-invariant. An important assumption in this area is that the training examples are partitioned into "domains" or "environments"; our focus is on the more common setting where such partitions are not provided. We propose EIIL, a general framework for domain-invariant learning that incorporates Environment Inference to directly infer partitions that are maximally informative for downstream Invariant Learning. We show that EIIL outperforms invariant learning methods on the CMNIST benchmark without using environment labels, and significantly outperforms ERM on worst-group performance on the Waterbirds and CivilComments datasets. Finally, we establish connections between EIIL and algorithmic fairness, which enables EIIL to improve accuracy and calibration in a fair prediction problem.
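As a concrete illustration, below is a minimal PyTorch sketch of EIIL's environment-inference step for binary classification, following the paper's IRMv1-based instantiation: given per-example losses from a fixed reference model trained with ERM, soft environment assignments are optimized to maximize the invariance penalty (the squared gradient of each environment's risk with respect to a dummy classifier scale). The function name, the two-environment setup, and the hyperparameters are illustrative, not a definitive implementation.

```python
import torch
import torch.nn.functional as F

def infer_environments(logits, labels, n_steps=10000, lr=0.001):
    """EIIL environment-inference (EI) sketch: softly partition examples
    into two environments by maximizing the IRMv1 penalty of a fixed
    reference model.

    logits: reference-model outputs, detached from its parameters
    labels: float binary targets in {0, 1}
    """
    scale = torch.tensor(1.0, requires_grad=True)  # dummy classifier w = 1.0
    # Per-example losses of the scaled reference model, kept in the graph
    # so the environment risks stay differentiable w.r.t. the dummy scale.
    loss = F.binary_cross_entropy_with_logits(
        logits * scale, labels, reduction='none')
    env_w = torch.randn(len(logits), requires_grad=True)  # assignment logits
    opt = torch.optim.Adam([env_w], lr=lr)

    for _ in range(n_steps):
        p = env_w.sigmoid()  # soft probability of belonging to environment 1
        risk_1 = (loss * p).sum() / p.sum()              # env-1 weighted risk
        risk_2 = (loss * (1 - p)).sum() / (1 - p).sum()  # env-2 weighted risk
        # IRMv1 penalty: squared gradient of each risk w.r.t. the dummy scale
        g1 = torch.autograd.grad(risk_1, [scale], create_graph=True)[0]
        g2 = torch.autograd.grad(risk_2, [scale], create_graph=True)[0]
        penalty = g1.pow(2) + g2.pow(2)
        opt.zero_grad()
        (-penalty).backward(retain_graph=True)  # ascend penalty w.r.t. env_w
        opt.step()

    return env_w.sigmoid().detach() > 0.5  # hard split for downstream IRM
```

The inferred partition then stands in for hand-labeled environments as input to an invariant learner such as IRM; maximizing the penalty pushes the split toward environments on which the reference model's shortcut features are maximally non-invariant.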

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Out-of-Distribution Generalization | ImageNet-W | EIIL (ResNet-50) | IN-W Gap | -19.71 | # 1 |
| Out-of-Distribution Generalization | ImageNet-W | EIIL (ResNet-50) | Carton Gap | +42 | # 1 |
| Out-of-Distribution Generalization | UrbanCars | EIIL (E=2) | BG Gap | -21.5 | # 1 |
| Out-of-Distribution Generalization | UrbanCars | EIIL (E=2) | CoObj Gap | -6.8 | # 1 |
| Out-of-Distribution Generalization | UrbanCars | EIIL (E=2) | BG+CoObj Gap | -59.6 | # 1 |
| Out-of-Distribution Generalization | UrbanCars | EIIL (E=1) | BG Gap | -4.2 | # 1 |
| Out-of-Distribution Generalization | UrbanCars | EIIL (E=1) | CoObj Gap | -24.7 | # 1 |
| Out-of-Distribution Generalization | UrbanCars | EIIL (E=1) | BG+CoObj Gap | -44.9 | # 1 |

Methods


No methods listed for this paper.