Generative Interventions for Causal Learning

We introduce a framework for learning robust visual representations that generalize to new viewpoints, backgrounds, and scene contexts. Discriminative models often learn naturally occurring spurious correlations, which cause them to fail on images outside the training distribution. In this paper, we show that we can steer generative models to manufacture interventions on features caused by confounding factors. Experiments, visualizations, and theoretical results show that this method learns representations more consistent with the underlying causal relationships. Our approach improves performance on multiple datasets demanding out-of-distribution generalization, and we demonstrate state-of-the-art performance generalizing from ImageNet to the ObjectNet dataset.
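The core idea of steering a generative model to intervene on confounding factors can be sketched as follows. This is a minimal, hedged illustration, not the paper's implementation: the real method steers a large pretrained conditional GAN, which a toy linear generator stands in for here, and `generative_intervention` is a hypothetical name for the latent-space perturbation step.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_generator(z):
    # Stand-in for a pretrained generator network; in the paper this role
    # is played by a conditional GAN, which is not reproduced here.
    W = np.random.default_rng(42).standard_normal((8, z.shape[-1]))
    return np.tanh(W @ z / 4.0)

def generative_intervention(z, scale=2.0):
    # Perturb the latent code along a random unit direction, emulating an
    # intervention on nuisance factors (viewpoint, background, context)
    # while keeping the class identity fixed.
    direction = rng.standard_normal(z.shape)
    direction /= np.linalg.norm(direction)
    return z + scale * direction

# An original sample and an intervened variant that share the class label.
z = rng.standard_normal(16)
x_orig = toy_generator(z)
x_int = toy_generator(generative_intervention(z))
```

Training a classifier on the intervened pairs `(x_int, same_label)` discourages it from relying on the perturbed nuisance features, pushing the learned representation toward the causal ones.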

PDF Abstract (CVPR 2021)

Results from the Paper


Ranked #44 on Image Classification on ObjectNet (using extra training data)

Task: Image Classification   Dataset: ObjectNet   Uses Extra Training Data: Yes

Model                               Metric          Value   Global Rank
ResNet-152 + GenInt with Transfer   Top-5 Accuracy  61.43   #11
ResNet-152 + GenInt with Transfer   Top-1 Accuracy  39.38   #44
ResNet-18 + GenInt with Transfer    Top-5 Accuracy  48.02   #27
ResNet-18 + GenInt with Transfer    Top-1 Accuracy  27.03   #77
