A Whac-A-Mole Dilemma: Shortcuts Come in Multiples Where Mitigating One Amplifies Others

Machine learning models have been found to learn shortcuts -- unintended decision rules that fail to generalize -- undermining models' reliability. Prior work addresses this problem under the tenuous assumption that only a single shortcut exists in the training data. Real-world images are rife with multiple visual cues, from background to texture. Key to advancing the reliability of vision systems is understanding whether existing methods can overcome multiple shortcuts or struggle in a Whac-A-Mole game, i.e., where mitigating one shortcut amplifies reliance on others. To answer this question, we propose two benchmarks: 1) UrbanCars, a dataset with precisely controlled spurious cues, and 2) ImageNet-W, an evaluation set based on ImageNet for the watermark shortcut, which we discovered affects nearly every modern vision model. Along with texture and background, ImageNet-W allows us to study multiple shortcuts emerging from training on natural images. We find that computer vision models, including large foundation models -- regardless of training set, architecture, and supervision -- struggle when multiple shortcuts are present. Even methods explicitly designed to combat shortcuts fall into a Whac-A-Mole dilemma. To tackle this challenge, we propose Last Layer Ensemble, a simple-yet-effective method to mitigate multiple shortcuts without Whac-A-Mole behavior. Our results surface multi-shortcut mitigation as an overlooked challenge critical to advancing the reliability of vision systems. The datasets and code are released at https://github.com/facebookresearch/Whac-A-Mole.
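The abstract names Last Layer Ensemble (LLE) but does not spell out its mechanics. Below is a minimal sketch of one plausible ensemble-of-heads reading: a shared frozen backbone feeds several linear heads, each nominally trained against a different shortcut-targeted augmentation (e.g. watermark, texture, background), with head logits averaged at inference. All names, shapes, the identity backbone, and random weights are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

D, C, HEADS = 16, 4, 3  # feature dim, num classes, num heads (illustrative)

def backbone(x):
    """Stand-in for a shared, frozen feature extractor (identity here)."""
    return x

# One linear head per shortcut-targeted augmentation; in the real method
# each head would be trained on data augmented to neutralize one shortcut.
# Random weights here stand in for trained parameters.
W = [rng.normal(size=(D, C)) for _ in range(HEADS)]

def lle_predict(x):
    feats = backbone(x)                  # shared features for all heads
    logits = [feats @ w for w in W]      # per-head class logits
    return np.mean(logits, axis=0)       # ensemble by averaging logits

x = rng.normal(size=(2, D))              # a batch of 2 inputs
print(lle_predict(x).shape)              # prints (2, 4)
```

The design point this sketch illustrates: because only the last layers differ, mitigating a new shortcut means adding a head rather than retraining the backbone, which is what makes a "simple" multi-shortcut ensemble plausible.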

CVPR 2023
| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Domain Generalization | ImageNet-R | LLE (ViT-H/14, MAE, Edge Aug) | Top-1 Error Rate | 33.1 | #9 |
| Domain Generalization | ImageNet-R | LLE (ViT-B/16, SWAG, Edge Aug) | Top-1 Error Rate | 31.3 | #6 |
| Domain Generalization | ImageNet-Sketch | LLE (ViT-H/14, MAE, Edge Aug) | Top-1 Accuracy | 53.39 | #6 |
| Out-of-Distribution Generalization | ImageNet-W | LLE (ViT-B/16, SWAG (FT)) | IN-W Gap | -2.50 | #1 |
| | | | Carton Gap | +8 | #1 |
| Out-of-Distribution Generalization | ImageNet-W | LLE (ResNet-50) | IN-W Gap | -6.18 | #1 |
| | | | Carton Gap | +10 | #1 |
| Out-of-Distribution Generalization | ImageNet-W | LLE (ViT-H/14, MAE (FT)) | IN-W Gap | -1.11 | #1 |
| | | | Carton Gap | +28 | #1 |
| Out-of-Distribution Generalization | ImageNet-W | LLE (ViT-L/16, MAE (FT)) | IN-W Gap | -1.74 | #1 |
| | | | Carton Gap | +12 | #1 |
| Out-of-Distribution Generalization | ImageNet-W | LLE (ViT-B/16, MAE (FT)) | IN-W Gap | -2.48 | #1 |
| | | | Carton Gap | +6 | #1 |
| Image Classification | ObjectNet | LLE (ViT-H/14, MAE, Edge Aug) | Top-1 Accuracy | 60.78 | #18 |
| Out-of-Distribution Generalization | UrbanCars | LLE | BG Gap | -2.1 | #1 |
| | | | CoObj Gap | -2.7 | #1 |
| | | | BG+CoObj Gap | -5.9 | #1 |
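The gap-style metrics in the table can be read as differences between a model's behavior on clean and shortcut-perturbed evaluation sets. A small sketch of that reading, assuming IN-W Gap is accuracy on ImageNet-W minus accuracy on clean ImageNet (so values near 0 mean the watermark barely hurts), and Carton Gap is the rise in "carton" predictions once watermarks are added; these definitions, function names, and the toy data are assumptions for illustration only.

```python
def inw_gap(acc_watermarked, acc_clean):
    """Accuracy change caused by the watermark shortcut (usually <= 0)."""
    return acc_watermarked - acc_clean

def carton_gap(preds_clean, preds_watermarked, carton_label="carton"):
    """Percentage-point rise in 'carton' predictions after watermarking."""
    rate = lambda preds: 100 * sum(p == carton_label for p in preds) / len(preds)
    return rate(preds_watermarked) - rate(preds_clean)

# Toy numbers, not results from the paper:
print(inw_gap(74.5, 76.0))                            # prints -1.5
print(carton_gap(["dog", "cat"], ["carton", "cat"]))  # prints 50.0
```

Under this reading, LLE's near-zero IN-W Gaps alongside nonzero Carton Gaps in the table show why both metrics are reported: overall accuracy can survive while class-specific shortcut reliance remains.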
