Neuron Activation Coverage: Rethinking Out-of-distribution Detection and Generalization

5 Jun 2023  ·  Yibing Liu, Chris Xing Tian, Haoliang Li, Lei Ma, Shiqi Wang

The out-of-distribution (OOD) problem generally arises when neural networks encounter data that deviates significantly from the training data distribution, i.e., in-distribution (InD) data. In this paper, we study the OOD problem from a neuron activation view. We first formulate neuron activation states by considering both the neuron output and its influence on model decisions. Then, to characterize the relationship between neurons and OOD issues, we introduce the neuron activation coverage (NAC) -- a simple measure of neuron behaviors under InD data. Leveraging NAC, we show that 1) InD and OOD inputs can be largely separated based on neuron behavior, which significantly eases the OOD detection problem and outperforms 21 previous methods across three benchmarks (CIFAR-10, CIFAR-100, and ImageNet-1K); and 2) a positive correlation between NAC and model generalization ability holds consistently across architectures and datasets, which enables a NAC-based criterion for evaluating model robustness. Compared to prevalent InD validation criteria, NAC not only selects more robust models but also correlates more strongly with OOD test performance.
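To make the idea concrete, below is a minimal sketch of a coverage-style OOD score in PyTorch. It is an illustration only: the choice of a penultimate-layer activation, the gradient of a KL divergence to a uniform prediction as the "influence" term, and the per-neuron histogram approximation of coverage are assumptions rather than the paper's exact NAC formulation, and names such as `build_coverage`, `nac_ood_score`, and `model.head` are hypothetical.

```python
# Sketch: (1) combine each neuron's output with its influence on the model
# decision, (2) record which activation bins InD data visits, (3) score a
# test input by how much of its activation falls inside the InD-covered bins.
import torch
import torch.nn.functional as F


def neuron_states(model, x, layer_feats):
    """Per-neuron states: activation times its gradient w.r.t. a KL term, squashed to [0, 1]."""
    feats = layer_feats(x)                      # (batch, num_neurons), assumed differentiable
    logits = model.head(feats)                  # hypothetical classification head
    uniform = torch.full_like(logits, 1.0 / logits.size(1))
    kl = F.kl_div(F.log_softmax(logits, dim=1), uniform, reduction="sum")
    grads, = torch.autograd.grad(kl, feats)
    return torch.sigmoid(feats * grads)         # neuron activation states in [0, 1]


def build_coverage(states_ind, num_bins=50):
    """Per-neuron histograms over InD activation states: the 'covered' region."""
    hists = []
    for j in range(states_ind.size(1)):
        h = torch.histc(states_ind[:, j], bins=num_bins, min=0.0, max=1.0)
        hists.append((h > 0).float())           # 1 if the bin was ever visited by InD data
    return torch.stack(hists)                   # (num_neurons, num_bins)


def nac_ood_score(states_test, coverage, num_bins=50):
    """Higher score = activations fall in InD-covered bins (input is likely InD)."""
    bin_idx = (states_test.clamp(0, 1 - 1e-6) * num_bins).long()  # (batch, num_neurons)
    covered = coverage.gather(1, bin_idx.t()).t()                 # per-neuron bin lookup
    return covered.mean(dim=1)                                    # average coverage per input
```

In this sketch, thresholding `nac_ood_score` would give a detector, and averaging the coverage statistic over a held-out InD set would give a model-selection criterion in the spirit of the NAC-based robustness measure described in the abstract.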


Results from the Paper


Task: Out-of-Distribution Detection · Model: NAC-UE (ResNet-50) · Metric: AUROC

Benchmark                        AUROC    Global Rank
ImageNet-1k vs iNaturalist       96.52    #9
ImageNet-1k vs OpenImage-O       91.45    #4
ImageNet-1k vs Textures          97.90    #2
