Reduction of Class Activation Uncertainty with Background Information

5 May 2023 · H M Dipu Kabir

Multitask learning is a popular approach to training high-performing neural networks with improved generalization. In this paper, we propose a background class that achieves improved generalization at a lower computational cost than multitask learning, to help researchers and organizations with limited computation power. We also present a methodology for selecting background images and discuss potential future improvements. We apply our approach to several datasets and achieve improved generalization with much lower computation. Investigating the class activation mappings (CAMs) of the trained models, we observe a tendency to look at a bigger picture in few-class classification problems when the proposed training methodology is used. Applying a transformer with the proposed background class, we achieve state-of-the-art (SOTA) performance on the STL-10, Caltech-101, and CINIC-10 datasets. Example scripts are available in the `CAM` folder of the following GitHub repository: github.com/dipuk0506/UQ
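The core idea, as the abstract describes it, is to train an N-class problem as an (N+1)-class problem, where the extra class collects background images that contain none of the foreground objects. A minimal sketch of that setup is below; the function names, `NUM_CLASSES` value, and the test-time handling of the background logit are illustrative assumptions, not taken from the paper's code:

```python
NUM_CLASSES = 10  # e.g., STL-10 has 10 foreground classes

def build_training_set(foreground, background):
    """Combine foreground and background samples into one training set.

    foreground: list of (image, label) pairs, labels in 0..NUM_CLASSES-1.
    background: list of images containing no foreground object.
    Background images are assigned the extra (N+1)-th label, NUM_CLASSES.
    """
    data = list(foreground)
    for img in background:
        data.append((img, NUM_CLASSES))  # extra background class
    return data

def predict_foreground(logits):
    """At test time, one simple option (an assumption here) is to ignore
    the background logit and take the argmax over the original N classes."""
    fg = logits[:NUM_CLASSES]
    return max(range(NUM_CLASSES), key=lambda i: fg[i])
```

Any model head then simply outputs `NUM_CLASSES + 1` logits; no second task-specific head or loss is needed, which is where the computational saving over multitask learning comes from.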


Results from the Paper


 Ranked #1 on Image Classification on CINIC-10 (using extra training data)

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Fine-Grained Image Classification | Caltech-101 | VIT-L/16 (Background) | Top-1 Error Rate | 1.98 | # 1 |
| Fine-Grained Image Classification | Caltech-101 | VIT-L/16 (Background) | Accuracy | 98.02 | # 8 |
| Image Classification | CIFAR-10 | VIT-L/16 (Background, Spinal FC) | Percentage correct | 99.15 | # 10 |
| Image Classification | CIFAR-100 | VIT-L/16 (Background, Spinal FC) | Percentage correct | 93.3 | # 10 |
| Image Classification | CINIC-10 | VIT-L/16 (Background) | Accuracy | 95.80 | # 1 |
| Image Classification | CINIC-10 | VIT-L/16 (Background, Spinal FC) | Accuracy | 95.80 | # 1 |
| Image Classification | EMNIST-Balanced | ResNet-18 | Accuracy | 90.04 | # 5 |
| Image Classification | EMNIST-Byclass | ResNet-18 | Accuracy | 88.22 | # 1 |
| Image Classification | Flowers-102 | VIT-L/16 (Background) | Accuracy | 99.75 | # 2 |
| Image Classification | Flowers-102 | WideResNet-101 | Accuracy | 99.03 | # 16 |
| Image Classification | Kuzushiji-MNIST | ResNet-18 | Accuracy | 98.60 | # 14 |
| Image Classification | STL-10 | VIT-L/16 (Background, Spinal FC) | Percentage correct | 99.71 | # 1 |
| Image Classification | STL-10 | WideResNet | Percentage correct | 98.58 | # 5 |
