Efficient Large-scale Audio Tagging via Transformer-to-CNN Knowledge Distillation

9 Nov 2022 · Florian Schmid, Khaled Koutini, Gerhard Widmer

Audio Spectrogram Transformer models currently dominate the field of Audio Tagging, having displaced the previously dominant Convolutional Neural Networks (CNNs). Their superiority rests on their ability to scale up and to exploit large-scale datasets such as AudioSet. Compared to CNNs, however, Transformers are demanding in terms of model size and computational requirements. We propose a training procedure for efficient CNNs based on offline Knowledge Distillation (KD) from high-performing yet complex Transformers. The proposed training schema, together with an efficient CNN design based on MobileNetV3, results in models that outperform previous solutions in terms of both parameter and computational efficiency and prediction performance. We provide models at different complexity levels, scaling from low-complexity models up to a new state-of-the-art performance of .483 mAP on AudioSet. Source code is available at: https://github.com/fschmid56/EfficientAT
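
The "offline" in offline KD means the Transformer teachers' predictions on AudioSet are computed once and stored, and the CNN student is then trained against both the ground-truth labels and these stored soft targets. The snippet below is a minimal sketch of such a distillation loss in PyTorch; the function name, the `kd_lambda` weighting, and the use of binary cross-entropy are illustrative assumptions, not the authors' exact formulation (see the linked repository for that).

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, kd_lambda=0.1):
    """Hypothetical offline-KD loss; names and weighting are assumptions."""
    # Hard-label term: AudioSet is multi-label, so each class gets an
    # independent sigmoid and a binary cross-entropy loss.
    label_loss = F.binary_cross_entropy_with_logits(student_logits, labels)
    # Distillation term: pull the student's per-class probabilities toward
    # the teacher's stored (offline) predictions, used as soft targets.
    kd_loss = F.binary_cross_entropy_with_logits(
        student_logits, torch.sigmoid(teacher_logits)
    )
    # Blend the two terms; kd_lambda trades off label fit vs. teacher guidance.
    return (1 - kd_lambda) * label_loss + kd_lambda * kd_loss
```

Note that BCE over per-class sigmoids, rather than the softmax/KL-divergence pairing common in single-label distillation, reflects the multi-label nature of audio tagging, where several classes can be active in one clip.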

Results from the Paper


Ranked #2 on Audio Tagging on AudioSet (using extra training data)
Task                 | Dataset  | Model              | Metric Name            | Metric Value | Global Rank
---------------------|----------|--------------------|------------------------|--------------|------------
Audio Tagging        | AudioSet | mn40_as (Ensemble) | mean average precision | 0.498        | #2
Audio Classification | AudioSet | mn40_as (Ensemble) | Test mAP               | 0.498        | #7
Audio Tagging        | AudioSet | mn40_as (Single)   | mean average precision | 0.483        | #6
Audio Classification | AudioSet | mn40_as (Single)   | Test mAP               | 0.483        | #16
Audio Classification | ESC-50   | mn40_as            | Top-1 Accuracy         | 97.45        | #4
Audio Classification | ESC-50   | mn40_as            | Accuracy (5-fold)      | 97.45        | #4
Audio Classification | ESC-50   | mn40_as            | Pre-training Dataset   | AudioSet     | #1
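
For reference, the AudioSet mAP reported above is the macro average of per-class average precision over all 527 classes. Below is a minimal sketch of how such a score can be computed with scikit-learn; the arrays are random placeholders standing in for ground-truth labels and model outputs, not actual results.

```python
import numpy as np
from sklearn.metrics import average_precision_score

# Placeholder data: 527 AudioSet classes, multi-label ground truth and scores.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(1000, 527))   # binary label matrix
y_score = rng.random(size=(1000, 527))          # per-class sigmoid outputs

# Macro-averaged AP = mean over classes of per-class average precision,
# i.e. the "mean average precision" (mAP) metric used for AudioSet.
mAP = average_precision_score(y_true, y_score, average="macro")
print(f"mAP: {mAP:.3f}")
```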

Methods