HTS-AT: A Hierarchical Token-Semantic Audio Transformer for Sound Classification and Detection

2 Feb 2022 · Ke Chen, Xingjian Du, Bilei Zhu, Zejun Ma, Taylor Berg-Kirkpatrick, Shlomo Dubnov

Audio classification is the task of mapping audio samples to their corresponding labels. Recently, transformer models with self-attention mechanisms have been adopted in this field. However, existing audio transformers require large GPU memory and long training times, while also relying on pretrained vision models to achieve high performance, which limits their scalability on audio tasks. To address these problems, we introduce HTS-AT: an audio transformer with a hierarchical structure that reduces the model size and training time. It is further combined with a token-semantic module that maps the final outputs into class feature maps, enabling the model to perform audio event detection (i.e., localization in time). We evaluate HTS-AT on three audio classification datasets, where it achieves new state-of-the-art (SOTA) results on AudioSet and ESC-50 and equals the SOTA on Speech Commands V2. It also achieves better event localization performance than previous CNN-based models. Moreover, HTS-AT requires only 35% of the parameters and 15% of the training time of the previous audio transformer. These results demonstrate the high performance and high efficiency of HTS-AT.
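To make the token-semantic idea concrete, here is a minimal PyTorch sketch of what such a head could look like. It is an illustration, not the paper's actual code: the module name, the 1-D convolution with kernel size 3, the sigmoid activation, the temporal mean pooling, and all dimensions (768-dim tokens, 527 AudioSet classes) are assumptions chosen for the example. What it demonstrates is the mechanism described in the abstract: a convolution over the transformer's final token sequence yields per-class feature maps that serve as time-localized event predictions and, after pooling, as the clip-level classification.

```python
import torch
import torch.nn as nn

class TokenSemanticHead(nn.Module):
    """Hypothetical token-semantic head: maps the final token sequence of a
    hierarchical audio transformer to per-class feature maps, so one forward
    pass yields both clip-level and time-localized predictions.
    Names and shapes are illustrative assumptions, not the paper's code."""

    def __init__(self, embed_dim: int, num_classes: int):
        super().__init__()
        # Treat each output token as a time step and produce one
        # activation map per sound class.
        self.conv = nn.Conv1d(embed_dim, num_classes, kernel_size=3, padding=1)

    def forward(self, tokens: torch.Tensor):
        # tokens: (batch, num_tokens, embed_dim) from the last transformer stage,
        # assumed ordered along the time axis.
        x = tokens.transpose(1, 2)               # (batch, embed_dim, num_tokens)
        framewise = torch.sigmoid(self.conv(x))  # (batch, num_classes, num_tokens)
        clipwise = framewise.mean(dim=-1)        # clip-level scores via temporal pooling
        return clipwise, framewise

# Usage on dummy data: 64 output tokens, 768-dim embeddings, 527 AudioSet classes.
head = TokenSemanticHead(embed_dim=768, num_classes=527)
clip_scores, event_map = head(torch.randn(2, 64, 768))
print(clip_scores.shape, event_map.shape)  # torch.Size([2, 527]) torch.Size([2, 527, 64])
```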

Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank | Extra Training Data |
|------|---------|-------|--------|-------|-------------|---------------------|
| Audio Classification | AudioSet | HTS-AT (Single) | Test mAP | 0.471 | #21 | |
| Audio Classification | AudioSet | HTS-AT (Ensemble) | Test mAP | 0.487 | #12 | |
| Sound Event Detection | DESED | HTS-AT | Event-based F1 score | 50.7 | #4 | |
| Audio Classification | ESC-50 | HTS-AT | Top-1 Accuracy | 97.0 | #6 | AudioSet (pre-training) |
| Audio Classification | ESC-50 | HTS-AT | Accuracy (5-fold) | 97.0 | #6 | AudioSet (pre-training) |
| Keyword Spotting | Google Speech Commands | HTS-AT | Google Speech Commands V2 35 | 98.0 | #5 | |
