Cross-scale Attention Model for Acoustic Event Classification

27 Dec 2019  ·  Xugang Lu, Peng Shen, Sheng Li, Yu Tsao, Hisashi Kawai

A major advantage of a deep convolutional neural network (CNN) is that its effective receptive field grows as convolutional layers are stacked, so the model can capture long-range feature dependencies at the top layers. However, a potential limitation of the network is that the discriminative features from the bottom layers (which model short-range dependencies) are smoothed out in the final representation. This limitation is especially evident in acoustic event classification (AEC), where both short- and long-duration events occur in an audio clip and need to be classified. In this paper, we propose a cross-scale attention (CSA) model that explicitly integrates features from different scales to form the final representation. Moreover, we adopt an attention mechanism to specify the weights of local and global features based on the spatial and temporal characteristics of the acoustic events. Through mathematical formulation, we further show that the proposed CSA model can be regarded as a weighted residual CNN (ResCNN) model when a ResCNN is used as the backbone. We tested the proposed model on two AEC datasets: an urban AEC task and an AEC task in smart-car environments. Experimental results show that the proposed CSA model effectively improves on the performance of current state-of-the-art deep learning algorithms.
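To make the core idea concrete, the sketch below illustrates one plausible form of cross-scale fusion: a sigmoid gate, computed from concatenated bottom-layer (local, short-range) and top-layer (global, long-range) features, weights each feature dimension before the two scales are combined, which is structurally a weighted residual connection. This is a minimal illustration of the concept only; the function and parameter names, the gate shape, and the toy dimensions are assumptions, not the paper's actual architecture or notation.

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_scale_attention(local_feat, global_feat, w, b):
    """Illustrative cross-scale fusion (assumed form, not the paper's exact model).

    A per-dimension gate alpha in (0, 1) is computed from the concatenated
    local and global features; the output is a convex combination of the two
    scales, i.e. a weighted residual-style merge.
    """
    # Gate: sigmoid over a linear projection of [local; global]
    logits = np.concatenate([local_feat, global_feat], axis=-1) @ w + b
    alpha = 1.0 / (1.0 + np.exp(-logits))
    # Weighted combination of short-range and long-range features
    return alpha * local_feat + (1.0 - alpha) * global_feat

# Toy example: time x feature maps taken from a bottom and a top CNN layer
T, D = 8, 16
local_feat = rng.standard_normal((T, D))   # short-range (bottom-layer) features
global_feat = rng.standard_normal((T, D))  # long-range (top-layer) features
w = rng.standard_normal((2 * D, D)) * 0.1  # gate projection weights (assumed shape)
b = np.zeros(D)                            # gate bias

fused = cross_scale_attention(local_feat, global_feat, w, b)
print(fused.shape)  # (8, 16)
```

Because the gate is a convex weight per feature dimension, each fused value lies between the corresponding local and global feature values, so neither scale is discarded outright; the data-dependent gate decides the mix per position.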
