Deep CNN Framework for Audio Event Recognition using Weakly Labeled Web Data

9 Jul 2017 · Anurag Kumar, Bhiksha Raj

The development of audio event recognition systems requires labeled training data, which are generally hard to obtain. One promising source of recordings of audio events is the large amount of multimedia data on the web. In particular, if the audio content analysis must itself be performed on web audio, it is important to train the recognizers from such data. Training from web data, however, poses several challenges, the most important being the availability of labels: labels, if any, that can be obtained for the data are generally weak, and not of the kind conventionally required for training detectors or classifiers. We propose that learning algorithms able to exploit weak labels offer an effective way to learn from web data. We then propose a robust and efficient deep convolutional neural network (CNN) based framework for learning audio event recognizers from weakly labeled data. The proposed method can train on and analyze recordings of variable length in an efficient manner, and outperforms a network trained with strongly labeled web data by a considerable margin. Moreover, even though we learn from weakly labeled data, where event time stamps within a recording are not available during training, the proposed framework is able to localize events at inference time.
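The core idea described in the abstract — training a segment-level model under recording-level (weak) labels, then reusing the segment scores for localization — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the function names, max-pooling choice, and fixed segment hop are illustrative assumptions that capture the multiple-instance-learning flavor of such frameworks.

```python
import numpy as np

def recording_score(segment_scores):
    """Map segment-level event scores to a single recording-level score.

    Max pooling encodes the weak-label assumption: the recording is
    positive for an event if at least one segment contains it. The
    recording-level score is what gets compared against the weak label
    during training.
    """
    return float(np.max(segment_scores, axis=-1))

def localize(segment_scores, threshold=0.5, hop=1.0):
    """Recover event locations at inference time (hypothetical helper).

    Segments whose score exceeds the threshold are reported as
    (start_time, end_time) intervals, assuming a fixed hop (in seconds)
    between consecutive segments.
    """
    idx = np.where(np.asarray(segment_scores) >= threshold)[0]
    return [(i * hop, (i + 1) * hop) for i in idx]

# Example: five one-second segments of a recording; only segments 2 and 3
# contain the event. The weak (recording-level) score is high, and the
# per-segment scores localize the event without any time stamps in training.
scores = np.array([0.1, 0.2, 0.9, 0.8, 0.1])
print(recording_score(scores))   # 0.9
print(localize(scores))          # [(2.0, 3.0), (3.0, 4.0)]
```

Because the recording-level score is a differentiable function of the segment scores, a network trained only against weak labels still produces meaningful segment-level outputs, which is what enables localization at inference despite the absence of event time stamps in the training data.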
