Duration robust sound event detection

8 Apr 2019 · Heinrich Dinkel, Kai Yu

Task 4 of the DCASE2018 challenge demonstrated that substantially more research is needed before sound event detection is ready for real-world application. Analyzing the challenge results, it can be seen that the most successful models are biased towards predicting long (e.g., over 5 s) utterances. This work investigates the performance impact of fixed-size window median filter post-processing and advocates double thresholding as a more robust and predictable post-processing method. Further, four temporal subsampling methods within the CRNN framework are proposed: mean-max, alpha-mean-max, Lp-norm and convolutional. We show that for this task, subsampling the temporal resolution within the neural network enhances the F1 score as well as onset and offset accuracies. Our best single model achieves 30.1% F1 on the evaluation set and our best fusion model 32.5%, outperforming the previously best attempt by 0.1% while maintaining robustness to short events.
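The following is a minimal sketch, not the authors' code, of what the two techniques named in the abstract could look like: double-threshold post-processing of frame-level probabilities and Lp-norm temporal pooling. The threshold values `hi` and `lo`, the pooling factor, the value of `p`, and the exact pooling formula are illustrative assumptions rather than the paper's tuned settings.

```python
import numpy as np


def double_threshold(probs: np.ndarray, hi: float = 0.75, lo: float = 0.2):
    """Turn per-frame probabilities for one event class into (onset, offset) frame pairs.

    A segment is seeded wherever probs exceeds `hi`, then grown outwards
    as long as probs stays above `lo`. Unlike a fixed-size median filter,
    the resulting segment length adapts to the event itself, so short
    events are not smoothed away.
    """
    seeds = np.where(probs > hi)[0]
    active = probs > lo
    segments = []
    for s in seeds:
        # Skip seeds already covered by the previous segment.
        if segments and s < segments[-1][1]:
            continue
        # Grow left and right while the low threshold is still exceeded.
        left = s
        while left > 0 and active[left - 1]:
            left -= 1
        right = s
        while right < len(probs) - 1 and active[right + 1]:
            right += 1
        segments.append((left, right + 1))  # half-open [onset, offset)
    return segments


def lp_pool(x: np.ndarray, factor: int = 2, p: float = 2.0) -> np.ndarray:
    """Assumed Lp-norm temporal subsampling: pool every `factor` frames along axis 0."""
    T = (len(x) // factor) * factor
    blocks = x[:T].reshape(-1, factor, *x.shape[1:])
    return np.mean(blocks ** p, axis=1) ** (1.0 / p)


# Example: a short, sharp event is kept instead of being filtered out.
frame_probs = np.array([0.1, 0.1, 0.3, 0.9, 0.85, 0.4, 0.1, 0.05])
print(double_threshold(frame_probs))  # [(2, 6)]
print(lp_pool(frame_probs))           # 4 pooled frames, biased toward the max as p grows
```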
