Improving On-Screen Sound Separation for Open-Domain Videos with Audio-Visual Self-Attention

17 Jun 2021 · Efthymios Tzinis, Scott Wisdom, Tal Remez, John R. Hershey

We introduce a state-of-the-art audio-visual on-screen sound separation system which is capable of learning to separate sounds and associate them with on-screen objects by looking at in-the-wild videos. We identify limitations of previous work on audio-visual on-screen sound separation, including the simplicity and coarse resolution of spatio-temporal attention and poor convergence of the audio separation model. Our proposed model addresses these issues using cross-modal and self-attention modules that capture audio-visual dependencies at a finer resolution over time, and by unsupervised pre-training of the audio separation model. These improvements allow the model to generalize to a much wider set of unseen videos. We also show a robust way to further improve the generalization capability of our models by calibrating the probabilities of our audio-visual on-screen classifier, using only a small number of in-domain videos labeled for their on-screen presence. For evaluation and semi-supervised training, we collected human annotations of on-screen audio from a large database of in-the-wild videos (YFCC100m). Our results show marked improvements in on-screen separation performance, in more general conditions than previous methods.
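
To make the attention idea concrete, below is a minimal sketch of a cross-modal plus self-attention block of the kind the abstract describes, where per-time-step audio embeddings attend over per-frame visual embeddings. This is not the authors' architecture: the module names, dimensions, and layer arrangement are assumptions chosen for illustration.

```python
# Hypothetical cross-modal + self-attention block; shapes and names are illustrative.
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    def __init__(self, dim=256, num_heads=4):
        super().__init__()
        # Audio queries attend over video key/value embeddings.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Self-attention refines the fused audio sequence.
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, audio_emb, video_emb):
        # audio_emb: (batch, T_audio, dim) audio embeddings over time
        # video_emb: (batch, T_video, dim) per-frame visual embeddings
        fused, _ = self.cross_attn(query=audio_emb, key=video_emb, value=video_emb)
        x = self.norm1(audio_emb + fused)
        refined, _ = self.self_attn(x, x, x)
        return self.norm2(x + refined)

# Example usage with made-up sequence lengths:
audio = torch.randn(2, 100, 256)   # 2 clips, 100 audio time steps
video = torch.randn(2, 30, 256)    # 2 clips, 30 video frames
out = CrossModalAttention()(audio, video)  # -> (2, 100, 256)
```

The abstract also mentions calibrating the probabilities of the on-screen classifier with a small amount of labeled in-domain video. The sketch below illustrates one generic way to do this, temperature scaling fit on a handful of labeled examples; the paper's actual calibration procedure may differ, and the data here are invented.

```python
# Generic probability-calibration sketch (temperature scaling), not the paper's exact method.
import numpy as np
from scipy.optimize import minimize_scalar

def calibrate_temperature(logits, labels):
    """Find a temperature T minimizing negative log-likelihood on a small labeled set."""
    logits = np.asarray(logits, dtype=np.float64)
    labels = np.asarray(labels, dtype=np.float64)

    def nll(t):
        p = 1.0 / (1.0 + np.exp(-logits / t))
        p = np.clip(p, 1e-7, 1 - 1e-7)
        return -np.mean(labels * np.log(p) + (1 - labels) * np.log(1 - p))

    return minimize_scalar(nll, bounds=(0.05, 20.0), method="bounded").x

# Hypothetical small labeled set: on-screen (1) vs. off-screen (0) clips.
val_logits = [2.1, -0.3, 4.0, -1.5, 0.7]
val_labels = [1, 0, 1, 0, 1]
T = calibrate_temperature(val_logits, val_labels)
calibrated_probs = 1.0 / (1.0 + np.exp(-np.array(val_logits) / T))
```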
