All for One and One for All: Improving Music Separation by Bridging Networks

8 Oct 2020 · Ryosuke Sawata, Stefan Uhlich, Shusuke Takahashi, Yuki Mitsufuji

This paper proposes several improvements for music separation with deep neural networks (DNNs), namely a multi-domain loss (MDL) and two combination schemes. First, the MDL takes advantage of both the frequency- and time-domain representations of the audio signals. Second, we exploit the relationships among instruments by considering them jointly. We do this on the one hand by modifying the network architecture and introducing a CrossNet structure, and on the other hand by comparing combinations of instrument estimates through a new combination loss (CL). MDL and CL can easily be applied to many existing DNN-based separation methods, as they are merely loss functions that are used only during training and do not affect the inference step. Experimental results show that the performance of Open-Unmix (UMX), a well-known and state-of-the-art open-source library for music separation, can be improved by these schemes. Our modifications of UMX are open-sourced together with this paper.
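As a rough illustration of the two loss ideas, the sketch below shows how a multi-domain loss and a combination loss could be written in PyTorch on top of a spectrogram-based separator such as UMX. The function names, the MSE criterion in both domains, the weighting factor `alpha`, the use of the mixture phase for the inverse STFT, and the summation over source subsets are illustrative assumptions, not the exact formulation used in the paper.

```python
from itertools import combinations

import torch
import torch.nn.functional as F


def multi_domain_loss(est_mag, ref_mag, mix_stft, ref_wave,
                      n_fft=4096, hop=1024, alpha=10.0):
    """Frequency-domain loss on magnitudes plus a time-domain loss on waveforms."""
    # Frequency-domain term: compare estimated and reference magnitude spectrograms.
    freq_loss = F.mse_loss(est_mag, ref_mag)

    # Time-domain term: combine the estimated magnitude with the mixture phase,
    # invert with the iSTFT, and compare the resulting waveform to the reference.
    phase = torch.angle(mix_stft)
    est_stft = est_mag * torch.exp(1j * phase)
    est_wave = torch.istft(est_stft, n_fft=n_fft, hop_length=hop,
                           window=torch.hann_window(n_fft, device=ref_wave.device),
                           length=ref_wave.shape[-1])
    time_loss = F.mse_loss(est_wave, ref_wave)

    # `alpha` balances the two domains; its value here is purely illustrative.
    return freq_loss + alpha * time_loss


def combination_loss(est_waves, ref_waves):
    """Compare sums over subsets of source estimates with the corresponding
    sums of references (one possible reading of the combination-loss idea)."""
    names = list(est_waves.keys())
    loss, count = 0.0, 0
    for r in range(1, len(names)):           # all proper, non-empty subsets
        for subset in combinations(names, r):
            est_sum = sum(est_waves[s] for s in subset)
            ref_sum = sum(ref_waves[s] for s in subset)
            loss = loss + F.mse_loss(est_sum, ref_sum)
            count += 1
    return loss / count
```

Under these assumptions, the overall training objective could be the sum of `multi_domain_loss` over the four instruments plus a weighted `combination_loss`; both terms only modify training and leave the inference step unchanged, which is what makes them easy to retrofit onto existing separators.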

Results from the Paper


Task: Music Source Separation | Dataset: MUSDB18 | Model: X-UMX

Metric          Value (dB)   Global Rank
SDR (vocals)    6.61         # 22
SDR (drums)     6.47         # 20
SDR (other)     4.64         # 16
SDR (bass)      5.43         # 21
SDR (avg)       5.79         # 21