Meta-learning Extractors for Music Source Separation

17 Feb 2020 · David Samuel, Aditya Ganeshan, Jason Naradowsky

We propose a hierarchical meta-learning-inspired model for music source separation (Meta-TasNet) in which a generator model predicts the weights of individual extractor models. This enables efficient parameter sharing while still allowing for instrument-specific parameterization. Meta-TasNet is shown to be more effective than models trained independently or in a multi-task setting, and it achieves performance comparable with state-of-the-art methods. In comparison to the latter, our extractors contain fewer parameters and have faster run-time performance. We discuss important architectural considerations and explore the costs and benefits of this approach.
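To make the weight-generation idea concrete, below is a minimal PyTorch sketch, not the authors' implementation: a shared generator maps a learned instrument embedding to the parameters of a small convolutional extractor, so extractors share the generator's parameters while remaining instrument-specific. All module names, layer sizes, and the single-conv extractor are illustrative assumptions.

```python
# Minimal sketch of generator-predicted extractor weights (assumed sizes).
import torch
import torch.nn as nn
import torch.nn.functional as F


class WeightGenerator(nn.Module):
    """Maps an instrument embedding to the parameters of one 1-D conv layer."""

    def __init__(self, embed_dim, in_ch, out_ch, kernel_size):
        super().__init__()
        self.in_ch, self.out_ch, self.k = in_ch, out_ch, kernel_size
        self.to_weight = nn.Linear(embed_dim, out_ch * in_ch * kernel_size)
        self.to_bias = nn.Linear(embed_dim, out_ch)

    def forward(self, embedding):
        weight = self.to_weight(embedding).view(self.out_ch, self.in_ch, self.k)
        bias = self.to_bias(embedding)
        return weight, bias


class GeneratedExtractor(nn.Module):
    """Applies a 1-D conv whose weights were produced by the generator."""

    def forward(self, mixture, weight, bias):
        # mixture: (batch, in_ch, time)
        return F.conv1d(mixture, weight, bias, padding=weight.shape[-1] // 2)


# Usage: one learned embedding per instrument, one shared generator,
# per-instrument extractor weights produced on the fly.
embeddings = nn.Embedding(4, 64)       # vocals, drums, bass, other (assumed order)
generator = WeightGenerator(64, in_ch=1, out_ch=16, kernel_size=9)
extractor = GeneratedExtractor()

mixture = torch.randn(2, 1, 44100)     # dummy mono waveform batch
for inst in range(4):
    w, b = generator(embeddings(torch.tensor(inst)))
    features = extractor(mixture, w, b)   # (2, 16, 44100)
```

In the paper, the generated extractors are full TasNet-style separation networks arranged hierarchically; the sketch only illustrates the mechanism of a generator predicting extractor weights from an instrument embedding.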


Datasets

MUSDB18

Results from the Paper


Task                      Dataset   Model         Metric         Value   Global Rank
Music Source Separation   MUSDB18   Meta-TasNet   SDR (vocals)   6.40    #24
                                                  SDR (drums)    5.91    #23
                                                  SDR (other)    4.19    #21
                                                  SDR (bass)     5.58    #19
                                                  SDR (avg)      5.52    #24

Methods


No methods listed for this paper.