Hybrid Transformers for Music Source Separation

15 Nov 2022  ·  Simon Rouard, Francisco Massa, Alexandre Défossez ·

A natural question arising in Music Source Separation (MSS) is whether long-range contextual information is useful, or whether local acoustic features are sufficient. In other fields, attention-based Transformers have shown their ability to integrate information over long sequences. In this work, we introduce Hybrid Transformer Demucs (HT Demucs), a hybrid temporal/spectral bi-U-Net based on Hybrid Demucs, where the innermost layers are replaced by a cross-domain Transformer encoder, using self-attention within one domain, and cross-attention across domains. While it performs poorly when trained only on MUSDB, we show that it outperforms Hybrid Demucs (trained on the same data) by 0.45 dB of SDR when using 800 extra training songs. Using sparse attention kernels to extend its receptive field, and per-source fine-tuning, we achieve state-of-the-art results on MUSDB with extra training data, with 9.20 dB of SDR.
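The core idea of the cross-domain Transformer encoder can be sketched as follows: each layer applies self-attention within the temporal and spectral token sequences separately, then cross-attention where each domain's queries attend to the other domain's keys and values. This is a minimal numpy sketch, not the paper's implementation: it omits multi-head projections, layer normalization, feed-forward blocks, and positional embeddings, and all function names are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # scaled dot-product attention: (Lq, d), (Lk, d), (Lk, d) -> (Lq, d)
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v

def cross_domain_layer(t, s):
    # self-attention within each domain (with residual connections)
    t = t + attention(t, t, t)
    s = s + attention(s, s, s)
    # cross-attention across domains: temporal queries attend to
    # spectral keys/values, and vice versa
    t_out = t + attention(t, s, s)
    s_out = s + attention(s, t, t)
    return t_out, s_out

rng = np.random.default_rng(0)
t = rng.normal(size=(16, 8))   # 16 temporal tokens, dim 8 (toy sizes)
s = rng.normal(size=(12, 8))   # 12 spectral tokens, dim 8
t_out, s_out = cross_domain_layer(t, s)
```

Note that the two domains may have different sequence lengths (here 16 vs 12); cross-attention handles this naturally since queries and keys come from different sequences, and each domain's output keeps its own length.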


Results from the Paper


 Ranked #1 on Music Source Separation on MUSDB18 (using extra training data)

Music Source Separation on MUSDB18 (all entries use extra training data; ranks are per metric):

| Model | SDR (vocals) | SDR (drums) | SDR (other) | SDR (bass) | SDR (avg) |
|---|---|---|---|---|---|
| Sparse HT Demucs (fine-tuned) | 9.37 (#4) | 10.83 (#1) | 6.41 (#5) | 10.47 (#1) | 9.20 (#1) |
| Hybrid Transformer Demucs (f.t.) | 9.20 (#5) | 10.08 (#3) | 6.42 (#4) | 9.78 (#2) | 9.00 (#2) |

Music Source Separation on MUSDB18-HQ:

| Model | SDR (vocals) | SDR (drums) | SDR (other) | SDR (bass) | SDR (avg) |
|---|---|---|---|---|---|
| Sparse HT Demucs (fine-tuned) | 9.37 (#7) | 10.83 (#2) | 6.41 (#7) | 10.47 (#3) | 9.20 (#3) |
| Hybrid Transformer Demucs (f.t.) | 9.20 (#8) | 10.08 (#4) | 6.32 (#8) | 10.39 (#4) | 9.00 (#4) |
