DeepFilterNet: Perceptually Motivated Real-Time Speech Enhancement

14 May 2023  ·  Hendrik Schröter, Tobias Rosenkranz, Alberto N. Escalante-B., Andreas Maier

Multi-frame algorithms for single-channel speech enhancement are able to take advantage of short-time correlations within the speech signal. Deep Filtering (DF) was proposed to directly estimate a complex filter in the frequency domain to exploit these correlations. In this work, we present a real-time speech enhancement demo using DeepFilterNet. DeepFilterNet's efficiency is enabled by exploiting domain knowledge of speech production and psychoacoustic perception. Our model is able to match state-of-the-art speech enhancement benchmarks while achieving a real-time factor of 0.19 on a single-threaded notebook CPU. The framework as well as pretrained weights have been published under an open-source license.
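The Deep Filtering operation mentioned above applies a short complex-valued filter across recent STFT frames, per frequency bin. A minimal NumPy sketch of that operation is shown below; the function name, array shapes, and the idea of a precomputed coefficient tensor are illustrative assumptions (in DeepFilterNet the coefficients are predicted by a neural network), not the paper's implementation:

```python
import numpy as np

def deep_filter(spec, coefs):
    """Apply a multi-frame complex filter to an STFT (illustrative sketch).

    spec:  complex STFT of shape (T, F)       -- T frames, F frequency bins
    coefs: complex filter of shape (T, N, F)  -- an order-N filter per frame
           and bin (hypothetical layout; the model would predict these)
    """
    T, N, F = coefs.shape
    # Prepend N-1 zero frames so the filter only looks at past context (causal).
    padded = np.concatenate([np.zeros((N - 1, F), dtype=spec.dtype), spec])
    out = np.zeros_like(spec)
    for i in range(N):
        # Tap i weights the frame that lies i steps in the past.
        out += coefs[:, i, :] * padded[N - 1 - i : N - 1 - i + T, :]
    return out
```

With the zeroth tap set to 1 and all others to 0, the filter reduces to the identity, which is a convenient sanity check; a learned filter instead combines several past frames per bin to exploit the short-time correlations the abstract refers to.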

Task: Speech Enhancement
Dataset: VoiceBank + DEMAND
Model: DeepFilterNet3

Metric   Value   Global Rank
PESQ     3.17    #9
CSIG     4.34    #11
CBAK     3.61    #4
COVL     3.77    #9
STOI     0.944   #8
