In this work, we propose Exformer, a time-domain architecture for target speaker extraction.
In real-world conditions, room reverberation and background noise degrade the quality of speech.
As deep speech enhancement algorithms have recently demonstrated capabilities greatly surpassing their traditional counterparts for suppressing noise, reverberation and echo, attention is turning to the problem of packet loss concealment (PLC).
Singing voice separation aims to separate music into vocals and accompaniment components.
Neural vocoders have recently demonstrated high-quality speech synthesis, but typically incur high computational complexity.
Neural speech synthesis models can synthesize high-quality speech, but typically at a high computational cost.
The presented approach does not assume the presence of labeled anomalies in the training dataset. It uses a novel deep neural network architecture to learn the temporal dynamics of the multivariate time series at multiple resolutions, while remaining robust to contamination in the training data.
The presence of multiple talkers in the surrounding environment poses a difficult challenge for real-time speech communication systems, given the constraints on network size and complexity.
Given a limited set of labeled data, we present a method to leverage a large volume of unlabeled data to improve the model's performance.
Audio codecs based on discretized neural autoencoders have recently been developed and shown to provide significantly higher compression at comparable output speech quality.
For tasks such as classification, there is a good case for learning representations of the data that are invariant to such transformations, yet this is not explicitly enforced by classification losses such as the cross-entropy loss.
Neural network applications generally benefit from larger models, but current large-scale speech enhancement networks often suffer decreased robustness to the variety of real-world conditions beyond those encountered in the training data.
Many neural speech enhancement and source separation systems operate in the time-frequency domain.
Supervised deep learning has recently gained significant attention for speech enhancement.
We present enhancements to a speech-to-speech translation pipeline in order to perform automatic dubbing.