Search Results for author: Scott Wisdom

Found 26 papers, 5 papers with code

AudioSlots: A slot-centric generative model for audio separation

no code implementations • 9 May 2023 • Pradyumna Reddy, Scott Wisdom, Klaus Greff, John R. Hershey, Thomas Kipf

We discuss the results and limitations of our approach in detail, and further outline potential ways to overcome the limitations and directions for future work.

Blind Source Separation • Speech Separation

AudioScopeV2: Audio-Visual Attention Architectures for Calibrated Open-Domain On-Screen Sound Separation

no code implementations • 20 Jul 2022 • Efthymios Tzinis, Scott Wisdom, Tal Remez, John R. Hershey

We identify several limitations of previous work on audio-visual on-screen sound separation, including the coarse resolution of spatio-temporal attention, poor convergence of the audio separation model, limited variety in training and evaluation data, and failure to account for the trade-off between preservation of on-screen sounds and suppression of off-screen sounds.

Text-Driven Separation of Arbitrary Sounds

no code implementations • 12 Apr 2022 • Kevin Kilgour, Beat Gfeller, Qingqing Huang, Aren Jansen, Scott Wisdom, Marco Tagliasacchi

The second model, SoundFilter, takes a mixed source audio clip as an input and separates it based on a conditioning vector from the shared text-audio representation defined by SoundWords, making the model agnostic to the conditioning modality.
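
One common way to realize this kind of conditioning is feature-wise modulation, where the embedding scales and shifts the separator's intermediate activations. The abstract does not specify SoundFilter's exact mechanism, so the sketch below (including the names film_condition, W_gamma, and W_beta) is illustrative only:

    import numpy as np

    def film_condition(features, embedding, W_gamma, W_beta):
        # features : (C, T) intermediate activations of the separator
        # embedding: (D,) conditioning vector from the shared text-audio space
        # W_gamma, W_beta : (C, D) learned projections (hypothetical names)
        gamma = W_gamma @ embedding          # per-channel scale, shape (C,)
        beta = W_beta @ embedding            # per-channel shift, shape (C,)
        return gamma[:, None] * features + beta[:, None]

Because the separator only ever sees the embedding, swapping a text-derived vector for an audio-derived one requires no architectural change, which is what makes the model agnostic to the conditioning modality.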

CycleGAN-Based Unpaired Speech Dereverberation

no code implementations • 29 Mar 2022 • Hannah Muckenhirn, Aleksandr Safin, Hakan Erdogan, Felix de Chaumont Quitry, Marco Tagliasacchi, Scott Wisdom, John R. Hershey

Typically, neural network-based speech dereverberation models are trained on paired data, composed of a dry utterance and its corresponding reverberant utterance.

Speech Dereverberation
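
The CycleGAN framing removes the need for such pairs: two generators map between the dry and reverberant domains, and a cycle-consistency term trains each round trip to reconstruct its input. A minimal NumPy sketch of that term, assuming callables G (dry to reverberant) and F (reverberant to dry); the paper's exact losses may differ:

    import numpy as np

    def cycle_consistency_loss(x_dry, x_rev, G, F):
        # G : generator mapping dry speech -> reverberant speech
        # F : generator mapping reverberant speech -> dry speech
        # With unpaired data there is no ground-truth target for either
        # direction, so each generator is trained to make the round trip
        # reconstruct its input; adversarial terms (not shown) push each
        # one-way output toward the realistic domain.
        loss_dry = np.mean(np.abs(F(G(x_dry)) - x_dry))
        loss_rev = np.mean(np.abs(G(F(x_rev)) - x_rev))
        return loss_dry + loss_rev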

Improving Bird Classification with Unsupervised Sound Separation

no code implementations • 7 Oct 2021 • Tom Denton, Scott Wisdom, John R. Hershey

This paper addresses the problem of species classification in bird song recordings.

Classification

Improving On-Screen Sound Separation for Open-Domain Videos with Audio-Visual Self-Attention

no code implementations • 17 Jun 2021 • Efthymios Tzinis, Scott Wisdom, Tal Remez, John R. Hershey

We introduce a state-of-the-art audio-visual on-screen sound separation system which is capable of learning to separate sounds and associate them with on-screen objects by looking at in-the-wild videos.

Unsupervised Pre-training

Sparse, Efficient, and Semantic Mixture Invariant Training: Taming In-the-Wild Unsupervised Sound Separation

no code implementations • 1 Jun 2021 • Scott Wisdom, Aren Jansen, Ron J. Weiss, Hakan Erdogan, John R. Hershey

The best performance is achieved using larger numbers of output sources, enabled by our efficient MixIT loss, combined with sparsity losses to prevent over-separation.
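
The abstract does not give the exact form of the sparsity losses, but a common surrogate penalizes the L1/L2 ratio of per-source energies so that only a few of the many output sources stay active. The sketch below is an illustration of that idea, not the paper's loss:

    import numpy as np

    def sparsity_penalty(est_sources, eps=1e-8):
        # est_sources : (M, T) separated source estimates
        # Per-source RMS energies; penalizing their L1/L2 ratio encourages
        # only a few of the M outputs to carry energy, discouraging
        # over-separation of a single source into fragments.
        energies = np.sqrt(np.mean(est_sources ** 2, axis=-1))
        return np.sum(energies) / (np.linalg.norm(energies) + eps)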

Self-Supervised Learning from Automatically Separated Sound Scenes

1 code implementation • 5 May 2021 • Eduardo Fonseca, Aren Jansen, Daniel P. W. Ellis, Scott Wisdom, Marco Tagliasacchi, John R. Hershey, Manoj Plakal, Shawn Hershey, R. Channing Moore, Xavier Serra

Real-world sound scenes consist of time-varying collections of sound sources, each generating characteristic sound events that are mixed together in audio recordings.

Contrastive Learning • Self-Supervised Learning

What's All the FUSS About Free Universal Sound Separation Data?

no code implementations • 2 Nov 2020 • Scott Wisdom, Hakan Erdogan, Daniel Ellis, Romain Serizel, Nicolas Turpault, Eduardo Fonseca, Justin Salamon, Prem Seetharaman, John Hershey

We introduce the Free Universal Sound Separation (FUSS) dataset, a new corpus for experiments in separating mixtures of an unknown number of sounds from an open domain of sound types.

Data Augmentation

Into the Wild with AudioScope: Unsupervised Audio-Visual Separation of On-Screen Sounds

no code implementations • ICLR 2021 • Efthymios Tzinis, Scott Wisdom, Aren Jansen, Shawn Hershey, Tal Remez, Daniel P. W. Ellis, John R. Hershey

For evaluation and semi-supervised experiments, we collected human labels for presence of on-screen and off-screen sounds on a small subset of clips.

Scene Understanding

Unsupervised Sound Separation Using Mixture Invariant Training

no code implementations • NeurIPS 2020 • Scott Wisdom, Efthymios Tzinis, Hakan Erdogan, Ron J. Weiss, Kevin Wilson, John R. Hershey

In such supervised approaches, a model is trained to predict the component sources from synthetic mixtures created by adding up isolated ground-truth sources.

Speech Enhancement • Speech Separation +1
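
The mixture invariant training (MixIT) alternative needs no isolated ground-truth sources: the model separates a sum of two real mixtures, and each estimated source is assigned to whichever reference mixture best explains it. A brute-force NumPy sketch, using mean squared error as a stand-in for the paper's signal-level loss:

    import itertools
    import numpy as np

    def mse(ref, est):
        return np.mean((ref - est) ** 2)

    def mixit_loss(x1, x2, est_sources):
        # x1, x2      : (T,) reference mixtures
        # est_sources : (M, T) model outputs for the mixture of mixtures x1 + x2
        # Each estimated source is assigned to exactly one reference mixture;
        # the loss minimizes over all 2^M binary assignments.
        M = est_sources.shape[0]
        best = np.inf
        for assignment in itertools.product([0, 1], repeat=M):
            a = np.asarray(assignment)
            est1 = est_sources[a == 0].sum(axis=0)  # sources assigned to x1
            est2 = est_sources[a == 1].sum(axis=0)  # sources assigned to x2
            best = min(best, mse(x1, est1) + mse(x2, est2))
        return best

The 2^M assignment search is tractable for typical M; the "Sparse, Efficient, and Semantic" entry above is aimed in part at scaling this loss to larger numbers of output sources.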

Sequential Multi-Frame Neural Beamforming for Speech Separation and Enhancement

no code implementations • 18 Nov 2019 • Zhong-Qiu Wang, Hakan Erdogan, Scott Wisdom, Kevin Wilson, Desh Raj, Shinji Watanabe, Zhuo Chen, John R. Hershey

This work introduces sequential neural beamforming, which alternates between neural network based spectral separation and beamforming based spatial separation.

Speaker Separation • Speech Enhancement +3
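
One standard way to implement the spatial stage is a mask-driven MVDR beamformer: masks from the spectral network weight the spatial covariance estimates, which then define the beamforming filter. A per-frequency-bin NumPy sketch of that standard formulation, assumed here rather than taken from the paper:

    import numpy as np

    def mvdr_from_masks(stft_mix, speech_mask, noise_mask, ref_mic=0, eps=1e-8):
        # stft_mix    : (C, T) complex multichannel STFT at one frequency bin
        # speech_mask : (T,) target mask from the spectral separation network
        # noise_mask  : (T,) interference mask
        # Mask-weighted spatial covariance matrices.
        phi_s = (speech_mask * stft_mix) @ stft_mix.conj().T / (speech_mask.sum() + eps)
        phi_n = (noise_mask * stft_mix) @ stft_mix.conj().T / (noise_mask.sum() + eps)
        # MVDR filter steered toward a reference microphone.
        num = np.linalg.solve(phi_n, phi_s)
        w = num[:, ref_mic] / (np.trace(num) + eps)
        return w.conj() @ stft_mix   # beamformed (T,) signal for this bin

Sequential operation then alternates: the beamformed output is fed back to the neural separator for another spectral pass, and so on.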

Improving Universal Sound Separation Using Sound Classification

no code implementations • 18 Nov 2019 • Efthymios Tzinis, Scott Wisdom, John R. Hershey, Aren Jansen, Daniel P. W. Ellis

Deep learning approaches have recently achieved impressive performance on both audio source separation and sound classification.

Audio Source Separation • Classification +2

Transfer Learning From Sound Representations For Anger Detection in Speech

no code implementations • 6 Feb 2019 • Mohamed Ezzeldin A. ElShaer, Scott Wisdom, Taniya Mishra

In this work, we train fully convolutional networks to detect anger in speech.

Transfer Learning

Differentiable Consistency Constraints for Improved Deep Speech Enhancement

no code implementations • 20 Nov 2018 • Scott Wisdom, John R. Hershey, Kevin Wilson, Jeremy Thorpe, Michael Chinen, Brian Patton, Rif A. Saurous

Furthermore, the only previous approaches that apply mixture consistency use real-valued masks; mixture consistency has been ignored for complex-valued masks.

Sound • Audio and Speech Processing
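
In its simplest equal-weight form, a mixture consistency projection redistributes the residual between the mixture and the summed estimates across all sources; because it is just addition, it applies identically to real- and complex-valued estimates. A minimal sketch:

    import numpy as np

    def mixture_consistency(est_sources, mixture):
        # est_sources : (N, ...) N source estimates (waveforms or complex STFTs)
        # mixture     : (...)    the input mixture
        # The residual between the mixture and the sum of estimates is
        # redistributed equally, so the projected estimates sum exactly to
        # the mixture; the operation is differentiable and can sit inside
        # the network before the loss.
        residual = mixture - est_sources.sum(axis=0)
        return est_sources + residual / est_sources.shape[0]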

SDR - half-baked or well done?

1 code implementation • 6 Nov 2018 • Jonathan Le Roux, Scott Wisdom, Hakan Erdogan, John R. Hershey

In speech enhancement and source separation, signal-to-noise ratio is a ubiquitous objective measure of denoising/separation quality.

Sound • Audio and Speech Processing
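
The scale-invariant SDR (SI-SDR) that this paper advocates first rescales the reference by a least-squares projection coefficient, so the metric cannot be gamed by changing the estimate's overall gain. A minimal NumPy version:

    import numpy as np

    def si_sdr(reference, estimate, eps=1e-8):
        # Rescale the reference by its least-squares projection coefficient
        # so the metric is invariant to the gain of the estimate.
        alpha = np.dot(estimate, reference) / (np.dot(reference, reference) + eps)
        target = alpha * reference
        noise = estimate - target
        return 10 * np.log10((np.sum(target ** 2) + eps) / (np.sum(noise ** 2) + eps))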

Deep Recurrent NMF for Speech Separation by Unfolding Iterative Thresholding

1 code implementation • 21 Sep 2017 • Scott Wisdom, Thomas Powers, James Pitton, Les Atlas

This interpretability also provides principled initializations that enable faster training and convergence to better solutions compared to conventional random initialization.

Speech Separation
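
Unfolding turns each iteration of an iterative thresholding solver into one layer of a feed-forward network, and the "principled initialization" sets the layer weights from the corresponding NMF dictionary and step size. A NumPy sketch of the unfolded recursion; the paper's exact parameterization may differ:

    import numpy as np

    def unfolded_ista(x, W, S, bias, num_layers):
        # One ISTA iteration for nonnegative sparse coding,
        #     h <- max(0, W x + S h - bias),
        # becomes one network layer. Classically W = eta * D.T and
        # S = I - eta * D.T @ D for dictionary D and step size eta; using
        # those values to initialize the learnable W, S, and bias is the
        # principled initialization the snippet refers to.
        h = np.maximum(W @ x - bias, 0.0)
        for _ in range(num_layers - 1):
            h = np.maximum(W @ x + S @ h - bias, 0.0)
        return h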

Interpretable Recurrent Neural Networks Using Sequential Sparse Recovery

1 code implementation • 22 Nov 2016 • Scott Wisdom, Thomas Powers, James Pitton, Les Atlas

Recurrent neural networks (RNNs) are powerful and effective for processing sequential data.

Compressive Sensing

Full-Capacity Unitary Recurrent Neural Networks

2 code implementations • NeurIPS 2016 • Scott Wisdom, Thomas Powers, John R. Hershey, Jonathan Le Roux, Les Atlas

To address this question, we propose full-capacity uRNNs that optimize their recurrence matrix over all unitary matrices, leading to significantly improved performance over uRNNs that use a restricted-capacity recurrence matrix.

Open-Ended Question Answering • Sequential Image Classification
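
Optimizing over all unitary matrices can be done with a Riemannian gradient step followed by a Cayley-transform retraction, which keeps the recurrence matrix exactly unitary with no re-projection. A NumPy sketch of one such update; the paper's update may differ in detail:

    import numpy as np

    def cayley_update(W, grad, lr):
        # W    : (n, n) current unitary recurrence matrix
        # grad : (n, n) Euclidean gradient of the loss w.r.t. W
        # Build a skew-Hermitian direction and move along the unitary
        # group via the Cayley transform, so unitarity is preserved
        # exactly at every step.
        A = grad @ W.conj().T - W @ grad.conj().T
        I = np.eye(W.shape[0], dtype=W.dtype)
        return np.linalg.solve(I + (lr / 2) * A, (I - (lr / 2) * A) @ W)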

Enhancement and Recognition of Reverberant and Noisy Speech by Extending Its Coherence

no code implementations • 2 Sep 2015 • Scott Wisdom, Thomas Powers, Les Atlas, James Pitton

Our approach centers around using a single-channel minimum mean-square error log-spectral amplitude (MMSE-LSA) estimator proposed by Habets, which scales coefficients in a time-frequency domain to suppress noise and reverberation.

Automatic Speech Recognition • Automatic Speech Recognition (ASR) +2
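
Structurally, such an estimator reduces to computing a real-valued gain for every time-frequency coefficient and multiplying it into the STFT. The sketch below substitutes a simple Wiener gain for Habets' MMSE-LSA gain, whose full computation (a priori SNR tracking, log-spectral expectation) is beyond a few lines:

    import numpy as np

    def suppress(stft_noisy, noise_psd, floor=0.1, eps=1e-12):
        # stft_noisy : (F, T) complex STFT of the reverberant, noisy speech
        # noise_psd  : (F, 1) or (F, T) estimated noise-plus-reverb power
        snr = np.abs(stft_noisy) ** 2 / (noise_psd + eps)
        gain = snr / (1.0 + snr)                     # Wiener gain in [0, 1]
        return np.maximum(gain, floor) * stft_noisy  # floor limits artifacts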
