Search Results for author: Ethan Manilow

Found 16 papers, 10 papers with code

SingSong: Generating musical accompaniments from singing

no code implementations • 30 Jan 2023 • Chris Donahue, Antoine Caillon, Adam Roberts, Ethan Manilow, Philippe Esling, Andrea Agostinelli, Mauro Verzetti, Ian Simon, Olivier Pietquin, Neil Zeghidour, Jesse Engel

We present SingSong, a system that generates instrumental music to accompany input vocals, potentially offering musicians and non-musicians alike an intuitive new way to create music featuring their own voice.

Audio Generation · Retrieval

The Chamber Ensemble Generator: Limitless High-Quality MIR Data via Generative Modeling

1 code implementation • 28 Sep 2022 • Yusong Wu, Josh Gardner, Ethan Manilow, Ian Simon, Curtis Hawthorne, Jesse Engel

We call this system the Chamber Ensemble Generator (CEG), and use it to generate a large dataset of chorales from four different chamber ensembles (CocoChorales).

Information Retrieval · Music Information Retrieval +2

Music Separation Enhancement with Generative Modeling

no code implementations • 26 Aug 2022 • Noah Schaffer, Boaz Cogan, Ethan Manilow, Max Morrison, Prem Seetharaman, Bryan Pardo

Despite phenomenal progress in recent years, state-of-the-art music separation systems produce source estimates with significant perceptual shortcomings, such as adding extraneous noise or removing harmonics.

Music Source Separation

Multi-instrument Music Synthesis with Spectrogram Diffusion

1 code implementation • 11 Jun 2022 • Curtis Hawthorne, Ian Simon, Adam Roberts, Neil Zeghidour, Josh Gardner, Ethan Manilow, Jesse Engel

An ideal music synthesizer should be both interactive and expressive, generating high-fidelity audio in realtime for arbitrary combinations of instruments and notes.

Generative Adversarial Network · Music Generation

Unsupervised Source Separation By Steering Pretrained Music Models

1 code implementation • 25 Oct 2021 • Ethan Manilow, Patrick O'Reilly, Prem Seetharaman, Bryan Pardo

We showcase an unsupervised method that repurposes deep models trained for music generation and music tagging for audio source separation, without any retraining.

Audio Generation · Audio Source Separation +3
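As a rough illustration of the idea summarized in this entry, the sketch below "steers" a frozen pretrained generative model with a frozen tagging model, so the generated audio both stays close to the mixture and is tagged as a chosen instrument. The `generator`, `tagger`, and `target_label` arguments, the latent dimensionality, and the loss weighting are illustrative assumptions, not the paper's actual components or procedure.

```python
import torch

def steer_separation(generator, tagger, mixture, target_label,
                     latent_dim=128, steps=500, lr=1e-2, alpha=1.0):
    """Optimize a latent code so the frozen generator's output (i) stays close
    to the mixture and (ii) is tagged as the target instrument class."""
    z = torch.randn(1, latent_dim, requires_grad=True)    # only the latent is optimized
    opt = torch.optim.Adam([z], lr=lr)
    target = torch.tensor([target_label])
    for _ in range(steps):
        est = generator(z)                                 # frozen pretrained music generator (placeholder)
        recon = torch.nn.functional.l1_loss(est, mixture)  # keep the estimate inside the mixture
        tag = torch.nn.functional.cross_entropy(tagger(est), target)  # frozen tagger pulls it toward the target class
        loss = recon + alpha * tag
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return generator(z)                                # estimate of the isolated source; no model weights were retrained
```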

Deep Learning Tools for Audacity: Helping Researchers Expand the Artist's Toolkit

no code implementations • 25 Oct 2021 • Hugo Flores Garcia, Aldo Aguilar, Ethan Manilow, Dmitry Vedenko, Bryan Pardo

We present a software framework that integrates neural networks into the popular open-source audio editing software, Audacity, with a minimal amount of developer effort.

Sequence-to-Sequence Piano Transcription with Transformers

2 code implementations • 19 Jul 2021 • Curtis Hawthorne, Ian Simon, Rigel Swavely, Ethan Manilow, Jesse Engel

Automatic Music Transcription has seen significant progress in recent years by training custom deep neural networks on large datasets.

Information Retrieval · Music Information Retrieval +2
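The title names the general recipe; a minimal sketch of a sequence-to-sequence transcription model of this kind is shown below, mapping spectrogram frames to a sequence of note-event tokens with a standard Transformer encoder-decoder. The vocabulary size, dimensions, and omission of positional encodings are simplifications for illustration and are not taken from the paper.

```python
import torch
import torch.nn as nn

class Spec2Notes(nn.Module):
    """Encoder-decoder sketch: spectrogram frames in, note-event token logits out."""
    def __init__(self, n_bins=512, vocab_size=1000, d_model=256):
        super().__init__()
        self.frame_proj = nn.Linear(n_bins, d_model)        # embed input spectrogram frames
        self.token_emb = nn.Embedding(vocab_size, d_model)   # embed previously emitted event tokens
        self.transformer = nn.Transformer(d_model=d_model, batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, spec, tokens):
        # spec: (batch, frames, n_bins); tokens: (batch, seq) of prior event ids
        # (positional encodings omitted for brevity)
        src = self.frame_proj(spec)
        tgt = self.token_emb(tokens)
        causal = self.transformer.generate_square_subsequent_mask(tokens.shape[1])
        h = self.transformer(src, tgt, tgt_mask=causal)      # decoder attends causally to past tokens
        return self.out(h)                                   # logits over the event vocabulary at each step
```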

Leveraging Hierarchical Structures for Few-Shot Musical Instrument Recognition

1 code implementation • 14 Jul 2021 • Hugo Flores Garcia, Aldo Aguilar, Ethan Manilow, Bryan Pardo

Deep learning work on musical instrument recognition has generally focused on instrument classes for which we have abundant data.

Few-Shot Learning · Instrument Recognition

Bespoke Neural Networks for Score-Informed Source Separation

no code implementations • 29 Sep 2020 • Ethan Manilow, Bryan Pardo

In this paper, we introduce a simple method that can separate arbitrary musical instruments from an audio mixture.

Towards Musically Meaningful Explanations Using Source Separation

1 code implementation • 4 Sep 2020 • Verena Haunschmid, Ethan Manilow, Gerhard Widmer

Prior work on explainable models in MIR has generally used image processing tools to produce explanations for DNN predictions, but these are not necessarily musically meaningful, nor can they be listened to (which, arguably, is important in music).

Explainable Models · Image Segmentation +4

audioLIME: Listenable Explanations Using Source Separation

2 code implementations • 2 Aug 2020 • Verena Haunschmid, Ethan Manilow, Gerhard Widmer

Deep neural networks (DNNs) are successfully applied in a wide variety of music information retrieval (MIR) tasks but their predictions are usually not interpretable.

Information Retrieval · Music Information Retrieval +2
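As a hedged sketch of the listenable-explanation idea named in this title, the code below perturbs a mixture by dropping random subsets of its separated sources and fits a linear surrogate to the classifier's outputs, so each importance weight corresponds to a component that can actually be listened to. The `classifier` interface, the `sources` decomposition, and the least-squares surrogate are illustrative assumptions rather than the audioLIME library's actual API.

```python
import numpy as np

def listenable_explanation(classifier, sources, target_class, n_samples=200, seed=0):
    """sources: list of source waveforms (np.ndarray) that sum to the mixture.
    classifier(audio) is assumed to return class probabilities.
    Returns one importance weight per separated source for the target class."""
    rng = np.random.default_rng(seed)
    k = len(sources)
    masks = rng.integers(0, 2, size=(n_samples, k))          # which sources are kept in each perturbation
    preds = []
    for m in masks:
        perturbed = sum(s for s, keep in zip(sources, m) if keep)
        if np.isscalar(perturbed):                            # all sources dropped -> silence
            perturbed = np.zeros_like(sources[0])
        preds.append(classifier(perturbed)[target_class])
    # linear surrogate: least-squares fit of predictions on the binary keep/drop masks
    X = np.column_stack([masks, np.ones(n_samples)])
    w, *_ = np.linalg.lstsq(X, np.array(preds), rcond=None)
    return w[:k]    # larger weight -> that (listenable) source mattered more to the prediction
```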

Simultaneous Separation and Transcription of Mixtures with Multiple Polyphonic and Percussive Instruments

no code implementations • 22 Oct 2019 • Ethan Manilow, Prem Seetharaman, Bryan Pardo

We present a single deep learning architecture that can both separate an audio recording of a musical mixture into constituent single-instrument recordings and transcribe these instruments into a human-readable format at the same time, learning a shared musical representation for both tasks.
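A minimal sketch of such a shared-representation, multi-task design is given below: one encoder feeds both a separation head (per-instrument spectrogram masks) and a transcription head (per-instrument piano rolls). The layer choices, sizes, and output formats are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SeparateAndTranscribe(nn.Module):
    """One shared encoder feeding a source-separation head and a transcription head."""
    def __init__(self, n_bins=512, n_instruments=4, n_pitches=88, hidden=256):
        super().__init__()
        self.n_instruments, self.n_pitches = n_instruments, n_pitches
        self.encoder = nn.GRU(n_bins, hidden, num_layers=2,
                              batch_first=True, bidirectional=True)       # shared musical representation
        self.sep_head = nn.Linear(2 * hidden, n_instruments * n_bins)     # per-instrument spectrogram masks
        self.amt_head = nn.Linear(2 * hidden, n_instruments * n_pitches)  # per-instrument piano rolls

    def forward(self, spec):                  # spec: (batch, frames, n_bins) magnitude spectrogram
        h, _ = self.encoder(spec)
        b, t, _ = h.shape
        masks = torch.sigmoid(self.sep_head(h)).view(b, t, self.n_instruments, -1)
        rolls = torch.sigmoid(self.amt_head(h)).view(b, t, self.n_instruments, self.n_pitches)
        sources = masks * spec.unsqueeze(2)   # masked spectrograms, one per instrument
        return sources, rolls                 # train jointly with separation and transcription losses
```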

WHAM!: Extending Speech Separation to Noisy Environments

1 code implementation • 2 Jul 2019 • Gordon Wichern, Joe Antognini, Michael Flynn, Licheng Richard Zhu, Emmett McQuinn, Dwight Crow, Ethan Manilow, Jonathan Le Roux

Recent progress in separating the speech signals from multiple overlapping speakers using a single audio channel has brought us closer to solving the cocktail party problem.

Speech Separation
