Search Results for author: Sebastian Ewert

Found 10 papers, 7 papers with code

Towards Robust Unsupervised Disentanglement of Sequential Data -- A Case Study Using Music Audio

1 code implementation • 12 May 2022 • Yin-Jyun Luo, Sebastian Ewert, Simon Dixon

In this paper, we show that the vanilla DSAE is sensitive to the choice of model architecture and the capacity of the dynamic latent variables, and is prone to collapsing the static latent variable.

Data Augmentation · Disentanglement +1
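The snippet above refers to a disentangled sequential autoencoder (DSAE), which factorises a sequence's representation into one static code (time-invariant content, e.g. timbre) and per-frame dynamic codes (e.g. pitch). As a hedged toy sketch (not the paper's model, and with names of our own choosing), the split can be mimicked with a trivial mean/residual "encoder":

```python
import numpy as np

# Toy illustration of a DSAE-style factorisation (an assumption, not the
# paper's architecture): the static code is the temporal mean of the frames,
# the dynamic codes are the per-frame residuals.

def split_static_dynamic(frames: np.ndarray):
    """frames: (T, d) feature sequence -> (static (d,), dynamic (T, d))."""
    static = frames.mean(axis=0)   # time-invariant part
    dynamic = frames - static      # time-varying part
    return static, dynamic

def reconstruct(static: np.ndarray, dynamic: np.ndarray) -> np.ndarray:
    """Recombine the two codes; static broadcasts over all frames."""
    return static + dynamic
```

The "collapse of the static latent variable" the abstract warns about corresponds, in this toy picture, to the static code carrying no information at all, so the dynamic codes end up encoding everything.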

A Lightweight Instrument-Agnostic Model for Polyphonic Note Transcription and Multipitch Estimation

1 code implementation • 18 Mar 2022 • Rachel M. Bittner, Juan José Bosch, David Rubinstein, Gabriel Meseguer-Brocal, Sebastian Ewert

Despite its simplicity, benchmark results show our system's note estimation to be substantially better than a comparable baseline, and its frame-level accuracy to be only marginally below that of specialized state-of-the-art AMT systems.

Music Transcription
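The distinction above between note estimation and frame-level accuracy hinges on turning frame-wise pitch activations into note events. As a hedged sketch (not the paper's post-processing; the function and threshold are our own), one simple scheme thresholds each pitch's activation and merges runs of consecutive active frames into (pitch, onset, offset) triples:

```python
import numpy as np

# Illustrative frame-to-note conversion (an assumption, not the paper's
# method): threshold a (n_pitches, n_frames) posteriorgram and group runs
# of consecutive active frames into note events.

def frames_to_notes(posteriors: np.ndarray, threshold: float = 0.5):
    """posteriors: (n_pitches, n_frames) -> list of (pitch, onset, offset)."""
    notes = []
    for pitch, row in enumerate(posteriors):
        active = row >= threshold
        start = None
        for t, is_on in enumerate(active):
            if is_on and start is None:
                start = t                      # note onset
            elif not is_on and start is not None:
                notes.append((pitch, start, t))  # note ended at frame t
                start = None
        if start is not None:                  # note ran to the last frame
            notes.append((pitch, start, len(active)))
    return notes
```

A system can score well frame-by-frame yet fragment notes badly (or vice versa), which is why the two metrics are reported separately.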

Seq-U-Net: A One-Dimensional Causal U-Net for Efficient Sequence Modelling

1 code implementation • 14 Nov 2019 • Daniel Stoller, Mi Tian, Sebastian Ewert, Simon Dixon

In comparison to TCN and Wavenet, our network consistently saves memory and computation time, with speed-ups for training and inference of over 4x in the audio generation experiment in particular, while achieving a comparable performance in all tasks.

Audio Generation
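The building block Seq-U-Net shares with the TCN and Wavenet baselines mentioned above is the causal 1-D convolution, whose output at time t depends only on inputs at times ≤ t. As a hedged sketch (not the Seq-U-Net code), causality can be obtained by left-padding the input with (kernel_size − 1) zeros:

```python
import numpy as np

# Minimal causal 1-D convolution (illustrative only): pad the past, never
# the future, so output t sees only samples up to and including t.

def causal_conv1d(x: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """x: (T,) signal, kernel: (k,) filter -> (T,) causal output."""
    k = len(kernel)
    padded = np.concatenate([np.zeros(k - 1), x])  # left-pad with zeros
    # kernel[0] weights the oldest sample in each window, kernel[-1] the
    # current sample
    return np.array([padded[t:t + k] @ kernel for t in range(len(x))])
```

Stacking such layers with downsampling and upsampling paths, U-Net style, is what lets the model cover long contexts with less computation than a plain dilated stack.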

Training Generative Adversarial Networks from Incomplete Observations using Factorised Discriminators

1 code implementation • ICLR 2020 • Daniel Stoller, Sebastian Ewert, Simon Dixon

We apply our method to image generation, image segmentation and audio source separation, and obtain improved performance over a standard GAN when additional incomplete training examples are available.

Audio Source Separation · Image Generation +1
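The key to using incomplete examples, as the abstract sketches it, is factorising the discriminator over the parts of an observation. As a hedged toy sketch (our own formulation, not the paper's exact objective): split a discriminator over a pair (x, y) into an x-only factor, a y-only factor, and a joint factor, and sum only the factors that are computable for a given example:

```python
# Illustrative factorised-discriminator scoring (an assumption, not the
# paper's code): an incomplete example with y missing still contributes a
# gradient through the x-only factor.

def discriminator_logit(x_logit: float, y_logit: float, joint_logit: float,
                        have_x: bool = True, have_y: bool = True) -> float:
    """Sum the factor logits that are available for this example."""
    total = 0.0
    if have_x:
        total += x_logit
    if have_y:
        total += y_logit
    if have_x and have_y:
        total += joint_logit   # dependency term needs both parts
    return total
```

A standard monolithic discriminator would have to discard any example missing one of its inputs; the factorised form degrades gracefully instead.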

End-to-end Lyrics Alignment for Polyphonic Music Using an Audio-to-Character Recognition Model

1 code implementation • 18 Feb 2019 • Daniel Stoller, Simon Durand, Sebastian Ewert

Time-aligned lyrics can enrich the music listening experience by enabling applications such as karaoke, text-based song retrieval, and intra-song navigation.

Wave-U-Net: A Multi-Scale Neural Network for End-to-End Audio Source Separation

9 code implementations • 8 Jun 2018 • Daniel Stoller, Sebastian Ewert, Simon Dixon

Models for audio source separation usually operate on the magnitude spectrum, which ignores phase information and makes separation performance dependent on hyper-parameters for the spectral front-end.

Audio Source Separation · Music Source Separation
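The phase-blindness mentioned in the abstract is easy to demonstrate: two clearly different waveforms can have identical magnitude spectra, so a magnitude-only model cannot tell them apart. This hedged illustration (not the Wave-U-Net code) uses a circular shift, which changes only the phase of each FFT bin:

```python
import numpy as np

# Demonstration that the magnitude spectrum discards phase: a circularly
# shifted copy of a signal has an identical magnitude spectrum even though
# the waveforms differ.  Wave-U-Net sidesteps this by working on the raw
# waveform end-to-end.

rng = np.random.default_rng(0)
x = rng.normal(size=256)
x_shifted = np.roll(x, 17)             # same content, delayed in time

mag = np.abs(np.fft.rfft(x))
mag_shifted = np.abs(np.fft.rfft(x_shifted))
```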

Adversarial Semi-Supervised Audio Source Separation applied to Singing Voice Extraction

3 code implementations • 31 Oct 2017 • Daniel Stoller, Sebastian Ewert, Simon Dixon

Based on this idea, we drive the separator towards outputs deemed as realistic by discriminator networks that are trained to tell apart real from separator samples.

Audio Source Separation · Data Augmentation +1

Structured Dropout for Weak Label and Multi-Instance Learning and Its Application to Score-Informed Source Separation

no code implementations • 15 Sep 2016 • Sebastian Ewert, Mark B. Sandler

Many success stories involving deep neural networks are instances of supervised learning, where available labels power gradient-based learning methods.

Representation Learning
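Standard dropout zeroes individual units; the structured variant named in the title zeroes whole groups at once, which in a score-informed setting could correspond to an entire pitch band of a (pitch × time) activation matrix. This is a hedged sketch under that interpretation, not the paper's implementation:

```python
import numpy as np

# Illustrative structured dropout (an assumption, not the paper's code):
# zero entire rows of an activation matrix, optionally rescaling the kept
# rows as in inverted dropout so expected activation is preserved.

def structured_dropout(acts: np.ndarray, drop_rows, keep_scale: bool = True):
    """Zero the given rows of acts; optionally rescale the remaining rows."""
    out = acts.copy()
    out[list(drop_rows), :] = 0.0
    if keep_scale:
        keep_frac = 1.0 - len(drop_rows) / acts.shape[0]
        if keep_frac > 0:
            out /= keep_frac
    return out
```

Dropping structured groups rather than scattered units is what ties the regulariser to the weak, group-level labels (e.g. "this pitch occurs somewhere in this segment") that the paper targets.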
