Search Results for author: Pritish Chandna

Found 9 papers, 6 papers with code

A Deep Learning Based Analysis-Synthesis Framework For Unison Singing

1 code implementation • 21 Sep 2020 • Pritish Chandna, Helena Cuesta, Emilia Gómez

Unison singing is the name given to an ensemble of singers simultaneously singing the same melody and lyrics.

Deep Learning Based Source Separation Applied To Choir Ensembles

no code implementations • 17 Aug 2020 • Darius Petermann, Pritish Chandna, Helena Cuesta, Jordi Bonada, Emilia Gomez

However, most of the research has focused on a typical case, which consists of separating vocal, percussion and bass sources from a mixture, each of which has a distinct spectral structure.

Content Based Singing Voice Extraction From a Musical Mixture

1 code implementation • 12 Feb 2020 • Pritish Chandna, Merlijn Blaauw, Jordi Bonada, Emilia Gomez

We present a deep learning-based methodology for extracting the singing voice signal from a musical mixture based on the underlying linguistic content.

Knowledge Distillation
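
The "Knowledge Distillation" tag above refers to a teacher–student training setup. Purely as a generic illustration (this is not the authors' code; every identifier and hyperparameter below is an assumption), a typical distillation objective combines a soft-target term against the teacher's outputs with a hard-label cross-entropy term:

```python
# Minimal, generic knowledge-distillation loss (PyTorch sketch).
# Hypothetical tensors: `student_logits` and `teacher_logits` share shape
# (batch, num_classes); `labels` holds ground-truth class indices.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    # Soft targets: KL divergence between temperature-scaled distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard targets: ordinary cross-entropy against the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```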

Neural Percussive Synthesis Parameterised by High-Level Timbral Features

1 code implementation • 25 Nov 2019 • António Ramires, Pritish Chandna, Xavier Favory, Emilia Gómez, Xavier Serra

We present a deep neural network-based methodology for synthesising percussive sounds with control over high-level timbral characteristics of the sounds.

Vocal Bursts Intensity Prediction
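
For readers unfamiliar with synthesis parameterised by high-level timbral features, here is a deliberately minimal sketch: a decoder network that maps a small vector of timbral descriptors directly to a waveform. The layer sizes, feature count and sample rate are illustrative assumptions, not the architecture used in the paper.

```python
# Toy decoder mapping high-level timbral descriptors (e.g. brightness,
# hardness, depth) to a short audio waveform. All sizes are illustrative.
import torch
import torch.nn as nn

class TimbralDecoder(nn.Module):
    def __init__(self, n_features=7, n_samples=16000):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 256), nn.ReLU(),
            nn.Linear(256, 1024), nn.ReLU(),
            nn.Linear(1024, n_samples), nn.Tanh(),  # waveform in [-1, 1]
        )

    def forward(self, timbral_features):      # (batch, n_features)
        return self.net(timbral_features)     # (batch, n_samples)

# Usage: one second of audio at 16 kHz from a 7-dimensional control vector.
decoder = TimbralDecoder()
controls = torch.rand(1, 7)
waveform = decoder(controls)
```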

A Framework for Multi-f0 Modeling in SATB Choir Recordings

no code implementations • 10 Apr 2019 • Helena Cuesta, Emilia Gómez, Pritish Chandna

We observe, however, that the scenario of multiple singers for each choir part (i.e., unison singing) is far more challenging.

WGANSing: A Multi-Voice Singing Voice Synthesizer Based on the Wasserstein-GAN

2 code implementations • 26 Mar 2019 • Pritish Chandna, Merlijn Blaauw, Jordi Bonada, Emilia Gomez

We present a deep neural network-based singing voice synthesizer, inspired by the Deep Convolutional Generative Adversarial Networks (DCGAN) architecture and optimized using the Wasserstein-GAN algorithm.

Sound • Audio and Speech Processing
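
The abstract names two standard ingredients: a DCGAN-style generator and optimisation with the Wasserstein-GAN algorithm. Below is a generic sketch of the original WGAN objectives with weight clipping, included only to illustrate the algorithm named in the entry; it is not the WGANSing implementation, and all modules and shapes are placeholders.

```python
# Generic WGAN objectives with weight clipping (PyTorch sketch); not the
# WGANSing implementation - the critic/generator modules are placeholders.
import torch

def critic_loss(critic, real, fake):
    # Critic maximises D(real) - D(fake); we minimise the negation.
    return -(critic(real).mean() - critic(fake).mean())

def generator_loss(critic, fake):
    # Generator maximises D(fake), i.e. minimises -D(fake).
    return -critic(fake).mean()

def clip_critic_weights(critic, c=0.01):
    # Original WGAN enforces a Lipschitz constraint by clipping weights.
    with torch.no_grad():
        for p in critic.parameters():
            p.clamp_(-c, c)
```

In practice the critic is updated several times per generator step, and later WGAN variants replace weight clipping with a gradient penalty.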

WGANSing

1 code implementation • Interspeech 2019 • Pritish Chandna, Merlijn Blaauw

We present a deep neural network-based singing voice synthesizer, inspired by the Deep Convolutional Generative Adversarial Networks (DCGAN) architecture and optimized using the Wasserstein-GAN algorithm.

Acoustic Modelling

Deep Learning for Singing Processing: Achievements, Challenges and Impact on Singers and Listeners

no code implementations • 9 Jul 2018 • Emilia Gómez, Merlijn Blaauw, Jordi Bonada, Pritish Chandna, Helena Cuesta

This paper summarizes some recent advances on a set of tasks related to the processing of singing using state-of-the-art deep learning techniques.
