Search Results for author: Raza Habib

Found 7 papers, 1 paper with code

Integrated Weak Learning

no code implementations • 19 Jun 2022 • Peter Hayes, Mingtian Zhang, Raza Habib, Jordan Burgess, Emine Yilmaz, David Barber

We introduce a label model that can learn to aggregate weak supervision sources differently for different datapoints and that takes the performance of the end-model into consideration during training.
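
As a rough illustration of datapoint-dependent aggregation (a hypothetical sketch, not the paper's actual label model), the following combines weak votes into a soft label using per-datapoint reliability weights:

```python
import numpy as np

def aggregate(votes, weights, n_classes):
    """Soft label from weak votes, weighted per datapoint.

    votes   : (S,) int array of class indices; -1 means the source abstains
    weights : (S,) float array of source reliabilities for this datapoint
    """
    scores = np.zeros(n_classes)
    for v, w in zip(votes, weights):
        if v >= 0:  # skip abstaining sources
            scores[v] += w
    exp = np.exp(scores - scores.max())  # softmax into a distribution
    return exp / exp.sum()

# Three weak sources vote on a binary task; in the integrated setting
# the weights would come from a learned function of the datapoint.
print(aggregate(np.array([1, 1, -1]), np.array([0.7, 0.2, 0.5]), 2))
```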

Sample Efficient Model Evaluation

no code implementations • 24 Sep 2021 • Emine Yilmaz, Peter Hayes, Raza Habib, Jordan Burgess, David Barber

Labelling data is a major practical bottleneck in training and testing classifiers.

Semi-Supervised Generative Modeling for Controllable Speech Synthesis

no code implementations • ICLR 2020 • Raza Habib, Soroosh Mariooryad, Matt Shannon, Eric Battenberg, RJ Skerry-Ryan, Daisy Stanton, David Kao, Tom Bagby

We present a novel generative model that combines state-of-the-art neural text-to-speech (TTS) with semi-supervised probabilistic latent variable models.

Speech Synthesis

Variational f-divergence Minimization

no code implementations • 27 Jul 2019 • Mingtian Zhang, Thomas Bird, Raza Habib, Tianlin Xu, David Barber

Probabilistic models are often trained by maximum likelihood, which corresponds to minimizing a specific f-divergence between the model and data distribution.
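
As a reminder of the standard correspondence (a textbook derivation, not quoted from the paper): maximizing the expected log-likelihood $\mathbb{E}_{x \sim p_{\mathrm{data}}}[\log p_\theta(x)]$ is equivalent to minimizing $\mathrm{KL}(p_{\mathrm{data}} \| p_\theta) = \mathbb{E}_{x \sim p_{\mathrm{data}}}[\log p_{\mathrm{data}}(x) - \log p_\theta(x)]$, since the entropy term does not depend on $\theta$; this KL is exactly the f-divergence $\mathrm{D}_f(\mathbb{P} \| \mathbb{Q}) = \int q(x)\, f\big(p(x)/q(x)\big)\, \mathrm{d}x$ with $f(t) = t \log t$.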

Image Generation

Auxiliary Variational MCMC

1 code implementation • ICLR 2019 • Raza Habib, David Barber

We introduce Auxiliary Variational MCMC, a novel framework for learning MCMC kernels that combines recent advances in variational inference with insights drawn from traditional auxiliary variable MCMC methods such as Hamiltonian Monte Carlo.
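
For reference, the auxiliary-variable mechanism the abstract alludes to shows up already in a minimal textbook HMC step (a generic sketch on a toy $\mathcal{N}(0,1)$ target, not the paper's learned kernel), where the momentum $p$ plays the role of the auxiliary variable:

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_log_target(x):
    # Gradient of log N(0, 1): d/dx [-x^2 / 2] = -x (toy target only).
    return -x

def hmc_step(x, step=0.1, n_leapfrog=20):
    # Sample the auxiliary momentum variable from N(0, 1).
    p = rng.standard_normal()
    x_new, p_new = x, p
    # Leapfrog integration of the Hamiltonian dynamics.
    p_new += 0.5 * step * grad_log_target(x_new)
    for _ in range(n_leapfrog - 1):
        x_new += step * p_new
        p_new += step * grad_log_target(x_new)
    x_new += step * p_new
    p_new += 0.5 * step * grad_log_target(x_new)
    # Metropolis correction on the joint energy of (x, p).
    h_old = 0.5 * x ** 2 + 0.5 * p ** 2
    h_new = 0.5 * x_new ** 2 + 0.5 * p_new ** 2
    return x_new if rng.random() < np.exp(h_old - h_new) else x

x, samples = 0.0, []
for _ in range(2000):
    x = hmc_step(x)
    samples.append(x)
print(np.mean(samples), np.std(samples))  # should be near 0 and 1
```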

Regression • Variational Inference

Spread Divergence

no code implementations • 21 Nov 2018 • Mingtian Zhang, Peter Hayes, Tom Bird, Raza Habib, David Barber

For distributions $\mathbb{P}$ and $\mathbb{Q}$ with different supports or undefined densities, the divergence $\textrm{D}(\mathbb{P}||\mathbb{Q})$ may not exist.
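
One way to make such divergences well defined (a sketch of the general idea, with a Gaussian kernel assumed purely for illustration) is to "spread" both distributions with the same noise kernel, $\tilde{p}(y) = \int \mathcal{N}(y \,|\, x, \sigma^2)\, p(x)\, \mathrm{d}x$ and likewise for $\tilde{q}$, so that $\tilde{\mathbb{P}}$ and $\tilde{\mathbb{Q}}$ share full support and $\textrm{D}(\tilde{\mathbb{P}}||\tilde{\mathbb{Q}})$ always exists.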

Training generative latent models by variational f-divergence minimization

no code implementations • 27 Sep 2018 • Mingtian Zhang, Thomas Bird, Raza Habib, Tianlin Xu, David Barber

Probabilistic models are often trained by maximum likelihood, which corresponds to minimizing a specific form of f-divergence between the model and data distribution.
