Search Results for author: Rewon Child

Found 13 papers, 8 papers with code

Very Deep VAEs Generalize Autoregressive Models and Can Outperform Them on Images

8 code implementations • ICLR 2021 • Rewon Child

We present a hierarchical VAE that, for the first time, generates samples quickly while outperforming the PixelCNN in log-likelihood on all natural image benchmarks.

Ranked #2 on Image Generation on FFHQ 1024 x 1024 (bits/dimension metric)

Image Generation
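
A minimal sketch of the idea behind the entry above: a two-level hierarchical VAE in PyTorch in which the top latent conditions the prior of the layer below it. This is a generic illustration with hypothetical layer sizes, not the paper's VDVAE architecture.

import torch
import torch.nn as nn

class TwoLevelVAE(nn.Module):
    def __init__(self, x_dim=784, z1_dim=32, z2_dim=16, h_dim=256):
        super().__init__()
        # Bottom-up inference: x -> q(z1|x) -> q(z2|z1)
        self.enc_z1 = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, 2 * z1_dim))
        self.enc_z2 = nn.Sequential(nn.Linear(z1_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, 2 * z2_dim))
        # Top-down generation: p(z1|z2) -> p(x|z1)
        self.dec_z1 = nn.Sequential(nn.Linear(z2_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, 2 * z1_dim))
        self.dec_x = nn.Sequential(nn.Linear(z1_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, x_dim))

    @staticmethod
    def sample(mu, logvar):
        # Reparameterization trick
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def forward(self, x):
        mu1, logvar1 = self.enc_z1(x).chunk(2, dim=-1)
        z1 = self.sample(mu1, logvar1)
        mu2, logvar2 = self.enc_z2(z1).chunk(2, dim=-1)
        z2 = self.sample(mu2, logvar2)
        # The prior over z1 is conditioned on the latent level above it
        prior_mu1, prior_logvar1 = self.dec_z1(z2).chunk(2, dim=-1)
        x_logits = self.dec_x(z1)
        return x_logits, (mu1, logvar1, prior_mu1, prior_logvar1), (mu2, logvar2)
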

Generative Pretraining from Pixels

4 code implementations • ICML 2020 • Mark Chen, Alec Radford, Rewon Child, Jeff Wu, Heewoo Jun, Prafulla Dhariwal, David Luan, Ilya Sutskever

Inspired by progress in unsupervised representation learning for natural language, we examine whether similar models can learn useful representations for images.

Ranked #15 on Image Classification on STL-10 (using extra training data)

Representation Learning • Self-Supervised Image Classification

Language Models are Unsupervised Multitask Learners

15 code implementations • Preprint 2019 • Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever

Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on task-specific datasets.

Ranked #1 on Language Modelling on enwik8 (using extra training data)

Common Sense Reasoning • Coreference Resolution +10

Exploring Neural Transducers for End-to-End Speech Recognition

no code implementations • 24 Jul 2017 • Eric Battenberg, Jitong Chen, Rewon Child, Adam Coates, Yashesh Gaur, Yi Li, Hairong Liu, Sanjeev Satheesh, David Seetapun, Anuroop Sriram, Zhenyao Zhu

In this work, we perform an empirical comparison among the CTC, RNN-Transducer, and attention-based Seq2Seq models for end-to-end speech recognition.

Language Modelling • Speech Recognition +1
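
As context for the entry above, a short PyTorch sketch of one of the three end-to-end criteria compared (CTC), using the built-in nn.CTCLoss; the batch size, sequence lengths, and vocabulary size here are hypothetical stand-ins.

import torch
import torch.nn as nn

T, N, C = 50, 4, 29                                   # time steps, batch size, characters (index 0 = blank)
log_probs = torch.randn(T, N, C).log_softmax(dim=-1)  # stand-in acoustic model output
targets = torch.randint(1, C, (N, 12))                # toy label sequences
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 12, dtype=torch.long)

ctc = nn.CTCLoss(blank=0)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
print(loss.item())
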

Reducing Bias in Production Speech Models

no code implementations • 11 May 2017 • Eric Battenberg, Rewon Child, Adam Coates, Christopher Fougner, Yashesh Gaur, Jiaji Huang, Heewoo Jun, Ajay Kannan, Markus Kliegl, Atul Kumar, Hairong Liu, Vinay Rao, Sanjeev Satheesh, David Seetapun, Anuroop Sriram, Zhenyao Zhu

Replacing hand-engineered pipelines with end-to-end deep learning systems has enabled strong results in applications like speech and object recognition.

Object Recognition

Active Learning for Speech Recognition: the Power of Gradients

no code implementations • 10 Dec 2016 • Jiaji Huang, Rewon Child, Vinay Rao, Hairong Liu, Sanjeev Satheesh, Adam Coates

For speech recognition, confidence scores and other likelihood-based active learning methods have been shown to be effective.

Active Learning • Informativeness +2
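
A minimal sketch of the likelihood-based baseline mentioned in the abstract above (least-confidence selection); the paper's own criterion is gradient-based, and the function name and toy posteriors here are hypothetical.

import numpy as np

def least_confidence_select(probs, k):
    # probs: (num_unlabeled, num_classes) model posteriors.
    # Return indices of the k samples whose top predicted probability is lowest.
    confidence = probs.max(axis=1)
    return np.argsort(confidence)[:k]

# Toy posteriors for five unlabeled utterances; pick the two least-confident to label next.
posteriors = np.array([[0.9, 0.1], [0.55, 0.45], [0.7, 0.3], [0.51, 0.49], [0.8, 0.2]])
print(least_confidence_select(posteriors, k=2))  # indices of the two samples closest to chance
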
