Search Results for author: Douglas Eck

Found 23 papers, 14 papers with code

Deduplicating Training Data Makes Language Models Better

1 code implementation 14 Jul 2021 Katherine Lee, Daphne Ippolito, Andrew Nystrom, Chiyuan Zhang, Douglas Eck, Chris Callison-Burch, Nicholas Carlini

As a result, over 1% of the unprompted output of language models trained on these datasets is copied verbatim from the training data.

Language Modelling
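
As a rough illustration of the deduplication idea (not the paper's method, which uses suffix arrays for exact substrings and MinHash for near-duplicates), exact-match deduplication can be sketched by hashing whitespace-normalized examples; `exact_dedup` is a hypothetical helper:

```python
import hashlib

def exact_dedup(examples):
    # keep only the first occurrence of each normalized example
    seen, kept = set(), []
    for text in examples:
        key = hashlib.sha256(" ".join(text.split()).lower().encode()).hexdigest()
        if key not in seen:
            seen.add(key)
            kept.append(text)
    return kept
```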

Joint Attention for Multi-Agent Coordination and Social Learning

no code implementations 15 Apr 2021 Dennis Lee, Natasha Jaques, Chase Kew, Douglas Eck, Dale Schuurmans, Aleksandra Faust

We then train agents to minimize the difference between the attention weights that they apply to the environment at each timestep, and the attention of other agents.
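
A minimal sketch of such an auxiliary loss, assuming each agent emits a normalized attention map over environment locations (the squared-error form here is an illustration; the paper's exact divergence measure may differ):

```python
import numpy as np

def joint_attention_loss(attn):
    # attn: (n_agents, n_locations) attention weights, one row per agent
    # penalize each agent's deviation from the group's mean attention map
    mean_attn = attn.mean(axis=0, keepdims=True)
    return float(((attn - mean_attn) ** 2).mean())
```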

Emergent Social Learning via Multi-agent Reinforcement Learning

no code implementations 1 Oct 2020 Kamal Ndousse, Douglas Eck, Sergey Levine, Natasha Jaques

We analyze the reasons for this deficiency, and show that by imposing constraints on the training environment and introducing a model-based auxiliary loss we are able to obtain generalized social learning policies which enable agents to: i) discover complex skills that are not learned from single-agent training, and ii) adapt online to novel environments by taking cues from experts present in the new environment.

Imitation Learning · Multi-agent Reinforcement Learning

Toward Better Storylines with Sentence-Level Language Models

1 code implementation ACL 2020 Daphne Ippolito, David Grangier, Douglas Eck, Chris Callison-Burch

We propose a sentence-level language model which selects the next sentence in a story from a finite set of fluent alternatives.

Language Modelling · Sentence Embeddings +1
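
The selection step can be sketched as scoring precomputed candidate-sentence embeddings against a story-context embedding (hypothetical names and shapes; the paper's scoring model is learned):

```python
import numpy as np

def pick_next_sentence(context_emb, candidate_embs):
    # candidate_embs: (n_candidates, d); return the index of the
    # candidate whose embedding scores highest against the context
    scores = candidate_embs @ context_emb
    return int(np.argmax(scores))
```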

Learning to Groove with Inverse Sequence Transformations

no code implementations 14 May 2019 Jon Gillick, Adam Roberts, Jesse Engel, Douglas Eck, David Bamman

We explore models for translating abstract musical ideas (scores, rhythms) into expressive performances using Seq2Seq and recurrent Variational Information Bottleneck (VIB) models.

Quantization

A Learned Representation for Scalable Vector Graphics

2 code implementations ICCV 2019 Raphael Gontijo Lopes, David Ha, Douglas Eck, Jonathon Shlens

Dramatic advances in generative models have resulted in near photographic quality for artificially rendered faces, animals and other objects in the natural world.

Vector Graphics

Counterpoint by Convolution

2 code implementations 18 Mar 2019 Cheng-Zhi Anna Huang, Tim Cooijmans, Adam Roberts, Aaron Courville, Douglas Eck

Machine learning models of music typically break up the task of composition into a chronological process, composing a piece of music in a single pass from beginning to end.

Music Modeling

Music Transformer

6 code implementations ICLR 2019 Cheng-Zhi Anna Huang, Ashish Vaswani, Jakob Uszkoreit, Noam Shazeer, Ian Simon, Curtis Hawthorne, Andrew M. Dai, Matthew D. Hoffman, Monica Dinculescu, Douglas Eck

This is impractical for long sequences such as musical compositions since their memory complexity for intermediate relative information is quadratic in the sequence length.

Music Modeling
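
The paper's remedy is a "skewing" trick that recovers relative-position logits from an (L, L) matrix instead of materializing an (L, L, D) tensor; a numpy sketch of the skew step (shapes assumed as in the paper):

```python
import numpy as np

def skew(rel):
    # rel: (L, L) = Q @ Er.T, where column j holds the logit for relative
    # distance j - (L - 1); returns S with S[i, j] = logit for distance
    # j - i (the upper triangle holds padding, masked in causal attention)
    L = rel.shape[0]
    padded = np.pad(rel, ((0, 0), (1, 0)))  # (L, L+1), zero column on the left
    return padded.reshape(L + 1, L)[1:]     # (L, L)
```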

This Time with Feeling: Learning Expressive Musical Performance

2 code implementations 10 Aug 2018 Sageev Oore, Ian Simon, Sander Dieleman, Douglas Eck, Karen Simonyan

Music generation has generally been focused on either creating scores or interpreting them.

Music Generation

Learning a Latent Space of Multitrack Measures

1 code implementation 1 Jun 2018 Ian Simon, Adam Roberts, Colin Raffel, Jesse Engel, Curtis Hawthorne, Douglas Eck

Discovering and exploring the underlying structure of multi-instrumental music using learning-based approaches remains an open problem.

A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music

5 code implementations ICML 2018 Adam Roberts, Jesse Engel, Colin Raffel, Curtis Hawthorne, Douglas Eck

The Variational Autoencoder (VAE) has proven to be an effective model for producing semantically meaningful latent representations for natural data.
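
For reference, the KL term that regularizes a VAE's diagonal-Gaussian posterior toward the unit-Gaussian prior has a closed form (a generic VAE identity, not code from the paper):

```python
import numpy as np

def gaussian_kl(mu, logvar):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over dimensions
    return float(0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar))
```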

Learning via social awareness: Improving a deep generative sketching model with facial feedback

no code implementations 13 Feb 2018 Natasha Jaques, Jennifer McCleary, Jesse Engel, David Ha, Fred Bertsch, Rosalind Picard, Douglas Eck

We use a Latent Constraints GAN (LC-GAN) to learn from the facial feedback of a small group of viewers, by optimizing the model to produce sketches that it predicts will lead to more positive facial expressions.

Onsets and Frames: Dual-Objective Piano Transcription

1 code implementation 30 Oct 2017 Curtis Hawthorne, Erich Elsen, Jialin Song, Adam Roberts, Ian Simon, Colin Raffel, Jesse Engel, Sageev Oore, Douglas Eck

We advance the state of the art in polyphonic piano music transcription by using a deep convolutional and recurrent neural network which is trained to jointly predict onsets and frames.

Music Transcription
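
The "dual-objective" idea can be sketched as two binary cross-entropy heads trained jointly (a simplification: in the paper the frame head is additionally conditioned on the onset head's output; names here are hypothetical):

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    # element-wise binary cross-entropy, averaged
    pred = np.clip(pred, eps, 1 - eps)
    return -(target * np.log(pred) + (1 - target) * np.log(1 - pred)).mean()

def dual_objective_loss(onset_pred, frame_pred, onset_true, frame_true):
    # joint loss over the onset head and the frame head
    return float(bce(onset_pred, onset_true) + bce(frame_pred, frame_true))
```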

A Neural Representation of Sketch Drawings

17 code implementations ICLR 2018 David Ha, Douglas Eck

We present sketch-rnn, a recurrent neural network (RNN) able to construct stroke-based drawings of common objects.
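
sketch-rnn consumes drawings as sequences of pen offsets in the "stroke-5" format, (Δx, Δy, p_down, p_up, p_end); a toy converter from the 3-element offset format (helper name hypothetical):

```python
def to_stroke5(strokes3):
    # strokes3: list of (dx, dy, pen_lifted) triples; pen state
    # becomes a one-hot over (pen down, pen up, end of sketch)
    out = [[dx, dy, 1 - lift, lift, 0] for dx, dy, lift in strokes3]
    out.append([0, 0, 0, 0, 1])  # end-of-sketch token
    return out
```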

Neural Audio Synthesis of Musical Notes with WaveNet Autoencoders

5 code implementations ICML 2017 Jesse Engel, Cinjon Resnick, Adam Roberts, Sander Dieleman, Douglas Eck, Karen Simonyan, Mohammad Norouzi

Generative models in vision have seen rapid progress due to algorithmic improvements and the availability of high-quality image datasets.

Online and Linear-Time Attention by Enforcing Monotonic Alignments

2 code implementations ICML 2017 Colin Raffel, Minh-Thang Luong, Peter J. Liu, Ron J. Weiss, Douglas Eck

Recurrent neural network models with an attention mechanism have proven to be extremely effective on a wide variety of sequence-to-sequence problems.

Machine Translation · Sentence Summarization +1
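
At test time, hard monotonic attention can be decoded online: scan forward from the previously attended memory index and stop at the first position whose selection probability exceeds 0.5 (a greedy sketch; training uses the soft, expected version):

```python
import numpy as np

def monotonic_attend(p_choose):
    # p_choose: (out_len, mem_len), p_choose[i, j] = prob. of stopping at
    # memory index j for output step i; the index never moves backward
    out_len, mem_len = p_choose.shape
    attended, j = [], 0
    for i in range(out_len):
        while j < mem_len - 1 and p_choose[i, j] <= 0.5:
            j += 1
        attended.append(j)
    return attended
```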

Sequence Tutor: Conservative Fine-Tuning of Sequence Generation Models with KL-control

no code implementations ICML 2017 Natasha Jaques, Shixiang Gu, Dzmitry Bahdanau, José Miguel Hernández-Lobato, Richard E. Turner, Douglas Eck

This paper proposes a general method for improving the structure and quality of sequences generated by a recurrent neural network (RNN), while maintaining information originally learned from data, as well as sample diversity.
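
The KL-control objective can be sketched as a per-step reward mixing the task reward with a penalty for drifting from the pretrained prior (a simplified form; the paper derives several variants, including Ψ- and G-learning):

```python
def kl_regularized_reward(task_reward, log_p_policy, log_p_prior, c=1.0):
    # reward shaping: improve the task reward while staying close to
    # the data-trained prior model of sequences
    return task_reward + c * (log_p_prior - log_p_policy)
```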

An Infinite Factor Model Hierarchy Via a Noisy-Or Mechanism

no code implementations NeurIPS 2009 Douglas Eck, Yoshua Bengio, Aaron C. Courville

The Indian Buffet Process is a Bayesian nonparametric approach that models objects as arising from an infinite number of latent factors.
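
The IBP's generative process is easy to simulate: customer n takes each existing dish k with probability m_k/n and then samples Poisson(α/n) new dishes (a direct simulation sketch, unrelated to the paper's inference code):

```python
import numpy as np

def sample_ibp(n_customers, alpha, rng=None):
    # Indian Buffet Process: sample a binary customer-by-dish matrix
    # with an unbounded number of dish (latent factor) columns
    if rng is None:
        rng = np.random.default_rng(0)
    dish_counts = []  # times each existing dish has been taken
    rows = []
    for n in range(1, n_customers + 1):
        # take each existing dish k with probability m_k / n
        row = [int(rng.random() < m / n) for m in dish_counts]
        for k, taken in enumerate(row):
            dish_counts[k] += taken
        # then sample Poisson(alpha / n) brand-new dishes
        new = rng.poisson(alpha / n)
        row.extend([1] * new)
        dish_counts.extend([1] * new)
        rows.append(row)
    Z = np.zeros((n_customers, len(dish_counts)), dtype=int)
    for i, row in enumerate(rows):
        Z[i, :len(row)] = row
    return Z
```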
