Many real-world problems, including multi-speaker text-to-speech synthesis, can greatly benefit from the ability to meta-learn large models with only a few task-specific components.
Brendan Shillingford, Yannis Assael, Matthew W. Hoffman, Thomas Paine, Cían Hughes, Utsav Prabhu, Hank Liao, Hasim Sak, Kanishka Rao, Lorrayne Bennett, Marie Mulville, Ben Coppin, Ben Laurie, Andrew Senior, Nando de Freitas
To achieve this, we constructed the largest existing visual speech recognition dataset, consisting of pairs of text and video clips of faces speaking (3,886 hours of video).
Ranked #7 for lipreading on LRS3-TED (using extra training data)
This work adopts the very successful distributional perspective on reinforcement learning and adapts it to the continuous control setting.
This paper introduces the Intentional Unintentional (IU) agent.
Two of the primary barriers to its adoption are an inability to scale to larger problems and a limited ability to generalize to new tasks.
We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent.
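The idea above — replacing a hand-designed update rule with a recurrent network applied coordinatewise to gradients — can be sketched as follows. This is a minimal interface sketch only: the cell's weights here are random, whereas in the actual approach they are meta-learned by gradient descent over many synthetic optimizee functions. The hidden size and the quadratic optimizee are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Coordinatewise "RNN optimizer": a tiny recurrent cell mapping a
# parameter's gradient (plus a per-coordinate hidden state) to an
# update step. Weights are random here; in practice they would be
# meta-learned on a distribution of simple synthetic functions.
H = 8                                  # hidden-state size (assumption)
W_g = rng.normal(0, 0.1, (H, 1))       # gradient -> hidden
W_h = rng.normal(0, 0.1, (H, H))       # hidden -> hidden
w_out = rng.normal(0, 0.1, (1, H))     # hidden -> update step

def rnn_step(grad, h):
    """One recurrent update: new hidden state and parameter step."""
    h_new = np.tanh(W_g @ grad + W_h @ h)
    return h_new, (w_out @ h_new).item()

# Toy optimizee: a quadratic f(x) = ||x - target||^2
target = np.array([1.0, -2.0])
x = np.zeros(2)
hidden = [np.zeros((H, 1)) for _ in range(2)]  # one state per coordinate

for _ in range(20):
    grad = 2.0 * (x - target)
    for i in range(2):                 # apply the same cell to each coordinate
        hidden[i], step = rnn_step(np.array([[grad[i]]]), hidden[i])
        x[i] += step
```

Sharing one small cell across all coordinates is what lets the learned optimizer scale to optimizees with many parameters.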
The move from hand-designed features to learned features in machine learning has been wildly successful.
Of particular interest to us is to efficiently solve problems with decoupled constraints, in which subsets of the objective and constraint functions may be evaluated independently.
Unknown constraints arise in many types of expensive black-box optimization problems.
However, the performance of a Bayesian optimization method very much depends on its exploration strategy, i.e., the choice of acquisition function, and it is not clear a priori which choice will result in superior performance.
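To make the role of the acquisition function concrete, here is a minimal Bayesian optimization loop: a Gaussian-process surrogate is fit to the evaluations so far, an acquisition function scores candidate points, and the maximizer is evaluated next. Expected improvement is used purely as a simple illustrative choice (not the method proposed here); the objective, kernel length-scale, and grid of candidates are assumptions for the sketch.

```python
import numpy as np
from scipy.stats import norm

def f(x):                      # toy stand-in for an expensive black-box objective
    return np.sin(3 * x) + 0.5 * x

def gp_posterior(X, y, Xs, ls=0.5, noise=1e-6):
    """GP posterior mean/std under an RBF kernel (hyperparameters fixed for simplicity)."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(X, Xs)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y
    var = 1.0 - np.sum(Ks * (Kinv @ Ks), axis=0)   # k(x,x) = 1 for this kernel
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best):
    """EI for minimization: expected amount by which a point beats the incumbent."""
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

X = np.array([-1.0, 0.0, 1.0])        # initial evaluations
y = f(X)
grid = np.linspace(-2, 2, 200)        # candidate points

for _ in range(5):                    # BO loop: fit surrogate, score, evaluate
    mu, sigma = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.append(X, x_next)
    y = np.append(y, f(x_next))
```

Swapping `expected_improvement` for a different scoring rule changes the exploration strategy while the rest of the loop stays fixed, which is exactly why the choice of acquisition function matters so much.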
We propose a novel information-theoretic approach for Bayesian optimization called Predictive Entropy Search (PES).
This problem is also known as fixed-budget best arm identification in the multi-armed bandit literature.
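In fixed-budget best arm identification, the learner has a hard cap on the total number of arm pulls and must then recommend a single arm. A naive baseline, sketched below, spreads the budget uniformly and recommends the arm with the best empirical mean; the arm means and budget are illustrative assumptions, and smarter schemes (e.g. successive rejects) instead reallocate pulls toward promising arms.

```python
import numpy as np

rng = np.random.default_rng(2)

means = [0.2, 0.5, 0.45, 0.3]          # true arm means (unknown to the learner)
budget = 400                            # fixed total number of pulls

def pull(arm):
    """Draw a Bernoulli reward from the chosen arm."""
    return float(rng.random() < means[arm])

# Uniform allocation: cycle through the arms until the budget is spent,
# then recommend the arm with the highest empirical mean.
n_arms = len(means)
counts = np.zeros(n_arms)
totals = np.zeros(n_arms)
for t in range(budget):
    arm = t % n_arms
    totals[arm] += pull(arm)
    counts[arm] += 1
best_arm = int(np.argmax(totals / counts))
```

With 100 pulls per arm, arms with means 0.5 and 0.45 are still hard to separate, which is precisely the failure mode that motivates adaptive budget allocation.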