Search Results for author: Felix Hill

Found 40 papers, 16 papers with code

BEAMetrics: A Benchmark for Language Generation Evaluation Evaluation

1 code implementation 18 Oct 2021 Thomas Scialom, Felix Hill

There is currently no simple, unified way to compare, analyse or evaluate metrics across a representative set of tasks.

Text Generation

Multimodal Few-Shot Learning with Frozen Language Models

no code implementations NeurIPS 2021 Maria Tsimpoukelli, Jacob Menick, Serkan Cabi, S. M. Ali Eslami, Oriol Vinyals, Felix Hill

When trained at sufficient scale, auto-regressive language models exhibit the notable ability to learn a new language task after being prompted with just a few examples.

Few-Shot Learning Language Modelling +2

Towards mental time travel: a hierarchical memory for reinforcement learning agents

3 code implementations NeurIPS 2021 Andrew Kyle Lampinen, Stephanie C. Y. Chan, Andrea Banino, Felix Hill

Agents with common memory architectures struggle to recall and integrate across multiple timesteps of a past event, or even to recall the details of a single timestep that is followed by distractor tasks.

Meta-Learning

Neural spatio-temporal reasoning with object-centric self-supervised learning

no code implementations 1 Jan 2021 David Ding, Felix Hill, Adam Santoro, Matthew Botvinick

Transformer-based language models have proved capable of rudimentary symbolic reasoning, underlining the effectiveness of applying self-attention computations to sets of discrete entities.

Language Modelling Self-Supervised Learning

Attention over learned object embeddings enables complex visual reasoning

1 code implementation NeurIPS 2021 David Ding, Felix Hill, Adam Santoro, Malcolm Reynolds, Matt Botvinick

Neural networks have achieved success in a wide array of perceptual tasks but often fail at tasks involving both perception and higher-level reasoning.

Visual Reasoning

Grounded Language Learning Fast and Slow

1 code implementation ICLR 2021 Felix Hill, Olivier Tieleman, Tamara von Glehn, Nathaniel Wong, Hamza Merzic, Stephen Clark

Recent work has shown that large text-based neural language models, trained with conventional supervised learning objectives, acquire a surprising propensity for few- and one-shot learning.

Grounded language learning Meta-Learning +1

Probing Emergent Semantics in Predictive Agents via Question Answering

no code implementations ICML 2020 Abhishek Das, Federico Carnevale, Hamza Merzic, Laura Rimell, Rosalia Schneider, Josh Abramson, Alden Hung, Arun Ahuja, Stephen Clark, Gregory Wayne, Felix Hill

Recent work has shown how predictive modeling can endow agents with rich knowledge of their surroundings, improving their ability to act in complex environments.

Question Answering

Human Instruction-Following with Deep Reinforcement Learning via Transfer-Learning from Text

no code implementations 19 May 2020 Felix Hill, Sona Mokra, Nathaniel Wong, Tim Harley

Here, we propose a conceptually simple method for training instruction-following agents with deep RL that are robust to natural human instructions.

Language Modelling Representation Learning +1

Environmental drivers of systematicity and generalization in a situated agent

no code implementations ICLR 2020 Felix Hill, Andrew Lampinen, Rosalia Schneider, Stephen Clark, Matthew Botvinick, James L. McClelland, Adam Santoro

The question of whether deep neural networks are good at generalising beyond their immediate training experience is of critical importance for learning-based approaches to AI.

Unity

Robust Instruction-Following in a Situated Agent via Transfer-Learning from Text

no code implementations 25 Sep 2019 Felix Hill, Sona Mokra, Nathaniel Wong, Tim Harley

We address this issue by integrating language encoders that are pretrained on large text corpora into a situated, instruction-following agent.

Representation Learning Transfer Learning

Higher-order Comparisons of Sentence Encoder Representations

no code implementations IJCNLP 2019 Mostafa Abdou, Artur Kulmizev, Felix Hill, Daniel M. Low, Anders Søgaard

Representational Similarity Analysis (RSA) is a technique developed by neuroscientists for comparing activity patterns of different measurement modalities (e.g., fMRI, electrophysiology, behavior).

Eye Tracking
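
As a concrete illustration of the RSA technique the abstract describes, the sketch below compares two sentence encoders over the same stimuli by correlating their pairwise dissimilarity structures. This is a minimal first-order RSA sketch, not the paper's exact pipeline; the encoder dimensions, cosine distance, and Spearman correlation here are illustrative assumptions.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rsa_score(reps_a, reps_b):
    """Representational Similarity Analysis between two representations
    of the same N stimuli: build each representation's dissimilarity
    vector (all pairwise distances), then rank-correlate the vectors."""
    rdm_a = pdist(reps_a, metric="cosine")  # condensed RDM, length N*(N-1)/2
    rdm_b = pdist(reps_b, metric="cosine")
    rho, _ = spearmanr(rdm_a, rdm_b)
    return rho

# Hypothetical example: 50 sentences encoded by two encoders with
# different hidden sizes; RSA still applies because it compares
# distance structure rather than the raw vectors.
enc_a = np.random.randn(50, 768)
enc_b = np.random.randn(50, 512)
print(rsa_score(enc_a, enc_b))
```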

SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems

2 code implementations NeurIPS 2019 Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman

In the last year, new models and methods for pretraining and transfer learning have driven striking performance improvements across a range of language understanding tasks.

Language understanding Transfer Learning

Analysing Mathematical Reasoning Abilities of Neural Models

6 code implementations ICLR 2019 David Saxton, Edward Grefenstette, Felix Hill, Pushmeet Kohli

The structured nature of the mathematics domain, covering arithmetic, algebra, probability and calculus, enables the construction of training and test splits designed to clearly illuminate the capabilities and failure-modes of different architectures, as well as evaluate their ability to compose and relate knowledge and learned processes.

Mathematical Reasoning Math Word Problem Solving

Learning to Make Analogies by Contrasting Abstract Relational Structure

1 code implementation ICLR 2019 Felix Hill, Adam Santoro, David G. T. Barrett, Ari S. Morcos, Timothy Lillicrap

Here, we study how analogical reasoning can be induced in neural networks that learn to perceive and reason about raw visual data.

Generating Diverse Programs with Instruction Conditioned Reinforced Adversarial Learning

no code implementations 3 Dec 2018 Aishwarya Agrawal, Mateusz Malinowski, Felix Hill, Ali Eslami, Oriol Vinyals, Tejas Kulkarni

In this work, we study the setting in which an agent must learn to generate programs for diverse scenes conditioned on a given symbolic instruction.

Neural Arithmetic Logic Units

22 code implementations NeurIPS 2018 Andrew Trask, Felix Hill, Scott Reed, Jack Rae, Chris Dyer, Phil Blunsom

Neural networks can learn to represent and manipulate numerical information, but they seldom generalize well outside of the range of numerical values encountered during training.
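
The snippet motivates the architecture named in the title; for orientation, here is a minimal NumPy sketch of the NALU cell's forward pass as given by the paper's equations, in which a learned gate blends a linear (additive) path with a log-space (multiplicative) path. The parameter shapes and epsilon value are illustrative assumptions, not the authors' released code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nalu_forward(x, W_hat, M_hat, G, eps=1e-7):
    """One NALU cell: y = g * a + (1 - g) * m.

    x:            (batch, d_in) inputs
    W_hat, M_hat: (d_out, d_in) parameters defining W = tanh(W_hat) * sigmoid(M_hat),
                  which biases the effective weights toward {-1, 0, 1}
    G:            (d_out, d_in) gate parameters
    """
    W = np.tanh(W_hat) * sigmoid(M_hat)
    a = x @ W.T                                # additive path (NAC)
    m = np.exp(np.log(np.abs(x) + eps) @ W.T)  # multiplicative path in log space
    g = sigmoid(x @ G.T)                       # learned gate between the paths
    return g * a + (1.0 - g) * m
```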

Measuring abstract reasoning in neural networks

2 code implementations ICML 2018 David G. T. Barrett, Felix Hill, Adam Santoro, Ari S. Morcos, Timothy Lillicrap

To succeed at this challenge, models must cope with various generalisation 'regimes' in which the training and test data differ in clearly-defined ways.

Learning to Understand Goal Specifications by Modelling Reward

1 code implementation ICLR 2019 Dzmitry Bahdanau, Felix Hill, Jan Leike, Edward Hughes, Arian Hosseini, Pushmeet Kohli, Edward Grefenstette

Recent work has shown that deep reinforcement-learning agents can learn to follow language-like instructions from infrequent environment rewards.

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding

6 code implementations WS 2018 Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman

For natural language understanding (NLU) technology to be maximally useful, both practically and as a scientific object of study, it must be general: it must be able to process language in a way that is not exclusively tailored to any one specific task or dataset.

Language understanding Natural Language Inference +2

Understanding Grounded Language Learning Agents

no code implementations ICLR 2018 Felix Hill, Karl Moritz Hermann, Phil Blunsom, Stephen Clark

Neural network-based systems can now learn to locate the referents of words and phrases in images, answer questions about visual scenes, and even execute symbolic instructions as first-person actors in partially-observable worlds.

Grounded language learning Policy Gradient Methods

Understanding Early Word Learning in Situated Artificial Agents

no code implementations ICLR 2018 Felix Hill, Stephen Clark, Karl Moritz Hermann, Phil Blunsom

Neural network-based systems can now learn to locate the referents of words and phrases in images, answer questions about visual scenes, and execute symbolic instructions as first-person actors in partially-observable worlds.

Grounded language learning Policy Gradient Methods

Grounded Language Learning in a Simulated 3D World

1 code implementation 20 Jun 2017 Karl Moritz Hermann, Felix Hill, Simon Green, Fumin Wang, Ryan Faulkner, Hubert Soyer, David Szepesvari, Wojciech Marian Czarnecki, Max Jaderberg, Denis Teplyashin, Marcus Wainwright, Chris Apps, Demis Hassabis, Phil Blunsom

Trained via a combination of reinforcement and unsupervised learning, and beginning with minimal prior knowledge, the agent learns to relate linguistic symbols to emergent perceptual representations of its physical surroundings and to pertinent sequences of actions.

Grounded language learning

HyperLex: A Large-Scale Evaluation of Graded Lexical Entailment

no code implementations CL 2017 Ivan Vulić, Daniela Gerz, Douwe Kiela, Felix Hill, Anna Korhonen

We introduce HyperLex, a dataset and evaluation resource that quantifies the extent of semantic category membership, that is, the type-of relation (also known as the hyponymy-hypernymy or lexical entailment (LE) relation) between 2,616 concept pairs.

Lexical Entailment Representation Learning

SimVerb-3500: A Large-Scale Evaluation Set of Verb Similarity

no code implementations EMNLP 2016 Daniela Gerz, Ivan Vulić, Felix Hill, Roi Reichart, Anna Korhonen

Verbs play a critical role in the meaning of sentences, but these ubiquitous words have received little attention in recent distributional semantics research.

Representation Learning

Learning Distributed Representations of Sentences from Unlabelled Data

1 code implementation NAACL 2016 Felix Hill, Kyunghyun Cho, Anna Korhonen

Unsupervised methods for learning distributed representations of words are ubiquitous in today's NLP research, but far less is known about the best ways to learn distributed phrase or sentence representations from unlabelled data.

Unsupervised Representation Learning

Learning to Understand Phrases by Embedding the Dictionary

2 code implementations TACL 2016 Felix Hill, Kyunghyun Cho, Anna Korhonen, Yoshua Bengio

Distributional models that learn rich semantic word representations are a success story of recent NLP research.

Embedding Word Similarity with Neural Machine Translation

no code implementations 19 Dec 2014 Felix Hill, Kyunghyun Cho, Sebastien Jean, Coline Devin, Yoshua Bengio

Here we investigate the embeddings learned by neural machine translation models, a recently-developed class of neural language model.

Language Modelling Machine Translation +2

Not All Neural Embeddings are Born Equal

no code implementations 2 Oct 2014 Felix Hill, Kyunghyun Cho, Sebastien Jean, Coline Devin, Yoshua Bengio

Neural language models learn word representations that capture rich linguistic and conceptual information.

Machine Translation Translation

SimLex-999: Evaluating Semantic Models with (Genuine) Similarity Estimation

3 code implementations CL 2015 Felix Hill, Roi Reichart, Anna Korhonen

We present SimLex-999, a gold standard resource for evaluating distributional semantic models that improves on existing resources in several important ways.

Representation Learning

Multi-Modal Models for Concrete and Abstract Concept Meaning

no code implementations TACL 2014 Felix Hill, Roi Reichart, Anna Korhonen

Multi-modal models that learn semantic representations from both linguistic and perceptual input outperform language-only models on a range of evaluations, and better reflect human concept acquisition.

Language Acquisition
