Search Results for author: Felix Hill

Found 57 papers, 19 papers with code

The Edge of Orthogonality: A Simple View of What Makes BYOL Tick

no code implementations9 Feb 2023 Pierre H. Richemond, Allison Tam, Yunhao Tang, Florian Strub, Bilal Piot, Felix Hill

Using simple linear algebra, we show that when using a linear predictor, the optimal predictor is close to an orthogonal projection, and we propose a general framework based on orthonormalization that helps interpret, and gives intuition for, why BYOL works.
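
The orthonormalization idea can be illustrated with a toy sketch (not the paper's implementation; the predictor matrix below is made up): classical Gram-Schmidt maps the rows of a linear predictor onto an orthonormal set.

```python
import math

def gram_schmidt(rows):
    """Orthonormalize a list of vectors via classical Gram-Schmidt."""
    basis = []
    for v in rows:
        w = list(v)
        for b in basis:
            # subtract the projection of w onto each existing basis vector
            dot = sum(wi * bi for wi, bi in zip(w, b))
            w = [wi - dot * bi for wi, bi in zip(w, b)]
        norm = math.sqrt(sum(wi * wi for wi in w))
        if norm > 1e-12:
            basis.append([wi / norm for wi in w])
    return basis

# A hypothetical 2x2 linear predictor, mapped to its orthonormalized form:
predictor = [[2.0, 0.1], [0.3, 1.5]]
ortho = gram_schmidt(predictor)
# rows of `ortho` are mutually orthogonal and unit-norm
```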

Collaborating with language models for embodied reasoning

no code implementations1 Feb 2023 Ishita Dasgupta, Christine Kaeser-Chen, Kenneth Marino, Arun Ahuja, Sheila Babayan, Felix Hill, Rob Fergus

On the other hand, Large Scale Language Models (LSLMs) have exhibited strong reasoning ability and the ability to adapt to new tasks through in-context learning.

Language Modelling reinforcement-learning +1

SemPPL: Predicting pseudo-labels for better contrastive representations

no code implementations12 Jan 2023 Matko Bošnjak, Pierre H. Richemond, Nenad Tomasev, Florian Strub, Jacob C. Walker, Felix Hill, Lars Holger Buesing, Razvan Pascanu, Charles Blundell, Jovana Mitrovic

We propose a new semi-supervised learning method, Semantic Positives via Pseudo-Labels (SemPPL), that combines labelled and unlabelled data to learn informative representations.

Contrastive Learning Pseudo Label
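
As a rough illustration of the pseudo-labelling step (a simplified sketch, not the paper's method; `pseudo_label` and the k-NN majority vote are assumptions), an unlabelled embedding can inherit the majority label of its nearest labelled neighbours, which then defines additional "semantic positives" for contrastive training:

```python
import math
from collections import Counter

def pseudo_label(query, labelled, k=3):
    """Vote a pseudo-label for an unlabelled embedding from its k nearest
    labelled neighbours; `labelled` is a list of (embedding, label) pairs."""
    nearest = sorted(labelled, key=lambda pair: math.dist(query, pair[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy 2-D embeddings: two labelled clusters, one unlabelled point near "a".
labelled = [([0.0, 0.0], "a"), ([0.1, 0.0], "a"),
            ([1.0, 1.0], "b"), ([0.9, 1.1], "b")]
label = pseudo_label([0.05, 0.02], labelled, k=3)  # majority of 3-NN
```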

Transformers generalize differently from information stored in context vs in weights

no code implementations11 Oct 2022 Stephanie C. Y. Chan, Ishita Dasgupta, Junkyung Kim, Dharshan Kumaran, Andrew K. Lampinen, Felix Hill

In transformers trained on controlled stimuli, we find that generalization from weights is more rule-based whereas generalization from context is largely exemplar-based.

Meaning without reference in large language models

no code implementations5 Aug 2022 Steven T. Piantadosi, Felix Hill

The widespread success of large language models (LLMs) has been met with skepticism that they possess anything like human concepts or meanings.

Language models show human-like content effects on reasoning

no code implementations14 Jul 2022 Ishita Dasgupta, Andrew K. Lampinen, Stephanie C. Y. Chan, Antonia Creswell, Dharshan Kumaran, James L. McClelland, Felix Hill

We find that state-of-the-art large language models (with 7 or 70 billion parameters; Hoffmann et al., 2022) reflect many of the same patterns observed in humans across these tasks -- like humans, models reason more effectively about believable situations than about unrealistic or abstract ones.

Language Modelling Logical Reasoning +1

Know your audience: specializing grounded language models with the game of Dixit

no code implementations16 Jun 2022 Aaditya K. Singh, David Ding, Andrew Saxe, Felix Hill, Andrew K. Lampinen

In a series of controlled experiments, we show that the speaker can adapt according to the idiosyncratic strengths and weaknesses of various pairs of different listeners.

Language Modelling

Data Distributional Properties Drive Emergent In-Context Learning in Transformers

1 code implementation22 Apr 2022 Stephanie C. Y. Chan, Adam Santoro, Andrew K. Lampinen, Jane X. Wang, Aaditya Singh, Pierre H. Richemond, Jay McClelland, Felix Hill

In further experiments, we found that naturalistic data distributions elicited in-context learning only in transformers, not in recurrent models.

Few-Shot Learning
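
The "bursty" distributional property the paper links to emergent in-context learning can be sketched in a toy form (the actual experiments use image-label sequences; every name here is hypothetical): contexts in which a small set of classes recurs, with the query drawn from that same set.

```python
import random

def bursty_sequence(classes, seq_len=8, n_bursty=2, seed=0):
    """Draw a context whose items come from a small, recurring subset of
    classes ('burstiness'), plus a query drawn from the same subset."""
    rng = random.Random(seed)
    chosen = rng.sample(classes, n_bursty)
    context = [rng.choice(chosen) for _ in range(seq_len - 1)]
    query = rng.choice(chosen)
    return context, query

context, query = bursty_sequence(list(range(20)))
# every item, including the query, comes from just n_bursty classes
```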

Zipfian environments for Reinforcement Learning

1 code implementation15 Mar 2022 Stephanie C. Y. Chan, Andrew K. Lampinen, Pierre H. Richemond, Felix Hill

As humans and animals learn in the natural world, they encounter distributions of entities, situations and events that are far from uniform.

reinforcement-learning Reinforcement Learning (RL) +1
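
A Zipfian distribution of the kind these environments are built on can be sketched in a few lines (illustrative only; `zipf_weights` is a made-up helper): item k receives probability proportional to 1/k^alpha, so a few items dominate while most are rare.

```python
import random

def zipf_weights(n, alpha=1.0):
    """Normalized Zipfian probabilities over ranks 1..n."""
    w = [1.0 / (k ** alpha) for k in range(1, n + 1)]
    total = sum(w)
    return [x / total for x in w]

weights = zipf_weights(100)
rng = random.Random(0)
samples = rng.choices(range(100), weights=weights, k=10_000)
# rank-0 entities are encountered orders of magnitude more often than tail ones
```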

Feature-Attending Recurrent Modules for Generalizing Object-Centric Behavior

no code implementations15 Dec 2021 Wilka Carvalho, Andrew Lampinen, Kyriacos Nikiforou, Felix Hill, Murray Shanahan

To generalize in object-centric tasks, a reinforcement learning (RL) agent needs to exploit the structure that objects induce.

Reinforcement Learning (RL)

BEAMetrics: A Benchmark for Language Generation Evaluation Evaluation

2 code implementations18 Oct 2021 Thomas Scialom, Felix Hill

There is currently no simple, unified way to compare, analyse or evaluate metrics across a representative set of tasks.

General Knowledge Informativeness +1

Tell me why!—Explanations support learning relational and causal structure

no code implementations29 Sep 2021 Andrew Kyle Lampinen, Nicholas Andrew Roy, Ishita Dasgupta, Stephanie C.Y. Chan, Allison Tam, Chen Yan, Adam Santoro, Neil Charles Rabinowitz, Jane X Wang, Felix Hill

Explanations play a considerable role in human learning, especially in areas that remain major challenges for AI—forming abstractions, and learning about the relational and causal structure of the world.

Odd One Out

Multimodal Few-Shot Learning with Frozen Language Models

no code implementations NeurIPS 2021 Maria Tsimpoukelli, Jacob Menick, Serkan Cabi, S. M. Ali Eslami, Oriol Vinyals, Felix Hill

When trained at sufficient scale, auto-regressive language models exhibit the notable ability to learn a new language task after being prompted with just a few examples.

Few-Shot Learning Language Modelling +3

Towards mental time travel: a hierarchical memory for reinforcement learning agents

3 code implementations NeurIPS 2021 Andrew Kyle Lampinen, Stephanie C. Y. Chan, Andrea Banino, Felix Hill

Agents with common memory architectures struggle to recall and integrate across multiple timesteps of a past event, or even to recall the details of a single timestep that is followed by distractor tasks.

Meta-Learning Navigate +2

Neural spatio-temporal reasoning with object-centric self-supervised learning

no code implementations1 Jan 2021 David Ding, Felix Hill, Adam Santoro, Matthew Botvinick

Transformer-based language models have proved capable of rudimentary symbolic reasoning, underlining the effectiveness of applying self-attention computations to sets of discrete entities.

Language Modelling Self-Supervised Learning

Attention over learned object embeddings enables complex visual reasoning

1 code implementation NeurIPS 2021 David Ding, Felix Hill, Adam Santoro, Malcolm Reynolds, Matt Botvinick

Neural networks have achieved success in a wide array of perceptual tasks but often fail at tasks involving both perception and higher-level reasoning.

Video Object Tracking Visual Reasoning

Grounded Language Learning Fast and Slow

1 code implementation ICLR 2021 Felix Hill, Olivier Tieleman, Tamara von Glehn, Nathaniel Wong, Hamza Merzic, Stephen Clark

Recent work has shown that large text-based neural language models, trained with conventional supervised learning objectives, acquire a surprising propensity for few- and one-shot learning.

Grounded language learning Meta-Learning +1

Probing Emergent Semantics in Predictive Agents via Question Answering

no code implementations ICML 2020 Abhishek Das, Federico Carnevale, Hamza Merzic, Laura Rimell, Rosalia Schneider, Josh Abramson, Alden Hung, Arun Ahuja, Stephen Clark, Gregory Wayne, Felix Hill

Recent work has shown how predictive modeling can endow agents with rich knowledge of their surroundings, improving their ability to act in complex environments.

Question Answering

Human Instruction-Following with Deep Reinforcement Learning via Transfer-Learning from Text

no code implementations19 May 2020 Felix Hill, Sona Mokra, Nathaniel Wong, Tim Harley

Here, we propose a conceptually simple method for training instruction-following agents with deep RL that are robust to natural human instructions.

Instruction Following Language Modelling +4

Extending Machine Language Models toward Human-Level Language Understanding

no code implementations12 Dec 2019 James L. McClelland, Felix Hill, Maja Rudolph, Jason Baldridge, Hinrich Schütze

We take language to be a part of a system for understanding and communicating about situations.

Environmental drivers of systematicity and generalization in a situated agent

no code implementations ICLR 2020 Felix Hill, Andrew Lampinen, Rosalia Schneider, Stephen Clark, Matthew Botvinick, James L. McClelland, Adam Santoro

The question of whether deep neural networks are good at generalising beyond their immediate training experience is of critical importance for learning-based approaches to AI.


Robust Instruction-Following in a Situated Agent via Transfer-Learning from Text

no code implementations25 Sep 2019 Felix Hill, Sona Mokra, Nathaniel Wong, Tim Harley

We address this issue by integrating language encoders that are pretrained on large text corpora into a situated, instruction-following agent.

Instruction Following Representation Learning +1

Higher-order Comparisons of Sentence Encoder Representations

no code implementations IJCNLP 2019 Mostafa Abdou, Artur Kulmizev, Felix Hill, Daniel M. Low, Anders Søgaard

Representational Similarity Analysis (RSA) is a technique developed by neuroscientists for comparing activity patterns of different measurement modalities (e.g., fMRI, electrophysiology, behavior).
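
RSA's second-order comparison can be sketched directly (a minimal version assuming Pearson dissimilarities and no tie or noise-ceiling handling, unlike full RSA toolkits): build a dissimilarity matrix per representation, then correlate the two matrices.

```python
import math
from itertools import combinations

def pearson(a, b):
    """Pearson correlation between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def rdm(reps):
    """Upper triangle of the representational dissimilarity matrix:
    1 - Pearson r for every pair of condition vectors."""
    return [1 - pearson(reps[i], reps[j])
            for i, j in combinations(range(len(reps)), 2)]

def rsa_score(reps_a, reps_b):
    """Second-order comparison: correlate the two RDMs."""
    return pearson(rdm(reps_a), rdm(reps_b))
```

Because the comparison happens at the level of dissimilarity structure, the two representations can come from entirely different modalities or dimensionalities.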

SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems

4 code implementations NeurIPS 2019 Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman

In the last year, new models and methods for pretraining and transfer learning have driven striking performance improvements across a range of language understanding tasks.

Transfer Learning

Analysing Mathematical Reasoning Abilities of Neural Models

6 code implementations ICLR 2019 David Saxton, Edward Grefenstette, Felix Hill, Pushmeet Kohli

The structured nature of the mathematics domain, covering arithmetic, algebra, probability and calculus, enables the construction of training and test splits designed to clearly illuminate the capabilities and failure-modes of different architectures, as well as evaluate their ability to compose and relate knowledge and learned processes.

Mathematical Reasoning Math Word Problem Solving
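
The train/test split idea can be sketched for the simplest case, two-term addition (a toy sketch, not the released dataset generator; `make_split` is hypothetical): an extrapolation split simply draws operands from a range never seen in training.

```python
import random

def make_split(rng, lo, hi, n):
    """Question/answer pairs for two-term addition, operands in [lo, hi)."""
    pairs = []
    for _ in range(n):
        a, b = rng.randrange(lo, hi), rng.randrange(lo, hi)
        pairs.append((f"What is {a} + {b}?", str(a + b)))
    return pairs

rng = random.Random(0)
train = make_split(rng, 0, 100, 1000)          # training regime
test_extrap = make_split(rng, 100, 1000, 100)  # larger operands than any seen
```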

Learning to Make Analogies by Contrasting Abstract Relational Structure

2 code implementations ICLR 2019 Felix Hill, Adam Santoro, David G. T. Barrett, Ari S. Morcos, Timothy Lillicrap

Here, we study how analogical reasoning can be induced in neural networks that learn to perceive and reason about raw visual data.

Generating Diverse Programs with Instruction Conditioned Reinforced Adversarial Learning

no code implementations3 Dec 2018 Aishwarya Agrawal, Mateusz Malinowski, Felix Hill, Ali Eslami, Oriol Vinyals, Tejas Kulkarni

In this work, we study the setting in which an agent must learn to generate programs for diverse scenes conditioned on a given symbolic instruction.

Neural Arithmetic Logic Units

22 code implementations NeurIPS 2018 Andrew Trask, Felix Hill, Scott Reed, Jack Rae, Chris Dyer, Phil Blunsom

Neural networks can learn to represent and manipulate numerical information, but they seldom generalize well outside of the range of numerical values encountered during training.
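
The paper's NAC weight construction, W = tanh(W_hat) * sigmoid(M_hat), can be checked numerically (a sketch of the forward pass only, with hand-set pre-activations rather than learned ones): as the pre-activations saturate, effective weights approach {-1, 0, 1}, so the unit computes an exact sum and extrapolates far outside any training range.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def nac_weight(w_hat, m_hat):
    """NAC effective weight: tanh(w_hat) * sigmoid(m_hat),
    which saturates toward {-1, 0, 1}."""
    return math.tanh(w_hat) * sigmoid(m_hat)

# With large pre-activations both effective weights sit essentially at 1,
# so the unit adds its inputs exactly, however large they are.
w = nac_weight(20.0, 20.0)
x = [123456.0, 654321.0]
y = w * x[0] + w * x[1]  # approximately x[0] + x[1]
```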

Measuring abstract reasoning in neural networks

2 code implementations ICML 2018 David G. T. Barrett, Felix Hill, Adam Santoro, Ari S. Morcos, Timothy Lillicrap

To succeed at this challenge, models must cope with various generalisation 'regimes' in which the training and test data differ in clearly-defined ways.

Learning to Understand Goal Specifications by Modelling Reward

1 code implementation ICLR 2019 Dzmitry Bahdanau, Felix Hill, Jan Leike, Edward Hughes, Arian Hosseini, Pushmeet Kohli, Edward Grefenstette

Recent work has shown that deep reinforcement-learning agents can learn to follow language-like instructions from infrequent environment rewards.

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding

10 code implementations WS 2018 Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman

For natural language understanding (NLU) technology to be maximally useful, both practically and as a scientific object of study, it must be general: it must be able to process language in a way that is not exclusively tailored to any one specific task or dataset.

Natural Language Inference Natural Language Understanding +1
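
One of GLUE's per-task metrics, Matthews correlation for CoLA, is simple enough to sketch from its standard confusion-matrix formula (illustrative only; the official benchmark uses its own evaluation scripts):

```python
import math

def matthews_corrcoef(y_true, y_pred):
    """Matthews correlation coefficient for binary labels (0/1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # convention: return 0 when any marginal is empty
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom
```

Unlike accuracy, MCC stays informative on the heavily class-imbalanced CoLA data: predicting the majority class everywhere scores 0 rather than looking deceptively good.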

Understanding Grounded Language Learning Agents

no code implementations ICLR 2018 Felix Hill, Karl Moritz Hermann, Phil Blunsom, Stephen Clark

Neural network-based systems can now learn to locate the referents of words and phrases in images, answer questions about visual scenes, and even execute symbolic instructions as first-person actors in partially-observable worlds.

Grounded language learning Policy Gradient Methods

Understanding Early Word Learning in Situated Artificial Agents

no code implementations ICLR 2018 Felix Hill, Stephen Clark, Karl Moritz Hermann, Phil Blunsom

Neural network-based systems can now learn to locate the referents of words and phrases in images, answer questions about visual scenes, and execute symbolic instructions as first-person actors in partially-observable worlds.

Grounded language learning Policy Gradient Methods

Grounded Language Learning in a Simulated 3D World

1 code implementation20 Jun 2017 Karl Moritz Hermann, Felix Hill, Simon Green, Fumin Wang, Ryan Faulkner, Hubert Soyer, David Szepesvari, Wojciech Marian Czarnecki, Max Jaderberg, Denis Teplyashin, Marcus Wainwright, Chris Apps, Demis Hassabis, Phil Blunsom

Trained via a combination of reinforcement and unsupervised learning, and beginning with minimal prior knowledge, the agent learns to relate linguistic symbols to emergent perceptual representations of its physical surroundings and to pertinent sequences of actions.

Grounded language learning

HyperLex: A Large-Scale Evaluation of Graded Lexical Entailment

no code implementations CL 2017 Ivan Vulić, Daniela Gerz, Douwe Kiela, Felix Hill, Anna Korhonen

We introduce HyperLex - a dataset and evaluation resource that quantifies the extent of semantic category membership, that is, the type-of relation (also known as hyponymy-hypernymy, or lexical entailment, LE) between 2,616 concept pairs.

Lexical Entailment Representation Learning

SimVerb-3500: A Large-Scale Evaluation Set of Verb Similarity

1 code implementation EMNLP 2016 Daniela Gerz, Ivan Vulić, Felix Hill, Roi Reichart, Anna Korhonen

Verbs play a critical role in the meaning of sentences, but these ubiquitous words have received little attention in recent distributional semantics research.

Association Representation Learning

Learning Distributed Representations of Sentences from Unlabelled Data

1 code implementation NAACL 2016 Felix Hill, Kyunghyun Cho, Anna Korhonen

Unsupervised methods for learning distributed representations of words are ubiquitous in today's NLP research, but far less is known about the best ways to learn distributed phrase or sentence representations from unlabelled data.

Representation Learning

Learning to Understand Phrases by Embedding the Dictionary

2 code implementations TACL 2016 Felix Hill, Kyunghyun Cho, Anna Korhonen, Yoshua Bengio

Distributional models that learn rich semantic word representations are a success story of recent NLP research.

General Knowledge

Embedding Word Similarity with Neural Machine Translation

no code implementations19 Dec 2014 Felix Hill, Kyunghyun Cho, Sebastien Jean, Coline Devin, Yoshua Bengio

Here we investigate the embeddings learned by neural machine translation models, a recently-developed class of neural language model.

Language Modelling Machine Translation +2

Not All Neural Embeddings are Born Equal

no code implementations2 Oct 2014 Felix Hill, Kyunghyun Cho, Sebastien Jean, Coline Devin, Yoshua Bengio

Neural language models learn word representations that capture rich linguistic and conceptual information.

Machine Translation Translation

SimLex-999: Evaluating Semantic Models with (Genuine) Similarity Estimation

3 code implementations CL 2015 Felix Hill, Roi Reichart, Anna Korhonen

We present SimLex-999, a gold standard resource for evaluating distributional semantic models that improves on existing resources in several important ways.

Association Representation Learning
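
The standard evaluation against SimLex-999 correlates a model's similarity scores with the gold ratings via Spearman's rho, which can be sketched without tie handling (the ratings below are made up for illustration):

```python
import math

def ranks(xs):
    """0-based ranks of xs (assumes no ties, as in this toy example)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

def spearman(gold, model):
    """Spearman rho: Pearson correlation of the rank vectors."""
    rg, rm = ranks(gold), ranks(model)
    n = len(gold)
    mg, mm = sum(rg) / n, sum(rm) / n
    cov = sum((a - mg) * (b - mm) for a, b in zip(rg, rm))
    sg = math.sqrt(sum((a - mg) ** 2 for a in rg))
    sm = math.sqrt(sum((b - mm) ** 2 for b in rm))
    return cov / (sg * sm)

# Hypothetical gold ratings vs. model cosine similarities for 4 word pairs:
gold = [9.8, 7.5, 3.1, 0.5]
model = [0.91, 0.66, 0.40, 0.12]
rho = spearman(gold, model)  # 1.0 when the rankings agree perfectly
```

Rank correlation is used rather than Pearson so that only the ordering of pairs matters, not the scale of the model's scores.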

Multi-Modal Models for Concrete and Abstract Concept Meaning

no code implementations TACL 2014 Felix Hill, Roi Reichart, Anna Korhonen

Multi-modal models that learn semantic representations from both linguistic and perceptual input outperform language-only models on a range of evaluations, and better reflect human concept acquisition.

Language Acquisition
