Search Results for author: Angeliki Lazaridou

Found 45 papers, 14 papers with code

Internet-augmented language models through few-shot prompting for open-domain question answering

no code implementations 10 Mar 2022 Angeliki Lazaridou, Elena Gribovskaya, Wojciech Stokowiec, Nikolai Grigorev

In this work, we aim to capitalize on the unique few-shot capabilities of large-scale language models (LSLMs) to overcome some of their challenges with respect to grounding to factual and up-to-date information.

Language Modelling Open-Domain Question Answering
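
Below is a minimal sketch of the retrieve-then-prompt recipe this abstract describes: fetch evidence for the question, condition a few-shot prompt on it, and generate an answer. The `web_search` and `generate` stubs and the prompt format are illustrative assumptions, not the paper's exact setup.

```python
# Sketch of an internet-augmented few-shot QA pipeline (hypothetical APIs).

def web_search(query: str, k: int = 3) -> list[str]:
    """Hypothetical search backend returning k evidence paragraphs."""
    raise NotImplementedError("plug in a real search API")

def generate(prompt: str) -> str:
    """Hypothetical call to a few-shot-capable large language model."""
    raise NotImplementedError("plug in a real LM")

# A single in-context example; the paper conditions on several.
FEW_SHOT = (
    "Evidence: The Eiffel Tower is in Paris.\n"
    "Question: Where is the Eiffel Tower?\n"
    "Answer: Paris\n\n"
)

def answer(question: str) -> str:
    evidence = "\n".join(web_search(question))
    prompt = f"{FEW_SHOT}Evidence: {evidence}\nQuestion: {question}\nAnswer:"
    return generate(prompt)
```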

Scaling Language Models: Methods, Analysis & Insights from Training Gopher

no code implementations NA 2021 Jack W. Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, Eliza Rutherford, Tom Hennigan, Jacob Menick, Albin Cassirer, Richard Powell, George van den Driessche, Lisa Anne Hendricks, Maribeth Rauh, Po-Sen Huang, Amelia Glaese, Johannes Welbl, Sumanth Dathathri, Saffron Huang, Jonathan Uesato, John Mellor, Irina Higgins, Antonia Creswell, Nat McAleese, Amy Wu, Erich Elsen, Siddhant Jayakumar, Elena Buchatskaya, David Budden, Esme Sutherland, Karen Simonyan, Michela Paganini, Laurent Sifre, Lena Martens, Xiang Lorraine Li, Adhiguna Kuncoro, Aida Nematzadeh, Elena Gribovskaya, Domenic Donato, Angeliki Lazaridou, Arthur Mensch, Jean-Baptiste Lespiau, Maria Tsimpoukelli, Nikolai Grigorev, Doug Fritz, Thibault Sottiaux, Mantas Pajarskas, Toby Pohlen, Zhitao Gong, Daniel Toyama, Cyprien de Masson d'Autume, Yujia Li, Tayfun Terzi, Vladimir Mikulik, Igor Babuschkin, Aidan Clark, Diego de Las Casas, Aurelia Guy, Chris Jones, James Bradbury, Matthew Johnson, Blake Hechtman, Laura Weidinger, Iason Gabriel, William Isaac, Ed Lockhart, Simon Osindero, Laura Rimell, Chris Dyer, Oriol Vinyals, Kareem Ayoub, Jeff Stanway, Lorrayne Bennett, Demis Hassabis, Koray Kavukcuoglu, Geoffrey Irving

Language modelling provides a step towards intelligent communication systems by harnessing large repositories of written human knowledge to better predict and understand the world.

Abstract Algebra Anachronisms +133

Dynamic population-based meta-learning for multi-agent communication with natural language

no code implementations NeurIPS 2021 Abhinav Gupta, Marc Lanctot, Angeliki Lazaridou

In this work, our goal is to train agents that can coordinate with seen, unseen as well as human partners in a multi-agent communication environment involving natural language.

Meta-Learning Text Generation

Meta Learning for Multi-agent Communication

no code implementations ICLR Workshop Learning_to_Learn 2021 Abhinav Gupta, Angeliki Lazaridou, Marc Lanctot

Recent works have shown remarkable progress in training artificial agents to understand natural language but are focused on using large amounts of raw data involving huge compute requirements.

Meta-Learning Meta Reinforcement Learning

Mind the Gap: Assessing Temporal Generalization in Neural Language Models

1 code implementation NeurIPS 2021 Angeliki Lazaridou, Adhiguna Kuncoro, Elena Gribovskaya, Devang Agrawal, Adam Liska, Tayfun Terzi, Mai Gimenez, Cyprien de Masson d'Autume, Tomas Kocisky, Sebastian Ruder, Dani Yogatama, Kris Cao, Susannah Young, Phil Blunsom

Hence, given the compilation of ever-larger language modelling datasets, combined with the growing list of language-model-based NLP applications that require up-to-date factual knowledge about the world, we argue that now is the right time to rethink the static way in which we currently train and evaluate our language models, and develop adaptive language models that can remain up-to-date with respect to our ever-changing and non-stationary world.

Language Modelling
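
A minimal sketch of the time-stratified evaluation this argument points towards: hold out documents published after a cutoff date and measure how much worse the model gets on them. The `Doc` container and the `perplexity` stub are hypothetical stand-ins, not the paper's exact protocol.

```python
# Time-stratified evaluation sketch: train on the past, test on the future.
from dataclasses import dataclass
from datetime import date

@dataclass
class Doc:
    text: str
    published: date

def perplexity(model, docs: list[Doc]) -> float:
    """Hypothetical stand-in for the evaluation metric."""
    raise NotImplementedError("plug in a real language model evaluation")

def temporal_gap(model, docs: list[Doc], cutoff: date) -> float:
    """How much worse the model does on post-cutoff documents."""
    past = [d for d in docs if d.published <= cutoff]
    future = [d for d in docs if d.published > cutoff]
    return perplexity(model, future) - perplexity(model, past)
```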

Emergent Communication under Competition

1 code implementation 25 Jan 2021 Michael Noukhovitch, Travis LaCroix, Angeliki Lazaridou, Aaron Courville

First, we show that communication is proportional to cooperation, and it can occur for partially competitive scenarios using standard learning algorithms.

Misconceptions

Emergent Multi-Agent Communication in the Deep Learning Era

no code implementations 3 Jun 2020 Angeliki Lazaridou, Marco Baroni

The ability to cooperate through language is a defining feature of humans.

Multi-agent Communication meets Natural Language: Synergies between Functional and Structural Language Learning

no code implementations ACL 2020 Angeliki Lazaridou, Anna Potapenko, Olivier Tieleman

We present a method for combining multi-agent communication and traditional data-driven approaches to natural language learning, with an end goal of teaching agents to communicate with humans in natural language.

Language Modelling
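
One way to read "combining multi-agent communication and traditional data-driven approaches" is as a mixed objective: a game reward keeps messages functional while a supervised language-modelling term keeps them natural. The sketch below is a schematic of that combination; the weighting and reward shape are assumptions, not the paper's exact loss.

```python
# Schematic mixed objective: REINFORCE on the game reward plus a
# supervised NLL term on human language data (weights are illustrative).
import torch

def mixed_loss(msg_log_prob: torch.Tensor,
               caption_nll: torch.Tensor,
               reward: float,
               alpha: float = 1.0) -> torch.Tensor:
    reinforce = -reward * msg_log_prob      # maximise expected game reward
    return reinforce + alpha * caption_nll  # stay close to natural language
```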

Experience Grounds Language

2 code implementations EMNLP 2020 Yonatan Bisk, Ari Holtzman, Jesse Thomason, Jacob Andreas, Yoshua Bengio, Joyce Chai, Mirella Lapata, Angeliki Lazaridou, Jonathan May, Aleksandr Nisnevich, Nicolas Pinto, Joseph Turian

Language understanding research is held back by a failure to relate language to the physical world it describes and to the social interactions it facilitates.

Representation Learning

Shaping representations through communication: community size effect in artificial learning systems

no code implementations 12 Dec 2019 Olivier Tieleman, Angeliki Lazaridou, Shibl Mourad, Charles Blundell, Doina Precup

Motivated by theories of language and communication that explain why communities with large numbers of speakers have, on average, simpler languages with more regularity, we cast the representation learning problem in terms of learning to communicate.

Representation Learning
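
A toy version of the community setup the abstract hints at, under the assumption that agents are encoder-decoder pairs and that random re-pairing is what forces representations to be mutually intelligible; sizes and architectures are illustrative.

```python
# Community of speakers/listeners: random pairing at each step means no
# encoder can rely on idiosyncrasies of a single decoder.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

n_agents = 10
encoders = [nn.Linear(784, 32) for _ in range(n_agents)]
decoders = [nn.Linear(32, 784) for _ in range(n_agents)]
params = [p for m in encoders + decoders for p in m.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)

def step(x: torch.Tensor) -> float:
    enc = random.choice(encoders)        # random speaker
    dec = random.choice(decoders)        # random listener
    loss = F.mse_loss(dec(enc(x)), x)    # reconstruct through the pair
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```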

Biases for Emergent Communication in Multi-agent Reinforcement Learning

no code implementations NeurIPS 2019 Tom Eccles, Yoram Bachrach, Guy Lever, Angeliki Lazaridou, Thore Graepel

We study the problem of emergent communication, in which language arises because speakers and listeners must communicate information in order to solve tasks.

Multi-agent Reinforcement Learning reinforcement-learning +1

Intrinsic Social Motivation via Causal Influence in Multi-Agent RL

no code implementations ICLR 2019 Natasha Jaques, Angeliki Lazaridou, Edward Hughes, Caglar Gulcehre, Pedro A. Ortega, DJ Strouse, Joel Z. Leibo, Nando de Freitas

Therefore, we also employ influence to train agents to use an explicit communication channel, and find that it leads to more effective communication and higher collective reward.

Inductive Bias Multi-agent Reinforcement Learning

Learning and Evaluating General Linguistic Intelligence

no code implementations 31 Jan 2019 Dani Yogatama, Cyprien de Masson d'Autume, Jerome Connor, Tomas Kocisky, Mike Chrzanowski, Lingpeng Kong, Angeliki Lazaridou, Wang Ling, Lei Yu, Chris Dyer, Phil Blunsom

We define general linguistic intelligence as the ability to reuse previously acquired knowledge about a language's lexicon, syntax, semantics, and pragmatic conventions to adapt to new tasks quickly.

Natural Language Understanding Question Answering

Social Influence as Intrinsic Motivation for Multi-Agent Deep Reinforcement Learning

3 code implementations ICLR 2019 Natasha Jaques, Angeliki Lazaridou, Edward Hughes, Caglar Gulcehre, Pedro A. Ortega, DJ Strouse, Joel Z. Leibo, Nando de Freitas

We propose a unified mechanism for achieving coordination and communication in Multi-Agent Reinforcement Learning (MARL), through rewarding agents for having causal influence over other agents' actions.

Multi-agent Reinforcement Learning reinforcement-learning +1
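
The causal-influence reward can be made concrete as a counterfactual comparison: agent k is rewarded when conditioning on its actual action shifts another agent's policy away from the marginal over k's alternative actions. The sketch below assumes access to the other agent's policy and marginalises uniformly for simplicity.

```python
# Counterfactual influence reward (sketch): KL between agent j's policy
# given k's actual action and j's policy marginalised over k's alternatives.
import numpy as np

def influence_reward(policy_j, state, actions_k, taken_k) -> float:
    """policy_j(state, a_k) -> probability vector over j's actions."""
    p_given = policy_j(state, taken_k)
    # Uniform marginalisation over counterfactual actions (a simplification;
    # one can also weight by agent k's own policy).
    p_marginal = np.mean([policy_j(state, a) for a in actions_k], axis=0)
    return float(np.sum(p_given * np.log(p_given / p_marginal)))
```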

Emergence of Linguistic Communication from Referential Games with Symbolic and Pixel Input

1 code implementation ICLR 2018 Angeliki Lazaridou, Karl Moritz Hermann, Karl Tuyls, Stephen Clark

The ability of algorithms to evolve or learn (compositional) communication protocols has traditionally been studied in the language evolution literature through the use of emergent communication tasks.

reinforcement-learning Reinforcement Learning (RL)

Emergent Communication through Negotiation

1 code implementation ICLR 2018 Kris Cao, Angeliki Lazaridou, Marc Lanctot, Joel Z. Leibo, Karl Tuyls, Stephen Clark

We also study communication behaviour in a setting where one agent interacts with agents in a community with different levels of prosociality and show how agent identifiability can aid negotiation.

Multi-agent Reinforcement Learning

Compositional Obverter Communication Learning From Raw Visual Input

2 code implementations ICLR 2018 Edward Choi, Angeliki Lazaridou, Nando de Freitas

Previously, it has been shown that neural network agents can learn to communicate in a highly structured, possibly compositional language based on disentangled input (e.g., hand-engineered features).

A Unified Game-Theoretic Approach to Multiagent Reinforcement Learning

1 code implementation NeurIPS 2017 Marc Lanctot, Vinicius Zambaldi, Audrunas Gruslys, Angeliki Lazaridou, Karl Tuyls, Julien Perolat, David Silver, Thore Graepel

To achieve general intelligence, agents must learn how to interact with others in a shared environment: this is the challenge of multiagent reinforcement learning (MARL).

reinforcement-learning Reinforcement Learning (RL)

The RepEval 2017 Shared Task: Multi-Genre Natural Language Inference with Sentence Representations

no code implementations WS 2017 Nikita Nangia, Adina Williams, Angeliki Lazaridou, Samuel R. Bowman

This paper presents the results of the RepEval 2017 Shared Task, which evaluated neural network sentence representation learning models on the Multi-Genre Natural Language Inference corpus (MultiNLI) recently introduced by Williams et al. (2017).

Natural Language Inference Representation Learning

CommAI: Evaluating the first steps towards a useful general AI

no code implementations 31 Jan 2017 Marco Baroni, Armand Joulin, Allan Jabri, Germàn Kruszewski, Angeliki Lazaridou, Klemen Simonic, Tomas Mikolov

With machine learning successfully applied to new daunting problems almost every day, general AI starts looking like an attainable goal.

BIG-bench Machine Learning Continual Learning +2

Multi-Agent Cooperation and the Emergence of (Natural) Language

1 code implementation 21 Dec 2016 Angeliki Lazaridou, Alexander Peysakhovich, Marco Baroni

The sender is told one of them is the target and is allowed to send a message from a fixed, arbitrary vocabulary to the receiver.
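
The game described here is easy to state as a protocol, independent of how the agents are parameterised. A minimal round, with the networks and learning rule left out:

```python
# One round of the referential game: the sender sees the target, sends a
# symbol from a fixed vocabulary, and the receiver must pick the target.
import random

VOCAB = list(range(10))  # fixed, arbitrary vocabulary of symbols

def play_round(images, sender, receiver) -> float:
    target = random.randrange(len(images))
    symbol = sender(images[target])          # must be drawn from VOCAB
    guess = receiver(symbol, images)         # index of the guessed image
    return 1.0 if guess == target else 0.0   # shared reward for both agents
```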

Towards Multi-Agent Communication-Based Language Learning

no code implementations 23 May 2016 Angeliki Lazaridou, Nghia The Pham, Marco Baroni

We propose an interactive multimodal framework for language learning.

The red one!: On learning to refer to things based on their discriminative properties

no code implementations 8 Mar 2016 Angeliki Lazaridou, Nghia The Pham, Marco Baroni

As a first step towards agents learning to communicate about their visual environment, we propose a system that, given visual representations of a referent (cat) and a context (sofa), identifies their discriminative attributes, i.e., properties that distinguish them (has_tail).
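
Read operationally, the system scores candidate attributes for both the referent and the context and keeps those that fire for the former but not the latter. The `attribute_scores` classifier, attribute list, and threshold below are hypothetical.

```python
# Discriminative-attribute selection (sketch): keep attributes active for
# the referent (cat) but not the context (sofa), e.g. has_tail.
ATTRIBUTES = ["has_tail", "is_furry", "has_legs", "is_soft"]

def discriminative(referent_feats, context_feats, attribute_scores, thr=0.5):
    r = attribute_scores(referent_feats)  # per-attribute scores in [0, 1]
    c = attribute_scores(context_feats)
    return [a for a, ri, ci in zip(ATTRIBUTES, r, c) if ri > thr and ci <= thr]
```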

Unveiling the Dreams of Word Embeddings: Towards Language-Driven Image Generation

no code implementations 10 Jun 2015 Angeliki Lazaridou, Dat Tien Nguyen, Raffaella Bernardi, Marco Baroni

We introduce language-driven image generation, the task of generating an image visualizing the semantic contents of a word embedding, e.g., given the word embedding of grasshopper, we generate a natural image of a grasshopper.

Image Generation Word Embeddings

From Visual Attributes to Adjectives through Decompositional Distributional Semantics

no code implementations TACL 2015 Angeliki Lazaridou, Georgiana Dinu, Adam Liska, Marco Baroni

By building on the recent "zero-shot learning" approach, and paying attention to the linguistic nature of attributes as noun modifiers, and specifically adjectives, we show that it is possible to tag images with attribute-denoting adjectives even when no training data containing the relevant annotation are available.

Object Recognition Retrieval +2

Combining Language and Vision with a Multimodal Skip-gram Model

no code implementations HLT 2015 Angeliki Lazaridou, Nghia The Pham, Marco Baroni

We extend the SKIP-GRAM model of Mikolov et al. (2013a) by taking visual information into account.

Retrieval
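
One common way to "take visual information into account" in a skip-gram objective, and plausibly close to what is meant here, is an extra max-margin term that pulls the vectors of visually grounded words towards their image vectors; the margin, similarity, and weighting below are assumptions, not the paper's exact formulation.

```python
# Visual max-margin term added to the usual skip-gram loss for words that
# have an associated image vector (illustrative formulation).
import torch
import torch.nn.functional as F

def visual_term(word_vec, image_vec, neg_image_vec, margin=0.5):
    pos = F.cosine_similarity(word_vec, image_vec, dim=-1)
    neg = F.cosine_similarity(word_vec, neg_image_vec, dim=-1)
    return torch.clamp(margin - pos + neg, min=0.0)

# total_loss = skipgram_loss + lam * visual_term(...)  # per grounded word
```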

Improving zero-shot learning by mitigating the hubness problem

4 code implementations 20 Dec 2014 Georgiana Dinu, Angeliki Lazaridou, Marco Baroni

The zero-shot paradigm exploits vector-based word representations extracted from text corpora with unsupervised methods to learn general mapping functions from other feature spaces onto word space, where the words associated to the nearest neighbours of the mapped vectors are used as their linguistic labels.

Image Retrieval Retrieval +1
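
The setup in this abstract reduces to two steps: fit a linear map from the source feature space into word space, then label each mapped vector by a nearest neighbour. Below is a sketch, including a simple rank-based correction in the spirit of the paper's globally corrected retrieval; details are simplified.

```python
# Ridge mapping into word space plus rank-based neighbour retrieval, which
# penalises "hub" words that sit near everything.
import numpy as np

def fit_mapping(X: np.ndarray, Y: np.ndarray, lam: float = 1.0) -> np.ndarray:
    """Ridge regression from source vectors X to word vectors Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

def gc_retrieve(query, W, word_vecs, all_queries) -> int:
    mapped = query @ W
    # Similarity of every word to every mapped query, and to this query.
    sims = word_vecs @ (all_queries @ W).T   # (n_words, n_queries)
    q_sims = word_vecs @ mapped              # (n_words,)
    # Rank of this query in each word's neighbour list; hubs rank it low.
    ranks = (sims > q_sims[:, None]).sum(axis=1)
    return int(np.argmin(ranks))             # index of the predicted label
```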

Cross-Language Authorship Attribution

no code implementations LREC 2014 Dasha Bogdanova, Angeliki Lazaridou

We propose a number of cross-language stylometric features for the task of CLAA, such as those based on sentiment and emotional markers.

Information Retrieval Machine Translation +2
