1 code implementation • ICLR 2022 • Rahma Chaabouni, Florian Strub, Florent Altché, Eugene Tarassov, Corentin Tallec, Elnaz Davoodi, Kory Wallace Mathewson, Olivier Tieleman, Angeliki Lazaridou, Bilal Piot
Emergent communication aims to deepen our understanding of human language evolution and to build more efficient representations.
no code implementations • 20 Sep 2021 • Alice Martin Donati, Guillaume Quispe, Charles Ollion, Sylvain Le Corff, Florian Strub, Olivier Pietquin
This paper introduces TRUncated ReinForcement Learning for Language (TrufLL), an original approach to train conditional language models from scratch using only reinforcement learning (RL).
1 code implementation • 20 May 2021 • Mathieu Seurin, Florian Strub, Philippe Preux, Olivier Pietquin
Sparse rewards are double-edged training signals in reinforcement learning: easy to design but hard to optimize.
1 code implementation • ICCV 2021 • Adrià Recasens, Pauline Luc, Jean-Baptiste Alayrac, Luyu Wang, Ross Hemsley, Florian Strub, Corentin Tallec, Mateusz Malinowski, Viorica Patraucean, Florent Altché, Michal Valko, Jean-Bastien Grill, Aäron van den Oord, Andrew Zisserman
Most successful self-supervised learning methods are trained to align the representations of two independent views from the data.
Ranked #1 on Self-Supervised Audio Classification on ESC-50
no code implementations • ICLR Workshop SSL-RL 2021 • Mathieu Seurin, Florian Strub, Philippe Preux, Olivier Pietquin
We evaluate RAM on the procedurally generated MiniGrid environment, against state-of-the-art methods.
5 code implementations • NeurIPS 2020 • Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, Bilal Piot, Koray Kavukcuoglu, Remi Munos, Michal Valko
From an augmented view of an image, we train the online network to predict the target network representation of the same image under a different augmented view.
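The BYOL objective above admits a compact sketch: an online network (plus a predictor head) regresses the target network's projection of a second augmented view, while the target is updated only as an exponential moving average of the online weights. The PyTorch snippet below is a minimal illustration under simplifying assumptions: inputs are treated as pre-extracted feature vectors, and all module sizes and helper names are illustrative rather than the paper's exact architecture.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class EncoderProjector(nn.Module):
    """Stand-in for encoder + projection MLP; sizes are illustrative."""
    def __init__(self, dim_in=512, dim_out=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_in, 512), nn.ReLU(), nn.Linear(512, dim_out))

    def forward(self, x):
        return self.net(x)

online = EncoderProjector()
predictor = nn.Sequential(nn.Linear(256, 512), nn.ReLU(), nn.Linear(512, 256))
target = copy.deepcopy(online)      # target starts as a copy of the online network
for p in target.parameters():
    p.requires_grad = False         # the target is never updated by gradients

def byol_loss(view1, view2):
    """Symmetrised BYOL loss; view1/view2 are two augmentations of the same
    inputs, assumed here to be pre-extracted (B, 512) feature vectors."""
    p1, p2 = predictor(online(view1)), predictor(online(view2))
    with torch.no_grad():
        z1, z2 = target(view1), target(view2)

    def regression(p, z):
        # 2 - 2 * cosine similarity, i.e. MSE between L2-normalised vectors
        return 2 - 2 * (F.normalize(p, dim=-1) * F.normalize(z, dim=-1)).sum(-1).mean()

    return regression(p1, z2) + regression(p2, z1)

@torch.no_grad()
def ema_update(tau=0.996):
    # Target weights track a slow exponential moving average of the online weights.
    for po, pt in zip(online.parameters(), target.parameters()):
        pt.mul_(tau).add_((1 - tau) * po)
```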
3 code implementations • 20 Oct 2020 • Pierre H. Richemond, Jean-Bastien Grill, Florent Altché, Corentin Tallec, Florian Strub, Andrew Brock, Samuel Smith, Soham De, Razvan Pascanu, Bilal Piot, Michal Valko
Bootstrap Your Own Latent (BYOL) is a self-supervised learning approach for image representation.
no code implementations • EMNLP 2020 • Yuchen Lu, Soumye Singhal, Florian Strub, Olivier Pietquin, Aaron Courville
Language drift has been one of the major obstacles to training language models through interaction.
no code implementations • 7 Aug 2020 • Mathieu Seurin, Florian Strub, Philippe Preux, Olivier Pietquin
To do so, we cast the speaker recognition task into a sequential decision-making problem that we solve with Reinforcement Learning.
no code implementations • 15 Jul 2020 • Alice Martin, Charles Ollion, Florian Strub, Sylvain Le Corff, Olivier Pietquin
This paper introduces the Sequential Monte Carlo Transformer, an original approach that naturally captures the distribution of observations in a transformer architecture.
25 code implementations • 13 Jun 2020 • Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre H. Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Daniel Guo, Mohammad Gheshlaghi Azar, Bilal Piot, Koray Kavukcuoglu, Rémi Munos, Michal Valko
From an augmented view of an image, we train the online network to predict the target network representation of the same image under a different augmented view.
Ranked #2 on Self-Supervised Person Re-Identification on SYSU-30k
Tasks: Representation Learning, Self-Supervised Image Classification, +3
no code implementations • ICML 2020 • Yuchen Lu, Soumye Singhal, Florian Strub, Olivier Pietquin, Aaron Courville
At each time step, the teacher is created by copying the student agent and is then fine-tuned to maximize task completion.
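Schematically, this copy-then-finetune loop can be sketched as follows; `finetune_on_task` and `imitate` are hypothetical stand-ins for the interactive fine-tuning and imitation-learning phases described in the paper.

```python
import copy

def seeded_iterated_learning(student, n_generations, finetune_on_task, imitate):
    """Schematic copy-then-finetune loop; the two callables are
    hypothetical placeholders, not the paper's actual training code."""
    for _ in range(n_generations):
        teacher = copy.deepcopy(student)  # teacher is created by copying the student
        finetune_on_task(teacher)         # teacher is fine-tuned to maximize task completion
        imitate(student, teacher)         # student then imitates the fine-tuned teacher
    return student
```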
no code implementations • 21 Oct 2019 • Geoffrey Cideron, Mathieu Seurin, Florian Strub, Olivier Pietquin
Language creates a compact representation of the world and allows the description of unlimited situations and objectives through compositionality.
no code implementations • 25 Sep 2019 • Geoffrey Cideron, Mathieu Seurin, Florian Strub, Olivier Pietquin
Language creates a compact representation of the world and allows the description of unlimited situations and objectives through compositionality.
1 code implementation • 7 Mar 2019 • Florian Strub, Marie-Agathe Charpagne, Tresa M. Pollock
The quality of the map reconstruction is critical for studying the spatial distribution of phases and the crystallographic orientation relationships between phases, a key interest in materials science.
1 code implementation • 7 Mar 2019 • Marie-Agathe Charpagne, Florian Strub, Tresa M. Pollock
This function is then applied to undistort the EBSD data, and the phase information is inferred from the segmented speckle data.
no code implementations • 6 Dec 2018 • Hado van Hasselt, Yotam Doron, Florian Strub, Matteo Hessel, Nicolas Sonnerat, Joseph Modayil
In this work, we investigate the impact of the deadly triad in practice, in the context of a family of popular deep reinforcement learning models: deep Q-networks trained with experience replay. We analyse how the components of this system contribute to the emergence of the deadly triad and to the agent's performance.
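For reference, all three ingredients of the deadly triad are visible in a standard DQN-style update. The sketch below (with illustrative shapes and names, not the paper's experimental setup) marks where each one enters.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# (1) Function approximation: Q-values come from a neural network
#     (illustrative 4-dim state, 2 actions).
q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))

def td_loss(batch, gamma=0.99):
    # (3) Off-policy learning: the batch is sampled from an experience
    #     replay buffer, not generated by the current policy.
    s, a, r, s_next, done = batch
    with torch.no_grad():
        # (2) Bootstrapping: the target relies on the network's own
        #     estimate of the next-state value.
        target = r + gamma * (1 - done) * q_net(s_next).max(dim=1).values
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    return F.mse_loss(q_sa, target)
```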
1 code implementation • ECCV 2018 • Florian Strub, Mathieu Seurin, Ethan Perez, Harm de Vries, Jérémie Mary, Philippe Preux, Aaron Courville, Olivier Pietquin
Recent breakthroughs in computer vision and natural language processing have spurred interest in challenging multi-modal tasks such as visual question-answering and visual dialogue.
no code implementations • 29 Nov 2017 • Simon Brodeur, Ethan Perez, Ankesh Anand, Florian Golemo, Luca Celotti, Florian Strub, Jean Rouat, Hugo Larochelle, Aaron Courville
We introduce HoME: a Household Multimodal Environment for artificial agents to learn from vision, audio, semantics, physics, and interaction with objects and other agents, all within a realistic context.
4 code implementations • 22 Sep 2017 • Ethan Perez, Florian Strub, Harm de Vries, Vincent Dumoulin, Aaron Courville
We introduce a general-purpose conditioning method for neural networks called FiLM: Feature-wise Linear Modulation.
Ranked #3 on Visual Question Answering on CLEVR-Humans
Tasks: Image Retrieval with Multi-Modal Query, Visual Question Answering, +1
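Since FiLM itself is simple, a minimal sketch is easy to give: each channel of a feature map is scaled and shifted by coefficients predicted from a conditioning input. The snippet below is an illustrative PyTorch rendering with assumed dimensions; the published model additionally wraps FiLM inside residual blocks, which is omitted here.

```python
import torch
import torch.nn as nn

class FiLM(nn.Module):
    """Feature-wise Linear Modulation: scale and shift each channel of a
    feature map with coefficients predicted from a conditioning vector."""
    def __init__(self, cond_dim, n_channels):
        super().__init__()
        self.to_gamma_beta = nn.Linear(cond_dim, 2 * n_channels)

    def forward(self, feature_maps, conditioning):
        # feature_maps: (B, C, H, W); conditioning: (B, cond_dim)
        gamma, beta = self.to_gamma_beta(conditioning).chunk(2, dim=-1)
        gamma = gamma[:, :, None, None]   # broadcast over spatial dimensions
        beta = beta[:, :, None, None]
        return gamma * feature_maps + beta

# Usage with illustrative dimensions, e.g. conditioning on a question embedding:
film = FiLM(cond_dim=128, n_channels=64)
out = film(torch.randn(2, 64, 8, 8), torch.randn(2, 128))
```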
2 code implementations • 10 Jul 2017 • Ethan Perez, Harm de Vries, Florian Strub, Vincent Dumoulin, Aaron Courville
Previous work has operated under the assumption that visual reasoning calls for a specialized architecture, but we show that a general architecture with proper conditioning can learn to visually reason effectively.
3 code implementations • NeurIPS 2017 • Harm de Vries, Florian Strub, Jérémie Mary, Hugo Larochelle, Olivier Pietquin, Aaron Courville
It is commonly assumed that language refers to high-level visual concepts while leaving low-level visual processing unaffected.
2 code implementations • 15 Mar 2017 • Florian Strub, Harm de Vries, Jeremie Mary, Bilal Piot, Aaron Courville, Olivier Pietquin
End-to-end design of dialogue systems has recently become a popular research topic thanks to powerful tools such as encoder-decoder architectures for sequence-to-sequence learning.
3 code implementations • CVPR 2017 • Harm de Vries, Florian Strub, Sarath Chandar, Olivier Pietquin, Hugo Larochelle, Aaron Courville
Our key contribution is the collection of a large-scale dataset consisting of 150K human-played games with a total of 800K visual question-answer pairs on 66K images.
4 code implementations • 24 Jun 2016 • Florian Strub, Romaric Gaudel, Jérémie Mary
A standard model for Recommender Systems is the Matrix Completion setting: given a partially known matrix of ratings assigned by users (rows) to items (columns), infer the unknown ratings.
Ranked #1 on Recommendation Systems on Douban
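As a point of reference for this setting, the sketch below fits a low-rank factorization R ≈ UVᵀ on the observed entries only, via SGD. This is the generic matrix-completion baseline rather than the paper's own method, and all names and hyperparameters are illustrative.

```python
import numpy as np

def factorize(R, mask, rank=10, lr=0.01, reg=0.1, epochs=100, seed=0):
    """SGD over observed entries only; R is the rating matrix and mask is
    a boolean matrix marking the known entries. Illustrative baseline."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    U = 0.1 * rng.standard_normal((n_users, rank))
    V = 0.1 * rng.standard_normal((n_items, rank))
    rows, cols = np.nonzero(mask)              # indices of known ratings
    for _ in range(epochs):
        for i, j in zip(rows, cols):
            u_i = U[i].copy()                  # keep the pre-update value
            err = R[i, j] - u_i @ V[j]
            U[i] += lr * (err * V[j] - reg * u_i)
            V[j] += lr * (err * u_i - reg * V[j])
    return U, V                                # predicted ratings: U @ V.T
```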
1 code implementation • 2 Mar 2016 • Florian Strub, Jeremie Mary, Romaric Gaudel
Such algorithms look for latent variables in a large sparse matrix of ratings.