Search Results for author: Emmanuel Bengio

Found 13 papers, 4 papers with code

GFlowNet Foundations

no code implementations • 17 Nov 2021 Yoshua Bengio, Tristan Deleu, Edward J. Hu, Salem Lahlou, Mo Tiwari, Emmanuel Bengio

Generative Flow Networks (GFlowNets) have been introduced as a method to sample a diverse set of candidates in an active learning context, with a training objective that makes them approximately sample in proportion to a given reward function.

Active Learning
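The abstract above describes the GFlowNet training target: candidates should be sampled with probability proportional to a given reward function, p(x) ∝ R(x). A minimal sketch of that target distribution on a toy discrete space (the candidate names and reward values here are hypothetical, and this is exact enumeration, not the paper's learned sampler):

```python
import random

# Hypothetical rewards over a tiny candidate space; the GFlowNet objective
# trains a sampler whose draw probabilities match p(x) = R(x) / sum_x' R(x').
rewards = {"A": 1.0, "B": 3.0, "C": 6.0}

def sample_proportional(rewards, rng=random):
    """Draw one candidate with probability proportional to its reward."""
    total = sum(rewards.values())
    r = rng.uniform(0.0, total)
    upto = 0.0
    for candidate, reward in rewards.items():
        upto += reward
        if r <= upto:
            return candidate
    return candidate  # guard against floating-point edge cases
```

Sampling this way yields a diverse set of high-reward candidates rather than only the single argmax, which is the point of the active-learning setting.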

Flow Network based Generative Models for Non-Iterative Diverse Candidate Generation

1 code implementation • NeurIPS 2021 Emmanuel Bengio, Moksh Jain, Maksym Korablyov, Doina Precup, Yoshua Bengio

Using insights from Temporal Difference learning, we propose GFlowNet, based on a view of the generative process as a flow network. This makes it possible to handle the tricky case where different trajectories can yield the same final state, e.g., the many ways to sequentially add atoms to generate a given molecular graph.
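The "many trajectories, one final state" situation in the abstract above can be made concrete with a toy construction (the atom set here is hypothetical, chosen only for illustration):

```python
from itertools import permutations

# Building the fragment set {"C", "N", "O"} by adding one atom at a time:
# every ordering of additions is a distinct trajectory, yet all of them
# terminate in the same final state. A tree-structured generative view
# would treat these as separate outcomes; the flow-network view lets
# their probability mass merge into one node.
final_state = {"C", "N", "O"}
trajectories = {tuple(p) for p in permutations(sorted(final_state))}
print(len(trajectories))  # 6 distinct trajectories into one state
```

The number of such trajectories grows factorially with state size, which is why handling merged states matters in practice.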

Correcting Momentum in Temporal Difference Learning

1 code implementation • 7 Jun 2021 Emmanuel Bengio, Joelle Pineau, Doina Precup

A common optimization tool used in deep reinforcement learning is momentum, which consists of accumulating and discounting past gradients and reapplying them at each iteration.
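The momentum update described above (accumulate discounted past gradients, reapply each step) can be sketched in a few lines; this is the standard heavy-ball form, not the paper's proposed correction, and the quadratic objective and hyperparameters are illustrative only:

```python
def momentum_step(params, grads, velocity, lr=0.05, beta=0.9):
    """One heavy-ball momentum update: past gradients accumulate into
    `velocity` with discount `beta` and are reapplied every iteration."""
    velocity = [beta * v + g for v, g in zip(velocity, grads)]
    params = [p - lr * v for p, v in zip(params, velocity)]
    return params, velocity

# Minimize f(x) = x^2 (gradient 2x) starting from x = 5.0.
x, v = [5.0], [0.0]
for _ in range(200):
    x, v = momentum_step(x, [2.0 * x[0]], v)
```

Because the velocity keeps discounted copies of stale gradients, momentum can overshoot when the underlying target moves, which is the interaction with bootstrapped TD targets that the paper examines.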

TDprop: Does Jacobi Preconditioning Help Temporal Difference Learning?

no code implementations • 6 Jul 2020 Joshua Romoff, Peter Henderson, David Kanaa, Emmanuel Bengio, Ahmed Touati, Pierre-Luc Bacon, Joelle Pineau

We investigate whether Jacobi preconditioning, accounting for the bootstrap term in temporal difference (TD) learning, can help boost performance of adaptive optimizers.
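Jacobi preconditioning, mentioned in the abstract above, rescales each parameter's gradient by the inverse of a diagonal curvature estimate. A minimal sketch on a badly scaled quadratic (this is the generic preconditioner, not TDprop's bootstrap-aware estimate, and the function and values are hypothetical):

```python
import numpy as np

def jacobi_step(params, grads, diag_curvature, lr=1.0, eps=1e-8):
    """Gradient step with each coordinate rescaled by the inverse of a
    diagonal curvature estimate -- the Jacobi preconditioner."""
    return params - lr * grads / (diag_curvature + eps)

# f(x) = 0.5 * x^T diag(h) x with curvatures differing by 100x.
# With lr=1.0 this recovers Newton's step for a diagonal quadratic,
# so both coordinates converge at the same rate despite the bad scaling.
h = np.array([100.0, 1.0])
x = np.array([1.0, 1.0])
for _ in range(10):
    x = jacobi_step(x, h * x, h)  # gradient is h * x; Jacobi diagonal is h
```

Adaptive optimizers like Adam approximate a similar per-coordinate rescaling from gradient statistics, which is why the paper compares against them.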

Interference and Generalization in Temporal Difference Learning

no code implementations • ICML 2020 Emmanuel Bengio, Joelle Pineau, Doina Precup

We study the link between generalization and interference in temporal-difference (TD) learning.

Assessing Generalization in TD methods for Deep Reinforcement Learning

no code implementations • 25 Sep 2019 Emmanuel Bengio, Doina Precup, Joelle Pineau

Current Deep Reinforcement Learning (DRL) methods can exhibit both data inefficiency and brittleness, which seem to indicate that they generalize poorly.

Attack and defence in cellular decision-making: lessons from machine learning

no code implementations • 10 Jul 2018 Thomas J. Rademaker, Emmanuel Bengio, Paul François

We then apply a gradient-descent approach from machine learning to different cellular decision-making models, and we reveal the existence of two regimes characterized by the presence or absence of a critical point for the gradient.

Decision Making
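The gradient-descent "attack" in the abstract above amounts to following the gradient of a model's response to craft an input that pushes that response as far as possible. A minimal one-dimensional sketch (the response function and all values are hypothetical, and this omits the paper's cellular models entirely):

```python
def gradient_attack(grad_f, x, step=0.1, n_steps=50):
    """Follow the gradient of a response f to find an input that
    maximizes it -- the gradient-ascent 'attack' idea."""
    for _ in range(n_steps):
        x = x + step * grad_f(x)
    return x

# Hypothetical response f(x) = -(x - 2)^2 with a single optimum at x = 2,
# so grad f(x) = -2 * (x - 2); ascent converges toward the optimum.
adversarial_x = gradient_attack(lambda x: -2.0 * (x - 2.0), x=0.0)
```

Whether such ascent finds a genuinely misleading input depends on the gradient landscape, which connects to the paper's two regimes distinguished by the presence or absence of a critical point.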

World Knowledge for Reading Comprehension: Rare Entity Prediction with Hierarchical LSTMs Using External Descriptions

no code implementations • EMNLP 2017 Teng Long, Emmanuel Bengio, Ryan Lowe, Jackie Chi Kit Cheung, Doina Precup

Humans interpret texts with respect to some background information, or world knowledge, and we would like to develop automatic reading comprehension systems that can do the same.

Language Modelling • Reading Comprehension

Independently Controllable Factors

no code implementations • 3 Aug 2017 Valentin Thomas, Jules Pondard, Emmanuel Bengio, Marc Sarfati, Philippe Beaudoin, Marie-Jean Meurs, Joelle Pineau, Doina Precup, Yoshua Bengio

It has been postulated that a good representation is one that disentangles the underlying explanatory factors of variation.

Independently Controllable Features

no code implementations • 22 Mar 2017 Emmanuel Bengio, Valentin Thomas, Joelle Pineau, Doina Precup, Yoshua Bengio

Finding features that disentangle the different causes of variation in real data is a difficult task that has nonetheless received considerable attention in static domains like natural images.

Conditional Computation in Neural Networks for faster models

1 code implementation • 19 Nov 2015 Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, Doina Precup

In this paper, we use reinforcement learning as a tool to optimize conditional computation policies.
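Conditional computation, as in the abstract above, means running only a subset of a network's blocks for each input, with a policy deciding which. A minimal sketch with a hand-coded gate standing in for the learned reinforcement-learning policy (all block names, thresholds, and values here are hypothetical):

```python
def gate_policy(x):
    """Hypothetical hard-coded gating policy; in the paper this
    decision is learned with reinforcement learning."""
    return [True, x > 0.5]  # block 2 runs only for 'large' inputs

def double(x):
    return x * 2.0

def add_ten(x):
    return x + 10.0

def conditional_forward(x, blocks=(double, add_ten)):
    """Run only the blocks the policy switches on for this input,
    saving the cost of the skipped blocks."""
    out = x
    for active, block in zip(gate_policy(x), blocks):
        if active:
            out = block(out)
    return out
```

The compute savings come from skipped blocks, so the policy's reward must trade accuracy against the cost of each activated block.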
