Search Results for author: Salem Lahlou

Found 12 papers, 9 papers with code

On Generalization for Generative Flow Networks

no code implementations · 3 Jul 2024 · Anas Krichel, Nikolay Malkin, Salem Lahlou, Yoshua Bengio

This paper attempts to formalize generalization in the context of GFlowNets, to link generalization with stability, and to design experiments that assess the capacity of these models to uncover unseen parts of the reward function.

PORT: Preference Optimization on Reasoning Traces

no code implementations · 23 Jun 2024 · Salem Lahlou, Abdalgader Abubaker, Hakim Hacid

This paper proposes applying preference optimization methods to Chain-of-Thought steps in order to improve the reasoning performance of language models.

ARC · GSM8K
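
The abstract does not say which preference optimization method is used; as one concrete possibility, a DPO-style loss over paired reasoning traces might look like the sketch below (all names are illustrative, not from the paper):

    import torch
    import torch.nn.functional as F

    def dpo_loss(logp_chosen, logp_rejected,
                 ref_logp_chosen, ref_logp_rejected, beta=0.1):
        # Margin between the preferred and rejected chains of thought,
        # measured against a frozen reference model.
        margin = beta * ((logp_chosen - ref_logp_chosen)
                         - (logp_rejected - ref_logp_rejected))
        return -F.logsigmoid(margin).mean()

    # Dummy summed token log-probabilities for a single preference pair:
    loss = dpo_loss(torch.tensor([-12.0]), torch.tensor([-15.0]),
                    torch.tensor([-13.0]), torch.tensor([-14.0]))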

BatchGFN: Generative Flow Networks for Batch Active Learning

1 code implementation · 26 Jun 2023 · Shreshth A. Malik, Salem Lahlou, Andrew Jesson, Moksh Jain, Nikolay Malkin, Tristan Deleu, Yoshua Bengio, Yarin Gal

We introduce BatchGFN -- a novel approach for pool-based active learning that uses generative flow networks to sample sets of data points proportional to a batch reward.

Active Learning
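
To make "sample sets of data points proportional to a batch reward" concrete, here is a brute-force toy with a made-up reward; BatchGFN amortizes this sampling with a trained GFlowNet instead of enumerating batches:

    import itertools, math, random

    pool = list(range(6))        # indices of unlabelled pool points
    batch_size = 2

    def batch_reward(batch):     # hypothetical batch-level utility
        return math.exp(-abs(sum(batch) - 5))

    # Exact sampling of p(B) proportional to R(B) by enumeration; feasible
    # only for tiny pools, which is exactly what amortization avoids.
    batches = list(itertools.combinations(pool, batch_size))
    weights = [batch_reward(b) for b in batches]
    print(random.choices(batches, weights=weights, k=1)[0])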

torchgfn: A PyTorch GFlowNet library

2 code implementations · 24 May 2023 · Salem Lahlou, Joseph D. Viviano, Victor Schmidt, Yoshua Bengio

The growing popularity of generative flow networks (GFlowNets or GFNs) among researchers with diverse backgrounds and areas of expertise calls for a library that facilitates the testing of new features, such as training losses, and their easy comparison against standard benchmark implementations on a set of common environments.
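
As an example of the kind of training loss such a library standardizes, here is a trajectory-balance-style objective in plain PyTorch; this is a sketch, not torchgfn's actual API:

    import torch

    def trajectory_balance_loss(log_Z, log_pf_steps, log_pb_steps, log_reward):
        # Squared violation, in log space, of the trajectory balance identity
        # Z * prod P_F = R(x) * prod P_B for a single trajectory.
        return (log_Z + log_pf_steps.sum()
                - log_reward - log_pb_steps.sum()) ** 2

    loss = trajectory_balance_loss(torch.tensor(1.0),
                                   torch.tensor([-0.7, -1.2]),
                                   torch.tensor([-0.9, -0.8]),
                                   torch.tensor(0.5))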

A theory of continuous generative flow networks

1 code implementation · 30 Jan 2023 · Salem Lahlou, Tristan Deleu, Pablo Lemos, Dinghuai Zhang, Alexandra Volokhova, Alex Hernández-García, Léna Néhale Ezzine, Yoshua Bengio, Nikolay Malkin

Generative flow networks (GFlowNets) are amortized variational inference algorithms that are trained to sample from unnormalized target distributions over compositional objects.

Variational Inference
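
For reference, the discrete trajectory balance identity that this paper lifts to general (continuous) state spaces, with sums over successor states replaced by integrals against a reference measure, reads:

    Z \prod_{t=0}^{n-1} P_F(s_{t+1} \mid s_t)
      = R(x) \prod_{t=0}^{n-1} P_B(s_t \mid s_{t+1}),
    \qquad \text{so that sampled objects satisfy } P(x) = R(x)/Z.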

GFlowOut: Dropout with Generative Flow Networks

no code implementations · 24 Oct 2022 · Dianbo Liu, Moksh Jain, Bonaventure Dossou, Qianli Shen, Salem Lahlou, Anirudh Goyal, Nikolay Malkin, Chris Emezue, Dinghuai Zhang, Nadhir Hassen, Xu Ji, Kenji Kawaguchi, Yoshua Bengio

Methods that infer dropout masks as latent variables with variational inference face two important challenges: (a) the posterior distribution over masks can be highly multi-modal, which can be difficult to approximate with standard variational inference, and (b) it is not trivial to fully utilize sample-dependent information and correlation among dropout masks to improve posterior estimation.

Bayesian Inference · Variational Inference
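
Challenge (b) above concerns sample-dependent masks; below is a minimal sketch of input-conditioned dropout, illustrative only (GFlowOut additionally trains the mask distribution as a GFlowNet posterior, and none of the names here come from the paper):

    import torch
    import torch.nn as nn

    class InputDependentDropout(nn.Module):
        def __init__(self, dim):
            super().__init__()
            self.mask_logits = nn.Linear(dim, dim)  # q(mask | activations)

        def forward(self, h):
            probs = torch.sigmoid(self.mask_logits(h))
            return h * torch.bernoulli(probs)       # one sampled binary mask

    layer = InputDependentDropout(8)
    out = layer(torch.randn(4, 8))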

GFlowNets and variational inference

1 code implementation · 2 Oct 2022 · Nikolay Malkin, Salem Lahlou, Tristan Deleu, Xu Ji, Edward Hu, Katie Everett, Dinghuai Zhang, Yoshua Bengio

This paper builds bridges between two families of probabilistic algorithms: (hierarchical) variational inference (VI), which is typically used to model distributions over continuous spaces, and generative flow networks (GFlowNets), which have been used for distributions over discrete structures such as graphs.

Diversity · Reinforcement Learning (RL) · +1
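
One concrete bridge from this line of work, stated here with hedging: for on-policy training, the expected gradient of the trajectory balance loss matches, up to a multiplicative constant, the gradient of a reverse KL divergence between the forward policy and the reward-induced distribution over complete trajectories:

    \mathbb{E}_{\tau \sim P_F}\!\left[\nabla_\theta \mathcal{L}_{TB}(\tau;\theta)\right]
      \propto \nabla_\theta\, D_{\mathrm{KL}}\!\big(P_F(\tau;\theta)\,\big\|\,P_B(\tau)\big),
    \qquad P_B(\tau) = \frac{R(x)}{Z}\, P_B(\tau \mid x).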

GFlowNet Foundations

2 code implementations · 17 Nov 2021 · Yoshua Bengio, Salem Lahlou, Tristan Deleu, Edward J. Hu, Mo Tiwari, Emmanuel Bengio

Generative Flow Networks (GFlowNets) have been introduced as a method to sample a diverse set of candidates in an active learning context, with a training objective that makes them approximately sample in proportion to a given reward function.

Active Learning
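
The "approximately sample in proportion to a given reward" property comes from a flow-matching condition: the flow into every intermediate state equals the flow out of it, and the flow terminating at an object x equals its reward:

    \sum_{s:\, s \to s'} F(s \to s') = \sum_{s'':\, s' \to s''} F(s' \to s''),
    \qquad F(x \to s_f) = R(x)
    \quad \Longrightarrow \quad P(x) \propto R(x).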

Mastering Rate based Curriculum Learning

1 code implementation · 14 Aug 2020 · Lucas Willems, Salem Lahlou, Yoshua Bengio

Recent automatic curriculum learning algorithms, and in particular Teacher-Student algorithms, rely on the notion of learning progress, assuming that the best next tasks are the ones on which the learner is making the fastest progress or regress.
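
A minimal sketch of the learning-progress heuristic this abstract describes (the baseline the paper improves on, not the paper's mastering-rate algorithm; all names are illustrative):

    import random

    class ProgressTeacher:
        def __init__(self, n_tasks, eps=0.1, alpha=0.1):
            self.fast = [0.0] * n_tasks   # fast-moving average of returns
            self.slow = [0.0] * n_tasks   # slow-moving average of returns
            self.eps, self.alpha = eps, alpha

        def update(self, task, ret):
            self.fast[task] += 2 * self.alpha * (ret - self.fast[task])
            self.slow[task] += self.alpha * (ret - self.slow[task])

        def next_task(self):
            # Learning progress ~ |fast - slow|; largest absolute change wins.
            progress = [abs(f - s) for f, s in zip(self.fast, self.slow)]
            if random.random() < self.eps:
                return random.randrange(len(progress))
            return max(range(len(progress)), key=progress.__getitem__)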

BabyAI: A Platform to Study the Sample Efficiency of Grounded Language Learning

6 code implementations · ICLR 2019 · Maxime Chevalier-Boisvert, Dzmitry Bahdanau, Salem Lahlou, Lucas Willems, Chitwan Saharia, Thien Huu Nguyen, Yoshua Bengio

Allowing humans to interactively train artificial agents to understand language instructions is desirable for both practical and scientific reasons, but given the poor data efficiency of current learning methods, this goal may require substantial research efforts.

Grounded language learning
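
A minimal usage sketch; the environment id and the gym registration performed by the babyai package are taken from the project's released code and should be treated as assumptions here:

    import gym
    import babyai  # noqa: F401  (importing registers the BabyAI-* gym envs)

    env = gym.make("BabyAI-GoToRedBall-v0")
    obs = env.reset()
    print(obs["mission"])  # the natural-language instruction for this episode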
