Search Results for author: Laurent Charlin

Found 45 papers, 24 papers with code

Deep Exponential Families

no code implementations 10 Nov 2014 Rajesh Ranganath, Linpeng Tang, Laurent Charlin, David M. Blei

We describe deep exponential families (DEFs), a class of latent variable models that are inspired by the hidden structures used in deep neural networks.

Variational Inference
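
As a concrete illustration of the layered structure, here is a minimal simulation of a two-layer gamma-Poisson DEF, one common instantiation of the family; the layer sizes and hyperparameters below are illustrative, not the paper's.

```python
# Minimal two-layer deep exponential family (DEF) sketch:
# gamma latent layers chained through their means, Poisson observations.
import numpy as np

rng = np.random.default_rng(0)
N, K2, K1, V = 100, 10, 25, 50  # docs, top/bottom layer sizes, vocab size

W1 = rng.gamma(0.3, 1.0, size=(K2, K1))  # weights between latent layers
W0 = rng.gamma(0.3, 1.0, size=(K1, V))   # weights to the observation layer

z2 = rng.gamma(0.3, 1.0, size=(N, K2))        # top latent layer
z1 = rng.gamma(0.3, scale=(z2 @ W1) / 0.3)    # gamma layer with mean z2 @ W1
x = rng.poisson(z1 @ W0)                      # Poisson count observations

print(x.shape)  # (100, 50) simulated count data
```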

Content-based recommendations with Poisson factorization

3 code implementations NeurIPS 2014 Prem K. Gopalan, Laurent Charlin, David Blei

We develop collaborative topic Poisson factorization (CTPF), a generative model of articles and reader preferences.

Recommendation Systems Variational Inference
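
A minimal simulation of the CTPF generative process, assuming the gamma-Poisson parameterization the model is built on: documents get topic intensities plus offsets, users get preferences, and Poisson likelihoods couple them. All dimensions and hyperparameters are illustrative.

```python
# CTPF generative sketch: topics generate document words; user
# preferences interact with (topics + offsets) to generate ratings.
import numpy as np

rng = np.random.default_rng(1)
U, D, K, V = 50, 80, 15, 200  # users, documents, topics, vocab size

beta = rng.gamma(0.3, 1.0, size=(K, V))     # topics over words
theta = rng.gamma(0.3, 1.0, size=(D, K))    # per-document topic intensities
epsilon = rng.gamma(0.3, 1.0, size=(D, K))  # per-document preference offsets
eta = rng.gamma(0.3, 1.0, size=(U, K))      # per-user preferences

words = rng.poisson(theta @ beta)                 # document word counts
ratings = rng.poisson(eta @ (theta + epsilon).T)  # user-document ratings

print(words.shape, ratings.shape)
```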

Dynamic Poisson Factorization

no code implementations 15 Sep 2015 Laurent Charlin, Rajesh Ranganath, James McInerney, David M. Blei

Models for recommender systems use latent factors to explain the preferences and behaviors of users with respect to a set of items (e.g., movies, books, academic papers).

Recommendation Systems Variational Inference

A Survey of Available Corpora for Building Data-Driven Dialogue Systems

4 code implementations 17 Dec 2015 Iulian Vlad Serban, Ryan Lowe, Peter Henderson, Laurent Charlin, Joelle Pineau

During the past decade, several areas of speech and language understanding have witnessed substantial breakthroughs from the use of data-driven models.

Transfer Learning

On the Evaluation of Dialogue Systems with Next Utterance Classification

no code implementations WS 2016 Ryan Lowe, Iulian V. Serban, Mike Noseworthy, Laurent Charlin, Joelle Pineau

An open challenge in constructing dialogue systems is developing methods for automatically learning dialogue strategies from large amounts of unlabelled data.

Classification General Classification
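
Next-utterance classification scores a set of candidate responses per context and reports Recall@k, i.e., how often the true response lands in the top k. A minimal sketch of the metric, with random scores standing in for a trained model:

```python
# Recall@k for next-utterance classification: for each context the
# model scores num_candidates responses; the true one sits at true_idx.
import numpy as np

def recall_at_k(scores, true_idx, k):
    # scores: (num_examples, num_candidates); true_idx: (num_examples,)
    topk = np.argsort(-scores, axis=1)[:, :k]
    return np.mean([t in row for t, row in zip(true_idx, topk)])

rng = np.random.default_rng(2)
scores = rng.normal(size=(1000, 10))       # the common 1-in-10 setup
true_idx = np.zeros(1000, dtype=int)       # true response at index 0
print(recall_at_k(scores, true_idx, k=1))  # chance level is about 0.1
```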

A Hierarchical Latent Variable Encoder-Decoder Model for Generating Dialogues

9 code implementations 19 May 2016 Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron Courville, Yoshua Bengio

Sequential data often possesses a hierarchical structure with complex dependencies between subsequences, such as found between the utterances in a dialogue.

Response Generation
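
A hedged sketch of the hierarchical latent-variable encoder-decoder idea: one RNN encodes each utterance, a second RNN runs over the utterance vectors, and a Gaussian latent variable conditions the response decoder. This is an illustrative skeleton, not the paper's exact architecture; sizes, embeddings, and the training objective are simplified.

```python
# Hierarchical latent-variable encoder-decoder skeleton (illustrative).
import torch
import torch.nn as nn

class HierarchicalLatentSeq2Seq(nn.Module):
    def __init__(self, vocab=1000, emb=64, hid=128, z_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.utt_enc = nn.GRU(emb, hid, batch_first=True)   # per-utterance RNN
        self.ctx_enc = nn.GRU(hid, hid, batch_first=True)   # dialogue-level RNN
        self.to_mu = nn.Linear(hid, z_dim)
        self.to_logvar = nn.Linear(hid, z_dim)
        self.decoder = nn.GRU(emb + z_dim, hid, batch_first=True)
        self.out = nn.Linear(hid, vocab)

    def forward(self, dialogue, response):
        # dialogue: (batch, n_utts, utt_len); response: (batch, resp_len)
        b, n, t = dialogue.shape
        _, h = self.utt_enc(self.embed(dialogue.view(b * n, t)))
        ctx_out, _ = self.ctx_enc(h.squeeze(0).view(b, n, -1))
        ctx = ctx_out[:, -1]                        # last context state
        mu, logvar = self.to_mu(ctx), self.to_logvar(ctx)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        emb = self.embed(response)
        z_rep = z.unsqueeze(1).expand(-1, emb.size(1), -1)
        dec_out, _ = self.decoder(torch.cat([emb, z_rep], dim=-1))
        return self.out(dec_out), mu, logvar        # logits + KL-term stats

model = HierarchicalLatentSeq2Seq()
logits, mu, logvar = model(torch.randint(0, 1000, (2, 3, 7)),
                           torch.randint(0, 1000, (2, 5)))
print(logits.shape)  # (2, 5, 1000)
```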

Generative Deep Neural Networks for Dialogue: A Short Review

no code implementations 18 Nov 2016 Iulian Vlad Serban, Ryan Lowe, Laurent Charlin, Joelle Pineau

Researchers have recently started investigating deep neural networks for dialogue applications.

Response Generation

Training End-to-End Dialogue Systems with the Ubuntu Dialogue Corpus

no code implementations 1 Jan 2017 Ryan Lowe, Nissan Pow, Iulian Vlad Serban, Laurent Charlin, Chia-Wei Liu, Joelle Pineau

In this paper, we analyze neural network-based dialogue systems trained in an end-to-end manner using an updated version of the recent Ubuntu Dialogue Corpus, a dataset containing almost 1 million multi-turn dialogues, with a total of over 7 million utterances and 100 million words.

Conversation Disentanglement Feature Engineering

Learnable Explicit Density for Continuous Latent Space and Variational Inference

no code implementations 6 Oct 2017 Chin-wei Huang, Ahmed Touati, Laurent Dinh, Michal Drozdzal, Mohammad Havaei, Laurent Charlin, Aaron Courville

In this paper, we study two aspects of the variational autoencoder (VAE): the prior distribution over the latent variables and its corresponding posterior.

Density Estimation Variational Inference

Sparse Attentive Backtracking: Long-Range Credit Assignment in Recurrent Networks

no code implementations ICLR 2018 Nan Rosemary Ke, Anirudh Goyal, Olexa Bilaniuk, Jonathan Binas, Laurent Charlin, Chris Pal, Yoshua Bengio

A major drawback of backpropagation through time (BPTT) is the difficulty of learning long-term dependencies, coming from having to propagate credit information backwards through every single step of the forward computation.

The Deconfounded Recommender: A Causal Inference Approach to Recommendation

no code implementations 20 Aug 2018 Yixin Wang, Dawen Liang, Laurent Charlin, David M. Blei

To this end, we develop a causal approach to recommendation, one where watching a movie is a "treatment" and a user's rating is an "outcome."

Causal Inference Recommendation Systems
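
A heavily simplified sketch of the two-stage deconfounder recipe this work builds on: fit a factor model to the exposure (treatment) matrix, then include its reconstruction as a substitute confounder in the outcome (rating) model. The SVD exposure model and the linear outcome model below are stand-ins, not the paper's Poisson factorization.

```python
# Deconfounder-style sketch: exposure model, then adjusted outcome model.
import numpy as np

rng = np.random.default_rng(3)
U, I, K = 200, 100, 8
exposure = (rng.random((U, I)) < 0.2).astype(float)  # who watched what

# Step 1: exposure factor model (truncated SVD as a crude stand-in).
Uf, s, Vt = np.linalg.svd(exposure, full_matrices=False)
a_hat = (Uf[:, :K] * s[:K]) @ Vt[:K]                 # substitute confounder

# Step 2: outcome model over observed ratings, adjusted for a_hat so
# the confounder estimate absorbs exposure-driven selection effects.
ratings = exposure * rng.integers(1, 6, size=(U, I)).astype(float)
obs = exposure.ravel() > 0
X = np.stack([a_hat.ravel()[obs], np.ones(obs.sum())], axis=1)
coef, *_ = np.linalg.lstsq(X, ratings.ravel()[obs], rcond=None)
print(coef)  # outcome-model coefficients, confounder-adjusted
```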

Language GANs Falling Short

1 code implementation ICLR 2020 Massimo Caccia, Lucas Caccia, William Fedus, Hugo Larochelle, Joelle Pineau, Laurent Charlin

Generating high-quality text with sufficient diversity is essential for a wide range of Natural Language Generation (NLG) tasks.

Text Generation
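
The paper's quality-diversity comparison rests on sweeping a softmax temperature at sampling time; this toy sketch shows the knob itself, with a fixed next-token distribution standing in for a trained language model.

```python
# Temperature sampling: low temperature sharpens the distribution
# (quality), high temperature flattens it (diversity).
import numpy as np

def sample_with_temperature(logits, temperature, rng):
    probs = np.exp(logits / temperature)
    probs /= probs.sum()
    return rng.choice(len(logits), p=probs)

rng = np.random.default_rng(4)
logits = np.array([2.0, 1.0, 0.5, 0.1])  # toy next-token scores
for t in (0.5, 1.0, 1.5):
    draws = [sample_with_temperature(logits, t, rng) for _ in range(1000)]
    print(t, np.bincount(draws, minlength=4) / 1000)
```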

Session-based Social Recommendation via Dynamic Graph Attention Networks

2 code implementations 25 Feb 2019 Weiping Song, Zhiping Xiao, Yifan Wang, Laurent Charlin, Ming Zhang, Jian Tang

However, recommendation in online communities is a challenging problem: 1) users' interests are dynamic, and 2) users are influenced by their friends.

Ranked #1 on Recommendation Systems on Douban (NDCG metric)

Graph Attention Recommendation Systems

Continual Learning of New Sound Classes using Generative Replay

no code implementations 3 Jun 2019 Zhepei Wang, Cem Subakan, Efthymios Tzinis, Paris Smaragdis, Laurent Charlin

We show that, by incrementally refining a classifier with generative replay, a generator that is 4% of the size of all previous training data matches the performance of refining the classifier while keeping 20% of all previous training data.

Continual Learning Sound Classification
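
A minimal sketch of the generative-replay loop described above: samples from a generator trained on past classes are pseudo-labeled by a frozen copy of the old classifier and mixed with new-class data, so the classifier is refined without storing old audio. Every component below is an untrained stand-in.

```python
# Generative replay: rehearse generated old-class data alongside new data.
import copy
import torch
import torch.nn as nn

classifier = nn.Linear(64, 10)                 # stand-in audio classifier
old_model = copy.deepcopy(classifier).eval()   # frozen copy for pseudo-labels
generator = lambda n: torch.randn(n, 64)       # stand-in old-class generator
opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

new_x = torch.randn(32, 64)                    # data for the new sound classes
new_y = torch.randint(8, 10, (32,))            # new class labels
for step in range(100):
    replay_x = generator(32)                   # generated old-class samples
    with torch.no_grad():
        replay_y = old_model(replay_x).argmax(dim=1)  # pseudo-labels
    x = torch.cat([new_x, replay_x])
    y = torch.cat([new_y, replay_y])
    opt.zero_grad()
    loss_fn(classifier(x), y).backward()
    opt.step()
```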

Online Continual Learning with Maximally Interfered Retrieval

1 code implementation 11 Aug 2019 Rahaf Aljundi, Lucas Caccia, Eugene Belilovsky, Massimo Caccia, Min Lin, Laurent Charlin, Tinne Tuytelaars

Methods based on replay, either generative or from a stored memory, have been shown to be effective approaches for continual learning, matching or exceeding the state of the art in a number of standard benchmarks.

Continual Learning Retrieval

Online Continual Learning with Maximal Interfered Retrieval

2 code implementations NeurIPS 2019 Rahaf Aljundi, Eugene Belilovsky, Tinne Tuytelaars, Laurent Charlin, Massimo Caccia, Min Lin, Lucas Page-Caccia

Methods based on replay, either generative or from a stored memory, have been shown to be effective approaches for continual learning, matching or exceeding the state of the art in a number of standard benchmarks.

Class Incremental Learning Retrieval
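
A hedged sketch of the retrieval criterion shared by the two entries above: take a virtual gradient step on the incoming batch, then rehearse the memory samples whose loss that step would increase the most. The model and data below are stand-ins.

```python
# Maximally interfered retrieval: score memory samples by the loss
# increase a virtual update on the incoming batch would cause.
import copy
import torch
import torch.nn as nn

model = nn.Linear(20, 5)                       # stand-in classifier
loss_fn = nn.CrossEntropyLoss(reduction="none")
mem_x, mem_y = torch.randn(500, 20), torch.randint(0, 5, (500,))
new_x, new_y = torch.randn(10, 20), torch.randint(0, 5, (10,))

# Virtual update on the incoming batch only.
virtual = copy.deepcopy(model)
v_opt = torch.optim.SGD(virtual.parameters(), lr=0.1)
v_opt.zero_grad()
loss_fn(virtual(new_x), new_y).mean().backward()
v_opt.step()

# Retrieve the k memory samples with the largest loss increase.
with torch.no_grad():
    interference = loss_fn(virtual(mem_x), mem_y) - loss_fn(model(mem_x), mem_y)
    top_k = interference.topk(10).indices
print(top_k)  # indices to rehearse alongside the incoming batch
```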

Beyond Trivial Counterfactual Generations with Diverse Valuable Explanations

no code implementations 1 Jan 2021 Pau Rodriguez, Massimo Caccia, Alexandre Lacoste, Lee Zamparo, Issam H. Laradji, Laurent Charlin, David Vazquez

In computer vision applications, most methods explain models by displaying the regions of the input image that they focus on for their prediction, but it is difficult to improve models based on these explanations since they do not indicate why the model fails.

Attribute counterfactual +1

Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations

2 code implementations ICCV 2021 Pau Rodriguez, Massimo Caccia, Alexandre Lacoste, Lee Zamparo, Issam Laradji, Laurent Charlin, David Vazquez

Explainability for machine learning models has gained considerable attention within the research community given the importance of deploying more reliable machine-learning systems.

Attribute BIG-bench Machine Learning +2
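
A hedged sketch of latent-space counterfactual search in the spirit of these two entries: perturb a generative model's latent code by gradient descent until the classifier's prediction flips, while penalizing large perturbations. The decoder and classifier are untrained stand-ins, and the diversity term of the actual method is omitted.

```python
# Latent-space counterfactual search: optimize a small perturbation of
# the latent code that changes the classifier's decision.
import torch
import torch.nn as nn

decoder = nn.Linear(16, 64)      # stand-in generative decoder
classifier = nn.Linear(64, 2)    # stand-in classifier to explain

z0 = torch.randn(1, 16)          # latent code of the original input
delta = torch.zeros(1, 16, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.05)
target = torch.tensor([1])       # desired (flipped) class

for step in range(200):
    logits = classifier(decoder(z0 + delta))
    loss = nn.functional.cross_entropy(logits, target) \
        + 0.1 * delta.norm()     # stay close to the original input
    opt.zero_grad()
    loss.backward()
    opt.step()

print(classifier(decoder(z0 + delta)).argmax(dim=1))  # counterfactual class
```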

Continual Learning via Local Module Composition

1 code implementation NeurIPS 2021 Oleksiy Ostapenko, Pau Rodriguez, Massimo Caccia, Laurent Charlin

We introduce local module composition (LMC), an approach to modular CL where each module is provided a local structural component that estimates a module's relevance to the input.

Continual Learning Transfer Learning
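
A minimal sketch of the local-composition idea for a single layer: each module carries its own small scorer that estimates relevance to the incoming activation, and the layer output is the relevance-weighted mixture of module outputs. The linear scorer here is an illustrative stand-in for the paper's local structural component.

```python
# One modular layer with per-module local relevance scoring.
import torch
import torch.nn as nn

class ModularLayer(nn.Module):
    def __init__(self, n_modules=3, dim=32):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_modules))
        # One local structural component per module.
        self.scorers = nn.ModuleList(nn.Linear(dim, 1) for _ in range(n_modules))

    def forward(self, x):
        outs = torch.stack([m(x) for m in self.experts], dim=1)  # (B, M, D)
        scores = torch.cat([s(x) for s in self.scorers], dim=1)  # (B, M)
        weights = scores.softmax(dim=1).unsqueeze(-1)            # local routing
        return (weights * outs).sum(dim=1)

layer = ModularLayer()
print(layer(torch.randn(4, 32)).shape)  # (4, 32)
```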

Continual Learning with Foundation Models: An Empirical Study of Latent Replay

1 code implementation 30 Apr 2022 Oleksiy Ostapenko, Timothee Lesort, Pau Rodríguez, Md Rifat Arefin, Arthur Douillard, Irina Rish, Laurent Charlin

Motivated by this, we study the efficacy of pre-trained vision models as a foundation for downstream continual learning (CL) scenarios.

Benchmarking Continual Learning
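
A minimal latent-replay sketch under the setup this entry describes: a frozen pre-trained encoder maps inputs to features once, the features fill a replay buffer, and only a lightweight head is trained across tasks. The encoder below is an untrained stand-in for a real foundation model.

```python
# Latent replay: store encoder features, not raw data; train only a head.
import torch
import torch.nn as nn

encoder = nn.Linear(3 * 32 * 32, 256).eval()  # stand-in frozen backbone
for p in encoder.parameters():
    p.requires_grad_(False)

head = nn.Linear(256, 10)                     # trainable classifier head
opt = torch.optim.SGD(head.parameters(), lr=0.01)
buffer_x, buffer_y = [], []                   # latent replay buffer

for task in range(3):                         # toy task sequence
    x = torch.randn(64, 3 * 32 * 32)
    y = torch.randint(0, 10, (64,))
    with torch.no_grad():
        z = encoder(x)                        # compute latents once
    buffer_x.append(z)
    buffer_y.append(y)
    replay_z, replay_y = torch.cat(buffer_x), torch.cat(buffer_y)
    for _ in range(20):                       # refit head on all latents
        opt.zero_grad()
        nn.functional.cross_entropy(head(replay_z), replay_y).backward()
        opt.step()
```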

Task-Agnostic Continual Reinforcement Learning: Gaining Insights and Overcoming Challenges

2 code implementations 28 May 2022 Massimo Caccia, Jonas Mueller, Taesup Kim, Laurent Charlin, Rasool Fakoor

We pose two hypotheses: (1) task-agnostic methods might provide advantages in settings with limited data, computation, or high dimensionality, and (2) faster adaptation may be particularly beneficial in continual learning settings, helping to mitigate the effects of catastrophic forgetting.

Continual Learning Continuous Control +3

Learning To Cut By Looking Ahead: Cutting Plane Selection via Imitation Learning

no code implementations 27 Jun 2022 Max B. Paulus, Giulia Zarpellon, Andreas Krause, Laurent Charlin, Chris J. Maddison

Cutting planes are essential for solving mixed-integer linear programs (MILPs) because they facilitate bound improvements on the optimal solution value.

Imitation Learning

Challenging Common Assumptions about Catastrophic Forgetting

no code implementations 10 Jul 2022 Timothée Lesort, Oleksiy Ostapenko, Diganta Misra, Md Rifat Arefin, Pau Rodríguez, Laurent Charlin, Irina Rish

In this paper, we study the progressive knowledge accumulation (KA) in DNNs trained with gradient-based algorithms in long sequences of tasks with data re-occurrence.

Continual Learning Memorization

Model-based graph reinforcement learning for inductive traffic signal control

1 code implementation 1 Aug 2022 François-Xavier Devailly, Denis Larocque, Laurent Charlin

Most reinforcement learning methods for adaptive traffic-signal control require training from scratch to be applied to any new intersection, or after any modification to the road network, traffic distribution, or behavioral constraints experienced during training.

reinforcement-learning Reinforcement Learning (RL)

Bayesian learning of Causal Structure and Mechanisms with GFlowNets and Variational Bayes

no code implementations 4 Nov 2022 Mizu Nishikawa-Toomey, Tristan Deleu, Jithendaraa Subramanian, Yoshua Bengio, Laurent Charlin

We extend the method of Bayesian causal structure learning using GFlowNets to learn not only the posterior distribution over the structure, but also the parameters of a linear-Gaussian model.

Towards Compute-Optimal Transfer Learning

no code implementations 25 Apr 2023 Massimo Caccia, Alexandre Galashov, Arthur Douillard, Amal Rannen-Triki, Dushyant Rao, Michela Paganini, Laurent Charlin, Marc'Aurelio Ranzato, Razvan Pascanu

The field of transfer learning is undergoing a significant shift with the introduction of large pretrained models which have demonstrated strong adaptability to a variety of downstream tasks.

Computational Efficiency Continual Learning +1

Improving the generalizability and robustness of large-scale traffic signal control

no code implementations 2 Jun 2023 Tianyu Shi, Francois-Xavier Devailly, Denis Larocque, Laurent Charlin

Building on the previous state-of-the-art model, which uses a decentralized approach to large-scale traffic signal control with graph convolutional networks (GCNs), we first learn models using a distributional reinforcement learning (DisRL) approach.

Distributional Reinforcement Learning Multi-agent Reinforcement Learning +2

LitLLM: A Toolkit for Scientific Literature Review

1 code implementation 2 Feb 2024 Shubham Agarwal, Issam H. Laradji, Laurent Charlin, Christopher Pal

Conducting literature reviews for scientific papers is essential for understanding research and its limitations, and for building on existing work.

Retrieval
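
A hedged sketch of the retrieve-then-generate pattern such a toolkit implements: fetch candidate papers for a query, then prompt an LLM with the retrieved abstracts. `search_papers` and `call_llm` are hypothetical stand-ins, not LitLLM's actual API.

```python
# Retrieval-augmented literature-review drafting (hypothetical API).
from typing import List

def search_papers(query: str) -> List[dict]:
    # Hypothetical retrieval step (e.g., keyword search over a paper index).
    return [{"title": "Some related paper", "abstract": "..."}]

def call_llm(prompt: str) -> str:
    # Hypothetical LLM call; swap in a real client here.
    return "Draft related-work paragraph."

def draft_related_work(query: str) -> str:
    papers = search_papers(query)
    context = "\n\n".join(f"{p['title']}: {p['abstract']}" for p in papers)
    prompt = (f"Write a related-work paragraph for: {query}\n\n"
              f"Grounded only in these papers:\n{context}")
    return call_llm(prompt)

print(draft_related_work("continual learning with foundation models"))
```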
