Search Results for author: Laurent Charlin

Found 37 papers, 21 papers with code

Foundational Models for Continual Learning: An Empirical Study of Latent Replay

1 code implementation • 30 Apr 2022 • Oleksiy Ostapenko, Timothee Lesort, Pau Rodríguez, Md Rifat Arefin, Arthur Douillard, Irina Rish, Laurent Charlin

Motivated by this, we study the efficacy of pre-trained vision models as a foundation for downstream continual learning (CL) scenarios.

Continual Learning
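
For context, latent replay keeps a pre-trained encoder frozen and stores latent representations rather than raw images, training only a small head continually. A minimal sketch under illustrative assumptions (ResNet-18 encoder, naive unbounded buffer, 10 classes; not the paper's exact setup):

import torch
import torch.nn as nn
import torchvision.models as models

# Frozen pre-trained encoder: features are computed once, and only the
# small classification head is trained across tasks.
encoder = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
encoder.fc = nn.Identity()
encoder.eval()
for p in encoder.parameters():
    p.requires_grad = False

head = nn.Linear(512, 10)      # illustrative task head
opt = torch.optim.SGD(head.parameters(), lr=0.01)
buffer = []                    # stores (latent, label) pairs, not raw images

def train_step(x, y, replay_size=32):
    with torch.no_grad():
        z = encoder(x)                     # latents of the incoming batch
    buffer.extend(zip(z, y))               # naive unbounded buffer for illustration
    if len(buffer) > replay_size:          # mix in replayed latents to reduce forgetting
        idx = torch.randint(len(buffer), (replay_size,))
        z = torch.cat([z, torch.stack([buffer[i][0] for i in idx])])
        y = torch.cat([y, torch.stack([buffer[i][1] for i in idx])])
    loss = nn.functional.cross_entropy(head(z), y)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()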

Continual Learning via Local Module Composition

1 code implementation • NeurIPS 2021 • Oleksiy Ostapenko, Pau Rodriguez, Massimo Caccia, Laurent Charlin

We introduce local module composition (LMC), an approach to modular CL where each module is provided a local structural component that estimates a module's relevance to the input.

Continual Learning · Transfer Learning
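
For context, the composition in LMC can be sketched as a layer of modules, each paired with its own local relevance scorer, whose outputs are mixed with input-dependent weights. Module internals and dimensions below are illustrative, not the paper's architecture:

import torch
import torch.nn as nn

class LocalModule(nn.Module):
    # A functional component plus a local structural scorer that estimates
    # the module's relevance to the current input.
    def __init__(self, d):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(d, d), nn.ReLU())
        self.relevance = nn.Linear(d, 1)

class ModularLayer(nn.Module):
    def __init__(self, d, n_modules=3):
        super().__init__()
        self.mods = nn.ModuleList(LocalModule(d) for _ in range(n_modules))

    def forward(self, x):
        scores = torch.cat([m.relevance(x) for m in self.mods], dim=-1)
        w = torch.softmax(scores, dim=-1)                  # per-input composition weights
        outs = torch.stack([m.f(x) for m in self.mods], dim=-1)
        return (outs * w.unsqueeze(1)).sum(-1)             # relevance-weighted mixture

layer = ModularLayer(16)
print(layer(torch.randn(8, 16)).shape)   # torch.Size([8, 16])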

Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations

2 code implementations • ICCV 2021 • Pau Rodriguez, Massimo Caccia, Alexandre Lacoste, Lee Zamparo, Issam Laradji, Laurent Charlin, David Vazquez

Explainability for machine learning models has gained considerable attention within the research community given the importance of deploying more reliable machine-learning systems.

Decision Making

Beyond Trivial Counterfactual Generations with Diverse Valuable Explanations

no code implementations • 1 Jan 2021 • Pau Rodriguez, Massimo Caccia, Alexandre Lacoste, Lee Zamparo, Issam H. Laradji, Laurent Charlin, David Vazquez

In computer vision applications, most methods explain models by displaying the regions of the input image that they focus on for their prediction, but it is difficult to improve models based on these explanations since they do not indicate why the model fails.

Decision Making

Online Continual Learning with Maximal Interfered Retrieval

1 code implementation • NeurIPS 2019 • Rahaf Aljundi, Eugene Belilovsky, Tinne Tuytelaars, Laurent Charlin, Massimo Caccia, Min Lin, Lucas Page-Caccia

Methods based on replay, either generative or from a stored memory, have been shown to be effective approaches for continual learning, matching or exceeding the state of the art in a number of standard benchmarks.

Continual Learning
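
For context, MIR's retrieval criterion can be sketched as follows (a paraphrase of the idea, not the authors' code): take a virtual gradient step on the incoming batch, then replay the stored samples whose loss that step would increase most.

import copy
import torch
import torch.nn.functional as F

def mir_retrieve(model, lr, x_new, y_new, mem_x, mem_y, k=10):
    # Score each memory sample by how much its loss would grow after a
    # virtual SGD step on the incoming batch; return the top-k.
    with torch.no_grad():
        loss_before = F.cross_entropy(model(mem_x), mem_y, reduction='none')
    virtual = copy.deepcopy(model)              # take the step on a throwaway copy
    F.cross_entropy(virtual(x_new), y_new).backward()
    with torch.no_grad():
        for p in virtual.parameters():
            p -= lr * p.grad
        loss_after = F.cross_entropy(virtual(mem_x), mem_y, reduction='none')
    interference = loss_after - loss_before     # estimated forgetting per sample
    idx = interference.topk(min(k, len(mem_y))).indices
    return mem_x[idx], mem_y[idx]               # train on these alongside the new batch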

Online Continual Learning with Maximally Interfered Retrieval

1 code implementation • 11 Aug 2019 • Rahaf Aljundi, Lucas Caccia, Eugene Belilovsky, Massimo Caccia, Min Lin, Laurent Charlin, Tinne Tuytelaars

Methods based on replay, either generative or from a stored memory, have been shown to be effective approaches for continual learning, matching or exceeding the state of the art in a number of standard benchmarks.

Continual Learning

Continual Learning of New Sound Classes using Generative Replay

no code implementations • 3 Jun 2019 • Zhepei Wang, Cem Subakan, Efthymios Tzinis, Paris Smaragdis, Laurent Charlin

We show that, when incrementally refining a classifier with generative replay, a generator that is 4% of the size of all previous training data matches the performance obtained by keeping 20% of that data.

Continual Learning
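
For context, one refinement step with generative replay can be sketched as below; gen, old_clf, and latent_dim are assumed interfaces for illustration, not the paper's API:

import torch
import torch.nn.functional as F

def refine_step(clf, opt, gen, old_clf, new_x, new_y, n_replay=64, latent_dim=100):
    # Draw pseudo-samples of earlier classes from the generator, label them
    # with the previous classifier, and refine on real + replayed data.
    with torch.no_grad():
        replay_x = gen(torch.randn(n_replay, latent_dim))
        replay_y = old_clf(replay_x).argmax(dim=1)
    x = torch.cat([new_x, replay_x])
    y = torch.cat([new_y, replay_y])
    loss = F.cross_entropy(clf(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()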

Session-based Social Recommendation via Dynamic Graph Attention Networks

2 code implementations • 25 Feb 2019 • Weiping Song, Zhiping Xiao, Yifan Wang, Laurent Charlin, Ming Zhang, Jian Tang

However, recommendation in online communities is a challenging problem: 1) users' interests are dynamic, and 2) users are influenced by their friends.

Ranked #1 on Recommendation Systems on Douban (NDCG metric)

Graph Attention · Recommendation Systems

Language GANs Falling Short

1 code implementation • ICLR 2020 • Massimo Caccia, Lucas Caccia, William Fedus, Hugo Larochelle, Joelle Pineau, Laurent Charlin

Generating high-quality text with sufficient diversity is essential for a wide range of Natural Language Generation (NLG) tasks.

Text Generation

The Deconfounded Recommender: A Causal Inference Approach to Recommendation

no code implementations • 20 Aug 2018 • Yixin Wang, Dawen Liang, Laurent Charlin, David M. Blei

To this end, we develop a causal approach to recommendation, one where watching a movie is a "treatment" and a user's rating is an "outcome."

Causal Inference · Recommendation Systems
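
For context, a two-stage sketch of this treatment/outcome framing (the paper fits Poisson-factorization exposure models; the alternating-least-squares stand-in and all shapes here are illustrative): first model which items each user was exposed to, then adjust the rating model with the exposure model's reconstruction as a substitute confounder.

import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 100, 50, 5
exposure = rng.binomial(1, 0.2, (n_users, n_items))            # who watched what
ratings = exposure * rng.normal(3.5, 1.0, (n_users, n_items))  # observed only if watched

# Stage 1: low-rank exposure model via crude alternating least squares.
U = rng.normal(size=(n_users, k)); V = rng.normal(size=(n_items, k))
for _ in range(20):
    U = exposure @ V @ np.linalg.inv(V.T @ V + 0.1 * np.eye(k))
    V = exposure.T @ U @ np.linalg.inv(U.T @ U + 0.1 * np.eye(k))
a_hat = U @ V.T                                                # substitute confounder

# Stage 2: per-user outcome regression on item factors plus a_hat, so
# confounding that operates through exposure is adjusted for.
preds = np.zeros_like(a_hat)
for u in range(n_users):
    w = exposure[u] == 1
    if w.sum() <= k + 1:
        continue                                               # too few ratings to fit
    X = np.column_stack([V[w], a_hat[u, w]])
    coef = np.linalg.lstsq(X, ratings[u, w], rcond=None)[0]
    preds[u] = np.column_stack([V, a_hat[u]]) @ coef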

Sparse Attentive Backtracking: Long-Range Credit Assignment in Recurrent Networks

no code implementations • ICLR 2018 • Nan Rosemary Ke, Anirudh Goyal, Olexa Bilaniuk, Jonathan Binas, Laurent Charlin, Chris Pal, Yoshua Bengio

A major drawback of backpropagation through time (BPTT) is the difficulty of learning long-term dependencies, coming from having to propagate credit information backwards through every single step of the forward computation.

Learnable Explicit Density for Continuous Latent Space and Variational Inference

no code implementations • 6 Oct 2017 • Chin-wei Huang, Ahmed Touati, Laurent Dinh, Michal Drozdzal, Mohammad Havaei, Laurent Charlin, Aaron Courville

In this paper, we study two aspects of the variational autoencoder (VAE): the prior distribution over the latent variables and its corresponding posterior.

Density Estimation · Variational Inference
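
For context, one way to make the prior learnable is to parameterize it and train it jointly with the encoder and decoder; the mixture-of-Gaussians prior below is a minimal illustrative choice, not the paper's exact family:

import torch
import torch.nn as nn

class VAE(nn.Module):
    # Minimal VAE with a learnable mixture-of-Gaussians prior over z.
    def __init__(self, d_x=784, d_z=8, n_comp=10):
        super().__init__()
        self.enc = nn.Linear(d_x, 2 * d_z)
        self.dec = nn.Linear(d_z, d_x)
        self.prior_mu = nn.Parameter(torch.randn(n_comp, d_z))
        self.prior_logvar = nn.Parameter(torch.zeros(n_comp, d_z))
        self.prior_logits = nn.Parameter(torch.zeros(n_comp))

    def log_prior(self, z):
        # log p(z) under the mixture via log-sum-exp over components
        comp = torch.distributions.Normal(self.prior_mu, (0.5 * self.prior_logvar).exp())
        logp = comp.log_prob(z.unsqueeze(1)).sum(-1)          # (batch, n_comp)
        return torch.logsumexp(logp + torch.log_softmax(self.prior_logits, 0), -1)

    def elbo(self, x):
        mu, logvar = self.enc(x).chunk(2, -1)
        q = torch.distributions.Normal(mu, (0.5 * logvar).exp())
        z = q.rsample()
        rec = -((self.dec(z) - x) ** 2).sum(-1)               # unit-variance Gaussian likelihood
        return rec + self.log_prior(z) - q.log_prob(z).sum(-1)

vae = VAE()
loss = -vae.elbo(torch.rand(16, 784)).mean()   # minimize negative ELBO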

Training End-to-End Dialogue Systems with the Ubuntu Dialogue Corpus

no code implementations • 1 Jan 2017 • Ryan Lowe, Nissan Pow, Iulian Vlad Serban, Laurent Charlin, Chia-Wei Liu, Joelle Pineau

In this paper, we analyze neural network-based dialogue systems trained in an end-to-end manner using an updated version of the recent Ubuntu Dialogue Corpus, a dataset containing almost 1 million multi-turn dialogues, with a total of over 7 million utterances and 100 million words.

Conversation Disentanglement · Feature Engineering

Generative Deep Neural Networks for Dialogue: A Short Review

no code implementations • 18 Nov 2016 • Iulian Vlad Serban, Ryan Lowe, Laurent Charlin, Joelle Pineau

Researchers have recently started investigating deep neural networks for dialogue applications.

Response Generation

A Hierarchical Latent Variable Encoder-Decoder Model for Generating Dialogues

9 code implementations • 19 May 2016 • Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron Courville, Yoshua Bengio

Sequential data often possesses a hierarchical structure with complex dependencies between subsequences, such as found between the utterances in a dialogue.

Response Generation
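
For context, the hierarchy can be sketched with two recurrences (cell type and sizes illustrative): a token-level RNN summarizes each utterance, and an utterance-level RNN carries context across the dialogue; the model's latent-variable component then conditions generation on this context.

import torch
import torch.nn as nn

class HierarchicalEncoder(nn.Module):
    # Token-level GRU per utterance + dialogue-level GRU over utterance vectors.
    def __init__(self, vocab=1000, d=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, d)
        self.utt_rnn = nn.GRU(d, d, batch_first=True)
        self.ctx_rnn = nn.GRU(d, d, batch_first=True)

    def forward(self, dialogue):                 # (batch, n_utts, n_tokens) of token ids
        b, n, t = dialogue.shape
        _, h = self.utt_rnn(self.emb(dialogue.view(b * n, t)))
        utt_vecs = h.squeeze(0).view(b, n, -1)   # one vector per utterance
        ctx, _ = self.ctx_rnn(utt_vecs)          # context state after each utterance
        return ctx                               # condition a decoder on this to reply

enc = HierarchicalEncoder()
print(enc(torch.randint(0, 1000, (2, 4, 7))).shape)   # torch.Size([2, 4, 64])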

On the Evaluation of Dialogue Systems with Next Utterance Classification

no code implementations • WS 2016 • Ryan Lowe, Iulian V. Serban, Mike Noseworthy, Laurent Charlin, Joelle Pineau

An open challenge in constructing dialogue systems is developing methods for automatically learning dialogue strategies from large amounts of unlabelled data.

Classification · General Classification
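
For context, next-utterance classification asks a model to rank the true response among distractors and reports Recall@k; a minimal sketch of the metric (the convention that column 0 holds the ground truth is assumed here):

import torch

def recall_at_k(scores, k=1):
    # scores: (n_contexts, n_candidates), true response in column 0.
    ranks = scores.argsort(dim=1, descending=True)
    return (ranks[:, :k] == 0).any(dim=1).float().mean().item()

scores = torch.randn(100, 10)    # 1 true response + 9 distractors per context
print(recall_at_k(scores, k=1), recall_at_k(scores, k=5))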

A Survey of Available Corpora for Building Data-Driven Dialogue Systems

4 code implementations • 17 Dec 2015 • Iulian Vlad Serban, Ryan Lowe, Peter Henderson, Laurent Charlin, Joelle Pineau

During the past decade, several areas of speech and language understanding have witnessed substantial breakthroughs from the use of data-driven models.

Transfer Learning

Dynamic Poisson Factorization

no code implementations • 15 Sep 2015 • Laurent Charlin, Rajesh Ranganath, James McInerney, David M. Blei

Models for recommender systems use latent factors to explain the preferences and behaviors of users with respect to a set of items (e.g., movies, books, academic papers).

Recommendation Systems · Variational Inference
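
For context, the static building block can be sketched as a generative process (gamma hyperparameters illustrative); the dynamic variant additionally lets the factors drift over time, as in the last two lines:

import numpy as np

# Poisson factorization: nonnegative user factors theta and item factors
# beta, with interaction counts Poisson around their inner product.
rng = np.random.default_rng(0)
n_users, n_items, k = 50, 40, 5
theta = rng.gamma(shape=0.3, scale=1.0, size=(n_users, k))
beta = rng.gamma(shape=0.3, scale=1.0, size=(n_items, k))
clicks = rng.poisson(theta @ beta.T)

# Dynamic sketch: factors at time t drift around those at t-1 via a gamma
# chain whose mean is the previous value (sharpness s is illustrative).
s = 10.0
theta_t = rng.gamma(shape=s, scale=theta / s)   # E[theta_t] = theta
clicks_t = rng.poisson(theta_t @ beta.T)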

Content-based recommendations with Poisson factorization

3 code implementations • NeurIPS 2014 • Prem K. Gopalan, Laurent Charlin, David Blei

We develop collaborative topic Poisson factorization (CTPF), a generative model of articles and reader preferences.

Recommendation Systems · Variational Inference
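
For context, the CTPF structure can be sketched as a shared-factor generative process (shapes and gamma hyperparameters illustrative): article word counts and reader clicks share document topic intensities, with a per-document offset capturing appeal beyond content.

import numpy as np

rng = np.random.default_rng(0)
n_docs, n_words, n_users, k = 30, 200, 20, 5
theta = rng.gamma(0.3, 1.0, (n_docs, k))       # document topic intensities
topics = rng.gamma(0.3, 1.0, (k, n_words))     # topics over the vocabulary
words = rng.poisson(theta @ topics)            # article content
eta = rng.gamma(0.3, 1.0, (n_users, k))        # reader preferences
eps = rng.gamma(0.3, 1.0, (n_docs, k))         # offsets beyond content
clicks = rng.poisson(eta @ (theta + eps).T)    # reader-by-article interactions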

Deep Exponential Families

no code implementations • 10 Nov 2014 • Rajesh Ranganath, Linpeng Tang, Laurent Charlin, David M. Blei

We describe deep exponential families (DEFs), a class of latent variable models that are inspired by the hidden structures used in deep neural networks.

Variational Inference
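
For context, a two-layer instance of the class can be sketched as below (gamma latent layers with Poisson observations are one member of the family; hyperparameters illustrative): each layer's latents are drawn from an exponential-family distribution whose mean is a function of the layer above.

import numpy as np

rng = np.random.default_rng(0)
n, k2, k1, d = 100, 3, 10, 50
W1 = rng.gamma(0.1, 1.0, (k2, k1))                  # weights between latent layers
W0 = rng.gamma(0.1, 1.0, (k1, d))                   # weights to the observations
z2 = rng.gamma(0.1, 1.0, (n, k2))                   # top-layer latents
z1 = rng.gamma(shape=0.1, scale=(z2 @ W1) / 0.1)    # mean(z1) = z2 @ W1
x = rng.poisson(z1 @ W0)                            # Poisson observations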
