no code implementations • 23 Nov 2022 • Pietro Mazzaglia, Tim Verbelen, Bart Dhoedt, Alexandre Lacoste, Sai Rajeswar
Unsupervised skill learning aims to learn a rich repertoire of behaviors without external supervision, providing artificial agents with the ability to control and influence the environment.
no code implementations • 4 Nov 2022 • Nasim Rahaman, Martin Weiss, Frederik Träuble, Francesco Locatello, Alexandre Lacoste, Yoshua Bengio, Chris Pal, Li Erran Li, Bernhard Schölkopf
Geospatial Information Systems are used by researchers and Humanitarian Assistance and Disaster Response (HADR) practitioners to support a wide variety of important applications.
no code implementations • 24 Sep 2022 • Sai Rajeswar, Pietro Mazzaglia, Tim Verbelen, Alexandre Piché, Bart Dhoedt, Aaron Courville, Alexandre Lacoste
Controlling artificial agents from visual sensory data is an arduous task.
no code implementations • 1 Dec 2021 • Alexandre Lacoste, Evan David Sherwin, Hannah Kerner, Hamed Alemohammad, Björn Lütjens, Jeremy Irvin, David Dao, Alex Chang, Mehmet Gunturkun, Alexandre Drouin, Pau Rodriguez, David Vazquez
Recent progress in self-supervision shows that pre-training large neural networks on vast amounts of unsupervised data can lead to impressive increases in generalisation for downstream tasks.
1 code implementation • 22 Jul 2021 • Philippe Brouillard, Perouz Taslakian, Alexandre Lacoste, Sebastien Lachapelle, Alexandre Drouin
Causal discovery from observational data is a challenging task that can only be solved up to a set of equivalent solutions, called an equivalence class.
1 code implementation • 21 Jul 2021 • Sébastien Lachapelle, Pau Rodríguez López, Yash Sharma, Katie Everett, Rémi Le Priol, Alexandre Lacoste, Simon Lacoste-Julien
This work introduces a novel principle we call disentanglement via mechanism sparsity regularization, which can be applied when the latent factors of interest depend sparsely on past latent factors and/or observed auxiliary variables.
1 code implementation • 14 Jun 2021 • Yashas Annadani, Jonas Rothfuss, Alexandre Lacoste, Nino Scherrer, Anirudh Goyal, Yoshua Bengio, Stefan Bauer
However, acting intelligently upon causal structure inferred from finite data demands reasoning about the uncertainty of that inference.
3 code implementations • 14 Apr 2021 • Frédéric Branchaud-Charron, Parmida Atighehchian, Pau Rodríguez, Grace Abuhamad, Alexandre Lacoste
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
1 code implementation • ICCV 2021 • Oscar Mañas, Alexandre Lacoste, Xavier Giro-i-Nieto, David Vazquez, Pau Rodriguez
Transfer learning approaches can reduce the data requirements of deep learning algorithms.
Ranked #3 on Change Detection on OSCD - 13ch (using extra training data)
2 code implementations • ICCV 2021 • Pau Rodriguez, Massimo Caccia, Alexandre Lacoste, Lee Zamparo, Issam Laradji, Laurent Charlin, David Vazquez
Explainability for machine learning models has gained considerable attention in the research community, given the importance of deploying more reliable machine learning systems.
no code implementations • 1 Jan 2021 • Pau Rodriguez, Massimo Caccia, Alexandre Lacoste, Lee Zamparo, Issam H. Laradji, Laurent Charlin, David Vazquez
In computer vision applications, most methods explain models by displaying the regions of the input image that they focus on for their prediction, but it is difficult to improve models based on these explanations since they do not indicate why the model fails.
no code implementations • NeurIPS 2020 • Massimo Caccia, Pau Rodriguez, Oleksiy Ostapenko, Fabrice Normandin, Min Lin, Lucas Page-Caccia, Issam Hadj Laradji, Irina Rish, Alexandre Lacoste, David Vázquez, Laurent Charlin
The main challenge is that the agent must not forget previous tasks and also adapt to novel tasks in the stream.
1 code implementation • 14 Nov 2020 • Issam Laradji, Pau Rodriguez, Freddie Kalaitzis, David Vazquez, Ross Young, Ed Davey, Alexandre Lacoste
Cattle farming is responsible for 8.8% of greenhouse gas emissions worldwide.
3 code implementations • NeurIPS 2020 • Alexandre Lacoste, Pau Rodríguez, Frédéric Branchaud-Charron, Parmida Atighehchian, Massimo Caccia, Issam Laradji, Alexandre Drouin, Matt Craddock, Laurent Charlin, David Vázquez
Progress in the field of machine learning has been fueled by the introduction of benchmark datasets pushing the limits of existing algorithms.
1 code implementation • NeurIPS 2020 • Philippe Brouillard, Sébastien Lachapelle, Alexandre Lacoste, Simon Lacoste-Julien, Alexandre Drouin
This work constitutes a new step in this direction by proposing a theoretically-grounded method based on neural networks that can leverage interventional data.
2 code implementations • 17 Jun 2020 • Parmida Atighehchian, Frédéric Branchaud-Charron, Alexandre Lacoste
Active learning can reduce labelling effort by using a machine learning model to query the user for specific, informative inputs.
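As a minimal sketch of the querying idea (the function name, probabilities, and acquisition rule below are illustrative assumptions, not this library's API), an uncertainty-based acquisition scores each unlabelled input by the entropy of the model's predictive distribution and queries the highest-scoring ones:

```python
import numpy as np

def entropy_query(probs, k):
    """Select the k unlabelled inputs whose predictive distribution has the
    highest entropy -- a simple uncertainty-based acquisition function."""
    ent = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return np.argsort(-ent)[:k]

# Three pool samples with class probabilities; the middle one is the most
# uncertain, so it is queried first.
pool_probs = np.array([[0.9, 0.1], [0.5, 0.5], [0.8, 0.2]])
print(entropy_query(pool_probs, 1))  # [1]
```

In practice the scores come from a trained model's softmax outputs; richer heuristics (e.g. Bayesian ones) replace plain entropy with measures that separate model uncertainty from data noise.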
1 code implementation • NeurIPS 2020 • Massimo Caccia, Pau Rodriguez, Oleksiy Ostapenko, Fabrice Normandin, Min Lin, Lucas Caccia, Issam Laradji, Irina Rish, Alexandre Lacoste, David Vazquez, Laurent Charlin
We propose Continual-MAML, an online extension of the popular MAML algorithm as a strong baseline for this scenario.
1 code implementation • ECCV 2020 • Pau Rodríguez, Issam Laradji, Alexandre Drouin, Alexandre Lacoste
Furthermore, we show that embedding propagation consistently improves the accuracy of the models in multiple semi-supervised learning scenarios by up to 16 percentage points.
1 code implementation • 21 Oct 2019 • Alexandre Lacoste, Alexandra Luccioni, Victor Schmidt, Thomas Dandres
From an environmental standpoint, there are a few crucial aspects of training a neural network that have a major impact on the quantity of carbon that it emits.
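A rough sketch of the kind of estimate involved (the function name and numbers below are illustrative, not values from the paper): emissions scale with the energy consumed by the hardware multiplied by the carbon intensity of the local electricity grid.

```python
def co2_emissions_kg(gpu_power_w, hours, carbon_intensity_g_per_kwh):
    """CO2-equivalent emissions: energy used (kWh) times grid carbon
    intensity (g CO2eq per kWh), converted to kilograms."""
    energy_kwh = gpu_power_w / 1000.0 * hours
    return energy_kwh * carbon_intensity_g_per_kwh / 1000.0

# e.g. one 250 W GPU running for 100 hours on a 400 gCO2/kWh grid:
print(co2_emissions_kg(250, 100, 400))  # 10.0 (kg CO2eq)
```

The same calculation makes the paper's point concrete: hardware choice, training duration, and (especially) the region's energy mix each enter multiplicatively.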
no code implementations • 25 Sep 2019 • Prudencio Tossou, Basile Dura, Daniel Cohen, Mario Marchand, François Laviolette, Alexandre Lacoste
Due to the significant costs of data generation, many prediction tasks within drug discovery are by nature few-shot regression (FSR) problems, including accurate modelling of biological assays.
no code implementations • 10 Jun 2019 • Chin-wei Huang, Ahmed Touati, Pascal Vincent, Gintare Karolina Dziugaite, Alexandre Lacoste, Aaron Courville
Recent advances in variational inference enable the modelling of highly structured joint distributions, but are limited in their capacity to scale to the high-dimensional setting of stochastic neural networks.
3 code implementations • 10 Jun 2019 • David Rolnick, Priya L. Donti, Lynn H. Kaack, Kelly Kochanski, Alexandre Lacoste, Kris Sankaran, Andrew Slavin Ross, Nikola Milojevic-Dupont, Natasha Jaques, Anna Waldman-Brown, Alexandra Luccioni, Tegan Maharaj, Evan D. Sherwin, S. Karthik Mukkavilli, Konrad P. Kording, Carla Gomes, Andrew Y. Ng, Demis Hassabis, John C. Platt, Felix Creutzig, Jennifer Chayes, Yoshua Bengio
Climate change is one of the greatest challenges facing humanity, and we, as machine learning experts, may wonder how we can help.
no code implementations • 28 May 2019 • Prudencio Tossou, Basile Dura, Francois Laviolette, Mario Marchand, Alexandre Lacoste
Deep kernel learning provides an elegant and principled framework for combining the structural properties of deep learning algorithms with the flexibility of kernel methods.
1 code implementation • 13 May 2019 • Chin-wei Huang, Kris Sankaran, Eeshan Dhekane, Alexandre Lacoste, Aaron Courville
We believe a joint proposal has the potential of reducing the number of redundant samples, and introduce a hierarchical structure to induce correlation.
no code implementations • 27 Sep 2018 • Chin-wei Huang, Faruk Ahmed, Kundan Kumar, Alexandre Lacoste, Aaron Courville
Probability distillation has recently been of interest to deep learning practitioners as it presents a practical solution for sampling from autoregressive models for deployment in real-time applications.
1 code implementation • NeurIPS 2018 • Chin-wei Huang, Shawn Tan, Alexandre Lacoste, Aaron Courville
Despite the advances in the representational capacity of approximate distributions for variational inference, the optimization process can still limit the density that is ultimately learned.
no code implementations • ICLR 2019 • Alexandre Lacoste, Boris Oreshkin, Wonchang Chung, Thomas Boquet, Negar Rostamzadeh, David Krueger
The result is a rich and meaningful prior capable of few-shot learning on new tasks.
3 code implementations • NeurIPS 2018 • Boris N. Oreshkin, Pau Rodriguez, Alexandre Lacoste
We further propose a simple and effective way of conditioning a learner on the task sample set, resulting in learning a task-dependent metric space.
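A minimal sketch of classification in a metric space via class prototypes (a deliberate simplification: the learned embedding network and the task-conditioning mechanism are omitted, and all names below are hypothetical):

```python
import numpy as np

def prototype_predict(support_x, support_y, query_x):
    """Nearest-class-prototype classification: each class is represented by
    the mean of its support embeddings, and queries take the label of the
    closest prototype."""
    classes = np.unique(support_y)
    protos = np.stack([support_x[support_y == c].mean(axis=0) for c in classes])
    # squared Euclidean distance from every query to every prototype
    dists = ((query_x[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return classes[dists.argmin(axis=1)]

support_x = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
support_y = np.array([0, 0, 1, 1])
queries = np.array([[0.0, 0.5], [5.0, 5.5]])
print(prototype_predict(support_x, support_y, queries))  # [0 1]
```

Conditioning the embedding function on the support set, as the snippet above describes, makes the metric itself task-dependent rather than fixed.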
5 code implementations • ICML 2018 • Chin-wei Huang, David Krueger, Alexandre Lacoste, Aaron Courville
Normalizing flows and autoregressive models have been successfully combined to produce state-of-the-art results in density estimation, via Masked Autoregressive Flows (MAF), and to accelerate state-of-the-art WaveNet-based speech synthesis to 20x faster than real-time, via Inverse Autoregressive Flows (IAF).
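A toy sketch of the affine autoregressive transform underlying flows like IAF, with hypothetical conditioner functions `shift_fn` and `log_scale_fn` standing in for learned networks (the loop is for clarity; in practice all outputs are computed in one parallel pass):

```python
import numpy as np

def iaf_step(z, shift_fn, log_scale_fn):
    """One affine inverse-autoregressive step: x_i depends on z_i and the
    preceding noise z_<i, which is what makes sampling a single fast pass."""
    x = np.empty_like(z)
    log_det = 0.0
    for i in range(len(z)):
        mu = shift_fn(z[:i])
        log_s = log_scale_fn(z[:i])
        x[i] = z[i] * np.exp(log_s) + mu
        log_det += log_s  # log|det Jacobian| is the sum of the log-scales
    return x, log_det

# Hypothetical conditioners: shift by the sum of previous noise, unit scale.
z = np.array([1.0, 2.0, 3.0])
x, ld = iaf_step(z, lambda prev: prev.sum(), lambda prev: 0.0)
print(x)  # [1. 3. 6.]
```

The triangular dependence keeps the Jacobian determinant cheap to evaluate, which is the property both MAF and IAF exploit; the two differ in which direction of the transform is parallel.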
no code implementations • 13 Dec 2017 • Alexandre Lacoste, Thomas Boquet, Negar Rostamzadeh, Boris Oreshkin, Wonchang Chung, David Krueger
The recent literature on deep learning offers new tools to learn a rich probability distribution over high dimensional data such as images or sounds.
no code implementations • ICLR 2018 • David Krueger, Chin-wei Huang, Riashat Islam, Ryan Turner, Alexandre Lacoste, Aaron Courville
We study Bayesian hypernetworks: a framework for approximate Bayesian inference in neural networks.
no code implementations • 6 Nov 2016 • Eunsol Choi, Daniel Hewlett, Alexandre Lacoste, Illia Polosukhin, Jakob Uszkoreit, Jonathan Berant
We present a framework for question answering that can efficiently scale to longer documents while maintaining or even improving performance of state-of-the-art models.
2 code implementations • ACL 2016 • Daniel Hewlett, Alexandre Lacoste, Llion Jones, Illia Polosukhin, Andrew Fandrianto, Jay Han, Matthew Kelcey, David Berthelot
The task contains a rich variety of challenging classification and extraction sub-tasks, making it well-suited for end-to-end models such as deep neural networks (DNNs).
no code implementations • NeurIPS 2016 • Pascal Germain, Francis Bach, Alexandre Lacoste, Simon Lacoste-Julien
That is, for the negative log-likelihood loss function, we show that the minimization of PAC-Bayesian generalization risk bounds maximizes the Bayesian marginal likelihood.
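In hedged notation (symbols assumed for illustration, not copied from the paper), the correspondence can be stated via the standard variational identity for the PAC-Bayes objective:

```latex
\min_{\rho}\;\Big\{\,\mathbb{E}_{f\sim\rho}\big[\hat{L}(f)\big]
  + \tfrac{1}{n}\,\mathrm{KL}(\rho\,\|\,\pi)\Big\}
  \;=\; -\tfrac{1}{n}\ln \mathbb{E}_{f\sim\pi}\!\big[e^{-n\hat{L}(f)}\big].
```

With the negative log-likelihood loss, $\hat{L}(f) = -\tfrac{1}{n}\sum_{i}\ln p(x_i \mid f)$, the right-hand side becomes $-\tfrac{1}{n}\ln p(X)$, the negative log marginal likelihood, and the minimizing $\rho^{*}(f) \propto \pi(f)\,p(X \mid f)$ is exactly the Bayesian posterior, so minimizing the bound maximizes the marginal likelihood.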
no code implementations • 4 Feb 2014 • Alexandre Lacoste, Hugo Larochelle, François Laviolette, Mario Marchand
One of the most tedious tasks in the application of machine learning is model selection, i.e., hyperparameter selection.