Search Results for author: Guillaume Lajoie

Found 23 papers, 7 papers with code

Continuous-Time Meta-Learning with Forward Mode Differentiation

no code implementations • ICLR 2022 • Tristan Deleu, David Kanaa, Leo Feng, Giancarlo Kerg, Yoshua Bengio, Guillaume Lajoie, Pierre-Luc Bacon

Drawing inspiration from gradient-based meta-learning methods with infinitely small gradient steps, we introduce Continuous-Time Meta-Learning (COMLN), a meta-learning algorithm where adaptation follows the dynamics of a gradient vector field.

Few-Shot Image Classification • Meta-Learning
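
In its simplest form, the adaptation rule described above amounts to integrating the gradient flow dθ/dt = −∇L(θ) with an ODE solver. Below is a minimal sketch on a toy quadratic loss; COMLN additionally meta-learns the integration horizon and uses forward-mode differentiation through the ODE, neither of which is reproduced here.

```python
# Minimal sketch: continuous-time adaptation as a gradient flow,
# d(theta)/dt = -grad L(theta), integrated with an off-the-shelf ODE solver.
# Toy quadratic loss L(theta) = 0.5 * theta^T A theta.
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[3.0, 0.5],
              [0.5, 1.0]])

def vector_field(t, theta):
    return -A @ theta  # negative gradient of the quadratic loss

theta0 = np.array([1.0, -2.0])  # initial (meta-learned) parameters
sol = solve_ivp(vector_field, t_span=(0.0, 5.0), y0=theta0)
print("adapted parameters:", sol.y[:, -1])  # flows toward the minimizer (0, 0)
```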

Clarifying MCMC-based training of modern EBMs: Contrastive Divergence versus Maximum Likelihood

no code implementations • 24 Feb 2022 • Léo Gagnon, Guillaume Lajoie

The Energy-Based Model (EBM) framework is a very general approach to generative modeling that tries to learn and exploit probability distributions defined only through unnormalized scores.

Image Generation
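
Both training regimes compared in the paper build on the same MCMC-based estimator: the negative log-likelihood gradient of an EBM is ∇θE(x_data) − E_{x∼pθ}[∇θE(x)], with model samples drawn by short-run Langevin dynamics. A hedged toy sketch with a one-parameter energy, not the paper's experimental setup:

```python
# Toy EBM gradient: positive phase on data minus negative phase on MCMC
# samples, with short-run Langevin dynamics for the model samples.
# One-parameter energy E_theta(x) = theta * x^2.
import torch

theta = torch.tensor([1.0], requires_grad=True)

def energy(x):
    return theta * x.pow(2)

def langevin(x, steps=50, eps=0.1):
    for _ in range(steps):
        x = x.detach().requires_grad_(True)
        g, = torch.autograd.grad(energy(x).sum(), x)
        x = x - 0.5 * eps ** 2 * g + eps * torch.randn_like(x)
    return x.detach()

x_data = 0.5 * torch.randn(256)        # stand-in "data" samples
x_model = langevin(torch.randn(256))   # negative samples via MCMC
loss = energy(x_data).mean() - energy(x_model).mean()
loss.backward()                        # descends the negative log-likelihood
print("gradient on theta:", theta.grad)
```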

Learning shared neural manifolds from multi-subject FMRI data

no code implementations • 22 Dec 2021 • Jessie Huang, Erica L. Busch, Tom Wallenstein, Michal Gerasimiuk, Andrew Benz, Guillaume Lajoie, Guy Wolf, Nicholas B. Turk-Browne, Smita Krishnaswamy

In order to understand the connection between stimuli of interest and brain activity, and to analyze differences and commonalities between subjects, it becomes important to learn a meaningful embedding of the data that denoises it and reveals its intrinsic structure.

Multi-scale Feature Learning Dynamics: Insights for Double Descent

1 code implementation • 6 Dec 2021 • Mohammad Pezeshki, Amartya Mitra, Yoshua Bengio, Guillaume Lajoie

A key challenge in building theoretical foundations for deep learning is the complex optimization dynamics of neural networks, resulting from the high-dimensional interactions between the large number of network parameters.
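A standard piece of the multi-scale picture: under gradient descent on linear least squares, each eigen-direction of the data covariance is learned at its own speed, with the error in mode i shrinking like (1 − η λ_i)^t. The sketch below illustrates only this textbook effect, not the paper's teacher-student analysis:

```python
# Textbook multi-scale dynamics in linear least squares: mode i of the error
# decays like (1 - lr * lambda_i)^t, so features at different scales are
# learned at very different times. Illustration only, not the paper's model.
import numpy as np

lams = np.array([1.0, 0.01])  # a fast (large-scale) and a slow (small-scale) mode
lr = 0.5
for t in [0, 10, 100, 1000]:
    err = np.abs(1.0 - lr * lams) ** t  # per-mode error relative to init
    print(f"t={t:4d}  fast mode {err[0]:.2e}  slow mode {err[1]:.2e}")
```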

Compositional Attention: Disentangling Search and Retrieval

2 code implementations • ICLR 2022 • Sarthak Mittal, Sharath Chandra Raparthy, Irina Rish, Yoshua Bengio, Guillaume Lajoie

Through our qualitative analysis, we demonstrate that Compositional Attention leads to dynamic specialization based on the type of retrieval needed.
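A schematic, simplified sketch of the search/retrieval decoupling as we read it: S search heads compute attention patterns, R retrieval heads compute values, and a soft selection lets every search combine all R retrievals rather than being hard-wired to one. The static selection logits `sel` are a hypothetical simplification; in the paper the selection is computed dynamically per query.

```python
# Schematic decoupling of search (Q-K attention patterns) from retrieval
# (value projections), with a softmax mixing of retrievals per search.
import torch
import torch.nn.functional as F

B, T, d, S, R = 2, 10, 64, 4, 4
x = torch.randn(B, T, d)
Wq = torch.randn(S, d, d)   # per-search query projections
Wk = torch.randn(S, d, d)   # per-search key projections
Wv = torch.randn(R, d, d)   # per-retrieval value projections
sel = torch.randn(S, R)     # hypothetical static selection logits

att = torch.stack([
    F.softmax((x @ Wq[s]) @ (x @ Wk[s]).transpose(1, 2) / d ** 0.5, dim=-1)
    for s in range(S)])                                  # (S, B, T, T) search patterns
ret = torch.stack([
    torch.stack([att[s] @ (x @ Wv[r]) for r in range(R)])
    for s in range(S)])                                  # (S, R, B, T, d) retrievals
w = F.softmax(sel, dim=-1)[:, :, None, None, None]       # soft search-to-retrieval mix
out = (w * ret).sum(dim=1).mean(dim=0)                   # (B, T, d)
print(out.shape)
```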

Embedding Signals on Knowledge Graphs with Unbalanced Diffusion Earth Mover's Distance

no code implementations • 26 Jul 2021 • Alexander Tong, Guillaume Huguet, Dennis Shung, Amine Natik, Manik Kuchroo, Guillaume Lajoie, Guy Wolf, Smita Krishnaswamy

We propose to compare and organize such datasets of graph signals by using an earth mover's distance (EMD) with a geodesic cost over the underlying graph.

Knowledge Graph Embedding • Knowledge Graphs
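
For intuition, the quantity being approximated can be computed exactly on a small graph: take geodesic (shortest-path) distances as the ground cost and solve the transport linear program. A minimal SciPy sketch; the paper's unbalanced diffusion EMD is a scalable approximation of this, not reproduced here.

```python
# Exact EMD between two signals (normalized to distributions) on a small
# graph, with geodesic ground cost, solved as a linear program.
import numpy as np
from scipy.optimize import linprog
from scipy.sparse.csgraph import shortest_path

adj = np.array([[0, 1, 0, 0],   # toy 4-node path graph 0-1-2-3
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
C = shortest_path(adj)          # geodesic (hop-count) cost matrix

a = np.array([0.7, 0.3, 0.0, 0.0])  # signal 1 as a distribution
b = np.array([0.0, 0.0, 0.4, 0.6])  # signal 2 as a distribution

n = len(a)
# Variables: transport plan P (flattened row-major), P >= 0,
# with row sums a and column sums b; minimize <C, P>.
A_eq = np.zeros((2 * n, n * n))
for i in range(n):
    A_eq[i, i * n:(i + 1) * n] = 1.0  # row-sum constraint for a[i]
    A_eq[n + i, i::n] = 1.0           # column-sum constraint for b[i]
res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([a, b]))
print("EMD:", res.fun)
```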

Efficient and robust multi-task learning in the brain with modular task primitives

no code implementations • 28 May 2021 • Christian David Marton, Guillaume Lajoie, Kanaka Rajan

Using a corpus of nine different tasks, we show that a modular network endowed with task primitives allows for learning multiple tasks well while keeping parameter counts and updates low.

Multi-Task Learning

Exploring the Geometry and Topology of Neural Network Loss Landscapes

no code implementations • 31 Jan 2021 • Stefan Horoi, Jessie Huang, Bastian Rieck, Guillaume Lajoie, Guy Wolf, Smita Krishnaswamy

This suggests that qualitative and quantitative examination of the loss landscape geometry could yield insights about neural network generalization performance during training.

Dimensionality Reduction

Gradient Starvation: A Learning Proclivity in Neural Networks

2 code implementations • NeurIPS 2021 • Mohammad Pezeshki, Sékou-Oumar Kaba, Yoshua Bengio, Aaron Courville, Doina Precup, Guillaume Lajoie

We identify and formalize a fundamental gradient descent phenomenon resulting in a learning proclivity in over-parameterized neural networks.
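The phenomenon is easy to reproduce in a toy setting (this is an illustration, not the paper's experiments): with two equally predictive features at different scales, gradient descent on the logistic loss fits the strong feature first, and the growing margins then starve the gradient on the weak feature.

```python
# Toy gradient starvation: the strong feature drives the margins up,
# shrinking the gradient available to the weak feature before it is learned.
import numpy as np

rng = np.random.default_rng(0)
y = rng.choice([-1.0, 1.0], size=500)
X = np.stack([5.0 * y, 1.0 * y], axis=1) + rng.normal(scale=0.5, size=(500, 2))

w = np.zeros(2)
lr = 0.1
for _ in range(2000):
    margins = y * (X @ w)
    coeff = y / (1.0 + np.exp(margins))        # logistic loss gradient factor
    w -= lr * (-(X * coeff[:, None]).mean(axis=0))
print("weights (strong, weak):", w)            # the strong feature dominates
```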

LEAD: Least-Action Dynamics for Min-Max Optimization

1 code implementation • 26 Oct 2020 • Reyhane Askari Hemmat, Amartya Mitra, Guillaume Lajoie, Ioannis Mitliagkas

Adversarial formulations such as generative adversarial networks (GANs) have rekindled interest in two-player min-max games.

Image Generation
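
A classic illustration of why such games need dedicated optimizers (this shows the failure mode that motivates methods like LEAD, not the LEAD algorithm itself): on the bilinear game min_x max_y xy, simultaneous gradient descent-ascent spirals away from the equilibrium at (0, 0).

```python
# Simultaneous gradient descent-ascent on f(x, y) = x * y rotates and
# expands, growing by a factor sqrt(1 + lr^2) per step instead of converging.
x, y, lr = 1.0, 1.0, 0.1
for _ in range(100):
    gx, gy = y, x                       # grad_x(x*y) = y, grad_y(x*y) = x
    x, y = x - lr * gx, y + lr * gy     # descent in x, ascent in y
print(x, y)                             # magnitudes have grown; no convergence
```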

Learning to Combine Top-Down and Bottom-Up Signals in Recurrent Neural Networks with Attention over Modules

1 code implementation • ICML 2020 • Sarthak Mittal, Alex Lamb, Anirudh Goyal, Vikram Voleti, Murray Shanahan, Guillaume Lajoie, Michael Mozer, Yoshua Bengio

To effectively utilize the wealth of potential top-down information available, and to prevent the cacophony of intermixed signals in a bidirectional architecture, mechanisms are needed to restrict information flow.

Language Modelling • Sequential Image Classification • +1

On Lyapunov Exponents for RNNs: Understanding Information Propagation Using Dynamical Systems Tools

no code implementations • 25 Jun 2020 • Ryan Vogt, Maximilian Puelma Touzel, Eli Shlizerman, Guillaume Lajoie

Recurrent neural networks (RNNs) have been successfully applied to a variety of problems involving sequential data, but their optimization is sensitive to parameter initialization, architecture, and optimizer hyperparameters.
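The standard dynamical-systems tool named in the title can be sketched directly: propagate a set of tangent vectors through the RNN's Jacobians along a trajectory, re-orthonormalizing with QR and accumulating the log stretch factors. A toy estimate for a vanilla tanh RNN with random weights and no input, not the paper's setup:

```python
# QR-based estimate of the Lyapunov spectrum for h_{t+1} = tanh(W h_t).
import numpy as np

rng = np.random.default_rng(0)
n = 32
W = rng.normal(scale=1.2 / np.sqrt(n), size=(n, n))

h = rng.normal(size=n)
Q = np.eye(n)
log_r = np.zeros(n)
T = 2000
for _ in range(T):
    h = np.tanh(W @ h)
    J = (1.0 - h ** 2)[:, None] * W      # Jacobian of the update at this step
    Q, R = np.linalg.qr(J @ Q)           # re-orthonormalize tangent vectors
    log_r += np.log(np.abs(np.diag(R)))  # accumulate local expansion rates

lyap = np.sort(log_r / T)[::-1]          # Lyapunov exponents, descending
print("largest exponent:", lyap[0])      # > 0 indicates chaotic dynamics
```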

Advantages of biologically-inspired adaptive neural activation in RNNs during learning

no code implementations • 22 Jun 2020 • Victor Geadah, Giancarlo Kerg, Stefan Horoi, Guy Wolf, Guillaume Lajoie

Dynamic adaptation in single-neuron response plays a fundamental role in neural coding in biological neural networks.

Transfer Learning
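
A generic way to give each recurrent unit an adaptable response is sketched below, with a per-neuron trainable gain and saturation shaping a tanh. This is an illustrative stand-in, not the paper's specific parametric activation family.

```python
# Adaptive per-neuron activation f_i(z) = s_i * tanh(g_i * z / s_i),
# with gain g and saturation s learned jointly with the weights.
import torch

class AdaptiveTanhRNNCell(torch.nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.lin = torch.nn.Linear(input_size + hidden_size, hidden_size)
        self.gain = torch.nn.Parameter(torch.ones(hidden_size))  # per-neuron slope
        self.sat = torch.nn.Parameter(torch.ones(hidden_size))   # per-neuron saturation

    def forward(self, x, h):
        z = self.lin(torch.cat([x, h], dim=-1))
        s = torch.nn.functional.softplus(self.sat)  # keep saturation positive
        return s * torch.tanh(self.gain * z / s)

cell = AdaptiveTanhRNNCell(input_size=8, hidden_size=16)
h = cell(torch.randn(1, 8), torch.zeros(1, 16))
print(h.shape)  # torch.Size([1, 16])
```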

Untangling tradeoffs between recurrence and self-attention in neural networks

no code implementations • 16 Jun 2020 • Giancarlo Kerg, Bhargav Kanuparthi, Anirudh Goyal, Kyle Goyette, Yoshua Bengio, Guillaume Lajoie

Attention and self-attention mechanisms are now central to state-of-the-art deep learning on sequential tasks.

Internal representation dynamics and geometry in recurrent neural networks

no code implementations • 9 Jan 2020 • Stefan Horoi, Guillaume Lajoie, Guy Wolf

The efficiency of recurrent neural networks (RNNs) in dealing with sequential data has long been established.

Modelling Working Memory using Deep Recurrent Reinforcement Learning

no code implementations • NeurIPS Workshop Neuro_AI 2019 • Pravish Sainath, Pierre Bellec, Guillaume Lajoie

We train these neural networks to solve the working memory task using sequences of images in supervised and reinforcement learning settings.

Decision Making • reinforcement-learning • +1

Non-normal Recurrent Neural Network (nnRNN): learning long time dependencies while improving expressivity with transient dynamics

1 code implementation • NeurIPS 2019 • Giancarlo Kerg, Kyle Goyette, Maximilian Puelma Touzel, Gauthier Gidel, Eugene Vorontsov, Yoshua Bengio, Guillaume Lajoie

A recent strategy to circumvent the exploding and vanishing gradient problem in RNNs, and to allow the stable propagation of signals over long time scales, is to constrain recurrent connectivity matrices to be orthogonal or unitary.
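One common way to impose the orthogonality constraint described above is to parameterize the recurrent matrix as the matrix exponential of a skew-symmetric matrix, exp(A − Aᵀ), which is orthogonal by construction. A minimal PyTorch sketch; the nnRNN itself relaxes this constraint by adding a trainable non-normal part, which is not shown here.

```python
# Orthogonal recurrent matrix via the exponential map of a skew-symmetric
# parameter; gradients flow through torch.matrix_exp.
import torch

class OrthogonalRNNCell(torch.nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.A = torch.nn.Parameter(0.01 * torch.randn(hidden_size, hidden_size))
        self.inp = torch.nn.Linear(input_size, hidden_size)

    def recurrent_matrix(self):
        return torch.matrix_exp(self.A - self.A.T)  # orthogonal by construction

    def forward(self, x, h):
        return torch.tanh(h @ self.recurrent_matrix() + self.inp(x))

cell = OrthogonalRNNCell(input_size=8, hidden_size=16)
h = cell(torch.randn(1, 8), torch.zeros(1, 16))
W = cell.recurrent_matrix()
print(torch.allclose(W @ W.T, torch.eye(16), atol=1e-5))  # True
```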

An Investigation of Memory in Recurrent Neural Networks

no code implementations • 17 May 2019 • Aude Forcione-Lambert, Guy Wolf, Guillaume Lajoie

We investigate the learned dynamical landscape of a recurrent neural network solving a simple task requiring the interaction of two memory mechanisms: long- and short-term.
