Search Results for author: Leo Feng

Found 10 papers, 4 papers with code

Tree Cross Attention

1 code implementation • 29 Sep 2023 • Leo Feng, Frederick Tung, Hossein Hajimirsadeghi, Yoshua Bengio, Mohamed Osama Ahmed

In this work, we propose Tree Cross Attention (TCA), a module based on Cross Attention that retrieves information from only a logarithmic $\mathcal{O}(\log(N))$ number of tokens when performing inference.
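To illustrate the idea (this is a toy sketch, not the authors' implementation), the code below organises tokens into a balanced binary tree and, for each query, walks a single root-to-leaf path, keeping the summary of the sibling not descended into at each step; cross attention is then applied over only the $\mathcal{O}(\log(N))$ collected vectors. The mean-pooling aggregator and dot-product descent rule are simplifying assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def build_tree(tokens):
    """Aggregate tokens bottom-up into a balanced binary tree.
    Each internal node stores the mean of its two children (a simple
    stand-in for a learned aggregator)."""
    levels = [tokens]
    while levels[-1].shape[0] > 1:
        cur = levels[-1]
        if cur.shape[0] % 2:           # pad odd-sized levels
            cur = np.vstack([cur, cur[-1:]])
        levels.append((cur[0::2] + cur[1::2]) / 2)
    return levels                      # levels[0] = leaves, levels[-1] = root

def tree_retrieve(levels, query):
    """Walk root -> leaf, descending into the child most similar to the
    query and keeping the *other* child's summary; this yields
    O(log N) retrieved vectors instead of all N tokens."""
    retrieved, idx = [], 0
    for lvl in range(len(levels) - 2, -1, -1):
        n = levels[lvl].shape[0]
        left = levels[lvl][2 * idx]
        right = levels[lvl][min(2 * idx + 1, n - 1)]
        if query @ left >= query @ right:
            retrieved.append(right)
            idx = 2 * idx
        else:
            retrieved.append(left)
            idx = min(2 * idx + 1, n - 1)
    retrieved.append(levels[0][idx])   # the chosen leaf itself
    return np.stack(retrieved)

def cross_attention(query, keys):
    w = softmax(keys @ query / np.sqrt(keys.shape[1]))
    return w @ keys

rng = np.random.default_rng(0)
tokens = rng.normal(size=(16, 8))      # N = 16 tokens, dimension 8
q = rng.normal(size=8)
levels = build_tree(tokens)
sel = tree_retrieve(levels, q)
out = cross_attention(q, sel)
print(sel.shape)                       # (5, 8): log2(16) + 1 vectors, not 16
```

With N = 16 tokens the query attends over only 5 vectors (one sibling summary per tree level, plus the selected leaf), which is where the logarithmic inference cost comes from.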

Constant Memory Attention Block

no code implementations • 21 Jun 2023 • Leo Feng, Frederick Tung, Hossein Hajimirsadeghi, Yoshua Bengio, Mohamed Osama Ahmed

Modern foundation model architectures rely on attention mechanisms to effectively capture context.

Tasks: Point Processes

Memory Efficient Neural Processes via Constant Memory Attention Block

no code implementations • 23 May 2023 • Leo Feng, Frederick Tung, Hossein Hajimirsadeghi, Yoshua Bengio, Mohamed Osama Ahmed

Neural Processes (NPs) are popular meta-learning methods for efficiently modelling predictive uncertainty.


Latent Bottlenecked Attentive Neural Processes

1 code implementation • 15 Nov 2022 • Leo Feng, Hossein Hajimirsadeghi, Yoshua Bengio, Mohamed Osama Ahmed

We demonstrate that LBANPs can trade-off the computational cost and performance according to the number of latent vectors.

Tasks: Meta-Learning, Multi-Armed Bandits
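The cost/performance trade-off via the number of latent vectors can be illustrated with a minimal Perceiver-style sketch (my own toy example, not the LBANP code): a fixed set of L latent vectors cross-attends to the N context points once, and each query then attends only to those L latents, so per-query cost scales with L rather than N.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(queries, keys, values):
    scores = queries @ keys.T / np.sqrt(keys.shape[1])
    return softmax(scores, axis=-1) @ values

rng = np.random.default_rng(0)
N, L, d = 1000, 8, 16                   # many context points, few latents
context = rng.normal(size=(N, d))
latents = rng.normal(size=(L, d))       # learned parameters in the real model

# Encode: the L latents attend to the full context once, O(N * L).
latents = cross_attend(latents, context, context)

# Query: each target attends only to the L latents, so per-target cost
# is O(L), independent of the context size N.
targets = rng.normal(size=(5, d))
pred = cross_attend(targets, latents, latents)
print(pred.shape)                       # (5, 16)
```

Increasing L buys representational capacity at higher compute, which is the trade-off the abstract describes.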

Designing Biological Sequences via Meta-Reinforcement Learning and Bayesian Optimization

no code implementations • 13 Sep 2022 • Leo Feng, Padideh Nouri, Aneri Muni, Yoshua Bengio, Pierre-Luc Bacon

The problem can be framed as global optimization of an expensive black-box objective, where large batches of queries are allowed but only over a small number of rounds.

Tasks: Bayesian Optimization, Meta-Learning, +3
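The round-limited, large-batch setting can be sketched as the loop below. The acquisition here is a deliberately naive placeholder standing in for the paper's Bayesian-optimization / meta-RL proposal policy; `black_box` and all other names are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(x):
    """Stand-in for an expensive measurement (e.g. a wet-lab fitness assay)."""
    return -np.sum((x - 0.3) ** 2, axis=-1)

def propose_batch(history_x, history_y, batch_size, dim):
    """Toy acquisition: random candidates ranked by distance to the best
    point so far -- a placeholder for a real BO / meta-RL proposer."""
    cands = rng.uniform(size=(256, dim))
    if history_x is None:
        return cands[:batch_size]
    best = history_x[np.argmax(history_y)]
    order = np.argsort(np.linalg.norm(cands - best, axis=1))
    return cands[order[:batch_size]]

dim, batch_size, n_rounds = 4, 32, 3    # large batches, few rounds
X, y = None, None
for _ in range(n_rounds):
    batch = propose_batch(X, y, batch_size, dim)
    scores = black_box(batch)           # one expensive round of evaluation
    X = batch if X is None else np.vstack([X, batch])
    y = scores if y is None else np.concatenate([y, scores])
print(len(y))                           # 96 evaluations across 3 rounds
```

The key constraint the abstract describes is visible in the loop structure: the number of iterations (rounds) is small even though each iteration evaluates many candidates at once.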

Towards Better Selective Classification

1 code implementation • 17 Jun 2022 • Leo Feng, Mohamed Osama Ahmed, Hossein Hajimirsadeghi, Amir Abdi

We tackle the problem of Selective Classification where the objective is to achieve the best performance on a predetermined ratio (coverage) of the dataset.
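A minimal baseline for this setting (a plain confidence threshold, not the method proposed in the paper) picks a cutoff so that exactly the desired coverage fraction of examples is accepted and abstains on the rest:

```python
import numpy as np

def selective_predict(probs, coverage):
    """Keep the most confident `coverage` fraction of examples and
    abstain (label -1) on the rest -- a standard confidence-threshold
    baseline for selective classification."""
    conf = probs.max(axis=1)                  # model confidence per example
    n_keep = int(round(coverage * len(probs)))
    thresh = np.sort(conf)[::-1][n_keep - 1]  # cutoff hitting the coverage
    preds = probs.argmax(axis=1)
    preds[conf < thresh] = -1                 # abstain below the cutoff
    return preds

probs = np.array([[0.9, 0.1],
                  [0.55, 0.45],
                  [0.2, 0.8],
                  [0.51, 0.49]])
print(selective_predict(probs, coverage=0.5))  # [ 0 -1  1 -1]
```

At coverage 0.5, only the two most confident examples receive predictions; performance is then measured on the accepted subset only.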


Continuous-Time Meta-Learning with Forward Mode Differentiation

no code implementations • ICLR 2022 • Tristan Deleu, David Kanaa, Leo Feng, Giancarlo Kerg, Yoshua Bengio, Guillaume Lajoie, Pierre-Luc Bacon

Drawing inspiration from gradient-based meta-learning methods with infinitely small gradient steps, we introduce Continuous-Time Meta-Learning (COMLN), a meta-learning algorithm where adaptation follows the dynamics of a gradient vector field.

Tasks: Few-Shot Image Classification, Meta-Learning
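A toy version of the continuous-time adaptation idea (my own sketch with a forward-Euler integrator, not the paper's forward-mode solver): adaptation follows the gradient flow $\mathrm{d}\theta/\mathrm{d}t = -\nabla L(\theta)$, with the integration horizon playing the role the number of inner-loop gradient steps plays in discrete meta-learning.

```python
import numpy as np

def grad_flow(theta0, grad, t_end, dt=0.01):
    """Euler-integrate d(theta)/dt = -grad(theta) up to time t_end.
    This is the continuous-time analogue of running t_end/dt small
    gradient-descent steps."""
    theta = theta0.copy()
    t = 0.0
    while t < t_end:
        theta -= dt * grad(theta)
        t += dt
    return theta

# Toy quadratic loss L(theta) = 0.5 * ||theta - target||^2, whose
# gradient flow converges exponentially to `target`.
target = np.array([1.0, -2.0])
grad = lambda th: th - target
adapted = grad_flow(np.zeros(2), grad, t_end=3.0)
print(adapted)                          # close to [1.0, -2.0]
```

For this quadratic, the flow has the closed form $\theta(t) = \text{target} + (\theta_0 - \text{target})\,e^{-t}$, so by $t = 3$ the parameters are within a few percent of the minimiser.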

Exploration in Approximate Hyper-State Space for Meta Reinforcement Learning

1 code implementation • 2 Oct 2020 • Luisa Zintgraf, Leo Feng, Cong Lu, Maximilian Igl, Kristian Hartikainen, Katja Hofmann, Shimon Whiteson

To rapidly learn a new task, it is often essential for agents to explore efficiently -- especially when performance matters from the first timestep.

Tasks: Meta-Learning, Meta Reinforcement Learning, +2

VIABLE: Fast Adaptation via Backpropagating Learned Loss

no code implementations • 29 Nov 2019 • Leo Feng, Luisa Zintgraf, Bei Peng, Shimon Whiteson

In few-shot learning, the loss applied at test time is typically the one we are ultimately interested in minimising, such as the mean-squared-error loss for a regression problem.

Tasks: Few-Shot Learning, Regression
