Search Results for author: Nader Asadi

Found 7 papers, 3 papers with code

Prototype-Sample Relation Distillation: Towards Replay-Free Continual Learning

1 code implementation • 26 Mar 2023 • Nader Asadi, MohammadReza Davari, Sudhir Mudur, Rahaf Aljundi, Eugene Belilovsky

To continually adapt the prototypes without keeping any prior task data, we propose a novel distillation loss that constrains class prototypes to maintain relative similarities as compared to new task data.

Continual Learning • Representation Learning
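The snippet above sketches the paper's core idea: class prototypes may adapt as new tasks arrive, but their relative similarities to new-task data should be preserved. A minimal, hypothetical PyTorch rendering of such a relation-distillation term (not the authors' exact loss; the tensor names and temperature are assumptions) could look like:

```python
import torch
import torch.nn.functional as F

def prototype_relation_distillation(protos_old, protos_new, feats, tau=0.1):
    """Hypothetical sketch: keep each new-task sample's similarity
    distribution over class prototypes stable while prototypes adapt.

    protos_old: (C, D) prototypes frozen before the current task
    protos_new: (C, D) prototypes being adapted (trainable)
    feats:      (B, D) features of new-task samples
    tau:        softmax temperature (assumed hyperparameter)
    """
    # Cosine similarities between each sample and every prototype.
    sim_old = F.normalize(feats, dim=1) @ F.normalize(protos_old, dim=1).T
    sim_new = F.normalize(feats, dim=1) @ F.normalize(protos_new, dim=1).T
    # Match the new relation distribution to the old one, so prototypes
    # can move but must preserve their relative similarities to the data.
    p_old = F.softmax(sim_old / tau, dim=1)
    log_p_new = F.log_softmax(sim_new / tau, dim=1)
    return F.kl_div(log_p_new, p_old, reduction="batchmean")
```

Distilling relations rather than raw prototype positions is what makes the constraint replay-free: only current-task features are needed to anchor the old classes.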

Tackling Online One-Class Incremental Learning by Removing Negative Contrasts

no code implementations • 24 Mar 2022 • Nader Asadi, Sudhir Mudur, Eugene Belilovsky

Recent work studies the supervised online continual learning setting where a learner receives a stream of data whose class distribution changes over time.

Class Incremental Learning • Contrastive Learning • +2

New Insights on Reducing Abrupt Representation Change in Online Continual Learning

2 code implementations • ICLR 2022 • Lucas Caccia, Rahaf Aljundi, Nader Asadi, Tinne Tuytelaars, Joelle Pineau, Eugene Belilovsky

In this work, we focus on the change in representations of observed data that arises when previously unobserved classes appear in the incoming data stream, and new classes must be distinguished from previous ones.

Continual Learning

New Insights on Reducing Abrupt Representation Change in Online Continual Learning

3 code implementations • 11 Apr 2021 • Lucas Caccia, Rahaf Aljundi, Nader Asadi, Tinne Tuytelaars, Joelle Pineau, Eugene Belilovsky

In this work, we focus on the change in representations of observed data that arises when previously unobserved classes appear in the incoming data stream, and new classes must be distinguished from previous ones.

Continual Learning • Metric Learning
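The two listings above are the ICLR 2022 and arXiv preprint versions of the same paper. Its snippet frames the problem: when unseen classes enter the stream, gradients from the new classes abruptly displace the representations of old ones. The paper's proposed remedy, an asymmetric cross-entropy over incoming and replayed samples (ER-ACE), can be sketched roughly as follows (variable names and the class-bookkeeping tensor are assumptions, not the authors' code):

```python
import torch
import torch.nn.functional as F

def asymmetric_cross_entropy(logits_in, y_in, logits_replay, y_replay, seen_old):
    """Rough sketch of an asymmetric loss for online continual learning.

    Incoming-stream samples (assumed to belong to new classes) have the
    logits of previously seen classes masked out, so their gradients do
    not push old-class representations around; replayed buffer samples
    use the standard cross-entropy over all classes.
    seen_old: LongTensor of previously seen class ids (assumed).
    """
    # Mask old-class logits for incoming samples only.
    masked = logits_in.clone()
    masked[:, seen_old] = float("-inf")
    loss_in = F.cross_entropy(masked, y_in)
    # Replay samples compete across all classes as usual.
    loss_replay = F.cross_entropy(logits_replay, y_replay)
    return loss_in + loss_replay
```

The asymmetry is the point: only the stream side of the loss is restricted, which limits interference with old classes without weakening the replay signal.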

Diminishing the Effect of Adversarial Perturbations via Refining Feature Representation

no code implementations • 1 Jul 2019 • Nader Asadi, AmirMohammad Sarfi, Mehrdad Hosseinzadeh, Sahba Tahsini, Mahdi Eftekhari

Our method can be applied to any layer of an arbitrary model, without modification or additional training.
