1 code implementation • 26 Mar 2023 • Nader Asadi, MohammadReza Davar, Sudhir Mudur, Rahaf Aljundi, Eugene Belilovsky
To continually adapt the prototypes without keeping any prior task data, we propose a novel distillation loss that constrains class prototypes to maintain their relative similarities with respect to new task data.
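The loss itself is not spelled out in this snippet; as a rough illustration only, the sketch below shows one way such a prototype-to-new-data relation distillation term could look in PyTorch. The function name, the cosine similarities, the temperature `tau`, and the KL matching are assumptions for illustration, not the paper's definitive formulation.

```python
import torch
import torch.nn.functional as F

def relation_distillation_loss(old_prototypes, new_prototypes, new_task_feats, tau=0.1):
    """Hypothetical sketch: keep each class prototype's *relative* similarities
    to the current batch of new-task features close to what they were before
    the update (old_prototypes is a frozen snapshot).

    old_prototypes, new_prototypes: (C, D) class prototype matrices
    new_task_feats:                 (B, D) embeddings of new-task samples
    """
    # Cosine similarity of every prototype to every new-task embedding.
    old_sim = F.normalize(old_prototypes, dim=1) @ F.normalize(new_task_feats, dim=1).T  # (C, B)
    new_sim = F.normalize(new_prototypes, dim=1) @ F.normalize(new_task_feats, dim=1).T  # (C, B)

    # Turn similarities into distributions over the batch and match them with KL.
    target = F.softmax(old_sim / tau, dim=1)        # frozen "relational" target
    log_pred = F.log_softmax(new_sim / tau, dim=1)  # distribution under updated prototypes
    return F.kl_div(log_pred, target, reduction="batchmean")
```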
no code implementations • 24 Mar 2022 • Nader Asadi, Sudhir Mudur, Eugene Belilovsky
Recent work studies the supervised online continual learning setting where a learner receives a stream of data whose class distribution changes over time.
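For context, the setting referred to here can be simulated with a toy stream whose class distribution shifts over time. The sketch below assumes a simple class-incremental protocol (classes arriving in fixed-size groups, each batch seen only once); the helper name and parameters are illustrative, not the benchmark used in the paper.

```python
import torch

def class_incremental_stream(xs, ys, classes_per_task=2, batch_size=10):
    """Hypothetical sketch of a supervised online continual-learning stream:
    classes arrive in groups (tasks), and each batch is seen only once.

    xs: (N, ...) inputs, ys: (N,) integer labels.
    """
    classes = torch.unique(ys).tolist()
    for start in range(0, len(classes), classes_per_task):
        task_classes = classes[start:start + classes_per_task]
        mask = torch.isin(ys, torch.tensor(task_classes))
        task_x, task_y = xs[mask], ys[mask]
        perm = torch.randperm(len(task_y))
        for i in range(0, len(perm), batch_size):
            idx = perm[i:i + batch_size]
            yield task_x[idx], task_y[idx]  # the learner updates online on each batch
```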
no code implementations • CVPR 2022 • MohammadReza Davari, Nader Asadi, Sudhir Mudur, Rahaf Aljundi, Eugene Belilovsky
Continual Learning research typically focuses on tackling the phenomenon of catastrophic forgetting in neural networks.
2 code implementations • ICLR 2022 • Lucas Caccia, Rahaf Aljundi, Nader Asadi, Tinne Tuytelaars, Joelle Pineau, Eugene Belilovsky
In this work, we focus on the change in representations of observed data that arises when previously unobserved classes appear in the incoming data stream, and new classes must be distinguished from previous ones.
3 code implementations • 11 Apr 2021 • Lucas Caccia, Rahaf Aljundi, Nader Asadi, Tinne Tuytelaars, Joelle Pineau, Eugene Belilovsky
In this work, we focus on the change in representations of observed data that arises when previously unobserved classes appear in the incoming data stream, and new classes must be distinguished from previous ones.
no code implementations • 18 Sep 2019 • Nader Asadi, Amir M. Sarfi, Mehrdad Hosseinzadeh, Zahra Karimpour, Mahdi Eftekhari
In this work, we propose a learning framework to improve the shape bias property of self-supervised methods.
Ranked #37 on Domain Generalization on PACS
no code implementations • 1 Jul 2019 • Nader Asadi, AmirMohammad Sarfi, Mehrdad Hosseinzadeh, Sahba Tahsini, Mahdi Eftekhari
Our method can be applied to any layer of any arbitrary model without requiring any modification or additional training.
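As a rough illustration of reading features from an arbitrary layer of an unmodified model with no further training, the sketch below uses a standard PyTorch forward hook; the helper name, the chosen ResNet layer, and the hook-based approach are assumptions for illustration, not necessarily how the paper implements its method.

```python
import torch
import torchvision.models as models

def attach_feature_hook(model, layer_name):
    """Illustrative sketch: capture activations from an arbitrary named layer via a
    forward hook, so the model itself is neither modified nor retrained. The
    post-processing applied to `captured` is where a layer-agnostic method would plug in."""
    captured = {}

    def hook(module, inputs, output):
        captured["features"] = output.detach()

    handle = dict(model.named_modules())[layer_name].register_forward_hook(hook)
    return captured, handle

# Usage with a standard network (in practice a pretrained checkpoint would be loaded).
model = models.resnet18(weights=None).eval()
captured, handle = attach_feature_hook(model, "layer3")  # "layer3" is an example layer name
with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))
print(captured["features"].shape)  # activations from the chosen layer
handle.remove()
```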