1 code implementation • 23 Aug 2023 • Wenxuan Zhang, Paul Janson, Rahaf Aljundi, Mohamed Elhoseiny
Our method improves accuracy on newly learned tasks by up to 7% while preserving pretraining knowledge, with a negligible 0.9% decrease in accuracy on a representative control set.
1 code implementation • ICCV 2023 • Wenxuan Zhang, Paul Janson, Kai Yi, Ivan Skorokhodov, Mohamed Elhoseiny
The GRW loss augments training by continually encouraging the model to generate realistic, class-characteristic samples that represent the unseen space.
1 code implementation • 10 Oct 2022 • Paul Janson, Wenxuan Zhang, Rahaf Aljundi, Mohamed Elhoseiny
With the success of pretraining techniques in representation learning, a number of continual learning methods based on pretrained models have been proposed.
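A common simple baseline in this line of work is to freeze the pretrained backbone and keep only per-class feature means, classifying by nearest class mean; since no weights are updated across tasks, earlier knowledge is not overwritten. A minimal sketch (the `NCMClassifier` name and the synthetic "features" standing in for backbone outputs are illustrative, not the papers' exact method):

```python
import numpy as np

class NCMClassifier:
    """Nearest-class-mean classifier over frozen pretrained features.

    Only running class means are stored, so learning a new task never
    overwrites parameters fitted on earlier tasks.
    """
    def __init__(self):
        self.means = {}   # class id -> running mean feature vector
        self.counts = {}  # class id -> number of samples seen so far

    def update(self, feats, labels):
        # Accumulate running per-class means, one task at a time.
        for f, y in zip(feats, labels):
            n = self.counts.get(y, 0)
            mu = self.means.get(y, np.zeros_like(f))
            self.means[y] = (mu * n + f) / (n + 1)
            self.counts[y] = n + 1

    def predict(self, feats):
        classes = sorted(self.means)
        centers = np.stack([self.means[c] for c in classes])
        # Assign each feature to its nearest class mean (Euclidean).
        dists = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return [classes[i] for i in dists.argmin(axis=1)]

# Toy demo: two tasks arrive sequentially, two classes each.
rng = np.random.default_rng(0)
clf = NCMClassifier()
for task_classes in ([0, 1], [2, 3]):
    feats = np.concatenate([rng.normal(c * 5.0, 0.1, size=(20, 8))
                            for c in task_classes])
    labels = [c for c in task_classes for _ in range(20)]
    clf.update(feats, labels)

# Probe points at each class center; all four classes remain separable.
test = np.stack([np.full(8, c * 5.0) for c in range(4)])
print(clf.predict(test))  # → [0, 1, 2, 3]
```

Because the backbone is never fine-tuned, the method's continual-learning cost is just storing one mean vector per class.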
no code implementations • 24 Dec 2021 • Kai Yi, Paul Janson, Wenxuan Zhang, Mohamed Elhoseiny
Accordingly, we propose a Domain-Invariant Network (DIN) that learns factorized features for shifting domains and an improved textual representation for unseen classes.