Search Results for author: Thomas De Min

Found 1 paper, 1 paper with code

On the Effectiveness of LayerNorm Tuning for Continual Learning in Vision Transformers

1 code implementation · 18 Aug 2023 · Thomas De Min, Massimiliano Mancini, Karteek Alahari, Xavier Alameda-Pineda, Elisa Ricci

State-of-the-art rehearsal-free continual learning methods exploit the peculiarities of Vision Transformers to learn task-specific prompts, drastically reducing catastrophic forgetting.
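The paper's title refers to tuning only the LayerNorm parameters of a frozen Vision Transformer. As a rough illustration of that general idea (not the authors' implementation — the toy model, layer sizes, and module choice below are placeholders), a PyTorch sketch of freezing everything except LayerNorm affine parameters might look like:

```python
import torch.nn as nn

# Toy stand-in for a transformer backbone (hypothetical dimensions).
model = nn.Sequential(
    nn.Linear(16, 16),
    nn.LayerNorm(16),
    nn.Linear(16, 16),
    nn.LayerNorm(16),
)

# Freeze every parameter first...
for p in model.parameters():
    p.requires_grad = False

# ...then re-enable gradients only for LayerNorm weights and biases,
# so the optimizer updates a tiny fraction of the model.
for m in model.modules():
    if isinstance(m, nn.LayerNorm):
        for p in m.parameters():
            p.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(trainable, total)  # 64 trainable out of 608 total parameters
```

Only the parameters with `requires_grad=True` would then be passed to the optimizer, keeping per-task storage small.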

Continual Learning · Transfer Learning
