Search Results for author: Thomas De Min

Found 2 papers, 2 papers with code

Less is more: Summarizing Patch Tokens for efficient Multi-Label Class-Incremental Learning

1 code implementation • 24 May 2024 • Thomas De Min, Massimiliano Mancini, Stéphane Lathuilière, Subhankar Roy, Elisa Ricci

Since independent pathways in truly incremental scenarios lead to an explosion of computation, due to the quadratic complexity of the multi-head self-attention (MSA) operation in prompt tuning, we propose reducing the original patch token embeddings to summarized tokens.
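The snippet above describes cutting MSA cost by replacing the full set of patch tokens with a much smaller set of summary tokens. The sketch below is only an illustration of that idea, not the paper's actual mechanism: it assumes a hypothetical `TokenSummarizer` module that pools N patch tokens into M learned summary tokens via cross-attention, so that subsequent self-attention runs on M << N tokens.

```python
import torch
import torch.nn as nn

class TokenSummarizer(nn.Module):
    """Illustrative sketch: compress N patch tokens into M summary tokens
    so downstream multi-head self-attention costs O(M^2) instead of O(N^2)."""
    def __init__(self, embed_dim: int, num_summary_tokens: int = 16, num_heads: int = 8):
        super().__init__()
        # learnable query tokens that attend once over all patch tokens
        self.queries = nn.Parameter(torch.randn(num_summary_tokens, embed_dim) * 0.02)
        self.cross_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

    def forward(self, patch_tokens: torch.Tensor) -> torch.Tensor:
        # patch_tokens: (B, N, D) -> summaries: (B, M, D)
        batch = patch_tokens.size(0)
        q = self.queries.unsqueeze(0).expand(batch, -1, -1)
        summaries, _ = self.cross_attn(q, patch_tokens, patch_tokens)
        return summaries

# usage: 196 ViT patch tokens reduced to 16 summary tokens
x = torch.randn(2, 196, 768)
summarizer = TokenSummarizer(embed_dim=768, num_summary_tokens=16)
print(summarizer(x).shape)  # torch.Size([2, 16, 768])
```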

On the Effectiveness of LayerNorm Tuning for Continual Learning in Vision Transformers

1 code implementation • 18 Aug 2023 • Thomas De Min, Massimiliano Mancini, Karteek Alahari, Xavier Alameda-Pineda, Elisa Ricci

State-of-the-art rehearsal-free continual learning methods exploit the peculiarities of Vision Transformers to learn task-specific prompts, drastically reducing catastrophic forgetting.

Continual Learning • Transfer Learning
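This paper studies tuning only the LayerNorm parameters of an otherwise frozen Vision Transformer for continual learning. The following is a minimal sketch of that general idea under stated assumptions: the `freeze_all_but_layernorm` helper is hypothetical, and a small `nn.TransformerEncoder` stands in for a pre-trained ViT backbone.

```python
import torch
import torch.nn as nn

def freeze_all_but_layernorm(model: nn.Module) -> None:
    """Freeze every parameter except LayerNorm affine weights and biases,
    so that only the normalization layers are updated during tuning."""
    for param in model.parameters():
        param.requires_grad = False
    for module in model.modules():
        if isinstance(module, nn.LayerNorm):
            for param in module.parameters():
                param.requires_grad = True

# toy transformer encoder standing in for a pre-trained ViT backbone
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True),
    num_layers=2,
)
freeze_all_but_layernorm(backbone)

trainable = sum(p.numel() for p in backbone.parameters() if p.requires_grad)
total = sum(p.numel() for p in backbone.parameters())
print(f"trainable params: {trainable} / {total}")
```

Only the LayerNorm weights and biases end up with `requires_grad=True`, which is what makes this form of tuning so parameter-efficient compared with fine-tuning the full backbone.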
