1 code implementation • 11 Jul 2024 • Alex Gomez-Villa, Dipam Goswami, Kai Wang, Andrew D. Bagdanov, Bartlomiej Twardowski, Joost Van de Weijer
Prototype-based approaches, when continually updated, face the critical issue of semantic drift, whereby old class prototypes drift to different positions in the new feature space.
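The snippet above only names the problem; as a rough illustration (not the method proposed in the paper), the sketch below measures how far a class prototype moves when the feature extractor is updated on new data. All module and variable names are hypothetical.

# Minimal sketch: how an old class prototype drifts once the backbone changes.
import torch
import torch.nn as nn

torch.manual_seed(0)

old_encoder = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 8))
new_encoder = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 8))
# Pretend new_encoder is old_encoder after training on a new task.

old_class_images = torch.randn(100, 32)  # stand-in for old-task features/data

with torch.no_grad():
    proto_old = old_encoder(old_class_images).mean(dim=0)  # prototype in the old space
    proto_new = new_encoder(old_class_images).mean(dim=0)  # same class, new feature space

drift = torch.norm(proto_new - proto_old)
cos = nn.functional.cosine_similarity(proto_old, proto_new, dim=0)
print(f"prototype displacement: {drift.item():.3f}, cosine similarity: {cos.item():.3f}")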
no code implementations • 14 Sep 2023 • Francesco Fabbri, Xianghang Liu, Jack R. McKenzie, Bartlomiej Twardowski, Tri Kurniawan Wijaya
Federated Learning (FL) has emerged as a key approach for distributed machine learning, enhancing online personalization while ensuring user data privacy.
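For readers unfamiliar with the setting, a minimal FedAvg-style sketch of the generic federated loop is given below; it shows why raw user data never leaves the client, and is not the specific algorithm proposed in this paper.

# Generic FedAvg sketch: clients train locally, the server only averages weights.
import copy
import torch
import torch.nn as nn

def local_update(global_model, data, targets, lr=0.1, epochs=1):
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(data), targets)
        loss.backward()
        opt.step()
    return model.state_dict(), len(data)

def fed_avg(states_and_sizes):
    total = sum(n for _, n in states_and_sizes)
    keys = states_and_sizes[0][0].keys()
    return {k: sum(sd[k] * (n / total) for sd, n in states_and_sizes) for k in keys}

global_model = nn.Linear(4, 1)
clients = [(torch.randn(20, 4), torch.randn(20, 1)) for _ in range(3)]  # private per-client data

for round_ in range(5):
    updates = [local_update(global_model, x, y) for x, y in clients]
    global_model.load_state_dict(fed_avg(updates))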
1 code implementation • 12 Sep 2023 • Alex Gomez-Villa, Bartlomiej Twardowski, Kai Wang, Joost Van de Weijer
In the second, adaptation-retrospection phase, we combine this new knowledge with the previous network to avoid forgetting, and we initialize a new expert with the knowledge of the old network.
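A hedged sketch of what an adaptation-retrospection style objective could look like is given below: an adaptation term learns from current data while a feature-distillation term keeps the network close to the frozen previous one. The actual loss terms and initialization details in the paper may differ; all names here are illustrative.

import torch
import torch.nn as nn
import torch.nn.functional as F

def adaptation_retrospection_loss(new_net, old_net, x, ssl_loss_fn, alpha=1.0):
    feats_new = new_net(x)
    with torch.no_grad():
        feats_old = old_net(x)                        # frozen previous network
    adaptation = ssl_loss_fn(feats_new)               # learn from current data
    retrospection = F.mse_loss(feats_new, feats_old)  # stay close to old knowledge
    return adaptation + alpha * retrospection

# Initializing the new expert from the old network, as the abstract describes:
old_net = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 8))
new_net = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 8))
new_net.load_state_dict(old_net.state_dict())

x = torch.randn(8, 32)
loss = adaptation_retrospection_loss(
    new_net, old_net, x, ssl_loss_fn=lambda z: z.pow(2).mean()  # placeholder SSL loss
)
loss.backward()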
1 code implementation • 12 Feb 2023 • Alejandro Ariza-Casabona, Bartlomiej Twardowski, Tri Kurniawan Wijaya
This approach helps to mitigate negative knowledge transfer across multiple domains and improves the overall representation.
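As an illustration of the general idea (not the paper's architecture), one common way to limit negative transfer across domains is to gate between a shared and a domain-specific representation; the sketch below uses hypothetical module names.

import torch
import torch.nn as nn

class GatedMultiDomainEncoder(nn.Module):
    def __init__(self, in_dim, hid_dim, num_domains):
        super().__init__()
        self.shared = nn.Linear(in_dim, hid_dim)
        self.domain_specific = nn.ModuleList(
            [nn.Linear(in_dim, hid_dim) for _ in range(num_domains)]
        )
        self.gate = nn.Linear(in_dim, 1)

    def forward(self, x, domain_id):
        g = torch.sigmoid(self.gate(x))              # per-item mixing weight
        shared = self.shared(x)                      # knowledge shared across domains
        specific = self.domain_specific[domain_id](x)  # domain-only knowledge
        return g * shared + (1 - g) * specific

encoder = GatedMultiDomainEncoder(in_dim=64, hid_dim=32, num_domains=3)
item_features = torch.randn(16, 64)
emb = encoder(item_features, domain_id=1)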
1 code implementation • 30 Dec 2021 • Alex Gomez-Villa, Bartlomiej Twardowski, Lu Yu, Andrew D. Bagdanov, Joost Van de Weijer
Recent self-supervised learning methods are able to learn high-quality image representations and are closing the gap with supervised approaches.
no code implementations • 25 Aug 2021 • Javad Zolfaghari Bengar, Joost Van de Weijer, Bartlomiej Twardowski, Bogdan Raducanu
Our experiments reveal that self-training is remarkably more efficient than active learning at reducing the labeling effort; that, for a low labeling budget, active learning offers no benefit to self-training; and that combining active learning with self-training is fruitful when the labeling budget is high.
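A generic sketch of how pseudo-labeling (self-training) can be combined with an uncertainty-based active-learning query step is shown below; it uses scikit-learn for brevity and is illustrative only, not the exact experimental protocol of the paper.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)            # oracle labels

labeled = set(range(20))                            # small initial labeled pool

for _ in range(5):
    idx = np.array(sorted(labeled))
    unlabeled = np.array(sorted(set(range(len(X))) - labeled))
    clf = LogisticRegression().fit(X[idx], y[idx])
    probs = clf.predict_proba(X[unlabeled])[:, 1]

    # Self-training: adopt confident pseudo-labels and retrain on them.
    mask = (probs > 0.95) | (probs < 0.05)
    confident = unlabeled[mask]
    pseudo = (probs[mask] > 0.5).astype(int)
    clf = LogisticRegression().fit(
        np.vstack([X[idx], X[confident]]),
        np.concatenate([y[idx], pseudo]),
    )

    # Active learning: query true labels for the most uncertain samples.
    uncertain = unlabeled[np.argsort(np.abs(probs - 0.5))[:10]]
    labeled |= set(uncertain.tolist())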
1 code implementation • 28 Oct 2020 • Marc Masana, Xialei Liu, Bartlomiej Twardowski, Mikel Menta, Andrew D. Bagdanov, Joost Van de Weijer
For future learning systems, incremental learning is desirable because it allows for: efficient resource usage by eliminating the need to retrain from scratch at the arrival of new data; reduced memory usage by preventing or limiting the amount of data required to be stored -- also important when privacy limitations are imposed; and learning that more closely resembles human learning.
1 code implementation • CVPR 2020 • Vacit Oguz Yazici, Abel Gonzalez-Garcia, Arnau Ramisa, Bartlomiej Twardowski, Joost Van de Weijer
Recurrent neural networks (RNNs) are popular for many computer vision tasks, including multi-label classification.
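As a generic illustration of the recurrent formulation (not the orderless variant studied in the paper), an LSTM decoder can emit multi-label predictions one label per step, conditioned on an image feature; all names below are hypothetical.

import torch
import torch.nn as nn

class RNNLabelDecoder(nn.Module):
    def __init__(self, feat_dim, num_labels, hid_dim=128):
        super().__init__()
        self.init_h = nn.Linear(feat_dim, hid_dim)
        self.label_emb = nn.Embedding(num_labels + 1, hid_dim)  # extra index for <start>
        self.rnn = nn.LSTMCell(hid_dim, hid_dim)
        self.out = nn.Linear(hid_dim, num_labels)

    def forward(self, image_feat, steps=5):
        h = torch.tanh(self.init_h(image_feat))
        c = torch.zeros_like(h)
        token = torch.full((image_feat.size(0),), self.label_emb.num_embeddings - 1,
                           dtype=torch.long)                     # <start> token
        logits = []
        for _ in range(steps):
            h, c = self.rnn(self.label_emb(token), (h, c))
            step_logits = self.out(h)
            logits.append(step_logits)
            token = step_logits.argmax(dim=1)                    # feed back the prediction
        return torch.stack(logits, dim=1)                        # (batch, steps, num_labels)

decoder = RNNLabelDecoder(feat_dim=256, num_labels=20)
img_feat = torch.randn(4, 256)
label_logits = decoder(img_feat)                                 # predicted label sequence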