no code implementations • 25 Nov 2024 • Xide Xu, Muhammad Atif Butt, Sandesh Kamath, Bogdan Raducanu
The growing demand for customized visual content has led to the rise of personalized text-to-image (T2I) diffusion models.
1 code implementation • 29 Oct 2024 • Kai Wang, Fei Yang, Bogdan Raducanu, Joost Van de Weijer
However, in many realistic scenarios, we only have access to a few samples and knowledge of the class names (e.g., when considering instances of classes).
no code implementations • 18 Oct 2024 • Héctor Laria, Alex Gomez-Villa, Imad Eddine Marouf, Kai Wang, Bogdan Raducanu, Joost Van de Weijer
Our research presents the first comprehensive investigation into open-world forgetting in diffusion models, focusing on semantic and appearance drift of representations.
no code implementations • 7 Jun 2024 • Sandesh Kamath, Albin Soutif-Cormerais, Joost Van de Weijer, Bogdan Raducanu
In this paper, we show that the stability gap also occurs when applying joint incremental training of homogeneous tasks.
1 code implementation • 6 Sep 2023 • Eduardo Aguilar, Bogdan Raducanu, Petia Radeva, Joost Van de Weijer
Uncertainty-based deep learning models have attracted a great deal of interest for their ability to provide accurate and reliable predictions.
1 code implementation • 24 Mar 2022 • Francesco Pelosin, Saurav Jha, Andrea Torsello, Bogdan Raducanu, Joost Van de Weijer
In this paper, we investigate the continual learning of Vision Transformers (ViT) for the challenging exemplar-free scenario, with special focus on how to efficiently distill the knowledge of its crucial self-attention mechanism (SAM).
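A minimal sketch of what distilling a self-attention mechanism can look like: compute the teacher's and student's attention maps and penalize their discrepancy. This is an illustrative simplification (single head, numpy, MSE target), not the paper's actual method; all function names here are assumptions.

```python
import numpy as np

def attention_map(q, k):
    """Single-head attention map: softmax(Q K^T / sqrt(d))."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(scores)
    return e / e.sum(axis=-1, keepdims=True)

def attention_distill_loss(teacher_qk, student_qk):
    """MSE between teacher and student attention maps, a common
    distillation target when no stored exemplars are available."""
    t = attention_map(*teacher_qk)
    s = attention_map(*student_qk)
    return float(np.mean((t - s) ** 2))
```

In an exemplar-free setting, a loss of this form can be added to the new-task objective so the student's attention stays close to the frozen teacher's without replaying old data.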
1 code implementation • 4 Dec 2021 • Héctor Laria, Yaxing Wang, Joost Van de Weijer, Bogdan Raducanu
GANs have matured in recent years and are able to generate high-resolution, realistic images.
1 code implementation • 9 Oct 2021 • Javad Zolfaghari Bengar, Joost Van de Weijer, Laura Lopez Fuentes, Bogdan Raducanu
Results on three datasets show that the method is general (it can be combined with most existing active learning algorithms) and can effectively boost the performance of both informativeness-based and representativeness-based active learning methods.
no code implementations • 25 Aug 2021 • Javad Zolfaghari Bengar, Joost Van de Weijer, Bartlomiej Twardowski, Bogdan Raducanu
Our experiments reveal that self-training is remarkably more efficient than active learning at reducing the labeling effort, that for a low labeling budget, active learning offers no benefit to self-training, and finally that the combination of active learning and self-training is fruitful when the labeling budget is high.
no code implementations • 30 Jul 2021 • Javad Zolfaghari Bengar, Bogdan Raducanu, Joost Van de Weijer
Many methods approach this problem by measuring the informativeness of samples, based on the certainty of the network predictions.
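A common concrete instance of certainty-based informativeness is entropy sampling: label the samples whose predictive distribution has the highest entropy. The sketch below is a generic illustration of that idea, not this paper's specific method; the function names are assumptions.

```python
import numpy as np

def predictive_entropy(probs):
    """Shannon entropy of each row of class probabilities."""
    p = np.clip(probs, 1e-12, 1.0)  # avoid log(0)
    return -np.sum(p * np.log(p), axis=-1)

def select_for_labeling(probs, budget):
    """Indices of the `budget` most uncertain (highest-entropy) samples."""
    return np.argsort(-predictive_entropy(probs))[:budget]
```

With `probs = [[0.95, 0.05], [0.5, 0.5], [0.8, 0.2]]` and `budget=2`, the near-uniform second row is selected first, then the third.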
no code implementations • 20 Jul 2021 • Carola Figueroa-Flores, David Berga, Joost Van de Weijer, Bogdan Raducanu
Saliency is the perceptual capacity of our visual system to focus our attention (i.e., gaze) on relevant objects.
no code implementations • ICCV 2021 • Yaxing Wang, Hector Laria Mantecon, Joost Van de Weijer, Laura Lopez-Fuentes, Bogdan Raducanu
In this paper, we propose a new transfer learning method for I2I translation (TransferI2I).
no code implementations • 31 Jul 2020 • Minghan Li, Xialei Liu, Joost Van de Weijer, Bogdan Raducanu
Active learning emerged as an alternative to alleviate the effort of labeling huge amounts of data for data-hungry applications (such as image/video indexing and retrieval, autonomous driving, etc.).
no code implementations • 24 Jul 2020 • Carola Figueroa-Flores, Bogdan Raducanu, David Berga, Joost Van de Weijer
Most saliency methods are evaluated on their ability to generate saliency maps, and not on their usefulness within a complete vision pipeline, such as image classification.
1 code implementation • 20 Apr 2020 • Xialei Liu, Chenshen Wu, Mikel Menta, Luis Herranz, Bogdan Raducanu, Andrew D. Bagdanov, Shangling Jui, Joost Van de Weijer
To prevent forgetting, we combine generative feature replay in the classifier with feature distillation in the feature extractor.
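The combination described here can be sketched as a two-term objective: a classification loss computed on (possibly generatively replayed) features, plus an L2 feature-distillation term that keeps the new feature extractor close to the old one. This is a hedged simplification of the idea, not the paper's implementation; the names and the `lam` weighting are assumptions.

```python
import numpy as np

def softmax_ce(logits, label):
    """Cross-entropy of a single logit vector against an integer label."""
    z = logits - logits.max()  # numerical stability
    logp = z - np.log(np.exp(z).sum())
    return -logp[label]

def feature_distill(old_feats, new_feats):
    """L2 distillation between old and new feature-extractor outputs."""
    return float(np.mean((old_feats - new_feats) ** 2))

def continual_loss(logits, label, old_feats, new_feats, lam=1.0):
    """Classifier loss on real or replayed features, plus feature
    distillation in the feature extractor to prevent forgetting."""
    return softmax_ce(logits, label) + lam * feature_distill(old_feats, new_feats)
```

When the new extractor matches the old one exactly, the distillation term vanishes and only the classification loss remains.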
no code implementations • 30 Aug 2019 • Javad Zolfaghari Bengar, Abel Gonzalez-Garcia, Gabriel Villalonga, Bogdan Raducanu, Hamed H. Aghdam, Mikhail Mozerov, Antonio M. Lopez, Joost Van de Weijer
Our active learning criterion is based on the estimated number of errors in terms of false positives and false negatives.
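One simple way to turn estimated false positives and false negatives into an acquisition score: detections kept above a confidence threshold contribute an expected false-positive mass of (1 - p), and detections discarded below it contribute an expected false-negative mass of p. This is a hypothetical reading of such a criterion, not the authors' exact formulation; the threshold and function name are assumptions.

```python
def estimated_errors(scores, thr=0.5):
    """Expected error count for one image, from detection confidences:
    expected false positives among kept detections (score >= thr) plus
    expected false negatives among discarded ones (score < thr)."""
    fp = sum(1.0 - s for s in scores if s >= thr)
    fn = sum(s for s in scores if s < thr)
    return fp + fn
```

Images with many borderline detections score high under this measure and would be prioritized for labeling; images with only very confident or very weak detections score near zero.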
no code implementations • 7 Dec 2018 • Idoia Ruiz, Bogdan Raducanu, Rakesh Mehta, Jaume Amores
Additionally, we propose and analyse network distillation as a learning strategy to reduce the computational cost of the deep learning approach at test time.
1 code implementation • NeurIPS 2018 • Chenshen Wu, Luis Herranz, Xialei Liu, Yaxing Wang, Joost Van de Weijer, Bogdan Raducanu
In particular, we investigate generative adversarial networks (GANs) in the task of learning new categories in a sequential fashion.
2 code implementations • 6 Sep 2018 • Chenshen Wu, Luis Herranz, Xialei Liu, Yaxing Wang, Joost Van de Weijer, Bogdan Raducanu
In particular, we investigate generative adversarial networks (GANs) in the task of learning new categories in a sequential fashion.
no code implementations • 1 Aug 2018 • Carola Figueroa Flores, Abel Gonzalez-García, Joost Van de Weijer, Bogdan Raducanu
Our proposed pipeline allows us to evaluate saliency methods on the high-level task of object recognition.
1 code implementation • ECCV 2018 • Yaxing Wang, Chenshen Wu, Luis Herranz, Joost Van de Weijer, Abel Gonzalez-Garcia, Bogdan Raducanu
Transferring the knowledge of pretrained networks to new domains by means of finetuning is a widely used practice for applications based on discriminative models.
Ranked #7 on 10-shot image generation on Babies
6 code implementations • 19 Nov 2016 • Guim Perarnau, Joost Van de Weijer, Bogdan Raducanu, Jose M. Álvarez
Generative Adversarial Networks (GANs) have recently been shown to successfully approximate complex data distributions.
Ranked #4 on Image-to-Image Translation on RaFD