Search Results for author: Bogdan Raducanu

Found 18 papers, 9 papers with code

Continual Evidential Deep Learning for Out-of-Distribution Detection

1 code implementation • 6 Sep 2023 • Eduardo Aguilar, Bogdan Raducanu, Petia Radeva, Joost Van de Weijer

Uncertainty-based deep learning models have attracted a great deal of interest for their ability to provide accurate and reliable predictions.

Continual Learning • Out-of-Distribution Detection
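
The excerpt above is motivational; the title indicates the mechanism is evidential deep learning applied to out-of-distribution detection. Below is a minimal, hypothetical sketch of a standard Dirichlet-based evidential head and the uncertainty score it yields; names, dimensions, and the threshold are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialHead(nn.Module):
    """Dirichlet-based evidential output layer (hypothetical sketch)."""
    def __init__(self, in_dim: int, num_classes: int):
        super().__init__()
        self.fc = nn.Linear(in_dim, num_classes)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # Softplus keeps the predicted per-class evidence non-negative.
        return F.softplus(self.fc(features))

def dirichlet_uncertainty(evidence: torch.Tensor) -> torch.Tensor:
    """Total uncertainty of the induced Dirichlet: K / sum(alpha)."""
    alpha = evidence + 1.0                       # Dirichlet concentration parameters
    return evidence.shape[-1] / alpha.sum(dim=-1)

# Usage sketch: treat highly uncertain samples as out-of-distribution.
features = torch.randn(8, 512)                   # dummy backbone features
uncertainty = dirichlet_uncertainty(EvidentialHead(512, 10)(features))
is_ood = uncertainty > 0.5                       # threshold is an arbitrary assumption
```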

Towards Exemplar-Free Continual Learning in Vision Transformers: an Account of Attention, Functional and Weight Regularization

1 code implementation • 24 Mar 2022 • Francesco Pelosin, Saurav Jha, Andrea Torsello, Bogdan Raducanu, Joost Van de Weijer

In this paper, we investigate the continual learning of Vision Transformers (ViT) for the challenging exemplar-free scenario, with special focus on how to efficiently distill the knowledge of its crucial self-attention mechanism (SAM).

Continual Learning
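
The excerpt highlights distilling the self-attention mechanism when the ViT moves to a new task without stored exemplars. The sketch below shows one plausible form of such a term, matching the attention maps of the frozen previous-task model and the current model on the same inputs; it illustrates attention distillation in general, not the paper's exact attention, functional, and weight regularizers.

```python
import torch
import torch.nn.functional as F

def attention_distillation_loss(attn_old: torch.Tensor,
                                attn_new: torch.Tensor) -> torch.Tensor:
    """Penalize drift between the attention maps of the frozen previous-task
    ViT and the current ViT.

    attn_old, attn_new: (batch, heads, tokens, tokens) row-stochastic
    attention matrices taken from the same layer of both models.
    """
    # KL(old || new), averaged over the batch dimension.
    return F.kl_div(attn_new.clamp_min(1e-8).log(), attn_old,
                    reduction="batchmean")

# Exemplar-free training on task t (sketch):
#   loss = cross_entropy_on_new_task + lambda_attn * attention_distillation_loss(a_old, a_new)
```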

Class-Balanced Active Learning for Image Classification

1 code implementation • 9 Oct 2021 • Javad Zolfaghari Bengar, Joost Van de Weijer, Laura Lopez Fuentes, Bogdan Raducanu

Results on three datasets showed that the method is general (it can be combined with most existing active learning algorithms) and effectively boosts the performance of both informativeness-based and representativeness-based active learning methods.

Active Learning • Classification +1
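
As a rough illustration of why such a balancing step composes with existing acquisition functions, here is a minimal greedy sketch that caps how many samples any single predicted class may contribute to the labeling batch. The capping rule and names are assumptions, not necessarily the paper's exact formulation.

```python
from collections import Counter
import numpy as np

def class_balanced_selection(scores: np.ndarray,
                             predicted_labels: np.ndarray,
                             budget: int,
                             num_classes: int) -> list:
    """Greedy sketch: take the most informative samples, but cap how many
    may come from any single predicted class so the batch stays balanced."""
    per_class_cap = int(np.ceil(budget / num_classes))
    picked, counts = [], Counter()
    for idx in np.argsort(-scores):              # highest acquisition score first
        c = int(predicted_labels[idx])
        if counts[c] < per_class_cap:
            picked.append(int(idx))
            counts[c] += 1
        if len(picked) == budget:
            break
    return picked

# `scores` can come from any base acquisition function (entropy, margin, ...),
# which is what makes the balancing step easy to combine with existing methods.
```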

Reducing Label Effort: Self-Supervised meets Active Learning

no code implementations • 25 Aug 2021 • Javad Zolfaghari Bengar, Joost Van de Weijer, Bartlomiej Twardowski, Bogdan Raducanu

Our experiments reveal that self-training is remarkably more efficient than active learning at reducing the labeling effort, that for a low labeling budget, active learning offers no benefit to self-training, and finally that the combination of active learning and self-training is fruitful when the labeling budget is high.

Active Learning • Object Recognition

When Deep Learners Change Their Mind: Learning Dynamics for Active Learning

no code implementations • 30 Jul 2021 • Javad Zolfaghari Bengar, Bogdan Raducanu, Joost Van de Weijer

Many methods approach this problem by measuring the informativeness of samples based on the certainty of the network's predictions for those samples.

Active Learning • Informativeness
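
For contrast, the sketch below shows a classic certainty-based informativeness score (predictive entropy) next to a simple learning-dynamics proxy that counts how often a sample's predicted label flips across training epochs. The flip count is an illustrative stand-in for the paper's dynamics-based measure, not its exact definition.

```python
import torch
import torch.nn.functional as F

def entropy_score(logits: torch.Tensor) -> torch.Tensor:
    """Classic certainty-based informativeness: predictive entropy."""
    p = F.softmax(logits, dim=-1)
    return -(p * p.clamp_min(1e-12).log()).sum(dim=-1)

def flip_count(pred_history: torch.Tensor) -> torch.Tensor:
    """Learning-dynamics flavour: how often each sample's predicted label
    changed across epochs (pred_history: (epochs, num_samples) label ids)."""
    return (pred_history[1:] != pred_history[:-1]).sum(dim=0)

# Samples with high entropy, or whose predicted label keeps changing during
# training, are treated as informative and prioritized for annotation.
```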

Learning to Rank for Active Learning: A Listwise Approach

no code implementations • 31 Jul 2020 • Minghan Li, Xialei Liu, Joost Van de Weijer, Bogdan Raducanu

Active learning emerged as an alternative to alleviate the effort of labeling the huge amounts of data required by data-hungry applications (such as image/video indexing and retrieval, autonomous driving, etc.).

Active Learning • Autonomous Driving +3
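
The excerpt only motivates active learning; the title points to a listwise learning-to-rank formulation. As a generic illustration of the listwise idea, here is a ListNet-style loss that trains a scorer to reproduce the ordering given by some target signal (for example, each sample's task loss). This is an assumption about the flavour of objective, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def listnet_loss(pred_scores: torch.Tensor,
                 target_scores: torch.Tensor) -> torch.Tensor:
    """ListNet-style listwise ranking loss: cross-entropy between the
    top-one probability distributions induced by predicted and target scores.

    pred_scores, target_scores: (list_size,) scores for one list of samples.
    """
    p_target = F.softmax(target_scores, dim=-1)
    log_p_pred = F.log_softmax(pred_scores, dim=-1)
    return -(p_target * log_p_pred).sum()

# In a loss-prediction style setup, the trained scorer ranks unlabeled samples,
# and the top of the list is sent to the annotator first.
```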

Hallucinating Saliency Maps for Fine-Grained Image Classification for Limited Data Domains

no code implementations • 24 Jul 2020 • Carola Figueroa-Flores, Bogdan Raducanu, David Berga, Joost Van de Weijer

Most saliency methods are evaluated on their ability to generate saliency maps, not on their usefulness within a complete vision pipeline such as image classification.

Classification • Fine-Grained Image Classification +3
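
One common way to make saliency functional inside a classification pipeline, in the spirit of the excerpt, is to let a (hallucinated) saliency map spatially reweight the backbone features before pooling. The sketch below shows that wiring under assumed shapes and names; it does not reproduce the paper's hallucination architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SaliencyModulatedClassifier(nn.Module):
    """Sketch: a saliency map spatially reweights CNN features before pooling."""
    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone                 # assumed to output (B, C, H, W)
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, image: torch.Tensor, saliency: torch.Tensor):
        feats = self.backbone(image)             # (B, C, H, W)
        sal = F.interpolate(saliency, size=feats.shape[-2:],
                            mode="bilinear", align_corners=False)  # (B, 1, H, W)
        weighted = feats * (1.0 + sal)           # emphasize salient regions
        pooled = weighted.mean(dim=(-2, -1))     # global average pooling
        return self.classifier(pooled)
```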

Optimizing speed/accuracy trade-off for person re-identification via knowledge distillation

no code implementations • 7 Dec 2018 • Idoia Ruiz, Bogdan Raducanu, Rakesh Mehta, Jaume Amores

Additionally, we propose and analyse network distillation as a learning strategy to reduce the computational cost of the deep learning approach at test time.

General Classification • Image Classification +3
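
A minimal sketch of the distillation idea for re-identification, assuming a large teacher and a compact student whose embeddings are matched so that only the cheap student runs at test time. The cosine-matching loss and the loss weighting are assumptions, not necessarily the paper's setup.

```python
import torch
import torch.nn.functional as F

def embedding_distillation_loss(student_emb: torch.Tensor,
                                teacher_emb: torch.Tensor) -> torch.Tensor:
    """Match L2-normalized embeddings of a small student to a large teacher,
    so the student can replace the teacher at inference time."""
    s = F.normalize(student_emb, dim=-1)
    t = F.normalize(teacher_emb, dim=-1).detach()    # teacher is frozen
    return (1.0 - (s * t).sum(dim=-1)).mean()        # cosine-distance matching

# Training loop (sketch):
#   total_loss = reid_loss(student) + lambda_kd * embedding_distillation_loss(s_emb, t_emb)
```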

Memory Replay GANs: Learning to Generate New Categories without Forgetting

1 code implementation • NeurIPS 2018 • Chenshen Wu, Luis Herranz, Xialei Liu, Yaxing Wang, Joost Van de Weijer, Bogdan Raducanu

In particular, we investigate generative adversarial networks (GANs) in the task of learning new categories in a sequential fashion.
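
The title points to the replay mechanism: a frozen copy of the previous generator synthesizes samples of already-learned categories, which are mixed with real data of the new category so the GAN does not forget. A heavily simplified sketch of that replay step follows; names are assumptions and the rest of the training procedure is omitted.

```python
import copy
import torch

def build_replay_batch(generator, old_categories, batch_size, z_dim, device):
    """Sketch: sample images of previously learned categories from a frozen
    copy of the conditional generator.

    old_categories: 1-D LongTensor of class ids seen in earlier tasks.
    """
    frozen_g = copy.deepcopy(generator).to(device).eval()
    with torch.no_grad():
        z = torch.randn(batch_size, z_dim, device=device)
        idx = torch.randint(len(old_categories), (batch_size,))
        labels = old_categories[idx].to(device)
        replay_images = frozen_g(z, labels)      # conditional generator G(z, y)
    return replay_images, labels

# When training on category t, the GAN sees both real images of category t and
# replayed images of categories 0..t-1, which counters forgetting.
```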

Transferring GANs: generating images from limited data

1 code implementation • ECCV 2018 • Yaxing Wang, Chenshen Wu, Luis Herranz, Joost Van de Weijer, Abel Gonzalez-Garcia, Bogdan Raducanu

Transferring the knowledge of pretrained networks to new domains by means of finetuning is a widely used practice for applications based on discriminative models.

10-shot image generation • Domain Adaptation +1
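
A minimal sketch of that transfer recipe applied to GANs, assuming source-domain checkpoints exist for both generator and discriminator: load them instead of random initialization and continue adversarial training on the small target dataset. The checkpoint paths, optimizer, and learning rate below are placeholders.

```python
import torch

def load_pretrained_gan(generator, discriminator,
                        g_ckpt="source_G.pt", d_ckpt="source_D.pt"):
    """Sketch: start target-domain training from source-domain GAN weights
    instead of random initialization (checkpoint paths are placeholders)."""
    generator.load_state_dict(torch.load(g_ckpt, map_location="cpu"))
    discriminator.load_state_dict(torch.load(d_ckpt, map_location="cpu"))
    # A smaller learning rate is a common choice when finetuning on few images.
    g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4, betas=(0.5, 0.999))
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-4, betas=(0.5, 0.999))
    return g_opt, d_opt

# Adversarial training then proceeds as usual, but on the limited target data.
```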

Invertible Conditional GANs for image editing

6 code implementations • 19 Nov 2016 • Guim Perarnau, Joost Van de Weijer, Bogdan Raducanu, Jose M. Álvarez

Generative Adversarial Networks (GANs) have recently been shown to successfully approximate complex data distributions.

Conditional Image Generation • Image-to-Image Translation
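
The title describes the editing mechanism: an encoder inverts a real image into a latent code z and a condition vector y, and the conditional generator re-synthesizes the image after y is modified. A minimal inference-time sketch with assumed module names follows.

```python
import torch

@torch.no_grad()
def edit_attribute(encoder_z, encoder_y, generator, image,
                   attribute_index: int, value: float):
    """IcGAN-style editing sketch: invert the image into (z, y) with two
    encoders, then regenerate it with one entry of y changed."""
    z = encoder_z(image)                 # latent code (content/identity)
    y = encoder_y(image)                 # predicted condition/attribute vector
    y_edited = y.clone()
    y_edited[:, attribute_index] = value # e.g. toggle a "smiling" attribute
    return generator(z, y_edited)        # re-synthesize under edited conditions
```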
