Search Results for author: Bogdan Raducanu

Found 22 papers, 10 papers with code

Privacy Protection in Personalized Diffusion Models via Targeted Cross-Attention Adversarial Attack

no code implementations 25 Nov 2024 Xide Xu, Muhammad Atif Butt, Sandesh Kamath, Bogdan Raducanu

The growing demand for customized visual content has led to the rise of personalized text-to-image (T2I) diffusion models.

Adversarial Attack

Multi-Class Textual-Inversion Secretly Yields a Semantic-Agnostic Classifier

1 code implementation 29 Oct 2024 Kai Wang, Fei Yang, Bogdan Raducanu, Joost Van de Weijer

However, in many realistic scenarios, we only have access to a few samples and knowledge of the class names (e.g., when considering instances of classes).

Assessing Open-world Forgetting in Generative Image Model Customization

no code implementations 18 Oct 2024 Héctor Laria, Alex Gomez-Villa, Imad Eddine Marouf, Kai Wang, Bogdan Raducanu, Joost Van de Weijer

Our research presents the first comprehensive investigation into open-world forgetting in diffusion models, focusing on semantic and appearance drift of representations.

Image Generation Zero-Shot Learning

Continual Evidential Deep Learning for Out-of-Distribution Detection

1 code implementation 6 Sep 2023 Eduardo Aguilar, Bogdan Raducanu, Petia Radeva, Joost Van de Weijer

Uncertainty-based deep learning models have attracted a great deal of interest for their ability to provide accurate and reliable predictions.

Continual Learning Deep Learning +1
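
The entry above builds on evidential deep learning, where the network outputs non-negative "evidence" that parameterizes a Dirichlet distribution over class probabilities, and the spread of that Dirichlet yields a per-sample uncertainty usable for out-of-distribution detection. A minimal sketch of that idea, assuming the common softplus evidence head and the K/S uncertainty score from the evidential-learning literature; this is not the paper's exact architecture or loss:

```python
import torch
import torch.nn.functional as F

def dirichlet_uncertainty(logits: torch.Tensor) -> torch.Tensor:
    """Map raw network outputs to Dirichlet evidence and return the
    total uncertainty u = K / S, which is high for OOD-like inputs."""
    evidence = F.softplus(logits)       # non-negative evidence per class
    alpha = evidence + 1.0              # Dirichlet concentration parameters
    strength = alpha.sum(dim=-1)        # S = sum_k alpha_k
    num_classes = logits.shape[-1]
    return num_classes / strength       # u in (0, 1]; close to 1 means "don't know"

# toy usage: a confident output vs. an uninformative one
confident = torch.tensor([[10.0, 0.0, 0.0]])
unsure = torch.tensor([[0.1, 0.1, 0.1]])
print(dirichlet_uncertainty(confident), dirichlet_uncertainty(unsure))
```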

Towards Exemplar-Free Continual Learning in Vision Transformers: an Account of Attention, Functional and Weight Regularization

1 code implementation 24 Mar 2022 Francesco Pelosin, Saurav Jha, Andrea Torsello, Bogdan Raducanu, Joost Van de Weijer

In this paper, we investigate the continual learning of Vision Transformers (ViT) for the challenging exemplar-free scenario, with special focus on how to efficiently distill the knowledge of its crucial self-attention mechanism (SAM).

Continual Learning
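
The snippet above centers on distilling the knowledge of the self-attention mechanism when a ViT moves to a new task without stored exemplars. A minimal sketch of what such a regularizer can look like: penalize drift between the attention maps of the frozen previous-task model and the current model on new-task data. The MSE penalty and per-block averaging here are illustrative assumptions, not necessarily the paper's exact functional or weight regularizers:

```python
import torch
import torch.nn.functional as F

def attention_distillation_loss(attn_student, attn_teacher):
    """MSE between attention maps of the current (student) and the frozen
    previous-task (teacher) ViT; both lists hold one tensor per block of
    shape (batch, heads, tokens, tokens)."""
    loss = 0.0
    for a_s, a_t in zip(attn_student, attn_teacher):
        loss = loss + F.mse_loss(a_s, a_t.detach())
    return loss / len(attn_student)

# toy usage with random "attention maps" from a 2-block ViT
maps_old = [torch.softmax(torch.randn(4, 3, 197, 197), dim=-1) for _ in range(2)]
maps_new = [torch.softmax(torch.randn(4, 3, 197, 197), dim=-1) for _ in range(2)]
print(attention_distillation_loss(maps_new, maps_old).item())
```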

Class-Balanced Active Learning for Image Classification

1 code implementation 9 Oct 2021 Javad Zolfaghari Bengar, Joost Van de Weijer, Laura Lopez Fuentes, Bogdan Raducanu

Results on three datasets showed that the method is general (it can be combined with most existing active learning algorithms) and can be effectively applied to boost the performance of both informativeness-based and representativeness-based active learning methods.

Active Learning Classification +1
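
The entry above reports that class balancing can be layered on top of existing acquisition functions. A hedged sketch of that plug-in idea using a simple greedy cap on predicted-class counts; the paper formulates balancing as an optimization problem, so treat this only as an illustration:

```python
import numpy as np

def class_balanced_select(scores, pred_labels, budget, num_classes):
    """Greedily pick the highest-scoring unlabeled samples while capping
    how many samples of each *predicted* class may enter the batch."""
    cap = int(np.ceil(budget / num_classes))
    counts = np.zeros(num_classes, dtype=int)
    chosen = []
    for idx in np.argsort(-scores):          # most informative first
        c = pred_labels[idx]
        if counts[c] < cap:
            chosen.append(int(idx))
            counts[c] += 1
        if len(chosen) == budget:
            break
    return chosen

# toy usage: 10 pool samples, 3 classes, budget of 4
rng = np.random.default_rng(0)
scores = rng.random(10)
preds = rng.integers(0, 3, size=10)
print(class_balanced_select(scores, preds, budget=4, num_classes=3))
```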

Reducing Label Effort: Self-Supervised meets Active Learning

no code implementations 25 Aug 2021 Javad Zolfaghari Bengar, Joost Van de Weijer, Bartlomiej Twardowski, Bogdan Raducanu

Our experiments reveal that self-training is remarkably more efficient than active learning at reducing the labeling effort, that for a low labeling budget, active learning offers no benefit to self-training, and finally that the combination of active learning and self-training is fruitful when the labeling budget is high.

Active Learning Object Recognition
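
The finding above concerns combining self-training (pseudo-labeling confident unlabeled samples) with active learning (querying labels for uncertain ones). A minimal sketch of one such cycle, assuming confidence-threshold pseudo-labeling; the paper's actual setup builds on self-supervised pretraining, so this is purely an illustration of how the two label-saving strategies can interleave:

```python
import numpy as np

def one_cycle(probs, budget, conf_threshold=0.95):
    """Split an unlabeled pool into (i) indices to pseudo-label via
    self-training and (ii) indices to send to the oracle via active learning.
    `probs` is an (N, C) array of softmax predictions on the pool."""
    confidence = probs.max(axis=1)
    pseudo = np.where(confidence >= conf_threshold)[0]     # trust the model
    remaining = np.where(confidence < conf_threshold)[0]   # still uncertain
    # query the least confident of the remaining samples
    query = remaining[np.argsort(confidence[remaining])[:budget]]
    return pseudo, query

# toy usage
rng = np.random.default_rng(1)
probs = rng.dirichlet(np.ones(5), size=20)
pseudo_idx, query_idx = one_cycle(probs, budget=3)
print(len(pseudo_idx), query_idx)
```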

When Deep Learners Change Their Mind: Learning Dynamics for Active Learning

no code implementations 30 Jul 2021 Javad Zolfaghari Bengar, Bogdan Raducanu, Joost Van de Weijer

Many methods approach this problem by measuring the informativeness of samples, typically based on the certainty of the network's predictions for those samples.

Active Learning Informativeness
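
The snippet above describes the standard uncertainty-based view of informativeness: score unlabeled samples by how unsure the network is about them. A small sketch of the common predictive-entropy acquisition function that this family of methods builds on; the paper itself proposes a different signal based on label flips across training epochs, which is not reproduced here:

```python
import torch

def entropy_scores(logits: torch.Tensor) -> torch.Tensor:
    """Predictive-entropy informativeness: higher entropy = less certain
    prediction = more informative to label. `logits` is (N, C)."""
    probs = torch.softmax(logits, dim=-1)
    return -(probs * torch.log(probs.clamp_min(1e-12))).sum(dim=-1)

def select_most_uncertain(logits: torch.Tensor, budget: int):
    scores = entropy_scores(logits)
    return torch.topk(scores, k=budget).indices

# toy usage: pick the 2 most uncertain of 5 pool samples
pool_logits = torch.randn(5, 10)
print(select_most_uncertain(pool_logits, budget=2))
```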

Learning to Rank for Active Learning: A Listwise Approach

no code implementations 31 Jul 2020 Minghan Li, Xialei Liu, Joost Van de Weijer, Bogdan Raducanu

Active learning emerged as an alternative to alleviate the effort of labeling huge amounts of data for data-hungry applications (such as image/video indexing and retrieval, autonomous driving, etc.).

Active Learning Autonomous Driving +3

Hallucinating Saliency Maps for Fine-Grained Image Classification for Limited Data Domains

no code implementations 24 Jul 2020 Carola Figueroa-Flores, Bogdan Raducanu, David Berga, Joost Van de Weijer

Most saliency methods are evaluated on their ability to generate saliency maps, and not on their functionality in a complete vision pipeline, such as image classification.

Classification Fine-Grained Image Classification +3

Optimizing speed/accuracy trade-off for person re-identification via knowledge distillation

no code implementations 7 Dec 2018 Idoia Ruiz, Bogdan Raducanu, Rakesh Mehta, Jaume Amores

Additionally, we propose and analyse network distillation as a learning strategy to reduce the computational cost of the deep learning approach at test time.

Deep Learning General Classification +4
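
The entry above proposes network distillation to cut test-time cost: a small, fast student network is trained to mimic a heavier teacher. A minimal sketch of the classic temperature-scaled distillation loss often used for this; the specific networks and the re-identification objective of the paper are not reproduced here:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.5):
    """Blend hard-label cross-entropy with a soft KL term that pushes the
    student's tempered predictions toward the (frozen) teacher's."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits.detach() / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1.0 - alpha) * hard

# toy usage
s, t = torch.randn(8, 100, requires_grad=True), torch.randn(8, 100)
y = torch.randint(0, 100, (8,))
print(distillation_loss(s, t, y).item())
```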

Memory Replay GANs: Learning to Generate New Categories without Forgetting

1 code implementation NeurIPS 2018 Chenshen Wu, Luis Herranz, Xialei Liu, Yaxing Wang, Joost Van de Weijer, Bogdan Raducanu

In particular, we investigate generative adversarial networks (GANs) in the task of learning new categories in a sequential fashion.

Memory Replay GANs: learning to generate images from new categories without forgetting

2 code implementations 6 Sep 2018 Chenshen Wu, Luis Herranz, Xialei Liu, Yaxing Wang, Joost Van de Weijer, Bogdan Raducanu

In particular, we investigate generative adversarial networks (GANs) in the task of learning new categories in a sequential fashion.
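
Both Memory Replay GANs entries above address teaching a GAN new categories sequentially without forgetting the old ones; the titles point to the replay mechanism, where samples generated for previous categories are mixed into the training data of the new task. A hedged sketch of that replay step only (the paper also studies joint-retraining and replay-alignment variants, which are not reproduced here, and the stand-in generator below is a placeholder):

```python
import torch

def build_replay_batch(generator_old, new_images, new_labels, old_classes, z_dim=128):
    """Augment a batch of real images from the *new* category with images
    that the frozen previous-task generator 'replays' for old categories."""
    n_replay = new_images.size(0)
    z = torch.randn(n_replay, z_dim)
    replay_labels = old_classes[torch.randint(0, len(old_classes), (n_replay,))]
    with torch.no_grad():                     # the old generator stays frozen
        replay_images = generator_old(z, replay_labels)
    images = torch.cat([new_images, replay_images], dim=0)
    labels = torch.cat([new_labels, replay_labels], dim=0)
    return images, labels

# toy usage with a stand-in "generator" that just reshapes noise to images
fake_gen = lambda z, y: z.view(z.size(0), 1, 8, 16).repeat(1, 3, 4, 2)
imgs, labs = build_replay_batch(
    fake_gen,
    new_images=torch.randn(4, 3, 32, 32),
    new_labels=torch.full((4,), 5),
    old_classes=torch.arange(5),
)
print(imgs.shape, labs.shape)
```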

Transferring GANs: generating images from limited data

1 code implementation ECCV 2018 Yaxing Wang, Chenshen Wu, Luis Herranz, Joost Van de Weijer, Abel Gonzalez-Garcia, Bogdan Raducanu

Transferring the knowledge of pretrained networks to new domains by means of finetuning is a widely used practice for applications based on discriminative models.

10-shot image generation Diversity +2
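
The snippet above carries the finetuning recipe from discriminative models over to generative ones: start from a GAN pretrained on a large source domain and continue adversarial training on the small target set. A minimal sketch under the assumption of a torch.save checkpoint holding generator/discriminator state dicts; the key names and the optional frozen layers are illustrative, not the paper's exact protocol:

```python
import torch

def load_pretrained_gan(generator, discriminator, ckpt_path, freeze_g_prefixes=()):
    """Initialize G and D from a source-domain checkpoint before finetuning
    on a small target dataset; optionally freeze early generator layers to
    preserve low-level source knowledge."""
    ckpt = torch.load(ckpt_path, map_location="cpu")
    generator.load_state_dict(ckpt["generator"])
    discriminator.load_state_dict(ckpt["discriminator"])
    for name, param in generator.named_parameters():
        if any(name.startswith(p) for p in freeze_g_prefixes):
            param.requires_grad = False       # keep pretrained filters fixed
    return generator, discriminator

# After loading, finetuning is the usual adversarial loop on target data,
# typically with a smaller learning rate than training from scratch.
```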

Invertible Conditional GANs for image editing

6 code implementations 19 Nov 2016 Guim Perarnau, Joost Van de Weijer, Bogdan Raducanu, Jose M. Álvarez

Generative Adversarial Networks (GANs) have recently been shown to successfully approximate complex data distributions.

Conditional Image Generation Image-to-Image Translation
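
The Invertible Conditional GANs entry above pairs a conditional generator with an encoder that maps a real image back to a latent code z and a condition vector y, so that editing amounts to changing y and re-generating. A tiny sketch of that edit step, with stand-in modules and shapes as placeholders rather than the paper's architecture:

```python
import torch

def edit_attribute(encoder_z, encoder_y, generator, image, attr_idx, value=1.0):
    """IcGAN-style editing: invert the image to (z, y), flip one entry of the
    condition vector y, and decode with the conditional generator."""
    with torch.no_grad():
        z = encoder_z(image)              # latent code of the real image
        y = encoder_y(image).clone()      # inferred attribute vector
        y[:, attr_idx] = value            # edit a single attribute
        return generator(z, y)

# toy usage with stand-in modules (placeholders, not the paper's networks)
enc_z = lambda x: torch.randn(x.size(0), 100)
enc_y = lambda x: torch.zeros(x.size(0), 18)
gen = lambda z, y: torch.tanh(torch.randn(z.size(0), 3, 64, 64))
img = torch.randn(1, 3, 64, 64)
print(edit_attribute(enc_z, enc_y, gen, img, attr_idx=3).shape)
```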
