1 code implementation • ECCV 2020 • My Kieu, Andrew D. Bagdanov, Marco Bertini, Alberto del Bimbo
Despite its broad application and interest, it remains a challenging problem in part due to the vast range of conditions under which it must be robust.
1 code implementation • 7 Feb 2025 • Daniel Marczak, Simone Magistri, Sebastian Cygert, Bartłomiej Twardowski, Andrew D. Bagdanov, Joost Van de Weijer
In this paper, we investigate the key characteristics of task matrices -- weight update matrices applied to a pre-trained model -- that enable effective merging.
1 code implementation • 6 Feb 2025 • Marco Mistretta, Alberto Baldrati, Lorenzo Agnolucci, Marco Bertini, Andrew D. Bagdanov
In this paper, we show that the common practice of individually exploiting the text or image encoders of these powerful multi-modal models is highly suboptimal for intra-modal tasks like image-to-image retrieval.
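The entry above concerns intra-modal (image-to-image) retrieval with multi-modal encoders. As a minimal illustration of what intra-modal retrieval looks like, here is a cosine-similarity ranking sketch over precomputed image embeddings; the function names and the toy embeddings are illustrative, not the paper's method or API.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query, gallery, k=2):
    """Indices of the k gallery embeddings most similar to the query."""
    ranked = sorted(range(len(gallery)),
                    key=lambda i: cosine(query, gallery[i]),
                    reverse=True)
    return ranked[:k]
```

The paper's point is that embeddings taken naively from one encoder of a multi-modal model are a weak basis for this kind of ranking; the sketch only fixes the retrieval mechanics.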
no code implementations • 15 Jan 2025 • Carlo Romeo, Girolamo Macaluso, Alessandro Sestini, Andrew D. Bagdanov
In this paper we propose a sample-efficient method to improve computational efficiency by separating training into distinct learning phases in order to exploit gradient updates more effectively.
no code implementations • 18 Dec 2024 • Dipam Goswami, Simone Magistri, Kai Wang, Bartłomiej Twardowski, Andrew D. Bagdanov, Joost Van de Weijer
Our method, which only uses first-order statistics in the form of class means communicated by clients to the server, incurs only a fraction of the communication costs required by methods based on communicating second-order statistics.
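The key mechanism above — clients communicating only per-class feature means, which the server aggregates — can be sketched as follows. This is a schematic of count-weighted mean aggregation, assuming illustrative function names; it is not the paper's actual implementation.

```python
def class_means(features, labels):
    """Per-class mean feature vectors and counts for one client."""
    sums, counts = {}, {}
    for f, y in zip(features, labels):
        acc = sums.setdefault(y, [0.0] * len(f))
        for i, v in enumerate(f):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    means = {y: [v / counts[y] for v in s] for y, s in sums.items()}
    return means, counts

def aggregate_class_means(client_stats):
    """Server-side count-weighted average of client class means."""
    sums, counts = {}, {}
    for means, n in client_stats:
        for y, m in means.items():
            acc = sums.setdefault(y, [0.0] * len(m))
            for i, v in enumerate(m):
                acc[i] += v * n[y]
            counts[y] = counts.get(y, 0) + n[y]
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}
```

Communicating only these first-order statistics (one vector and one count per class) is what keeps the communication cost a fraction of that of second-order (covariance-based) methods.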
no code implementations • 23 Oct 2024 • Marco Mistretta, Andrew D. Bagdanov
In this paper we introduce RE-tune, a novel approach for fine-tuning pre-trained Multimodal Biomedical Vision-Language models (VLMs) in Incremental Learning scenarios for multi-label chest disease diagnosis.
no code implementations • 27 Sep 2024 • Tomaso Trinci, Simone Magistri, Roberto Verdecchia, Andrew D. Bagdanov
Despite the potential of continual learning for Green AI, its environmental sustainability remains relatively uncharted.
no code implementations • 15 Jul 2024 • Carlo Romeo, Andrew D. Bagdanov
Offline Reinforcement Learning (ORL) offers a robust solution to training agents in applications where interactions with the environment must be strictly limited due to cost, safety, or lack of accurate simulation environments.
1 code implementation • 12 Jul 2024 • Girolamo Macaluso, Alessandro Sestini, Andrew D. Bagdanov
The environment simulates a single-agent racing game in which the objective is to complete the track through optimal navigation.
1 code implementation • 11 Jul 2024 • Alex Gomez-Villa, Dipam Goswami, Kai Wang, Andrew D. Bagdanov, Bartlomiej Twardowski, Joost Van de Weijer
Prototype-based approaches, when continually updated, face the critical issue of semantic drift, in which old class prototypes drift to different positions in the new feature space.
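One standard way to counter this kind of drift (in the spirit of semantic drift compensation, not necessarily this paper's exact method) is to estimate the mean feature-space shift of current data under the old versus the updated backbone, and apply that shift to the stored prototypes. A minimal sketch with hypothetical names:

```python
def compensate_prototype(proto_old, feats_before, feats_after):
    """Shift a stored class prototype by the mean drift observed on
    current-task data embedded with the old vs. updated backbone."""
    d = len(proto_old)
    n = len(feats_before)
    drift = [sum(fa[i] - fb[i] for fb, fa in zip(feats_before, feats_after)) / n
             for i in range(d)]
    return [p + drift[i] for i, p in enumerate(proto_old)]
```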
1 code implementation • 3 Jul 2024 • Marco Mistretta, Alberto Baldrati, Marco Bertini, Andrew D. Bagdanov
Our approach, which we call Knowledge Distillation Prompt Learning (KDPL), can be integrated into existing prompt learning techniques and eliminates the need for labeled examples during adaptation.
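The label-free adaptation above rests on knowledge distillation: the student is trained to match a teacher's output distribution rather than ground-truth labels. Below is a generic temperature-softened KL distillation loss in pure Python — a textbook sketch, not the exact KDPL objective.

```python
import math

def softmax(logits, T=1.0):
    """Temperature-softened softmax."""
    exps = [math.exp(l / T) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on softened distributions;
    no ground-truth labels are needed."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

Because the supervision signal is entirely the teacher's distribution, this loss can be minimized on unlabeled data, which is what removes the need for labeled examples during adaptation.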
1 code implementation • 4 Jun 2024 • Francesc Net, Marc Folia, Pep Casals, Andrew D. Bagdanov, Lluis Gomez
In this paper, we address the challenges of automatic metadata annotation in the domain of Galleries, Libraries, Archives, and Museums (GLAMs) by introducing a novel dataset, EUFCC340K, collected from the Europeana portal.
1 code implementation • 6 Feb 2024 • Simone Magistri, Tomaso Trinci, Albin Soutif-Cormerais, Joost Van de Weijer, Andrew D. Bagdanov
Experimental results on CIFAR-100, Tiny-ImageNet, ImageNet-Subset and ImageNet-1K demonstrate that Elastic Feature Consolidation is better able to learn new tasks by maintaining model plasticity, and significantly outperforms the state-of-the-art.
no code implementations • 15 Dec 2023 • Girolamo Macaluso, Alessandro Sestini, Andrew D. Bagdanov
Offline reinforcement learning leverages pre-collected datasets of transitions to train policies.
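The defining constraint of offline RL — learning from a fixed dataset of transitions with no further environment interaction — can be shown with a tabular sketch. This is a generic illustration of the setting, not the paper's algorithm; all names are illustrative.

```python
def offline_q_learning(transitions, n_states, n_actions,
                       alpha=0.5, gamma=0.9, epochs=200):
    """Tabular Q-learning over a fixed dataset of
    (state, action, reward, next_state, done) tuples.
    The dataset is swept repeatedly; the environment is never queried."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(epochs):
        for s, a, r, s2, done in transitions:
            target = r if done else r + gamma * max(Q[s2])
            Q[s][a] += alpha * (target - Q[s][a])
    return Q
```

In practice, deep offline RL must additionally handle distribution shift for state-action pairs absent from the dataset, which a plain sweep like this ignores.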
no code implementations • 31 Oct 2023 • Xialei Liu, Xusheng Cao, Haori Lu, Jia-Wen Xiao, Andrew D. Bagdanov, Ming-Ming Cheng
We also propose a method for parameter retention in the adapter layers that uses a measure of parameter importance to better maintain stability and plasticity during incremental learning.
1 code implementation • ICCV 2023 • Jiang-Tian Zhai, Xialei Liu, Andrew D. Bagdanov, Ke Li, Ming-Ming Cheng
Moreover, MAEs can reliably reconstruct original input images from randomly selected patches, which we use to store exemplars from past tasks more efficiently for CIL.
1 code implementation • CVPR 2024 • Xialei Liu, Jiang-Tian Zhai, Andrew D. Bagdanov, Ke Li, Ming-Ming Cheng
EFCIL is of interest because it mitigates concerns about privacy and long-term storage of data, while at the same time alleviating the problem of catastrophic forgetting in incremental learning.
1 code implementation • 22 Nov 2022 • Marco Cotogni, Fei Yang, Claudio Cusano, Andrew D. Bagdanov, Joost Van de Weijer
Secondly, we propose a new method of feature drift compensation that accommodates feature drift in the backbone when learning new tasks.
1 code implementation • 1 Oct 2022 • Xialei Liu, Yu-Song Hu, Xu-Sheng Cao, Andrew D. Bagdanov, Ke Li, Ming-Ming Cheng
However, conventional CIL methods consider a balanced distribution for each new task, which ignores the prevalence of long-tailed distributions in the real world.
no code implementations • 15 Aug 2022 • Alessandro Sestini, Joakim Bergdahl, Konrad Tollmar, Andrew D. Bagdanov, Linus Gisslén
In games, as in many other domains, design validation and testing are a huge challenge as systems grow in size and manual testing becomes infeasible.
no code implementations • 21 Feb 2022 • Alessandro Sestini, Linus Gisslén, Joakim Bergdahl, Konrad Tollmar, Andrew D. Bagdanov
This paper proposes a novel deep reinforcement learning algorithm to perform automatic analysis and detection of gameplay issues in complex 3D navigation environments.
1 code implementation • 16 Feb 2022 • Simone Zini, Alex Gomez-Villa, Marco Buzzelli, Bartłomiej Twardowski, Andrew D. Bagdanov, Joost Van de Weijer
The data augmentations used are of crucial importance to the quality of learned feature representations.
1 code implementation • 30 Dec 2021 • Alex Gomez-Villa, Bartlomiej Twardowski, Lu Yu, Andrew D. Bagdanov, Joost Van de Weijer
Recent self-supervised learning methods are able to learn high-quality image representations and are closing the gap with supervised approaches.
no code implementations • 21 Apr 2021 • Alessandro Sestini, Alexander Kuhnle, Andrew D. Bagdanov
To this end, we propose four different policy fusion methods for combining pre-trained policies.
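As a concrete reference point for what "policy fusion" means, here is one simple baseline: averaging the action distributions of several pre-trained policies and renormalizing. This is an illustrative baseline only; it is not claimed to be one of the paper's four methods.

```python
def fuse_policies_mean(distributions):
    """Fuse per-policy action distributions by elementwise
    averaging, then renormalize to a valid distribution."""
    k = len(distributions[0])
    fused = [sum(d[i] for d in distributions) for i in range(k)]
    total = sum(fused)
    return [v / total for v in fused]
```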
no code implementations • 3 Feb 2021 • My Kieu, Lorenzo Berlincioni, Leonardo Galteri, Marco Bertini, Andrew D. Bagdanov, Alberto del Bimbo
Experimental results demonstrate the effectiveness of our approach: using less than 50% of the available real thermal training data, and relying on synthesized data generated by our model in the domain adaptation phase, our detector achieves state-of-the-art results on the KAIST Multispectral Pedestrian Detection Benchmark. Even when more real thermal data is available, adding GAN-generated images to the training data improves performance, showing that these images act as an effective form of data augmentation.
no code implementations • 7 Dec 2020 • Alessandro Sestini, Alexander Kuhnle, Andrew D. Bagdanov
Recent advances in Deep Reinforcement Learning (DRL) have largely focused on improving the performance of agents with the aim of replacing humans in known and well-defined environments.
no code implementations • 4 Dec 2020 • Alessandro Sestini, Alexander Kuhnle, Andrew D. Bagdanov
We propose a technique based on Adversarial Inverse Reinforcement Learning which can significantly decrease the need for expert demonstrations in PCG games.
no code implementations • 3 Dec 2020 • Alessandro Sestini, Alexander Kuhnle, Andrew D. Bagdanov
In this paper we introduce DeepCrawl, a fully-playable Roguelike prototype for iOS and Android in which all agents are controlled by policy networks trained using Deep Reinforcement Learning (DRL).
1 code implementation • 28 Oct 2020 • Marc Masana, Xialei Liu, Bartlomiej Twardowski, Mikel Menta, Andrew D. Bagdanov, Joost Van de Weijer
For future learning systems, incremental learning is desirable because it allows for: efficient resource usage by eliminating the need to retrain from scratch at the arrival of new data; reduced memory usage by preventing or limiting the amount of data required to be stored -- also important when privacy limitations are imposed; and learning that more closely resembles human learning.
2 code implementations • NeurIPS 2020 • Riccardo Del Chiaro, Bartłomiej Twardowski, Andrew D. Bagdanov, Joost Van de Weijer
We call our method Recurrent Attention to Transient Tasks (RATT), and also show how to adapt continual learning approaches based on weight regularization and knowledge distillation to recurrent continual learning problems.
1 code implementation • 20 Apr 2020 • Xialei Liu, Chenshen Wu, Mikel Menta, Luis Herranz, Bogdan Raducanu, Andrew D. Bagdanov, Shangling Jui, Joost Van de Weijer
To prevent forgetting, we combine generative feature replay in the classifier with feature distillation in the feature extractor.
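The combination described above — a classification term on replayed features plus a distillation term tying the new feature extractor to the frozen old one — can be sketched as a single scalar objective. The scalar inputs and the weighting `lam` are illustrative stand-ins for the actual loss terms.

```python
def combined_loss(ce_current, ce_replay, feats_student, feats_teacher, lam=1.0):
    """Total loss: cross-entropy on current data, cross-entropy on
    generatively replayed features, and an L2 feature-distillation
    penalty between new and frozen feature extractors."""
    fd = sum((a - b) ** 2
             for a, b in zip(feats_student, feats_teacher)) / len(feats_student)
    return ce_current + ce_replay + lam * fd
```

Replaying features rather than images keeps the generator small, while the distillation term prevents the feature extractor itself from drifting away from what the replayed features assume.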
no code implementations • 22 Mar 2020 • Pietro Bongini, Federico Becattini, Andrew D. Bagdanov, Alberto del Bimbo
This will turn the classic audio guide into a smart personal instructor with which the visitor can interact by asking for explanations focused on specific interests.
2 code implementations • 17 Feb 2019 • Xialei Liu, Joost Van de Weijer, Andrew D. Bagdanov
Our results show that networks trained to regress to the ground truth targets for labeled data and to simultaneously learn to rank unlabeled data obtain significantly better, state-of-the-art results for both IQA and crowd counting.
1 code implementation • 4 Sep 2018 • Dena Bazazian, Dimosthenis Karatzas, Andrew D. Bagdanov
In this paper we propose a technique to create and exploit an intermediate representation of images based on text attributes which are character probability maps.
1 code implementation • CVPR 2018 • Xialei Liu, Joost Van de Weijer, Andrew D. Bagdanov
We propose a novel crowd counting approach that leverages abundantly available unlabeled crowd imagery in a learning-to-rank framework.
Ranked #19 on Crowd Counting on UCF CC 50
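The learning-to-rank idea behind this crowd-counting work exploits a free constraint: a cropped sub-image can never contain more people than the image that contains it. A hinge on that ordering gives a self-supervised ranking loss for unlabeled imagery — sketched below with illustrative names, not the paper's exact formulation.

```python
def ranking_hinge(count_image, count_crop, margin=0.0):
    """Hinge loss penalizing a predicted count for a cropped
    sub-image that exceeds the predicted count of its
    containing image (plus an optional margin)."""
    return max(0.0, count_crop - count_image + margin)
```

Summing this hinge over many (image, nested-crop) pairs yields a training signal that requires no count annotations at all.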
2 code implementations • 8 Feb 2018 • Xialei Liu, Marc Masana, Luis Herranz, Joost Van de Weijer, Antonio M. Lopez, Andrew D. Bagdanov
In this paper we propose an approach to avoiding catastrophic forgetting in sequential task learning scenarios.
2 code implementations • ICCV 2017 • Marc Masana, Joost Van de Weijer, Luis Herranz, Andrew D. Bagdanov, Jose M. Alvarez
We show that domain transfer leads to large shifts in network activations and that it is desirable to take this into account when compressing.
no code implementations • 24 Aug 2017 • Laura Lopez-Fuentes, Joost Van de Weijer, Manuel Gonzalez-Hidalgo, Harald Skinnemoen, Andrew D. Bagdanov
The range of emergencies in which computer vision tools have been considered or used is very wide, and there is a great deal of overlap across related emergency research.
2 code implementations • ICCV 2017 • Xialei Liu, Joost Van de Weijer, Andrew D. Bagdanov
Furthermore, on the LIVE benchmark we show that our approach is superior to existing NR-IQA techniques and that we even outperform the state-of-the-art in full-reference IQA (FR-IQA) methods without having to resort to high-quality reference images to infer IQA.
no code implementations • 5 Jun 2017 • Suman K. Ghosh, Ernest Valveny, Andrew D. Bagdanov
A set of feature vectors is derived from an intermediate convolutional layer, corresponding to different areas of the image.
1 code implementation • 16 Feb 2017 • Dena Bazazian, Raul Gomez, Anguelos Nicolaou, Lluis Gomez, Dimosthenis Karatzas, Andrew D. Bagdanov
Text Proposals have emerged as a class-dependent version of object proposals: efficient approaches to reducing the search space of possible text object locations in an image.
no code implementations • 16 Jan 2017 • Laura Lopez-Fuentes, Andrew D. Bagdanov, Joost Van de Weijer, Harald Skinnemoen
This paper proposes a novel method to optimize bandwidth usage for object detection in critical communication scenarios.
no code implementations • 14 Dec 2016 • Fahad Shahbaz Khan, Joost Van de Weijer, Rao Muhammad Anwer, Andrew D. Bagdanov, Michael Felsberg, Jorma Laaksonen
Most approaches to human attribute and action recognition in still images are based on image representation in which multi-scale local features are pooled across scale into a single, scale-invariant encoding.
no code implementations • 11 May 2016 • Marc Masana, Joost Van de Weijer, Andrew D. Bagdanov
Object detection with deep neural networks is often performed by passing a few thousand candidate bounding boxes through a deep neural network for each image.
no code implementations • 23 Apr 2015 • Anguelos Nicolaou, Andrew D. Bagdanov, Marcus Liwicki, Dimosthenis Karatzas
In this paper we present the use of Sparse Radial Sampling Local Binary Patterns, a variant of Local Binary Patterns (LBP) for text-as-texture classification.
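For reference, the plain 8-neighbor LBP code that the Sparse Radial Sampling variant builds on thresholds each neighbor of a 3x3 patch against its center and packs the results into a byte. This sketch shows the standard baseline operator, not the paper's sparse radial variant.

```python
def lbp_code(patch):
    """Standard 8-neighbor LBP code for a 3x3 patch (center at patch[1][1]):
    each neighbor >= center contributes one bit, clockwise from top-left."""
    c = patch[1][1]
    neighbors = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                 patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    return sum(1 << i for i, n in enumerate(neighbors) if n >= c)
```

Histograms of these codes over an image region form the texture descriptor used in text-as-texture classification.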
no code implementations • NeurIPS 2011 • Fahad S. Khan, Joost Weijer, Andrew D. Bagdanov, Maria Vanrell
We describe a novel technique for feature combination in the bag-of-words model of image classification.