Search Results for author: Andrew D. Bagdanov

Found 46 papers, 24 papers with code

Task-conditioned Domain Adaptation for Pedestrian Detection in Thermal Imagery

1 code implementation • ECCV 2020 • My Kieu, Andrew D. Bagdanov, Marco Bertini, Alberto del Bimbo

Despite broad application and interest, pedestrian detection remains a challenging problem, in part due to the vast range of conditions under which it must be robust.

Domain Adaptation • Pedestrian Detection

No Task Left Behind: Isotropic Model Merging with Common and Task-Specific Subspaces

1 code implementation • 7 Feb 2025 • Daniel Marczak, Simone Magistri, Sebastian Cygert, Bartłomiej Twardowski, Andrew D. Bagdanov, Joost Van de Weijer

In this paper, we investigate the key characteristics of task matrices -- weight update matrices applied to a pre-trained model -- that enable effective merging.

Cross the Gap: Exposing the Intra-modal Misalignment in CLIP via Modality Inversion

1 code implementation • 6 Feb 2025 • Marco Mistretta, Alberto Baldrati, Lorenzo Agnolucci, Marco Bertini, Andrew D. Bagdanov

In this paper, we show that the common practice of individually exploiting the text or image encoders of these powerful multi-modal models is highly suboptimal for intra-modal tasks like image-to-image retrieval.

Image Classification • Image Retrieval +2

SPEQ: Stabilization Phases for Efficient Q-Learning in High Update-To-Data Ratio Reinforcement Learning

no code implementations • 15 Jan 2025 • Carlo Romeo, Girolamo Macaluso, Alessandro Sestini, Andrew D. Bagdanov

In this paper we propose a sample-efficient method to improve computational efficiency by separating training into distinct learning phases in order to exploit gradient updates more effectively.

Computational Efficiency • Continuous Control +3

Covariances for Free: Exploiting Mean Distributions for Federated Learning with Pre-Trained Models

no code implementations • 18 Dec 2024 • Dipam Goswami, Simone Magistri, Kai Wang, Bartłomiej Twardowski, Andrew D. Bagdanov, Joost Van de Weijer

Our method, which only uses first-order statistics in the form of class means communicated by clients to the server, incurs only a fraction of the communication costs required by methods based on communicating second-order statistics.

Federated Learning
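The first-order-only protocol described above can be illustrated with a minimal sketch. This is an illustrative assumption of the general setup, not the paper's exact method: the function names and the nearest-class-mean classifier are hypothetical, and only per-class means and counts travel from clients to the server.

```python
import numpy as np

def aggregate_class_means(client_stats):
    """Server-side aggregation of first-order statistics.

    client_stats: list of dicts mapping class -> (mean_vector, count).
    Returns a dict mapping class -> count-weighted global mean.
    """
    sums, counts = {}, {}
    for stats in client_stats:
        for cls, (mean, n) in stats.items():
            sums[cls] = sums.get(cls, 0.0) + n * np.asarray(mean, dtype=float)
            counts[cls] = counts.get(cls, 0) + n
    return {cls: sums[cls] / counts[cls] for cls in sums}

def nearest_class_mean(x, class_means):
    """Classify a feature vector by its nearest global class mean."""
    return min(class_means, key=lambda c: np.linalg.norm(x - class_means[c]))
```

Because only means and counts are exchanged, the per-round communication cost is one vector plus one integer per class per client, a small fraction of what transmitting full covariance matrices would require.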

RE-tune: Incremental Fine Tuning of Biomedical Vision-Language Models for Multi-label Chest X-ray Classification

no code implementations • 23 Oct 2024 • Marco Mistretta, Andrew D. Bagdanov

In this paper we introduce RE-tune, a novel approach for fine-tuning pre-trained Multimodal Biomedical Vision-Language models (VLMs) in Incremental Learning scenarios for multi-label chest disease diagnosis.

Computational Efficiency • Incremental Learning +2

Offline Reinforcement Learning with Imputed Rewards

no code implementations • 15 Jul 2024 • Carlo Romeo, Andrew D. Bagdanov

Offline Reinforcement Learning (ORL) offers a robust solution to training agents in applications where interactions with the environment must be strictly limited due to cost, safety, or lack of accurate simulation environments.

D4RL • Reinforcement Learning +1

A Benchmark Environment for Offline Reinforcement Learning in Racing Games

1 code implementation • 12 Jul 2024 • Girolamo Macaluso, Alessandro Sestini, Andrew D. Bagdanov

The environment simulates a single-agent racing game in which the objective is to complete the track through optimal navigation.

Reinforcement Learning +2

Exemplar-free Continual Representation Learning via Learnable Drift Compensation

1 code implementation • 11 Jul 2024 • Alex Gomez-Villa, Dipam Goswami, Kai Wang, Andrew D. Bagdanov, Bartlomiej Twardowski, Joost Van de Weijer

Prototype-based approaches, when continually updated, face the critical issue of semantic drift, in which old class prototypes move to different positions in the new feature space.

Class Incremental Learning +2

Improving Zero-shot Generalization of Learned Prompts via Unsupervised Knowledge Distillation

1 code implementation • 3 Jul 2024 • Marco Mistretta, Alberto Baldrati, Marco Bertini, Andrew D. Bagdanov

Our approach, which we call Knowledge Distillation Prompt Learning (KDPL), can be integrated into existing prompt learning techniques and eliminates the need for labeled examples during adaptation.

Domain Generalization • Knowledge Distillation +1

EUFCC-340K: A Faceted Hierarchical Dataset for Metadata Annotation in GLAM Collections

1 code implementation • 4 Jun 2024 • Francesc Net, Marc Folia, Pep Casals, Andrew D. Bagdanov, Lluis Gomez

In this paper, we address the challenges of automatic metadata annotation in the domain of Galleries, Libraries, Archives, and Museums (GLAMs) by introducing a novel dataset, EUFCC-340K, collected from the Europeana portal.

Multi-Label Classification

Elastic Feature Consolidation for Cold Start Exemplar-Free Incremental Learning

1 code implementation • 6 Feb 2024 • Simone Magistri, Tomaso Trinci, Albin Soutif-Cormerais, Joost Van de Weijer, Andrew D. Bagdanov

Experimental results on CIFAR-100, Tiny-ImageNet, ImageNet-Subset and ImageNet-1K demonstrate that Elastic Feature Consolidation is better able to learn new tasks by maintaining model plasticity, and significantly outperforms the state of the art.

Class Incremental Learning +1

Class Incremental Learning with Pre-trained Vision-Language Models

no code implementations • 31 Oct 2023 • Xialei Liu, Xusheng Cao, Haori Lu, Jia-Wen Xiao, Andrew D. Bagdanov, Ming-Ming Cheng

We also propose a method for parameter retention in the adapter layers that uses a measure of parameter importance to better maintain stability and plasticity during incremental learning.

Class Incremental Learning +2

Masked Autoencoders are Efficient Class Incremental Learners

1 code implementation • ICCV 2023 • Jiang-Tian Zhai, Xialei Liu, Andrew D. Bagdanov, Ke Li, Ming-Ming Cheng

Moreover, MAEs can reliably reconstruct original input images from randomly selected patches, which we use to store exemplars from past tasks more efficiently for CIL.

Class Incremental Learning +1
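The patch-based exemplar storage idea above can be sketched as follows. This is an illustrative assumption of how partial storage might look (the patch size, keep ratio, and helper name are hypothetical): only a random subset of patches is stored per exemplar, and a trained MAE decoder would later reconstruct the full image from that subset.

```python
import numpy as np

def store_exemplar(image, patch=4, keep_ratio=0.25, rng=None):
    """Split an image into non-overlapping patches and keep a random subset.

    Returns (kept_patches, kept_indices, grid_shape) -- the compressed
    exemplar. Reconstruction of the full image from this subset is left
    to a trained masked autoencoder decoder.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    h, w = image.shape[:2]
    gh, gw = h // patch, w // patch
    # Rearrange (H, W, ...) into a flat list of gh*gw flattened patches.
    patches = image.reshape(gh, patch, gw, patch, -1).swapaxes(1, 2)
    patches = patches.reshape(gh * gw, -1)
    n_keep = int(len(patches) * keep_ratio)
    idx = rng.choice(len(patches), size=n_keep, replace=False)
    return patches[idx], idx, (gh, gw)
```

With a keep ratio of 0.25, each stored exemplar costs roughly a quarter of the raw image memory plus the kept patch indices.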

Task-Adaptive Saliency Guidance for Exemplar-free Class Incremental Learning

1 code implementation • CVPR 2024 • Xialei Liu, Jiang-Tian Zhai, Andrew D. Bagdanov, Ke Li, Ming-Ming Cheng

EFCIL is of interest because it mitigates concerns about privacy and long-term storage of data, while at the same time alleviating the problem of catastrophic forgetting in incremental learning.

Class Incremental Learning +1

Exemplar-free Continual Learning of Vision Transformers via Gated Class-Attention and Cascaded Feature Drift Compensation

1 code implementation • 22 Nov 2022 • Marco Cotogni, Fei Yang, Claudio Cusano, Andrew D. Bagdanov, Joost Van de Weijer

Secondly, we propose a new method of feature drift compensation that accommodates feature drift in the backbone when learning new tasks.

Continual Learning

Long-Tailed Class Incremental Learning

1 code implementation • 1 Oct 2022 • Xialei Liu, Yu-Song Hu, Xu-Sheng Cao, Andrew D. Bagdanov, Ke Li, Ming-Ming Cheng

However, conventional CIL methods consider a balanced distribution for each new task, which ignores the prevalence of long-tailed distributions in the real world.

Class Incremental Learning +1

Towards Informed Design and Validation Assistance in Computer Games Using Imitation Learning

no code implementations • 15 Aug 2022 • Alessandro Sestini, Joakim Bergdahl, Konrad Tollmar, Andrew D. Bagdanov, Linus Gisslén

In games, as in many other domains, design validation and testing are a huge challenge as systems grow in size and manual testing becomes infeasible.

Imitation Learning • Survey +1

CCPT: Automatic Gameplay Testing and Validation with Curiosity-Conditioned Proximal Trajectories

no code implementations • 21 Feb 2022 • Alessandro Sestini, Linus Gisslén, Joakim Bergdahl, Konrad Tollmar, Andrew D. Bagdanov

This paper proposes a novel deep reinforcement learning algorithm to perform automatic analysis and detection of gameplay issues in complex 3D navigation environments.

Deep Reinforcement Learning • Game Design +3

Continually Learning Self-Supervised Representations with Projected Functional Regularization

1 code implementation • 30 Dec 2021 • Alex Gomez-Villa, Bartlomiej Twardowski, Lu Yu, Andrew D. Bagdanov, Joost Van de Weijer

Recent self-supervised learning methods are able to learn high-quality image representations and are closing the gap with supervised approaches.

Continual Learning • Incremental Learning +1

Robust pedestrian detection in thermal imagery using synthesized images

no code implementations • 3 Feb 2021 • My Kieu, Lorenzo Berlincioni, Leonardo Galteri, Marco Bertini, Andrew D. Bagdanov, Alberto del Bimbo

Experimental results demonstrate the effectiveness of our approach: using less than 50% of the available real thermal training data, and relying on synthesized data generated by our model in the domain adaptation phase, our detector achieves state-of-the-art results on the KAIST Multispectral Pedestrian Detection Benchmark. Even when more real thermal data is available, adding GAN-generated images to the training data improves performance, showing that these images act as an effective form of data augmentation.

Data Augmentation • Domain Adaptation +2

Deep Policy Networks for NPC Behaviors that Adapt to Changing Design Parameters in Roguelike Games

no code implementations • 7 Dec 2020 • Alessandro Sestini, Alexander Kuhnle, Andrew D. Bagdanov

Recent advances in Deep Reinforcement Learning (DRL) have largely focused on improving the performance of agents with the aim of replacing humans in known and well-defined environments.

Deep Reinforcement Learning • Game Design

Demonstration-efficient Inverse Reinforcement Learning in Procedurally Generated Environments

no code implementations • 4 Dec 2020 • Alessandro Sestini, Alexander Kuhnle, Andrew D. Bagdanov

We propose a technique based on Adversarial Inverse Reinforcement Learning which can significantly decrease the need for expert demonstrations in PCG games.

Deep Reinforcement Learning • Reinforcement Learning +1

DeepCrawl: Deep Reinforcement Learning for Turn-based Strategy Games

no code implementations • 3 Dec 2020 • Alessandro Sestini, Alexander Kuhnle, Andrew D. Bagdanov

In this paper we introduce DeepCrawl, a fully-playable Roguelike prototype for iOS and Android in which all agents are controlled by policy networks trained using Deep Reinforcement Learning (DRL).

Deep Reinforcement Learning • Reinforcement Learning +1

Class-incremental learning: survey and performance evaluation on image classification

1 code implementation • 28 Oct 2020 • Marc Masana, Xialei Liu, Bartlomiej Twardowski, Mikel Menta, Andrew D. Bagdanov, Joost Van de Weijer

For future learning systems, incremental learning is desirable because it allows for: efficient resource usage by eliminating the need to retrain from scratch at the arrival of new data; reduced memory usage by preventing or limiting the amount of data required to be stored -- also important when privacy limitations are imposed; and learning that more closely resembles human learning.

Class Incremental Learning +4

RATT: Recurrent Attention to Transient Tasks for Continual Image Captioning

2 code implementations • NeurIPS 2020 • Riccardo Del Chiaro, Bartłomiej Twardowski, Andrew D. Bagdanov, Joost Van de Weijer

We call our method Recurrent Attention to Transient Tasks (RATT), and also show how to adapt continual learning approaches based on weight regularization and knowledge distillation to recurrent continual learning problems.

Continual Learning • Image Captioning +1

Visual Question Answering for Cultural Heritage

no code implementations • 22 Mar 2020 • Pietro Bongini, Federico Becattini, Andrew D. Bagdanov, Alberto del Bimbo

This will turn the classic audio guide into a smart personal instructor with which the visitor can interact by asking for explanations focused on specific interests.

Question Answering • Visual Question Answering

Exploiting Unlabeled Data in CNNs by Self-supervised Learning to Rank

2 code implementations • 17 Feb 2019 • Xialei Liu, Joost Van de Weijer, Andrew D. Bagdanov

Our results show that networks trained to regress to the ground truth targets for labeled data and to simultaneously learn to rank unlabeled data obtain significantly better, state-of-the-art results for both IQA and crowd counting.

Active Learning • Crowd Counting +5

Soft-PHOC Descriptor for End-to-End Word Spotting in Egocentric Scene Images

1 code implementation • 4 Sep 2018 • Dena Bazazian, Dimosthenis Karatzas, Andrew D. Bagdanov

In this paper we propose a technique to create and exploit an intermediate representation of images based on text attributes, namely character probability maps.

Attribute • Dynamic Time Warping +1

Leveraging Unlabeled Data for Crowd Counting by Learning to Rank

1 code implementation • CVPR 2018 • Xialei Liu, Joost Van de Weijer, Andrew D. Bagdanov

We propose a novel crowd counting approach that leverages abundantly available unlabeled crowd imagery in a learning-to-rank framework.

Crowd Counting • Image Retrieval +2
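The learning-to-rank idea above rests on a simple containment constraint: a crop of a crowd image cannot contain more people than the image it was cropped from, so a network's predicted counts can be ranked without any labels. A minimal sketch of such a margin ranking loss follows; the function name and default margin are assumptions, not the paper's exact formulation.

```python
def ranking_hinge_loss(score_full, score_crop, margin=0.0):
    """Hinge loss enforcing score(full image) >= score(crop) + margin.

    score_full and score_crop would be predicted crowd counts from the
    network for an image and a sub-crop of it; no count labels are needed,
    only the containment relation between the pair.
    """
    return max(0.0, margin - (score_full - score_crop))
```

The loss is zero whenever the prediction respects the ordering, so unlabeled pairs only contribute gradient when the network ranks a crop above its containing image.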

Rotate your Networks: Better Weight Consolidation and Less Catastrophic Forgetting

2 code implementations • 8 Feb 2018 • Xialei Liu, Marc Masana, Luis Herranz, Joost Van de Weijer, Antonio M. Lopez, Andrew D. Bagdanov

In this paper we propose an approach to avoiding catastrophic forgetting in sequential task learning scenarios.

Domain-adaptive deep network compression

2 code implementations • ICCV 2017 • Marc Masana, Joost Van de Weijer, Luis Herranz, Andrew D. Bagdanov, Jose M. Alvarez

We show that domain transfer leads to large shifts in network activations and that it is desirable to take this into account when compressing.

Low-rank compression

Review on Computer Vision Techniques in Emergency Situation

no code implementations • 24 Aug 2017 • Laura Lopez-Fuentes, Joost Van de Weijer, Manuel Gonzalez-Hidalgo, Harald Skinnemoen, Andrew D. Bagdanov

The range of emergencies in which computer vision tools have been considered or used is very wide, and there is great overlap across related emergency research.

RankIQA: Learning from Rankings for No-reference Image Quality Assessment

2 code implementations • ICCV 2017 • Xialei Liu, Joost Van de Weijer, Andrew D. Bagdanov

Furthermore, on the LIVE benchmark we show that our approach is superior to existing NR-IQA techniques and that we even outperform the state-of-the-art in full-reference IQA (FR-IQA) methods without having to resort to high-quality reference images to infer IQA.

Visual attention models for scene text recognition

no code implementations • 5 Jun 2017 • Suman K. Ghosh, Ernest Valveny, Andrew D. Bagdanov

A set of feature vectors is derived from an intermediate convolutional layer, with each vector corresponding to a different area of the image.

Language Modeling +1

Improving Text Proposals for Scene Images with Fully Convolutional Networks

1 code implementation • 16 Feb 2017 • Dena Bazazian, Raul Gomez, Anguelos Nicolaou, Lluis Gomez, Dimosthenis Karatzas, Andrew D. Bagdanov

Text Proposals have emerged as a class-dependent version of object proposals - efficient approaches to reduce the search space of possible text object locations in an image.

Object • Scene Text Recognition

Bandwidth limited object recognition in high resolution imagery

no code implementations • 16 Jan 2017 • Laura Lopez-Fuentes, Andrew D. Bagdanov, Joost Van de Weijer, Harald Skinnemoen

This paper proposes a novel method to optimize bandwidth usage for object detection in critical communication scenarios.

Object • Object Detection +3

Scale Coding Bag of Deep Features for Human Attribute and Action Recognition

no code implementations • 14 Dec 2016 • Fahad Shahbaz Khan, Joost Van de Weijer, Rao Muhammad Anwer, Andrew D. Bagdanov, Michael Felsberg, Jorma Laaksonen

Most approaches to human attribute and action recognition in still images are based on image representations in which multi-scale local features are pooled across scale into a single, scale-invariant encoding.

Action Recognition in Still Images • Attribute

On-the-fly Network Pruning for Object Detection

no code implementations • 11 May 2016 • Marc Masana, Joost Van de Weijer, Andrew D. Bagdanov

Object detection with deep neural networks is often performed by passing a few thousand candidate bounding boxes through a deep neural network for each image.

Network Pruning • Object +2

Sparse Radial Sampling LBP for Writer Identification

no code implementations • 23 Apr 2015 • Anguelos Nicolaou, Andrew D. Bagdanov, Marcus Liwicki, Dimosthenis Karatzas

In this paper we present the use of Sparse Radial Sampling Local Binary Patterns, a variant of Local Binary Patterns (LBP) for text-as-texture classification.

Binarization • General Classification +1
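As background for the LBP variant above, here is a minimal sketch of the standard 8-neighbor Local Binary Pattern code on a 3x3 patch. This is plain LBP for illustration, not the sparse radial sampling variant, and the function name and bit ordering are assumptions.

```python
import numpy as np

def lbp_code(patch):
    """Compute the standard 8-neighbor LBP code for the center of a 3x3 patch.

    Each neighbor contributes one bit: 1 if it is >= the center pixel.
    Neighbors are read clockwise starting from the top-left corner.
    """
    center = patch[1, 1]
    # Clockwise order: TL, T, TR, R, BR, B, BL, L.
    neighbors = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                 patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum((1 << i) for i, v in enumerate(neighbors) if v >= center)
```

Histograms of these per-pixel codes over an image region form the texture descriptor, which is what makes LBP a natural fit for treating handwriting as texture.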
