no code implementations • 25 Feb 2024 • Andrea Maracani, Raffaello Camoriano, Elisa Maiettini, Davide Talon, Lorenzo Rosasco, Lorenzo Natale
This study provides a comprehensive benchmark framework for Source-Free Unsupervised Domain Adaptation (SF-UDA) in image classification, aiming at a rigorous empirical understanding of how the key design factors of SF-UDA methods interact.
no code implementations • 10 Feb 2023 • Andrea Maracani, Raffaello Camoriano, Elisa Maiettini, Davide Talon, Lorenzo Rosasco, Lorenzo Natale
Fine-tuning and Domain Adaptation have emerged as effective strategies for efficiently transferring deep learning models to new target tasks.
1 code implementation • 27 Jun 2022 • Federico Ceola, Elisa Maiettini, Giulia Pasquale, Giacomo Meanti, Lorenzo Rosasco, Lorenzo Natale
In this work, we focus on the instance segmentation task and provide a comprehensive study of techniques for adapting an object segmentation model in the presence of novel objects or domain shifts.
1 code implementation • 18 Mar 2022 • Federico Vasile, Elisa Maiettini, Giulia Pasquale, Astrid Florio, Nicolò Boccardo, Lorenzo Natale
To overcome the scarcity of such data and reduce the need for tedious data collection sessions when training the system, we devise a pipeline for rendering synthetic visual sequences of hand trajectories.
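At a high level, such a pipeline amounts to sampling intermediate hand poses along a trajectory and rendering one frame per pose. The sketch below is an assumption about that structure, not the paper's actual pipeline: `render_fn`, the linear pose blend, and the frame count are all hypothetical placeholders (real hand poses would typically need proper rotation interpolation rather than a linear blend).

```python
import numpy as np

def synthesize_trajectory(start_pose, grasp_pose, n_frames, render_fn):
    """Hedged sketch: interpolate a hand pose from a starting configuration
    to the final grasp and render one synthetic frame per step.
    `render_fn` stands in for whatever renderer is actually used."""
    frames = []
    for t in np.linspace(0.0, 1.0, n_frames):
        # Simplistic linear blend between the two pose vectors; a real
        # pipeline would interpolate rotations with slerp or similar.
        pose = (1.0 - t) * start_pose + t * grasp_pose
        frames.append(render_fn(pose))
    return frames
```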
no code implementations • 28 Dec 2020 • Elisa Maiettini, Andrea Maracani, Raffaello Camoriano, Giulia Pasquale, Vadim Tikhanoff, Lorenzo Rosasco, Lorenzo Natale
We show that the robot can improve its adaptation to novel domains, either by interacting with a human teacher (Active Learning) or through autonomous supervision (Semi-Supervised Learning).
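The two supervision modes can be pictured as routing each detection either to a human teacher or to the model's own predictions, typically based on confidence. The snippet below is a minimal sketch under that assumption; the thresholds and the detection tuple format are illustrative, not taken from the paper.

```python
# Hypothetical confidence thresholds; the actual values and scoring
# rules are assumptions, not the method described in the paper.
QUERY_THRESHOLD = 0.5   # below this, the detection is ambiguous
ACCEPT_THRESHOLD = 0.9  # above this, the detection is trusted as-is

def split_detections(detections):
    """Route each detection to human labeling (Active Learning) or
    pseudo-labeling (Semi-Supervised Learning) based on its confidence."""
    to_human, pseudo_labeled = [], []
    for det in detections:  # det = (image_id, box, label, confidence)
        conf = det[3]
        if conf < QUERY_THRESHOLD:
            to_human.append(det)        # ask the teacher for the true label
        elif conf > ACCEPT_THRESHOLD:
            pseudo_labeled.append(det)  # keep the predicted label as supervision
        # detections in between are simply discarded in this sketch
    return to_human, pseudo_labeled
```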
1 code implementation • 25 Nov 2020 • Federico Ceola, Elisa Maiettini, Giulia Pasquale, Lorenzo Rosasco, Lorenzo Natale
Our approach is validated on the YCB-Video dataset, which is widely adopted in the computer vision and robotics communities, demonstrating that we can match and even surpass state-of-the-art performance with a significant (${\sim}6\times$) reduction in training time.
1 code implementation • 25 Nov 2020 • Federico Ceola, Elisa Maiettini, Giulia Pasquale, Lorenzo Rosasco, Lorenzo Natale
This shortens training time while maintaining state-of-the-art performance.
no code implementations • 23 Mar 2018 • Elisa Maiettini, Giulia Pasquale, Lorenzo Rosasco, Lorenzo Natale
We address the size and imbalance of the training data by exploiting the stochastic subsampling intrinsic to the method and a novel, fast bootstrapping approach.
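Bootstrapping against an imbalanced set is commonly done by training on a random subsample of the (much larger) negative pool and then repeatedly folding in the negatives the current model finds hardest. The sketch below illustrates that generic pattern, not the paper's exact procedure: `train_fn`, `predict_fn`, the batch size, and the number of rounds are all hypothetical.

```python
import numpy as np

def bootstrap_negatives(positives, negatives, train_fn, predict_fn,
                        batch_size=1000, n_rounds=3, seed=0):
    """Hedged sketch of hard-negative bootstrapping on an imbalanced set:
    start from a random subsample of negatives, then repeatedly add the
    negatives the current model scores highest (hardest) and retrain."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(negatives), size=min(batch_size, len(negatives)),
                     replace=False)
    active_negs = negatives[idx]              # balanced starting subset
    model = train_fn(positives, active_negs)
    for _ in range(n_rounds):
        scores = predict_fn(model, negatives)                 # score all negatives
        hard = negatives[np.argsort(-scores)[:batch_size]]    # most confusing ones
        active_negs = np.concatenate([active_negs, hard])
        model = train_fn(positives, active_negs)              # retrain with hard negatives
    return model
```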