Search Results for author: Marco Toldo

Found 12 papers, 6 papers with code

RECALL+: Adversarial Web-based Replay for Continual Learning in Semantic Segmentation

no code implementations • 19 Sep 2023 • Chang Liu, Giulia Rizzoli, Francesco Barbato, Andrea Maracani, Marco Toldo, Umberto Michieli, Yi Niu, Pietro Zanuttigh

Catastrophic forgetting of previous knowledge is a critical issue in continual learning, typically handled through various regularization strategies.

Continual Learning • Incremental Learning +1

Asynchronous Federated Continual Learning

1 code implementation • 7 Apr 2023 • Donald Shenaj, Marco Toldo, Alberto Rigon, Pietro Zanuttigh

We introduce a novel federated learning setting (AFCL) where the continual learning of multiple tasks happens at each client with different orderings and in asynchronous time slots.

Continual Learning • Federated Learning

Learning with Style: Continual Semantic Segmentation Across Tasks and Domains

no code implementations • 13 Oct 2022 • Marco Toldo, Umberto Michieli, Pietro Zanuttigh

Then, we address the proposed setup by using style transfer techniques to extend knowledge across domains when learning incremental tasks, and a robust distillation framework to effectively recollect task knowledge under incremental domain shift.

Autonomous Driving • Class Incremental Learning +5
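The style-transfer step mentioned in the snippet above can be illustrated with an AdaIN-style alignment of per-channel feature statistics, a common choice for cross-domain style transfer. This is a minimal sketch under that assumption; the paper's actual transfer mechanism, the `adain` name, and the `eps` constant are illustrative, not taken from the work.

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Re-style content features by matching per-channel mean/std to style.

    content, style: float arrays of shape (C, H, W). Illustrative sketch of
    adaptive instance normalization, not the paper's exact method.
    """
    # Per-channel statistics over the spatial dimensions
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True)
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    # Normalize content, then re-scale/shift with the style statistics
    return (content - c_mean) / (c_std + eps) * s_std + s_mean
```

After the transform, each channel of the output carries the style features' mean and standard deviation while preserving the content features' spatial layout.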

Learning Across Domains and Devices: Style-Driven Source-Free Domain Adaptation in Clustered Federated Learning

1 code implementation • 5 Oct 2022 • Donald Shenaj, Eros Fanì, Marco Toldo, Debora Caldarola, Antonio Tavera, Umberto Michieli, Marco Ciccone, Pietro Zanuttigh, Barbara Caputo

Federated Learning (FL) has recently emerged as a possible way to tackle the domain shift in real-world Semantic Segmentation (SS) without compromising the private nature of the collected data.

Autonomous Driving • Federated Learning +2

Bring Evanescent Representations to Life in Lifelong Class Incremental Learning

no code implementations • CVPR 2022 • Marco Toldo, Mete Ozay

In Class Incremental Learning (CIL), a classification model is progressively trained at each incremental step on an evolving dataset of new classes, while at the same time, it is required to preserve knowledge of all the classes observed so far.

Class Incremental Learning • Incremental Learning
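A common way to preserve knowledge of earlier classes in CIL, as described in the snippet above, is to distill from the frozen previous-step model while training on the new classes. The following is a minimal NumPy sketch of such a distillation loss; it is a generic CIL technique offered for illustration, not necessarily the mechanism of this particular paper, and the temperature value is an assumption.

```python
import numpy as np

def softmax(z, t=1.0):
    # Temperature-scaled, numerically stable softmax
    z = z / t
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(new_logits, old_logits, t=2.0):
    """Cross-entropy of the current model's soft predictions against the
    frozen old model's soft targets over previously seen classes.

    new_logits, old_logits: arrays of shape (batch, n_old_classes).
    t: distillation temperature (illustrative default).
    """
    p_old = softmax(old_logits, t)   # soft targets from the frozen model
    p_new = softmax(new_logits, t)   # current model's predictions
    return -(p_old * np.log(p_new + 1e-12)).sum(axis=-1).mean()
```

In training, this term is added to the standard classification loss on the new classes so the updated model stays close to its predecessor on the old ones.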

Road Scenes Segmentation Across Different Domains by Disentangling Latent Representations

1 code implementation • 6 Aug 2021 • Francesco Barbato, Umberto Michieli, Marco Toldo, Pietro Zanuttigh

Deep learning models obtain impressive accuracy in road scene understanding; however, they need a large quantity of labeled samples for their training.

Domain Adaptation • Semantic Segmentation

Latent Space Regularization for Unsupervised Domain Adaptation in Semantic Segmentation

1 code implementation • 6 Apr 2021 • Francesco Barbato, Marco Toldo, Umberto Michieli, Pietro Zanuttigh

Deep convolutional neural networks for semantic segmentation achieve outstanding accuracy; however, they also have two major drawbacks: first, they do not generalize well to distributions even slightly different from that of the training data; second, they require a huge amount of labeled data for their optimization.

Autonomous Driving • Clustering +3

Unsupervised Domain Adaptation in Semantic Segmentation via Orthogonal and Clustered Embeddings

1 code implementation • 25 Nov 2020 • Marco Toldo, Umberto Michieli, Pietro Zanuttigh

Deep learning frameworks allowed for a remarkable advancement in semantic segmentation, but the data hungry nature of convolutional networks has rapidly raised the demand for adaptation techniques able to transfer learned knowledge from label-abundant domains to unlabeled ones.

Clustering • Semantic Segmentation +1

Unsupervised Domain Adaptation in Semantic Segmentation: a Review

no code implementations • 21 May 2020 • Marco Toldo, Andrea Maracani, Umberto Michieli, Pietro Zanuttigh

The aim of this paper is to give an overview of the recent advancements in the Unsupervised Domain Adaptation (UDA) of deep networks for semantic segmentation.

Autonomous Driving • Multi-Task Learning +2

Unsupervised Domain Adaptation with Multiple Domain Discriminators and Adaptive Self-Training

no code implementations • 27 Apr 2020 • Teo Spadotto, Marco Toldo, Umberto Michieli, Pietro Zanuttigh

We introduce a novel UDA framework where a standard supervised loss on labeled synthetic data is supported by an adversarial module and a self-training strategy aiming at aligning the two domain distributions.

Segmentation • Semantic Segmentation +1
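The self-training strategy described in the snippet above is typically instantiated by keeping only the model's confident predictions on unlabeled target data as pseudo-labels. A minimal sketch under that assumption follows; the confidence threshold and the `-1` ignore index are illustrative defaults, not values from the paper.

```python
import numpy as np

def pseudo_labels(probs, threshold=0.9):
    """Select confident predictions as pseudo-labels for self-training.

    probs: array of shape (..., n_classes) with per-class probabilities.
    Returns integer labels, with -1 marking low-confidence entries to be
    ignored by the loss. Illustrative sketch, not the paper's exact scheme.
    """
    conf = probs.max(axis=-1)            # confidence of the top prediction
    labels = probs.argmax(axis=-1)       # predicted class index
    labels[conf < threshold] = -1        # discard uncertain predictions
    return labels
```

These pseudo-labels then serve as supervision on the unlabeled target domain, alongside the adversarial alignment module.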
