Search Results for author: Jary Pomponi

Found 9 papers, 8 papers with code

Centroids Matching: an efficient Continual Learning approach operating in the embedding space

1 code implementation • 3 Aug 2022 • Jary Pomponi, Simone Scardapane, Aurelio Uncini

In this paper, we propose a novel regularization method called Centroids Matching that, inspired by meta-learning approaches, fights catastrophic forgetting (CF) by operating in the feature space produced by the neural network, achieving good results while requiring a small memory footprint.

Continual Learning • Incremental Learning +1
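A minimal sketch of the centroid idea described above: compute per-class centroids in the embedding space and train with a prototypical-style loss that pulls each embedding toward its class centroid. The distance metric, batch-wise centroid computation, and how centroids are carried across tasks are assumptions for illustration, not the paper's exact formulation.

```python
# Sketch: centroid-based loss in the embedding space (PyTorch).
import torch
import torch.nn.functional as F

def centroid_loss(embeddings, labels):
    """Classify each embedding by negative squared distance to the
    per-class centroid computed from the current batch."""
    classes = labels.unique()
    centroids = torch.stack(
        [embeddings[labels == c].mean(dim=0) for c in classes])
    # Pairwise squared Euclidean distances: (batch, num_classes).
    dists = torch.cdist(embeddings, centroids).pow(2)
    # Map original labels to indices into `classes`.
    targets = (labels.unsqueeze(1) == classes.unsqueeze(0)).int().argmax(dim=1)
    return F.cross_entropy(-dists, targets)

# Usage with a hypothetical feature extractor `backbone`:
# feats = backbone(images)            # (batch, embedding_dim)
# loss = centroid_loss(feats, labels)
```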

Continual Learning with Invertible Generative Models

1 code implementation • 11 Feb 2022 • Jary Pomponi, Simone Scardapane, Aurelio Uncini

We show that our method performs favorably with respect to state-of-the-art approaches in the literature, with bounded computational power and memory overheads.

Continual Learning

Pixle: a fast and effective black-box attack based on rearranging pixels

1 code implementation • 4 Feb 2022 • Jary Pomponi, Simone Scardapane, Aurelio Uncini

Recent research has found that neural networks are vulnerable to several types of adversarial attacks, where the input samples are modified in such a way that the model misclassifies the resulting adversarial sample.
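A minimal sketch of the general idea named in the title, a black-box attack that rearranges pixels: repeatedly copy a randomly chosen source pixel over a randomly chosen destination and keep the change if the model's loss on the true label increases. The random-search loop, single-pixel moves, and iteration budget are illustrative assumptions; the actual Pixle algorithm uses a more structured patch-mapping and restart scheme.

```python
# Sketch: random-search attack that rearranges pixels (black-box, no gradients).
import torch

@torch.no_grad()
def pixel_rearrange_attack(model, image, label, iterations=200):
    """image: (C, H, W) float tensor; label: 0-dim long tensor."""
    adv = image.clone()
    _, h, w = adv.shape
    loss_fn = torch.nn.CrossEntropyLoss()
    best_loss = loss_fn(model(adv.unsqueeze(0)), label.unsqueeze(0))
    for _ in range(iterations):
        candidate = adv.clone()
        # Copy a random source pixel over a random destination pixel.
        sy, sx = torch.randint(h, (1,)).item(), torch.randint(w, (1,)).item()
        dy, dx = torch.randint(h, (1,)).item(), torch.randint(w, (1,)).item()
        candidate[:, dy, dx] = candidate[:, sy, sx]
        loss = loss_fn(model(candidate.unsqueeze(0)), label.unsqueeze(0))
        if loss > best_loss:  # keep only changes that hurt the model
            adv, best_loss = candidate, loss
    return adv
```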

Structured Ensembles: an Approach to Reduce the Memory Footprint of Ensemble Methods

2 code implementations • 6 May 2021 • Jary Pomponi, Simone Scardapane, Aurelio Uncini

In this paper, we propose a novel ensembling technique for deep neural networks that drastically reduces the required memory compared to alternative approaches.

Continual Learning

Pseudo-Rehearsal for Continual Learning with Normalizing Flows

1 code implementation • ICML Workshop LifelongML 2020 • Jary Pomponi, Simone Scardapane, Aurelio Uncini

We show that our method performs favorably with respect to state-of-the-art approaches in the literature, with bounded computational power and memory overheads.

Continual Learning
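A minimal sketch of pseudo-rehearsal as named in the title: draw pseudo-samples of past data from a generative model and replay them alongside the current task, matching the previous model's outputs on them. The `flow` object with a `.sample(n)` method, the KL-based replay term, and the mixing weight are assumptions for illustration; the paper works with normalizing flows and its own loss formulation.

```python
# Sketch: pseudo-rehearsal training step with a generative replay model.
import torch
import torch.nn.functional as F

def training_step(model, old_model, flow, x, y, replay_size=32, alpha=1.0):
    # Supervised loss on the current task's real data.
    task_loss = F.cross_entropy(model(x), y)

    # Generate pseudo-samples of past data and match the old model's outputs.
    pseudo = flow.sample(replay_size)            # assumed generator API
    with torch.no_grad():
        old_logits = old_model(pseudo)
    replay_loss = F.kl_div(
        F.log_softmax(model(pseudo), dim=-1),
        F.softmax(old_logits, dim=-1),
        reduction="batchmean",
    )
    return task_loss + alpha * replay_loss
```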

Bayesian Neural Networks With Maximum Mean Discrepancy Regularization

4 code implementations • 2 Mar 2020 • Jary Pomponi, Simone Scardapane, Aurelio Uncini

Bayesian Neural Networks (BNNs) are trained to optimize an entire distribution over their weights instead of a single set, which brings significant advantages in terms of, e.g., interpretability, multi-task learning, and calibration.

Image Classification • Multi-Task Learning +1
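The paper's title names maximum mean discrepancy (MMD) as the regularizer for BNN training; below is a minimal sketch of the standard biased MMD^2 estimator between weight samples drawn from the variational posterior and from the prior, with a Gaussian kernel. The kernel bandwidth, sample shapes, and how the estimate enters the training loss are assumptions for illustration.

```python
# Sketch: MMD^2 between posterior and prior weight samples (Gaussian kernel).
import torch

def gaussian_kernel(a, b, sigma=1.0):
    """k(a, b) = exp(-||a - b||^2 / (2 sigma^2)), computed pairwise."""
    d2 = torch.cdist(a, b).pow(2)
    return torch.exp(-d2 / (2 * sigma ** 2))

def mmd(posterior_samples, prior_samples, sigma=1.0):
    """Biased estimator of MMD^2 between two sets of samples, each (n, dim)."""
    k_pp = gaussian_kernel(posterior_samples, posterior_samples, sigma).mean()
    k_qq = gaussian_kernel(prior_samples, prior_samples, sigma).mean()
    k_pq = gaussian_kernel(posterior_samples, prior_samples, sigma).mean()
    return k_pp + k_qq - 2 * k_pq

# Usage sketch (hypothetical names):
# loss = nll(model(x), y) + beta * mmd(posterior_weight_samples, prior_weight_samples)
```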

DeepRICH: Learning Deeply Cherenkov Detectors

no code implementations • 26 Nov 2019 • Cristiano Fanelli, Jary Pomponi

Imaging Cherenkov detectors are widely used for particle identification (PID) in nuclear and particle physics experiments, where developing fast reconstruction algorithms is becoming of paramount importance to allow for near real-time calibration and data quality control, as well as to speed up offline analysis of large amounts of data.

Efficient Continual Learning in Neural Networks with Embedding Regularization

1 code implementation • 9 Sep 2019 • Jary Pomponi, Simone Scardapane, Vincenzo Lomonaco, Aurelio Uncini

Continual learning of deep neural networks is a key requirement for scaling them up to more complex applicative scenarios and for achieving real lifelong learning of these architectures.

Continual Learning
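A minimal sketch of the embedding-regularization idea named in the title: keep a small memory of past examples together with the embeddings the network produced for them when they were learned, and penalize drift of the current embeddings away from those stored targets. The squared-error penalty, uniform weighting, and memory handling are assumptions for illustration.

```python
# Sketch: embedding regularizer for continual learning.
import torch
import torch.nn.functional as F

def embedding_regularizer(feature_extractor, memory_inputs, memory_embeddings):
    """memory_inputs: (m, ...) stored past samples;
    memory_embeddings: (m, d) embeddings recorded when they were learned."""
    current = feature_extractor(memory_inputs)
    return F.mse_loss(current, memory_embeddings)

# Total loss on the current task (lambda_reg is a hyperparameter):
# loss = F.cross_entropy(model(x), y) \
#        + lambda_reg * embedding_regularizer(model.features, mem_x, mem_emb)
```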
