Search Results for author: Jary Pomponi

Found 12 papers, 9 papers with code

Conditional computation in neural networks: principles and research trends

no code implementations • 12 Mar 2024 • Simone Scardapane, Alessandro Baiocchi, Alessio Devoto, Valerio Marsocci, Pasquale Minervini, Jary Pomponi

This article summarizes principles and ideas from the emerging area of applying conditional computation methods to the design of neural networks.

Transfer Learning
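As a generic illustration of the area (not a method from the article itself), the sketch below shows one common form of conditional computation: an early-exit block whose auxiliary classifier lets confident samples skip the rest of the network. All class and parameter names here are hypothetical.

```python
import torch
import torch.nn as nn

class EarlyExitBlock(nn.Module):
    """A backbone stage with an auxiliary classifier attached.

    At inference time, samples whose intermediate prediction is already
    confident enough can skip the remaining stages (conditional computation).
    """
    def __init__(self, stage: nn.Module, hidden_dim: int, num_classes: int,
                 threshold: float = 0.9):
        super().__init__()
        self.stage = stage
        self.exit_head = nn.Linear(hidden_dim, num_classes)
        self.threshold = threshold

    def forward(self, x):
        h = self.stage(x)                                 # (B, C, H, W) features
        logits = self.exit_head(h.mean(dim=(-2, -1)))     # global average pool
        confidence = logits.softmax(-1).max(-1).values
        should_exit = confidence >= self.threshold        # per-sample gate
        return h, logits, should_exit
```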

Cascaded Scaling Classifier: class incremental learning with probability scaling

1 code implementation • 2 Feb 2024 • Jary Pomponi, Alessio Devoto, Simone Scardapane

The proposed Cascaded Scaling Classifier is a gated incremental classifier that helps the model modify past predictions without directly interfering with them.

Class Incremental Learning • Incremental Learning • +1
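The snippet below is a minimal sketch of what a gated incremental classifier could look like: one head per task, each scaled by a learned gate, so that later tasks rescale past outputs rather than overwrite them. It illustrates the general idea only and is not the paper's exact Cascaded Scaling Classifier; all names are made up.

```python
import torch
import torch.nn as nn

class GatedIncrementalClassifier(nn.Module):
    """One classification head per task; a learned scalar gate per head
    rescales that head's probabilities, letting new tasks adjust past
    predictions without directly modifying past heads' weights."""
    def __init__(self, feat_dim: int):
        super().__init__()
        self.heads = nn.ModuleList()
        self.gates = nn.ParameterList()
        self.feat_dim = feat_dim

    def add_task(self, num_classes: int):
        self.heads.append(nn.Linear(self.feat_dim, num_classes))
        self.gates.append(nn.Parameter(torch.zeros(1)))   # sigmoid(0) = 0.5

    def forward(self, features):
        # Scale each head's probabilities by its gate, then concatenate.
        outputs = [torch.sigmoid(g) * head(features).softmax(-1)
                   for head, g in zip(self.heads, self.gates)]
        return torch.cat(outputs, dim=-1)
```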

NACHOS: Neural Architecture Search for Hardware Constrained Early Exit Neural Networks

no code implementations • 24 Jan 2024 • Matteo Gambella, Jary Pomponi, Simone Scardapane, Manuel Roveri

To this end, this work presents Neural Architecture Search for Hardware Constrained Early Exit Neural Networks (NACHOS), the first NAS framework for designing optimal EENNs under constraints on both accuracy and the number of Multiply and Accumulate (MAC) operations performed at inference time.

Neural Architecture Search
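To make the MAC constraint concrete, here is a rough sketch of how one might count MAC operations with forward hooks and fold them into a constrained search objective. This is an assumption-laden simplification, not NACHOS's actual cost model; `nas_score` and its penalty form are hypothetical.

```python
import torch
import torch.nn as nn

def count_macs(model: nn.Module, in_shape) -> int:
    """Rough multiply-accumulate (MAC) count for Conv2d/Linear layers,
    gathered with forward hooks on a dummy input. A real NAS framework
    would use a more precise hardware cost model."""
    macs = 0

    def hook(module, inputs, output):
        nonlocal macs
        if isinstance(module, nn.Conv2d):
            out_elems = output.numel() // output.shape[0]   # C_out*H_out*W_out
            k = module.kernel_size[0] * module.kernel_size[1]
            macs += out_elems * k * module.in_channels // module.groups
        elif isinstance(module, nn.Linear):
            macs += module.in_features * module.out_features

    handles = [m.register_forward_hook(hook) for m in model.modules()]
    model.eval()
    with torch.no_grad():
        model(torch.zeros(1, *in_shape))
    for h in handles:
        h.remove()
    return macs

def nas_score(accuracy: float, macs: int, mac_budget: int, penalty: float = 1e-9):
    # Hypothetical constrained objective: reward accuracy, penalize
    # candidates whose MAC count exceeds the budget.
    return accuracy - penalty * max(0, macs - mac_budget)
```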

Centroids Matching: an efficient Continual Learning approach operating in the embedding space

1 code implementation • 3 Aug 2022 • Jary Pomponi, Simone Scardapane, Aurelio Uncini

In this paper, we propose a novel regularization method called Centroids Matching which, inspired by meta-learning approaches, fights catastrophic forgetting by operating in the feature space produced by the neural network, achieving good results while requiring a small memory footprint.

Continual Learning • Incremental Learning • +1
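A minimal sketch of the general centroid-matching idea follows: compute per-class centroids in the embedding space and classify by distance to them, in the spirit of prototypical/meta-learning methods. The exact loss used in the paper may differ.

```python
import torch
import torch.nn.functional as F

def centroid_loss(embeddings, labels):
    """Compute one centroid per class present in the batch and classify
    each embedding by negative squared distance to every centroid.
    Illustrative of the general idea, not the paper's exact objective."""
    classes = labels.unique()                 # sorted unique class ids
    centroids = torch.stack([embeddings[labels == c].mean(0) for c in classes])
    # Logits = negative squared Euclidean distance to each centroid.
    logits = -torch.cdist(embeddings, centroids).pow(2)
    targets = torch.bucketize(labels, classes)  # map labels to centroid rows
    return F.cross_entropy(logits, targets)
```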

Continual Learning with Invertible Generative Models

1 code implementation • 11 Feb 2022 • Jary Pomponi, Simone Scardapane, Aurelio Uncini

We show that our method performs favorably with respect to state-of-the-art approaches in the literature, with bounded computational power and memory overheads.

Continual Learning

Pixle: a fast and effective black-box attack based on rearranging pixels

1 code implementation • 4 Feb 2022 • Jary Pomponi, Simone Scardapane, Aurelio Uncini

Recent research has found that neural networks are vulnerable to several types of adversarial attacks, in which input samples are modified in such a way that the model misclassifies them.
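Below is a hedged sketch in the spirit of the attack: a black-box random search that relocates single pixels and keeps a move only if it lowers the true-class probability. The actual Pixle algorithm rearranges patches with a more refined search; this simplification is for illustration only.

```python
import torch

def pixle_like_attack(model, x, label, iters=200):
    """Black-box random search: move one random source pixel onto a
    random destination, keep the move only if it lowers the true-class
    probability. `x` is a (C, H, W) tensor; simplified for illustration."""
    adv = x.clone()
    _, h, w = adv.shape
    with torch.no_grad():
        best = model(adv.unsqueeze(0)).softmax(-1)[0, label].item()
        for _ in range(iters):
            sy, sx = torch.randint(h, (1,)).item(), torch.randint(w, (1,)).item()
            dy, dx = torch.randint(h, (1,)).item(), torch.randint(w, (1,)).item()
            saved = adv[:, dy, dx].clone()
            adv[:, dy, dx] = adv[:, sy, sx]               # relocate one pixel
            probs = model(adv.unsqueeze(0)).softmax(-1)[0]
            if probs[label].item() < best:
                best = probs[label].item()                # keep the move
                if probs.argmax().item() != label:
                    break                                 # success: misclassified
            else:
                adv[:, dy, dx] = saved                    # revert the move
    return adv
```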

Structured Ensembles: an Approach to Reduce the Memory Footprint of Ensemble Methods

2 code implementations • 6 May 2021 • Jary Pomponi, Simone Scardapane, Aurelio Uncini

In this paper, we propose a novel ensembling technique for deep neural networks, which is able to drastically reduce the required memory compared to alternative approaches.

Continual Learning
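One plausible reading of the memory saving, sketched below under stated assumptions, is to let several ensemble members share one layer's weights, each restricted to its own fixed structured (channel-wise) mask. This is illustrative only and not the paper's exact pruning procedure.

```python
import torch
import torch.nn as nn

class MaskedEnsembleConv(nn.Module):
    """Several ensemble members share one convolution's weights; each
    member sees only its own fixed subset of output channels, so memory
    stays close to that of a single network."""
    def __init__(self, in_ch, out_ch, members=4, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size,
                              padding=kernel_size // 2)
        # One random, fixed binary mask over output channels per member.
        masks = (torch.rand(members, out_ch) < 0.5).float()
        self.register_buffer("masks", masks)

    def forward(self, x, member: int):
        y = self.conv(x)
        return y * self.masks[member].view(1, -1, 1, 1)
```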

Pseudo-Rehearsal for Continual Learning with Normalizing Flows

1 code implementation • ICML Workshop LifelongML 2020 • Jary Pomponi, Simone Scardapane, Aurelio Uncini

We show that our method performs favorably with respect to state-of-the-art approaches in the literature, with bounded computational power and memory overheads.

Continual Learning
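A minimal sketch of pseudo-rehearsal at the embedding level, assuming a flow object that exposes a `sample()` method (a hypothetical API): pseudo-embeddings drawn from a normalizing flow trained on past tasks are used to keep the current classifier head close to a frozen copy. This illustrates the general mechanism, not the papers' exact training procedure.

```python
import torch
import torch.nn.functional as F

def rehearsal_loss(flow, old_head, new_head, n_samples=64):
    """Pseudo-rehearsal sketch: sample pseudo past-task embeddings from a
    normalizing flow, then push the current head to match the predictions
    of a frozen copy of the old head on those samples."""
    with torch.no_grad():
        z = flow.sample(n_samples)              # assumed flow API
        old_probs = old_head(z).softmax(-1)     # targets from frozen head
    return F.kl_div(new_head(z).log_softmax(-1), old_probs,
                    reduction="batchmean")
```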

Bayesian Neural Networks With Maximum Mean Discrepancy Regularization

4 code implementations • 2 Mar 2020 • Jary Pomponi, Simone Scardapane, Aurelio Uncini

Bayesian Neural Networks (BNNs) are trained to optimize an entire distribution over their weights instead of a single set, having significant advantages in terms of, e.g., interpretability, multi-task learning, and calibration.

Image Classification • Multi-Task Learning • +1
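To make the regularizer concrete, the sketch below computes a (biased) empirical MMD² with an RBF kernel between weight samples drawn from the variational posterior and from the prior. The kernel choice, estimator, and the `posterior`/`prior` sampler names are assumptions, not the paper's exact formulation.

```python
import torch

def mmd_rbf(x, y, sigma=1.0):
    """Biased empirical MMD^2 with an RBF kernel between two sample sets
    (one sample per row). Here it sketches pulling sampled BNN weights
    toward samples from the prior."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

# Usage sketch (hypothetical samplers): add the MMD term to the task loss.
# w_post = posterior.rsample((32,))
# w_prior = prior.sample((32,))
# loss = task_loss + lam * mmd_rbf(w_post.flatten(1), w_prior.flatten(1))
```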

DeepRICH: Learning Deeply Cherenkov Detectors

no code implementations • 26 Nov 2019 • Cristiano Fanelli, Jary Pomponi

Imaging Cherenkov detectors are widely used for particle identification (PID) in nuclear and particle physics experiments, where developing fast reconstruction algorithms is becoming of paramount importance, both to allow for near real-time calibration and data quality control and to speed up offline analysis of large amounts of data.

Efficient Continual Learning in Neural Networks with Embedding Regularization

1 code implementation • 9 Sep 2019 • Jary Pomponi, Simone Scardapane, Vincenzo Lomonaco, Aurelio Uncini

Continual learning of deep neural networks is a key requirement for scaling them up to more complex application scenarios and for achieving real lifelong learning with these architectures.

Continual Learning
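A minimal sketch of the embedding-regularization idea, under the assumption that a small buffer stores past inputs together with the embeddings the network originally produced for them: the current encoder is penalized for drifting away from those stored embeddings.

```python
import torch

def embedding_regularizer(encoder, memory):
    """Continual-learning penalty: `memory` is a list of
    (input, old_embedding) pairs saved from past tasks (illustrative
    buffer format). The current encoder is pulled back toward the
    embeddings it produced when those tasks were learned."""
    loss = 0.0
    for x_old, e_old in memory:
        e_new = encoder(x_old.unsqueeze(0)).squeeze(0)
        loss = loss + (e_new - e_old).pow(2).sum()
    return loss / max(len(memory), 1)
```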
