no code implementations • 27 Dec 2024 • Jary Pomponi, Mattia Merluzzi, Alessio Devoto, Mateus Pontes Mota, Paolo Di Lorenzo, Simone Scardapane
This paper presents a novel framework for goal-oriented semantic communications leveraging recursive early exit models.
no code implementations • 16 Aug 2024 • Alessio Devoto, Federico Alvetreti, Jary Pomponi, Paolo Di Lorenzo, Pasquale Minervini, Simone Scardapane
To this end, in this paper we introduce an efficient fine-tuning method for ViTs called $\textbf{ALaST}$ ($\textit{Adaptive Layer Selection Fine-Tuning for Vision Transformers}$) to speed up the fine-tuning process while reducing computational cost, memory load, and training time.
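As a hedged sketch of the general idea behind adaptive layer selection during fine-tuning (not the actual ALaST selection rule, which is described in the paper), the snippet below trains only a subset of transformer blocks per step and keeps the rest frozen; the random selection, toy loss, and hyperparameters are placeholders.

```python
import random
import torch
import torch.nn as nn

# Hypothetical sketch: at each fine-tuning step, only a subset of transformer
# blocks is updated while the rest stay frozen, reducing compute and memory.
# The uniform-random selection rule below is a placeholder, not ALaST's.
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True), num_layers=6
)
optimizer = torch.optim.AdamW(encoder.parameters(), lr=1e-4)

def training_step(batch, k_active=2):
    active = set(random.sample(range(len(encoder.layers)), k_active))
    for i, layer in enumerate(encoder.layers):
        for p in layer.parameters():
            p.requires_grad_(i in active)  # freeze all but the selected layers
    loss = encoder(batch).pow(2).mean()    # stand-in for a real fine-tuning loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

print(training_step(torch.randn(8, 16, 64)))
```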
1 code implementation • 19 Jul 2024 • Bartłomiej Krzepkowski, Monika Michaluk, Franciszek Szarwacki, Piotr Kubaty, Jary Pomponi, Tomasz Trzciński, Bartosz Wójcik, Kamil Adamczewski
Early exits are an important efficiency mechanism integrated into deep neural networks that allows the network's forward pass to terminate before all of its layers have been processed.
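As a minimal sketch of the general early-exit mechanism (not the specific configurations benchmarked in this work), the toy model below attaches an auxiliary classifier after each block and stops the forward pass once a prediction is confident enough; the block structure, head design, and confidence threshold are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    """Toy network with an auxiliary classifier (early exit) after each block."""

    def __init__(self, dim=64, num_blocks=4, num_classes=10, threshold=0.9):
        super().__init__()
        self.blocks = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(num_blocks)]
        )
        self.exits = nn.ModuleList(
            [nn.Linear(dim, num_classes) for _ in range(num_blocks)]
        )
        self.threshold = threshold  # confidence required to stop early

    def forward(self, x):
        for block, exit_head in zip(self.blocks, self.exits):
            x = block(x)
            logits = exit_head(x)
            confidence = logits.softmax(dim=-1).max(dim=-1).values
            # Terminate the forward pass as soon as the exit is confident enough
            # (checked for the whole batch here for simplicity).
            if bool((confidence > self.threshold).all()):
                return logits
        return logits  # fall back to the final exit

model = EarlyExitNet()
print(model(torch.randn(2, 64)).shape)  # torch.Size([2, 10])
```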
no code implementations • 25 Apr 2024 • Alessio Devoto, Simone Petruzzi, Jary Pomponi, Paolo Di Lorenzo, Simone Scardapane
In this paper, we propose a novel design for AI-native goal-oriented communications, exploiting transformer neural networks under dynamic inference constraints on bandwidth and computation.
no code implementations • 12 Mar 2024 • Simone Scardapane, Alessandro Baiocchi, Alessio Devoto, Valerio Marsocci, Pasquale Minervini, Jary Pomponi
This article summarizes principles and ideas from the emerging area of applying \textit{conditional computation} methods to the design of neural networks.
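As a minimal illustration of conditional computation (one of the generic mechanisms the article surveys, not a construction specific to it), the module below uses a learned gate to decide per sample whether an expensive branch contributes to the output; the hard threshold and the training caveats noted in the comments are assumptions of this sketch.

```python
import torch
import torch.nn as nn

class ConditionalBlock(nn.Module):
    """A learned gate decides, per sample, whether the expensive branch contributes."""

    def __init__(self, dim=32):
        super().__init__()
        self.gate = nn.Linear(dim, 1)
        self.branch = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, x):
        # Hard gating decision; training usually relaxes it (e.g. straight-through
        # estimators or Gumbel-Softmax). The branch is still computed densely here
        # for simplicity; a real implementation would skip it when keep == 0.
        keep = (torch.sigmoid(self.gate(x)) > 0.5).float()
        return x + keep * self.branch(x)

block = ConditionalBlock()
print(block(torch.randn(4, 32)).shape)  # torch.Size([4, 32])
```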
2 code implementations • 2 Feb 2024 • Jary Pomponi, Alessio Devoto, Simone Scardapane
The latter is a gated incremental classifier, helping the model modify past predictions without directly interfering with them.
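Purely as an illustrative sketch of what a gated incremental classifier could look like (the paper's actual design may differ), the module below keeps one linear head per task plus a learned scalar gate per head that rescales its logits, so the contribution of past heads can be adjusted without modifying their weights.

```python
import torch
import torch.nn as nn

# Illustrative only: one head per task, each rescaled by a learned gate, so that
# earlier predictions can be re-weighted without rewriting the old heads. This is
# one plausible design, not necessarily the mechanism proposed in the paper.
class GatedIncrementalClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.heads = nn.ModuleList()
        self.gates = nn.ParameterList()

    def add_task(self, num_classes, feat_dim=64):
        self.heads.append(nn.Linear(feat_dim, num_classes))
        self.gates.append(nn.Parameter(torch.ones(1)))

    def forward(self, features):
        return torch.cat(
            [torch.sigmoid(g) * head(features) for g, head in zip(self.gates, self.heads)],
            dim=-1,
        )

clf = GatedIncrementalClassifier()
clf.add_task(num_classes=5)
clf.add_task(num_classes=5)
print(clf(torch.randn(2, 64)).shape)  # torch.Size([2, 10])
```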
2 code implementations • 24 Jan 2024 • Matteo Gambella, Jary Pomponi, Simone Scardapane, Manuel Roveri
To this end, this work presents Neural Architecture Search for Hardware Constrained Early Exit Neural Networks (NACHOS), the first NAS framework for the design of optimal EENNs satisfying constraints on the accuracy and the number of Multiply and Accumulate (MAC) operations performed by the EENNs at inference time.
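As a hedged sketch of how a MAC budget might constrain an early-exit architecture search (the cost model, numbers, and brute-force enumeration below are made up and are not the NACHOS algorithm), the snippet estimates the expected MACs of candidate exit placements and keeps only those within budget.

```python
# Hypothetical sketch: filter candidate early-exit placements by an estimated
# MAC budget before evaluating accuracy, as a constrained NAS loop might do.
from itertools import combinations

BLOCK_MACS = [12e6, 12e6, 24e6, 24e6, 48e6]   # per-backbone-block MACs (toy values)
EXIT_MACS = 1e6                                # cost of one auxiliary classifier
BUDGET = 90e6                                  # maximum allowed expected MACs

def estimated_macs(exit_positions, exit_rates):
    """Expected MACs when a fraction exit_rates[i] of samples stops at exit i."""
    macs, remaining = 0.0, 1.0
    for block_idx, block_cost in enumerate(BLOCK_MACS):
        macs += remaining * block_cost
        if block_idx in exit_positions:
            macs += remaining * EXIT_MACS
            remaining *= 1.0 - exit_rates[block_idx]
    return macs

candidates = [set(c) for r in (1, 2) for c in combinations(range(5), r)]
rates = {i: 0.3 for i in range(5)}  # assume 30% of samples leave at each exit
feasible = [c for c in candidates if estimated_macs(c, rates) <= BUDGET]
print(feasible)
```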
1 code implementation • 3 Aug 2022 • Jary Pomponi, Simone Scardapane, Aurelio Uncini
In this paper, we propose a novel regularization method called Centroids Matching that, inspired by meta-learning approaches, fights catastrophic forgetting by operating in the feature space produced by the neural network, achieving good results while requiring a small memory footprint.
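As a generic illustration of centroid-based learning in feature space (not the exact Centroids Matching objective), the snippet below classifies samples by distance to per-class centroids and adds a term that discourages drift from centroids stored for a previous task; the encoder, dimensions, and penalty weight are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 64))

def class_centroids(features, labels, num_classes):
    # Mean feature vector per class; assumes every class appears in the batch.
    return torch.stack([features[labels == c].mean(dim=0) for c in range(num_classes)])

x = torch.randn(32, 784)
y = torch.arange(32) % 5                       # toy labels, every class present
feats = encoder(x)
centroids = class_centroids(feats, y, num_classes=5)

logits = -torch.cdist(feats, centroids)        # score = negative distance to each centroid
task_loss = F.cross_entropy(logits, y)

old_centroids = torch.randn(5, 64)             # placeholder: centroids stored after a past task
stability_loss = F.mse_loss(centroids, old_centroids)  # discourage centroid drift
(task_loss + 0.1 * stability_loss).backward()
```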
1 code implementation • 11 Feb 2022 • Jary Pomponi, Simone Scardapane, Aurelio Uncini
We show that our method performs favorably with respect to state-of-the-art approaches in the literature, with bounded computational power and memory overheads.
1 code implementation • 4 Feb 2022 • Jary Pomponi, Simone Scardapane, Aurelio Uncini
Recent research has found that neural networks are vulnerable to several types of adversarial attacks, in which the input samples are modified in such a way that the model misclassifies the resulting adversarial sample.
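To make the notion of an adversarial sample concrete, the snippet below applies the standard fast gradient sign method (FGSM) to an untrained toy model; it illustrates the attack setting only and is not the method proposed in this paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Standard FGSM attack: a small perturbation aligned with the gradient of the
# loss, pushing the input in the direction that increases the loss. Shown only
# to illustrate the adversarial setting, not the approach studied in the paper.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28, requires_grad=True)
y = torch.tensor([3])

loss = F.cross_entropy(model(x), y)
loss.backward()
epsilon = 0.1
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()  # perturbed input
print(model(x).argmax(dim=-1), model(x_adv).argmax(dim=-1))
```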
2 code implementations • 6 May 2021 • Jary Pomponi, Simone Scardapane, Aurelio Uncini
In this paper, we propose a novel ensembling technique for deep neural networks, which is able to drastically reduce the required memory compared to alternative approaches.
4 code implementations • 1 Apr 2021 • Vincenzo Lomonaco, Lorenzo Pellegrini, Andrea Cossu, Antonio Carta, Gabriele Graffieti, Tyler L. Hayes, Matthias De Lange, Marc Masana, Jary Pomponi, Gido van de Ven, Martin Mundt, Qi She, Keiland Cooper, Jeremy Forest, Eden Belouadah, Simone Calderara, German I. Parisi, Fabio Cuzzolin, Andreas Tolias, Simone Scardapane, Luca Antiga, Subutai Ahmad, Adrian Popescu, Christopher Kanan, Joost Van de Weijer, Tinne Tuytelaars, Davide Bacciu, Davide Maltoni
Learning continually from non-stationary data streams is a long-standing goal and a challenging problem in machine learning.
1 code implementation • ICML Workshop LifelongML 2020 • Jary Pomponi, Simone Scardapane, Aurelio Uncini
We show that our method performs favorably with respect to state-of-the-art approaches in the literature, with bounded computational power and memory overheads.
4 code implementations • 2 Mar 2020 • Jary Pomponi, Simone Scardapane, Aurelio Uncini
Bayesian Neural Networks (BNNs) are trained to optimize an entire distribution over their weights instead of a single set, having significant advantages in terms of, e.g., interpretability, multi-task learning, and calibration.
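As a minimal sketch of what optimizing a distribution over the weights means in practice (a generic mean-field variational layer with the reparameterization trick, not the specific BNN training or pruning scheme studied here), each weight below has a learned mean and log-standard-deviation and is resampled on every forward pass.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesianLinear(nn.Module):
    """Mean-field variational linear layer: each weight has a learned mean and
    log standard deviation, and a fresh weight sample is drawn per forward pass
    via the reparameterization trick."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.w_mu = nn.Parameter(torch.zeros(out_features, in_features))
        self.w_logstd = nn.Parameter(torch.full((out_features, in_features), -3.0))
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        std = self.w_logstd.exp()
        weight = self.w_mu + std * torch.randn_like(std)  # reparameterization trick
        return F.linear(x, weight, self.bias)

layer = BayesianLinear(16, 4)
out = torch.stack([layer(torch.ones(1, 16)) for _ in range(8)])
print(out.std(dim=0))  # predictions vary across samples of the weights
```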
no code implementations • 26 Nov 2019 • Cristiano Fanelli, Jary Pomponi
Imaging Cherenkov detectors are widely used for particle identification (PID) in nuclear and particle physics experiments, where developing fast reconstruction algorithms is becoming of paramount importance to allow near-real-time calibration and data-quality control, as well as to speed up the offline analysis of large amounts of data.
1 code implementation • 9 Sep 2019 • Jary Pomponi, Simone Scardapane, Vincenzo Lomonaco, Aurelio Uncini
Continual learning of deep neural networks is a key requirement for scaling them up to more complex application scenarios and for achieving real lifelong learning of these architectures.