1 code implementation • 26 Nov 2024 • Antonio Andrea Gargiulo, Donato Crisostomi, Maria Sofia Bucarelli, Simone Scardapane, Fabrizio Silvestri, Emanuele Rodolà
In this paper, we study task vectors at the layer level, focusing on task layer matrices and their singular value decomposition.
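As a rough illustration of the layer-level analysis described above, the sketch below forms a task layer matrix as the difference between fine-tuned and pretrained weights and computes its singular values; the checkpoint names and shapes are hypothetical, not the paper's code.

```python
# Minimal sketch (not the paper's implementation): per-layer task
# matrices and their singular value decomposition, using PyTorch.
import torch

def layer_task_svd(base_state, finetuned_state):
    """For each 2D weight, form the task matrix (finetuned - base)
    and return its singular values."""
    spectra = {}
    for name, w_base in base_state.items():
        if w_base.ndim != 2:  # only layer matrices
            continue
        delta = finetuned_state[name] - w_base   # task layer matrix
        spectra[name] = torch.linalg.svdvals(delta)  # singular values, descending
    return spectra

# Toy usage with two random "checkpoints" of the same architecture.
base = {"layer1.weight": torch.randn(8, 8)}
ft = {"layer1.weight": base["layer1.weight"] + 0.1 * torch.randn(8, 8)}
print(layer_task_svd(base, ft)["layer1.weight"][:3])
```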
1 code implementation • 17 Oct 2024 • Michele Guerra, Simone Scardapane, Filippo Maria Bianchi
The second relies on sparse identification of nonlinear dynamics (SINDy), a popular method for discovering governing equations, which we use for the first time as a general tool for explainability.
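For readers unfamiliar with SINDy, the following is a minimal, self-contained sketch of its core loop (sequentially thresholded least squares over a small polynomial library); it is illustrative only and not the implementation used in the paper.

```python
# Minimal SINDy-style sketch: sparse regression of time derivatives
# onto a library of candidate terms [1, x_i, x_i*x_j].
import numpy as np

def sindy(X, X_dot, threshold=0.1, n_iter=10):
    """X: (T, d) states, X_dot: (T, d) time derivatives.
    Returns sparse coefficients over the candidate library."""
    T, d = X.shape
    cols = [np.ones((T, 1)), X]
    for i in range(d):
        for j in range(i, d):
            cols.append((X[:, i] * X[:, j])[:, None])
    Theta = np.hstack(cols)
    Xi = np.linalg.lstsq(Theta, X_dot, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0
        for k in range(d):  # refit on the surviving terms only
            big = ~small[:, k]
            if big.any():
                Xi[big, k] = np.linalg.lstsq(Theta[:, big], X_dot[:, k], rcond=None)[0]
    return Xi

# Toy usage: recover dx/dt = -2x from samples of x(t) = exp(-2t).
t = np.linspace(0, 2, 200)
x = np.exp(-2 * t)[:, None]
print(sindy(x, -2 * x).round(2))  # only the coefficient on x survives
```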
no code implementations • 25 Sep 2024 • Francesco Verdini, Pierfrancesco Melucci, Stefano Perna, Francesco Cariaggi, Marco Gaido, Sara Papi, Szymon Mazurek, Marek Kasztelnik, Luisa Bentivogli, Sébastien Bratières, Paolo Merialdo, Simone Scardapane
The remarkable performance achieved by Large Language Models (LLMs) has driven research efforts to leverage them for a wide range of tasks and input modalities.

no code implementations • 18 Sep 2024 • Marco Montagna, Simone Scardapane, Lev Telyatnikov
Graph Neural Networks based on the message-passing (MP) mechanism are a dominant approach for handling graph-structured data.
no code implementations • 16 Aug 2024 • Alessio Devoto, Federico Alvetreti, Jary Pomponi, Paolo Di Lorenzo, Pasquale Minervini, Simone Scardapane
To this end, in this paper we introduce ALaST (Adaptive Layer Selection Fine-Tuning for Vision Transformers), an efficient fine-tuning method for ViTs that speeds up fine-tuning while reducing computational cost and memory load.
1 code implementation • 17 Jun 2024 • Alessio Devoto, Yu Zhao, Simone Scardapane, Pasquale Minervini
Existing approaches to reduce the KV cache size involve either fine-tuning the model to learn a compression strategy or leveraging attention scores to reduce the sequence length.
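A minimal sketch of the second family of approaches mentioned (using attention scores to shrink the cache) might look as follows; the tensor layout and `keep` budget are assumptions for illustration, not the paper's method.

```python
# Illustrative sketch: prune a KV cache by keeping only the key/value
# positions that receive the highest cumulative attention.
import torch

def prune_kv_cache(keys, values, attn, keep=64):
    """keys/values: (batch, seq, dim); attn: (batch, heads, q_len, seq)
    attention weights. Keep the `keep` most-attended positions."""
    scores = attn.sum(dim=(1, 2))  # (batch, seq): total attention received
    idx = scores.topk(min(keep, keys.size(1)), dim=-1).indices
    idx = idx.sort(-1).values      # preserve original token order
    gather = idx.unsqueeze(-1).expand(-1, -1, keys.size(-1))
    return keys.gather(1, gather), values.gather(1, gather)

# Toy usage.
B, H, S, D = 1, 2, 16, 8
k, v = torch.randn(B, S, D), torch.randn(B, S, D)
a = torch.softmax(torch.randn(B, H, S, S), dim=-1)
k2, v2 = prune_kv_cache(k, v, a, keep=4)
print(k2.shape)  # torch.Size([1, 4, 8])
```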
2 code implementations • 9 Jun 2024 • Lev Telyatnikov, Guillermo Bernardez, Marco Montagna, Pavlo Vasylenko, Ghada Zamzmi, Mustafa Hajij, Michael T Schaub, Nina Miolane, Simone Scardapane, Theodore Papamarkou
This work introduces TopoBenchmarkX, a modular open-source library designed to standardize benchmarking and accelerate research in Topological Deep Learning (TDL).
no code implementations • 26 Apr 2024 • Simone Scardapane
Neural networks surround us, in the form of large language models, speech transcription systems, molecular discovery algorithms, robotics, and much more.
no code implementations • 25 Apr 2024 • Alessio Devoto, Simone Petruzzi, Jary Pomponi, Paolo Di Lorenzo, Simone Scardapane
In this paper, we propose a novel design for AI-native goal-oriented communications, exploiting transformer neural networks under dynamic inference constraints on bandwidth and computation.
no code implementations • 5 Apr 2024 • Tommaso Torda, Andrea Ciardiello, Simona Gargiulo, Greta Grillo, Simone Scardapane, Cecilia Voena, Stefano Giagu
In recent years, Artificial Intelligence has emerged as a fundamental tool in medical applications.
no code implementations • 12 Mar 2024 • Simone Scardapane, Alessandro Baiocchi, Alessio Devoto, Valerio Marsocci, Pasquale Minervini, Jary Pomponi
This article summarizes principles and ideas from the emerging area of applying conditional computation methods to the design of neural networks.
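One recurring pattern in this literature is a learned gate that decides, per sample, whether to execute a block at all. Below is a minimal sketch of that idea, with the soft gate kept in the forward pass so the decision remains trainable; it is illustrative, not a module from the article.

```python
# Conditional computation sketch: a per-sample gate that decides
# whether to run an expensive block or skip it entirely.
import torch
import torch.nn as nn

class GatedBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.block = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                   nn.Linear(dim, dim))
        self.gate = nn.Linear(dim, 1)

    def forward(self, x):
        p = torch.sigmoid(self.gate(x))     # (batch, 1) execution probability
        run = (p > 0.5).float()             # hard decision (skip or execute)
        # Multiplying by p as well keeps gradient flowing to the gate.
        return x + run * p * self.block(x)

x = torch.randn(4, 32)
print(GatedBlock(32)(x).shape)  # torch.Size([4, 32])
```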
no code implementations • 14 Feb 2024 • Theodore Papamarkou, Tolga Birdal, Michael Bronstein, Gunnar Carlsson, Justin Curry, Yue Gao, Mustafa Hajij, Roland Kwitt, Pietro Liò, Paolo Di Lorenzo, Vasileios Maroulas, Nina Miolane, Farzana Nasrin, Karthikeyan Natesan Ramamurthy, Bastian Rieck, Simone Scardapane, Michael T. Schaub, Petar Veličković, Bei Wang, Yusu Wang, Guo-Wei Wei, Ghada Zamzmi
At the same time, this paper serves as an invitation to the scientific community to actively participate in TDL research to unlock the potential of this emerging field.
no code implementations • 12 Feb 2024 • Emilio Calvanese Strinati, Paolo Di Lorenzo, Vincenzo Sciancalepore, Adnan Aijaz, Marios Kountouris, Deniz Gündüz, Petar Popovski, Mohamed Sana, Photios A. Stavrou, Beatriz Soret, Nicola Cordeschi, Simone Scardapane, Mattia Merluzzi, Lanfranco Zanzi, Mauro Boldi Renato, Tony Quek, Nicola di Pietro, Olivier Forceville, Francesca Costanzo, Peizheng Li
Recent advances in AI technologies have notably expanded device intelligence, fostering federation and cooperation among distributed AI agents.
1 code implementation • 4 Feb 2024 • Mustafa Hajij, Mathilde Papillon, Florian Frantzen, Jens Agerberg, Ibrahem AlJabea, Ruben Ballester, Claudio Battiloro, Guillermo Bernárdez, Tolga Birdal, Aiden Brent, Peter Chin, Sergio Escalera, Simone Fiorellino, Odin Hoff Gardaa, Gurusankar Gopalakrishnan, Devendra Govil, Josef Hoppe, Maneel Reddy Karri, Jude Khouja, Manuel Lecha, Neal Livesay, Jan Meißner, Soham Mukherjee, Alexander Nikitin, Theodore Papamarkou, Jaro Prílepok, Karthikeyan Natesan Ramamurthy, Paul Rosen, Aldo Guzmán-Sáenz, Alessandro Salatiello, Shreyas N. Samaga, Simone Scardapane, Michael T. Schaub, Luca Scofano, Indro Spinelli, Lev Telyatnikov, Quang Truong, Robin Walters, Maosheng Yang, Olga Zaghen, Ghada Zamzmi, Ali Zia, Nina Miolane
We introduce TopoX, a Python software suite that provides reliable and user-friendly building blocks for computing and machine learning on topological domains that extend graphs: hypergraphs, simplicial, cellular, path and combinatorial complexes.
2 code implementations • 2 Feb 2024 • Jary Pomponi, Alessio Devoto, Simone Scardapane
The latter is a gated incremental classifier, helping the model modify past predictions without directly interfering with them.
no code implementations • 26 Jan 2024 • Alessandro Baiocchi, Indro Spinelli, Alessandro Nicolosi, Simone Scardapane
The recent surge in 3D data acquisition has spurred the development of geometric deep learning models for point cloud processing, boosted by the remarkable success of transformers in natural language processing.
2 code implementations • 24 Jan 2024 • Matteo Gambella, Jary Pomponi, Simone Scardapane, Manuel Roveri
To this end, this work presents Neural Architecture Search for Hardware Constrained Early Exit Neural Networks (NACHOS), the first NAS framework for designing optimal EENNs that satisfy constraints on accuracy and on the number of Multiply and Accumulate (MAC) operations performed at inference time.
1 code implementation • 15 Dec 2023 • Bartosz Wójcik, Alessio Devoto, Karol Pustelnik, Pasquale Minervini, Simone Scardapane
The computational cost of transformer models makes them inefficient in low-latency or low-power applications.
no code implementations • 11 Oct 2023 • Lev Telyatnikov, Maria Sofia Bucarelli, Guillermo Bernardez, Olga Zaghen, Simone Scardapane, Pietro Lio
Most current hypergraph learning methodologies and benchmarking datasets are obtained by lifting procedures from their graph analogs, which overshadows the specific characteristics of hypergraphs.
2 code implementations • 6 Oct 2023 • Filip Szatkowski, Bartosz Wójcik, Mikołaj Piórczyński, Simone Scardapane
We demonstrate that the efficiency of the conversion can be significantly enhanced by a proper regularization of the activation sparsity of the base model.
1 code implementation • 26 Sep 2023 • Mathilde Papillon, Mustafa Hajij, Helen Jenne, Johan Mathe, Audun Myers, Theodore Papamarkou, Tolga Birdal, Tamal Dey, Tim Doster, Tegan Emerson, Gurusankar Gopalakrishnan, Devendra Govil, Aldo Guzmán-Sáenz, Henry Kvinge, Neal Livesay, Soham Mukherjee, Shreyas N. Samaga, Karthikeyan Natesan Ramamurthy, Maneel Reddy Karri, Paul Rosen, Sophia Sanborn, Robin Walters, Jens Agerberg, Sadrodin Barikbin, Claudio Battiloro, Gleb Bazhenov, Guillermo Bernardez, Aiden Brent, Sergio Escalera, Simone Fiorellino, Dmitrii Gavrilev, Mohammed Hassanin, Paul Häusner, Odin Hoff Gardaa, Abdelwahed Khamis, Manuel Lecha, German Magai, Tatiana Malygina, Rubén Ballester, Kalyan Nadimpalli, Alexander Nikitin, Abraham Rabinowitz, Alessandro Salatiello, Simone Scardapane, Luca Scofano, Suraj Singh, Jens Sjölund, Pavel Snopov, Indro Spinelli, Lev Telyatnikov, Lucia Testa, Maosheng Yang, Yixiao Yue, Olga Zaghen, Ali Zia, Nina Miolane
This paper presents the computational challenge on topological deep learning that was hosted within the ICML 2023 Workshop on Topology and Geometry in Machine Learning.
no code implementations • 24 Aug 2023 • Michele Guerra, Simone Scardapane, Filippo Maria Bianchi
For this reason, point forecasts are not enough; it is necessary to adopt methods that provide uncertainty quantification.
no code implementations • 25 May 2023 • Claudio Battiloro, Indro Spinelli, Lev Telyatnikov, Michael Bronstein, Simone Scardapane, Paolo Di Lorenzo
Latent Graph Inference (LGI) relaxed the reliance of Graph Neural Networks (GNNs) on a given graph topology by dynamically learning it.
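The basic LGI idea can be sketched as follows: project node features into a latent space and connect each node to its nearest neighbors there. This is an illustrative kNN variant, not the paper's module, and the hard top-k step is shown without the differentiable relaxations such methods typically use.

```python
# Latent graph inference sketch: learn a projection of node features
# and build a kNN graph in the resulting latent space.
import torch
import torch.nn as nn

class LatentGraph(nn.Module):
    def __init__(self, in_dim, latent_dim=16, k=5):
        super().__init__()
        self.proj = nn.Linear(in_dim, latent_dim)
        self.k = k

    def forward(self, x):                # x: (n_nodes, in_dim)
        z = self.proj(x)
        dist = torch.cdist(z, z)         # pairwise distances in latent space
        idx = dist.topk(self.k + 1, largest=False).indices[:, 1:]  # drop self
        adj = torch.zeros(x.size(0), x.size(0))
        adj.scatter_(1, idx, 1.0)        # hard kNN adjacency (non-differentiable)
        return adj

adj = LatentGraph(8)(torch.randn(20, 8))
print(adj.sum(1))  # each node gets k = 5 neighbors
```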
1 code implementation • 16 Apr 2023 • Valerio Marsocci, Nicolas Gonthier, Anatol Garioud, Simone Scardapane, Clément Mallet
This approach is the first to use geographical metadata for UDA in semantic segmentation.
no code implementations • 14 Apr 2023 • Indro Spinelli, Michele Guerra, Filippo Maria Bianchi, Simone Scardapane
Subgraph-enhanced graph neural networks (SGNNs) can increase the expressive power of the standard message-passing framework.
no code implementations • 22 Feb 2023 • Indro Spinelli, Riccardo Bianchini, Simone Scardapane
One novelty of DEA is that it can use a discrete yet learnable adjacency matrix during fine-tuning.
no code implementations • 19 Oct 2022 • Lev Telyatnikov, Simone Scardapane
Missing data imputation (MDI) is crucial when dealing with tabular datasets across various domains.
1 code implementation • 16 Sep 2022 • Michele Guerra, Indro Spinelli, Simone Scardapane, Filippo Maria Bianchi
Recently, subgraph-enhanced Graph Neural Networks (SGNNs) have been introduced to increase the expressive power of Graph Neural Networks (GNNs), which was proved to be no higher than that of the 1-dimensional Weisfeiler-Leman isomorphism test.
1 code implementation • 3 Aug 2022 • Jary Pomponi, Simone Scardapane, Aurelio Uncini
In this paper, we propose Centroids Matching, a novel regularization method which, inspired by meta-learning approaches, fights catastrophic forgetting (CF) by operating in the feature space produced by the neural network, achieving good results while requiring a small memory footprint.
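A hedged sketch of a centroid-based feature-space loss in this spirit is given below; the batch-level centroid computation and the MSE pull are illustrative choices, not necessarily the paper's exact formulation.

```python
# Centroid-matching sketch: pull each embedding towards the mean
# embedding (centroid) of its class, computed on the current batch.
import torch
import torch.nn.functional as F

def centroids_matching_loss(features, labels, n_classes):
    centroids = torch.stack([
        features[labels == c].mean(dim=0) if (labels == c).any()
        else features.new_zeros(features.size(1))  # absent class: no pull target
        for c in range(n_classes)])
    return F.mse_loss(features, centroids[labels])

feats = torch.randn(16, 64)
lbls = torch.randint(0, 4, (16,))
print(centroids_matching_loss(feats, lbls, 4))
```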
no code implementations • 31 May 2022 • Valerio Marsocci, Virginia Coletta, Roberta Ravanelli, Simone Scardapane, Mattia Crespi
Our work goes one step further, proposing two novel networks able to solve the 2D and 3D CD tasks simultaneously, together with 3DCD, a novel, freely available dataset designed precisely for this multitask setting.
no code implementations • 23 May 2022 • Valerio Marsocci, Simone Scardapane
In the field of Earth Observation (EO), Continual Learning (CL) algorithms have been proposed to deal with large datasets by decomposing them into several subsets and processing them incrementally.
1 code implementation • 8 Apr 2022 • Onur Copur, Mert Nakıp, Simone Scardapane, Jürgen Slowack
Recognition of user interaction, in particular engagement detection, has become crucial for online working and learning environments, especially during the COVID-19 outbreak.
1 code implementation • 5 Apr 2022 • Eric Guizzo, Tillman Weyde, Simone Scardapane, Danilo Comminiello
On the one hand, the classifier allows each latent axis of the embeddings to be optimized for the classification of a specific emotion-related characteristic: valence, arousal, dominance, and overall emotion.
1 code implementation • 21 Mar 2022 • Arya Farkhondeh, Cristina Palmero, Simone Scardapane, Sergio Escalera
Recent joint embedding-based self-supervised methods have surpassed standard supervised approaches on various image recognition tasks such as image classification.
1 code implementation • 11 Feb 2022 • Jary Pomponi, Simone Scardapane, Aurelio Uncini
We show that our method performs favorably with respect to state-of-the-art approaches in the literature, with bounded computational power and memory overheads.
1 code implementation • 4 Feb 2022 • Jary Pomponi, Simone Scardapane, Aurelio Uncini
Recent research has found that neural networks are vulnerable to several types of adversarial attacks, where the input samples are modified in such a way that the model misclassifies them.
1 code implementation • 20 Sep 2021 • Indro Spinelli, Simone Scardapane, Aurelio Uncini
Experiments on synthetic and real-world datasets for node and graph classification show that we can produce models that are consistently easier to explain by different algorithms.
2 code implementations • 6 May 2021 • Jary Pomponi, Simone Scardapane, Aurelio Uncini
In this paper, we propose a novel ensembling technique for deep neural networks, which is able to drastically reduce the required memory compared to alternative approaches.
1 code implementation • 29 Apr 2021 • Indro Spinelli, Simone Scardapane, Amir Hussain, Aurelio Uncini
Furthermore, to better evaluate the gains, we propose a new dyadic group definition to measure the bias of a link prediction task when paired with group-based fairness metrics.
no code implementations • 19 Apr 2021 • Danilo Comminiello, Alireza Nezamdoust, Simone Scardapane, Michele Scarpiniti, Amir Hussain, Aurelio Uncini
In order to make this class of functional link adaptive filters (FLAFs) efficient, we propose low-complexity expansions and frequency-domain adaptation of the parameters.
4 code implementations • 1 Apr 2021 • Vincenzo Lomonaco, Lorenzo Pellegrini, Andrea Cossu, Antonio Carta, Gabriele Graffieti, Tyler L. Hayes, Matthias De Lange, Marc Masana, Jary Pomponi, Gido van de Ven, Martin Mundt, Qi She, Keiland Cooper, Jeremy Forest, Eden Belouadah, Simone Calderara, German I. Parisi, Fabio Cuzzolin, Andreas Tolias, Simone Scardapane, Luca Antiga, Subutai Ahmad, Adrian Popescu, Christopher Kanan, Joost Van de Weijer, Tinne Tuytelaars, Davide Bacciu, Davide Maltoni
Learning continually from non-stationary data streams is a long-standing goal and a challenging problem in machine learning.
no code implementations • 24 Jul 2020 • Danilo Comminiello, Michele Scarpiniti, Simone Scardapane, Luis A. Azpicueta-Ruiz, Aurelio Uncini
Nonlinear adaptive filters often show sparse behavior, because not all the coefficients are equally useful for modeling a given nonlinearity.
no code implementations • 13 Jul 2020 • Simone Scardapane, Indro Spinelli, Paolo Di Lorenzo
After formulating the centralized GCN training problem, we first show how to perform inference in a distributed scenario where the underlying data graph is split among different agents.
1 code implementation • ICML Workshop LifelongML 2020 • Jary Pomponi, Simone Scardapane, Aurelio Uncini
We show that our method performs favorably with respect to state-of-the-art approaches in the literature, with bounded computational power and memory overheads.
no code implementations • 30 Apr 2020 • Paolo Di Lorenzo, Simone Scardapane
We study distributed stochastic nonconvex optimization in multi-agent networks.
no code implementations • 27 Apr 2020 • Simone Scardapane, Michele Scarpiniti, Enzo Baccarelli, Aurelio Uncini
Deep neural networks are generally designed as a stack of differentiable layers, in which a prediction is obtained only after running the full stack.
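Early-exit architectures address exactly this by attaching auxiliary classifiers along the stack. The sketch below shows the inference-time logic for a single sample (batched per-sample exiting is more involved); names and thresholds are illustrative, not the paper's design.

```python
# Early-exit sketch: an auxiliary classifier after each block; stop as
# soon as one of them is confident enough.
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    def __init__(self, dim=32, n_classes=10, n_blocks=4, threshold=0.9):
        super().__init__()
        self.blocks = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(n_blocks)])
        self.exits = nn.ModuleList(
            [nn.Linear(dim, n_classes) for _ in range(n_blocks)])
        self.threshold = threshold

    @torch.no_grad()
    def forward(self, x):  # single-sample inference: x is (1, dim)
        for block, exit_head in zip(self.blocks, self.exits):
            x = block(x)
            probs = torch.softmax(exit_head(x), dim=-1)
            if probs.max() >= self.threshold:  # confident enough: stop early
                return probs
        return probs                           # fall back to the last exit

print(EarlyExitNet()(torch.randn(1, 32)).shape)  # torch.Size([1, 10])
```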
4 code implementations • 2 Mar 2020 • Jary Pomponi, Simone Scardapane, Aurelio Uncini
Bayesian Neural Networks (BNNs) are trained to optimize an entire distribution over their weights instead of a single set, having significant advantages in terms of, e.g., interpretability, multi-task learning, and calibration.
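As a concrete reference point, here is a minimal variational Bayesian linear layer: each weight is a learned Gaussian, sampled with the reparameterization trick, with a closed-form KL term to a standard normal prior. This is a generic textbook sketch, not the paper's exact formulation.

```python
# Variational Bayesian linear layer: weights are a learned Gaussian
# distribution rather than a single point estimate.
import torch
import torch.nn as nn

class BayesianLinear(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(out_dim, in_dim))
        self.log_sigma = nn.Parameter(torch.full((out_dim, in_dim), -3.0))

    def forward(self, x):
        sigma = self.log_sigma.exp()
        w = self.mu + sigma * torch.randn_like(sigma)  # reparameterization trick
        return x @ w.t()

    def kl(self):
        # KL( N(mu, sigma^2) || N(0, 1) ), summed over all weights.
        s2 = (2 * self.log_sigma).exp()
        return 0.5 * (s2 + self.mu ** 2 - 1 - 2 * self.log_sigma).sum()

layer = BayesianLinear(8, 2)
print(layer(torch.randn(4, 8)).shape, layer.kl().item())
```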
no code implementations • 27 Feb 2020 • Claudio Gallicchio, Simone Scardapane
For both, we focus specifically on recent results in the domain of deep randomized systems, and (for recurrent models) their application to structured domains.
1 code implementation • 24 Feb 2020 • Indro Spinelli, Simone Scardapane, Aurelio Uncini
Graph convolutional networks (GCNs) are a family of neural network models that perform inference on graph data by interleaving vertex-wise operations and message-passing exchanges across nodes.
1 code implementation • 9 Sep 2019 • Jary Pomponi, Simone Scardapane, Vincenzo Lomonaco, Aurelio Uncini
Continual learning of deep neural networks is a key requirement for scaling them up to more complex application scenarios and for achieving true lifelong learning of these architectures.
no code implementations • 8 Aug 2019 • Antonio Falvo, Danilo Comminiello, Simone Scardapane, Michele Scarpiniti, Aurelio Uncini
In this paper, we present a deep learning method that is able to reconstruct subsampled MR images obtained by reducing the k-space data, while maintaining a high image quality that can be used to observe brain lesions.
no code implementations • 26 Jul 2019 • Riccardo Vecchi, Simone Scardapane, Danilo Comminiello, Aurelio Uncini
To this end, we investigate two extensions of l1 and structured regularization to the quaternion domain.
no code implementations • 20 Jun 2019 • Indro Spinelli, Simone Scardapane, Michele Scarpiniti, Aurelio Uncini
Recently, data augmentation in the semi-supervised regime, where unlabeled data vastly outnumbers labeled data, has received considerable attention.
1 code implementation • 6 May 2019 • Indro Spinelli, Simone Scardapane, Aurelio Uncini
We also explore a few extensions to the basic architecture involving the use of residual connections between layers, and of global statistics computed from the data set to improve the accuracy.
no code implementations • 28 Mar 2019 • Michele Cirillo, Simone Scardapane, Steven Van Vaerenbergh, Aurelio Uncini
In this brief, we investigate the generalization properties of a recently proposed class of non-parametric activation functions, the kernel activation functions (KAFs).
no code implementations • 6 Feb 2019 • Simone Scardapane, Steven Van Vaerenbergh, Danilo Comminiello, Aurelio Uncini
Complex-valued neural networks (CVNNs) have been shown to be powerful nonlinear approximators when the input data can be properly modeled in the complex domain.
no code implementations • 29 Jan 2019 • Simone Scardapane, Elena Nieddu, Donatella Firmani, Paolo Merialdo
In this paper we focus on the kernel activation function (KAF), a recently proposed framework wherein each function is modeled as a one-dimensional kernel model, whose weights are adapted through standard backpropagation-based optimization.
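A compact implementation of this idea, assuming a Gaussian kernel over a fixed shared dictionary with per-unit learnable mixing coefficients (the setup the KAF papers describe), could look like the following; hyperparameters are illustrative.

```python
# Kernel activation function (KAF) sketch: each unit's activation is a
# 1D kernel expansion over a fixed dictionary, trained by backprop.
import torch
import torch.nn as nn

class KAF(nn.Module):
    def __init__(self, n_units, dict_size=20, bound=3.0, gamma=1.0):
        super().__init__()
        self.register_buffer("dict", torch.linspace(-bound, bound, dict_size))
        self.alpha = nn.Parameter(torch.randn(n_units, dict_size) * 0.1)
        self.gamma = gamma  # Gaussian kernel bandwidth

    def forward(self, x):
        # x: (batch, n_units); kernel against each dictionary point.
        k = torch.exp(-self.gamma * (x.unsqueeze(-1) - self.dict) ** 2)
        return (k * self.alpha).sum(dim=-1)  # learnable mixing coefficients

act = KAF(16)
print(act(torch.randn(8, 16)).shape)  # torch.Size([8, 16])
```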
no code implementations • 17 Dec 2018 • Danilo Comminiello, Marco Lella, Simone Scardapane, Aurelio Uncini
Learning from data in the quaternion domain enables us to exploit internal dependencies of 4D signals and to treat them as a single entity.
no code implementations • 11 Jul 2018 • Simone Scardapane, Steven Van Vaerenbergh, Danilo Comminiello, Simone Totaro, Aurelio Uncini
Gated recurrent neural networks have achieved remarkable results in the analysis of sequential data.
3 code implementations • 21 Mar 2018 • Filippo Maria Bianchi, Simone Scardapane, Sigurd Løkse, Robert Jenssen
The architectures are compared to other MTS classifiers, including deep learning models and time series kernels.
no code implementations • 26 Feb 2018 • Simone Scardapane, Steven Van Vaerenbergh, Danilo Comminiello, Aurelio Uncini
Graph neural networks (GNNs) are a class of neural networks that allow efficient inference on data associated with a graph structure, such as citation networks or knowledge graphs.
2 code implementations • 22 Feb 2018 • Simone Scardapane, Steven Van Vaerenbergh, Amir Hussain, Aurelio Uncini
Complex-valued neural networks (CVNNs) are a powerful modeling tool for domains where data can be naturally interpreted in terms of complex numbers.
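A minimal complex-valued layer can be written with explicit real and imaginary weight matrices, as sketched below; this is a generic illustration of the arithmetic, not the paper's architecture.

```python
# Complex linear layer sketch: (A + iB)(x + iy) = (Ax - By) + i(Ay + Bx).
import torch
import torch.nn as nn

class ComplexLinear(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.A = nn.Parameter(torch.randn(out_dim, in_dim) * 0.1)  # real part
        self.B = nn.Parameter(torch.randn(out_dim, in_dim) * 0.1)  # imaginary part

    def forward(self, z):  # z: complex tensor (batch, in_dim)
        x, y = z.real, z.imag
        return torch.complex(x @ self.A.t() - y @ self.B.t(),
                             y @ self.A.t() + x @ self.B.t())

z = torch.randn(4, 8, dtype=torch.cfloat)
print(ComplexLinear(8, 2)(z).shape)  # torch.Size([4, 2])
```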
2 code implementations • 17 Nov 2017 • Filippo Maria Bianchi, Simone Scardapane, Sigurd Løkse, Robert Jenssen
We propose a deep architecture for the classification of multivariate time series.
2 code implementations • 13 Jul 2017 • Simone Scardapane, Steven Van Vaerenbergh, Simone Totaro, Aurelio Uncini
Neural networks are generally built by interleaving (adaptable) linear layers with (fixed) nonlinear activation functions.
1 code implementation • 15 Jun 2017 • Simone Scardapane, Paolo Di Lorenzo
Additionally, we show how the algorithm can be easily parallelized over multiple computational units without hindering its performance.
no code implementations • 12 Jun 2017 • Steven Van Vaerenbergh, Simone Scardapane, Ignacio Santamaria
In kernel methods, temporal information on the data is commonly included by using time-delayed embeddings as inputs.
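A time-delayed embedding simply stacks lagged windows of the series so that each input carries its recent history, as in this small illustrative helper:

```python
# Time-delay embedding: turn a scalar series into windows of the last
# `dim` samples, a standard way to feed temporal context to a kernel method.
import numpy as np

def delay_embed(x, dim):
    """x: (T,) series -> (T - dim + 1, dim) stacked lagged windows."""
    return np.stack([x[i:i + dim] for i in range(len(x) - dim + 1)])

print(delay_embed(np.arange(6), 3))
# [[0 1 2] [1 2 3] [2 3 4] [3 4 5]]
```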
no code implementations • 28 Apr 2017 • Simone Scardapane, Jie Chen, Cédric Richard
In this chapter, we analyze nonlinear filtering problems in distributed environments, e.g., sensor networks or peer-to-peer protocols.
1 code implementation • 24 Oct 2016 • Simone Scardapane, Paolo Di Lorenzo
The aim of this paper is to develop a general framework for training neural networks (NNs) in a distributed environment, where training data is partitioned over a set of agents that communicate with each other through a sparse, possibly time-varying, connectivity pattern.
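One standard ingredient of such frameworks is a consensus (gossip) step in which agents average parameters with their neighbors through a mixing matrix that respects the connectivity pattern. The sketch below is a generic illustration of that step, not the paper's algorithm.

```python
# Gossip averaging sketch: each agent mixes its parameters with its
# neighbors'; repeated rounds drive the agents towards consensus.
import numpy as np

def gossip_round(params, W):
    """params: (n_agents, n_params) local models; W: (n_agents, n_agents)
    doubly-stochastic mixing matrix matching the network topology."""
    return W @ params

# Toy usage: 3 fully-connected agents with uniform self/neighbor weights.
W = np.array([[0.5, 0.25, 0.25],
              [0.25, 0.5, 0.25],
              [0.25, 0.25, 0.5]])
models = np.random.randn(3, 4)
for _ in range(50):
    models = gossip_round(models, W)
print(models.std(axis=0).round(6))  # near zero: agents reach consensus
```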
no code implementations • 21 Jul 2016 • Simone Scardapane
Distributed learning is the problem of inferring a function in the case where training data is distributed among multiple geographically separated sources.
1 code implementation • 2 Jul 2016 • Simone Scardapane, Danilo Comminiello, Amir Hussain, Aurelio Uncini
In this paper, we consider the joint task of simultaneously optimizing (i) the weights of a deep neural network, (ii) the number of neurons for each hidden layer, and (iii) the subset of active input features (i.e., feature selection).
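Joint optimization of this kind is commonly driven by group-sparse penalties that zero out whole neurons or input features. Below is a hedged sketch of a group-lasso term in that spirit; it is illustrative, not necessarily the paper's exact regularizer.

```python
# Group-lasso sketch: an l2 norm over each row/column of a weight
# matrix; minimizing it drives entire groups to exactly zero.
import torch

def group_lasso(weight, dim=1):
    """weight: (out, in). dim=1 groups rows (neurons); dim=0 groups
    columns (input features, i.e. feature selection)."""
    return weight.norm(p=2, dim=dim).sum()

W = torch.randn(32, 16, requires_grad=True)
penalty = 1e-3 * (group_lasso(W, dim=1) + group_lasso(W, dim=0))
penalty.backward()  # gradients shrink whole neurons and whole features
print(penalty.item())
```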
no code implementations • 25 May 2016 • Michele Scarpiniti, Simone Scardapane, Danilo Comminiello, Raffaele Parisi, Aurelio Uncini
In this paper, we derive a modified InfoMax algorithm for the solution of Blind Signal Separation (BSS) problems by using advanced stochastic methods.
no code implementations • 18 May 2016 • Simone Scardapane, Michele Scarpiniti, Danilo Comminiello, Aurelio Uncini
Neural networks require a careful design in order to perform properly on a given task.