Search Results for author: Simone Scardapane

Found 72 papers, 34 papers with code

Interpreting Temporal Graph Neural Networks with Koopman Theory

1 code implementation • 17 Oct 2024 • Michele Guerra, Simone Scardapane, Filippo Maria Bianchi

The second relies on sparse identification of nonlinear dynamics (SINDy), a popular method for discovering governing equations, which we use for the first time as a general tool for explainability.

Dimensionality Reduction • Epidemiology
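To make the SINDy reference above concrete, below is a minimal sequential thresholded least-squares sketch of sparse identification of nonlinear dynamics. This is the generic SINDy algorithm, not the paper's Koopman-based explainability pipeline; the polynomial library and threshold are illustrative choices.

```python
import numpy as np

def sindy(X, X_dot, library, threshold=0.1, iters=10):
    """Sequential thresholded least squares, the standard SINDy solver.

    X: (T, d) states; X_dot: (T, d) time derivatives;
    library: callable X -> (T, p) matrix of candidate terms.
    Returns Xi (p, d) with X_dot ~= library(X) @ Xi, sparse in Xi.
    """
    Theta = library(X)
    Xi = np.linalg.lstsq(Theta, X_dot, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(Xi) < threshold           # prune weak terms
        Xi[small] = 0.0
        for k in range(X_dot.shape[1]):          # refit surviving terms
            keep = ~small[:, k]
            if keep.any():
                Xi[keep, k] = np.linalg.lstsq(
                    Theta[:, keep], X_dot[:, k], rcond=None)[0]
    return Xi

def poly_library(X):
    """Example library: polynomials up to degree 2 for a 2-D system."""
    x, y = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x, y, x * x, x * y, y * y])
```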

Topological Deep Learning with State-Space Models: A Mamba Approach for Simplicial Complexes

no code implementations • 18 Sep 2024 • Marco Montagna, Simone Scardapane, Lev Telyatnikov

Graph Neural Networks based on the message-passing (MP) mechanism are a dominant approach for handling graph-structured data.

Mamba • State Space Models

Adaptive Layer Selection for Efficient Vision Transformer Fine-Tuning

no code implementations • 16 Aug 2024 • Alessio Devoto, Federico Alvetreti, Jary Pomponi, Paolo Di Lorenzo, Pasquale Minervini, Simone Scardapane

To this end, in this paper we introduce an efficient fine-tuning method for ViTs called ALaST (Adaptive Layer Selection Fine-Tuning for Vision Transformers) to speed up the fine-tuning process while reducing computational cost, memory load, and training time.

parameter-efficient fine-tuning

A Simple and Effective $L_2$ Norm-Based Strategy for KV Cache Compression

1 code implementation • 17 Jun 2024 • Alessio Devoto, Yu Zhao, Simone Scardapane, Pasquale Minervini

Existing approaches to reduce the KV cache size involve either fine-tuning the model to learn a compression strategy or leveraging attention scores to reduce the sequence length.

Decoder • Language Modelling
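The method named in the title scores cached key vectors by their L2 norm. The sketch below is a hedged illustration only: the keep ratio, tensor layout, and the choice to retain the lowest-norm keys are assumptions of this snippet, not a faithful reproduction of the paper's procedure.

```python
import torch

def compress_kv_cache(keys, values, keep_ratio=0.5):
    """Keep the cached positions whose keys have the smallest L2 norm.

    keys, values: (batch, heads, seq_len, head_dim).
    Returns the compressed (keys, values) pair.
    """
    seq_len = keys.shape[2]
    k = max(1, int(seq_len * keep_ratio))
    norms = keys.norm(dim=-1)                                  # (b, h, s)
    idx = norms.topk(k, dim=-1, largest=False).indices
    idx = idx.sort(dim=-1).values                              # keep temporal order
    idx = idx.unsqueeze(-1).expand(-1, -1, -1, keys.shape[-1])
    return keys.gather(2, idx), values.gather(2, idx)
```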

TopoBenchmarkX: A Framework for Benchmarking Topological Deep Learning

2 code implementations • 9 Jun 2024 • Lev Telyatnikov, Guillermo Bernardez, Marco Montagna, Pavlo Vasylenko, Ghada Zamzmi, Mustafa Hajij, Michael T Schaub, Nina Miolane, Simone Scardapane, Theodore Papamarkou

This work introduces TopoBenchmarkX, a modular open-source library designed to standardize benchmarking and accelerate research in Topological Deep Learning (TDL).

Benchmarking • Deep Learning

Alice's Adventures in a Differentiable Wonderland -- Volume I, A Tour of the Land

no code implementations • 26 Apr 2024 • Simone Scardapane

Neural networks surround us, in the form of large language models, speech transcription systems, molecular discovery algorithms, robotics, and much more.

Adaptive Semantic Token Selection for AI-native Goal-oriented Communications

no code implementations • 25 Apr 2024 • Alessio Devoto, Simone Petruzzi, Jary Pomponi, Paolo Di Lorenzo, Simone Scardapane

In this paper, we propose a novel design for AI-native goal-oriented communications, exploiting transformer neural networks under dynamic inference constraints on bandwidth and computation.

Conditional computation in neural networks: principles and research trends

no code implementations • 12 Mar 2024 • Simone Scardapane, Alessandro Baiocchi, Alessio Devoto, Valerio Marsocci, Pasquale Minervini, Jary Pomponi

This article summarizes principles and ideas from the emerging area of applying conditional computation methods to the design of neural networks.

scientific discovery • Semantic Communication • +1

Class incremental learning with probability dampening and cascaded gated classifier

2 code implementations • 2 Feb 2024 • Jary Pomponi, Alessio Devoto, Simone Scardapane

The latter is a gated incremental classifier, helping the model modify past predictions without directly interfering with them.

class-incremental learning • Class Incremental Learning • +2

Adaptive Point Transformer

no code implementations • 26 Jan 2024 • Alessandro Baiocchi, Indro Spinelli, Alessandro Nicolosi, Simone Scardapane

The recent surge in 3D data acquisition has spurred the development of geometric deep learning models for point cloud processing, boosted by the remarkable success of transformers in natural language processing.

Point Cloud Classification

NACHOS: Neural Architecture Search for Hardware Constrained Early Exit Neural Networks

2 code implementations • 24 Jan 2024 • Matteo Gambella, Jary Pomponi, Simone Scardapane, Manuel Roveri

To this end, this work presents Neural Architecture Search for Hardware Constrained Early Exit Neural Networks (NACHOS), the first NAS framework for the design of optimal EENNs satisfying constraints on the accuracy and the number of Multiply and Accumulate (MAC) operations performed by the EENNs at inference time.

Neural Architecture Search

Hypergraph Neural Networks through the Lens of Message Passing: A Common Perspective to Homophily and Architecture Design

no code implementations • 11 Oct 2023 • Lev Telyatnikov, Maria Sofia Bucarelli, Guillermo Bernardez, Olga Zaghen, Simone Scardapane, Pietro Lio

Most of the current hypergraph learning methodologies and benchmarking datasets in the hypergraph realm are obtained by lifting procedures from their graph analogs, which can overshadow the specific characteristics of hypergraphs.

Benchmarking • Representation Learning

Exploiting Activation Sparsity with Dense to Dynamic-k Mixture-of-Experts Conversion

2 code implementations • 6 Oct 2023 • Filip Szatkowski, Bartosz Wójcik, Mikołaj Piórczyński, Simone Scardapane

We demonstrate that the efficiency of the conversion can be significantly enhanced by a proper regularization of the activation sparsity of the base model.
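A rough sketch of the dynamic-k idea the title refers to: each token is routed to however many experts clear a score threshold, so compute tracks activation sparsity. The routing function, sigmoid gating, and threshold below are assumptions for illustration, not the paper's exact conversion procedure.

```python
import torch

def dynamic_k_moe_ffn(x, experts, router_w, tau=0.5):
    """Dynamic-k routing sketch: each token is processed by every expert
    whose (sigmoid) router score exceeds tau, so the number of experts
    per token varies with activation sparsity.

    x: (tokens, d); experts: list of callables (d -> d); router_w: (d, n_experts).
    """
    scores = torch.sigmoid(x @ router_w)          # (tokens, n_experts)
    out = torch.zeros_like(x)
    for e, expert in enumerate(experts):
        mask = scores[:, e] > tau                 # tokens routed to expert e
        if mask.any():
            out[mask] += scores[mask, e].unsqueeze(1) * expert(x[mask])
    return out
```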

Probabilistic load forecasting with Reservoir Computing

no code implementations • 24 Aug 2023 • Michele Guerra, Simone Scardapane, Filippo Maria Bianchi

For this reason, point forecasts are not enough, and it is necessary to adopt methods that provide uncertainty quantification.

Computational Efficiency • Load Forecasting • +4

From Latent Graph to Latent Topology Inference: Differentiable Cell Complex Module

no code implementations • 25 May 2023 • Claudio Battiloro, Indro Spinelli, Lev Telyatnikov, Michael Bronstein, Simone Scardapane, Paolo Di Lorenzo

Latent Graph Inference (LGI) relaxed the reliance of Graph Neural Networks (GNNs) on a given graph topology by dynamically learning it.

Combining Stochastic Explainers and Subgraph Neural Networks can Increase Expressivity and Interpretability

no code implementations • 14 Apr 2023 • Indro Spinelli, Michele Guerra, Filippo Maria Bianchi, Simone Scardapane

Subgraph-enhanced graph neural networks (SGNN) can increase the expressive power of the standard message-passing framework.

EGG-GAE: scalable graph neural networks for tabular data imputation

no code implementations • 19 Oct 2022 • Lev Telyatnikov, Simone Scardapane

Missing data imputation (MDI) is crucial when dealing with tabular datasets across various domains.

Imputation • Missing Values

Explainability in subgraphs-enhanced Graph Neural Networks

1 code implementation • 16 Sep 2022 • Michele Guerra, Indro Spinelli, Simone Scardapane, Filippo Maria Bianchi

Recently, subgraphs-enhanced Graph Neural Networks (SGNNs) have been introduced to enhance the expressive power of Graph Neural Networks (GNNs), which was proven to be no higher than that of the 1-dimensional Weisfeiler-Leman isomorphism test.

Graph Classification

Centroids Matching: an efficient Continual Learning approach operating in the embedding space

1 code implementation • 3 Aug 2022 • Jary Pomponi, Simone Scardapane, Aurelio Uncini

In this paper, we propose a novel regularization method called Centroids Matching that, inspired by meta-learning approaches, fights catastrophic forgetting (CF) by operating in the feature space produced by the neural network, achieving good results while requiring a small memory footprint.

Continual Learning • Incremental Learning • +1
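A minimal sketch of the embedding-space mechanism Centroids Matching builds on: represent each class by the centroid of its embeddings and classify by nearest centroid. The full method adds meta-learning-inspired regularization against forgetting, which is omitted here.

```python
import torch

def class_centroids(embeddings, labels, num_classes):
    """Mean embedding per class (assumes every class appears in the batch)."""
    dim = embeddings.shape[1]
    c = torch.zeros(num_classes, dim)
    for k in range(num_classes):
        c[k] = embeddings[labels == k].mean(dim=0)
    return c

def nearest_centroid_predict(embeddings, centroids):
    """Classify each embedding by its closest class centroid."""
    return torch.cdist(embeddings, centroids).argmin(dim=1)
```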

Inferring 3D change detection from bitemporal optical images

no code implementations • 31 May 2022 • Valerio Marsocci, Virginia Coletta, Roberta Ravanelli, Simone Scardapane, Mattia Crespi

Our work goes one step further, proposing two novel networks able to solve the 2D and 3D CD tasks simultaneously, together with 3DCD, a novel and freely available dataset designed precisely for this multitask setting.

Change Detection

Continual Barlow Twins: continual self-supervised learning for remote sensing semantic segmentation

no code implementations • 23 May 2022 • Valerio Marsocci, Simone Scardapane

In the field of Earth Observation (EO), Continual Learning (CL) algorithms have been proposed to deal with large datasets by decomposing them into several subsets and processing them incrementally.

Continual Learning • Continual Self-Supervised Learning • +3

Engagement Detection with Multi-Task Training in E-Learning Environments

1 code implementation • 8 Apr 2022 • Onur Copur, Mert Nakıp, Simone Scardapane, Jürgen Slowack

Recognition of user interaction, in particular engagement detection, has become crucial for online working and learning environments, especially during the COVID-19 outbreak.

Emotion Recognition • Triplet

Learning Speech Emotion Representations in the Quaternion Domain

1 code implementation • 5 Apr 2022 • Eric Guizzo, Tillman Weyde, Simone Scardapane, Danilo Comminiello

On the one hand, the classifier makes it possible to optimize each latent axis of the embeddings for the classification of a specific emotion-related characteristic: valence, arousal, dominance, and overall emotion.

Speech Emotion Recognition

Towards Self-Supervised Gaze Estimation

1 code implementation • 21 Mar 2022 • Arya Farkhondeh, Cristina Palmero, Simone Scardapane, Sergio Escalera

Recent joint embedding-based self-supervised methods have surpassed standard supervised approaches on various image recognition tasks such as image classification.

Gaze Estimation • Image Classification • +1

Continual Learning with Invertible Generative Models

1 code implementation • 11 Feb 2022 • Jary Pomponi, Simone Scardapane, Aurelio Uncini

We show that our method performs favorably with respect to state-of-the-art approaches in the literature, with bounded computational power and memory overheads.

Continual Learning

Pixle: a fast and effective black-box attack based on rearranging pixels

1 code implementation • 4 Feb 2022 • Jary Pomponi, Simone Scardapane, Aurelio Uncini

Recent research has found that neural networks are vulnerable to several types of adversarial attacks, in which the input samples are modified in such a way that the model misclassifies them.
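An illustrative black-box loop in the spirit of Pixle: repeatedly copy a small patch of pixels to another location and keep the change whenever it lowers the model's confidence in the true label. The random search below is a simplification; Pixle's actual search strategy is more refined.

```python
import numpy as np

def pixle_like_attack(image, predict, label, iters=200, patch=2, seed=0):
    """Random-search sketch in the spirit of Pixle: copy a small patch to a
    new location; keep the change if it lowers confidence in `label`.

    predict: callable mapping an image to a vector of class probabilities.
    """
    rng = np.random.default_rng(seed)
    adv = image.copy()
    best = predict(adv)[label]
    h, w = adv.shape[:2]
    for _ in range(iters):
        cand = adv.copy()
        r1, r2 = rng.integers(0, h - patch + 1, 2)
        c1, c2 = rng.integers(0, w - patch + 1, 2)
        cand[r2:r2 + patch, c2:c2 + patch] = adv[r1:r1 + patch, c1:c1 + patch]
        p = predict(cand)[label]
        if p < best:                              # keep moves that help
            adv, best = cand, p
    return adv
```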

A Meta-Learning Approach for Training Explainable Graph Neural Networks

1 code implementation • 20 Sep 2021 • Indro Spinelli, Simone Scardapane, Aurelio Uncini

Experiments on synthetic and real-world datasets for node and graph classification show that we can produce models that are consistently easier to explain by different algorithms.

Graph Classification • Meta-Learning • +1

Structured Ensembles: an Approach to Reduce the Memory Footprint of Ensemble Methods

2 code implementations • 6 May 2021 • Jary Pomponi, Simone Scardapane, Aurelio Uncini

In this paper, we propose a novel ensembling technique for deep neural networks, which is able to drastically reduce the required memory compared to alternative approaches.

Continual Learning • Diversity

FairDrop: Biased Edge Dropout for Enhancing Fairness in Graph Representation Learning

1 code implementation • 29 Apr 2021 • Indro Spinelli, Simone Scardapane, Amir Hussain, Aurelio Uncini

Furthermore, to better evaluate the gains, we propose a new dyadic group definition to measure the bias of a link prediction task when paired with group-based fairness metrics.

Fairness • Graph Representation Learning • +1

A New Class of Efficient Adaptive Filters for Online Nonlinear Modeling

no code implementations • 19 Apr 2021 • Danilo Comminiello, Alireza Nezamdoust, Simone Scardapane, Michele Scarpiniti, Amir Hussain, Aurelio Uncini

In order to make this class of functional link adaptive filters (FLAFs) efficient, we propose low-complexity expansions and frequency-domain adaptation of the parameters.

Acoustic echo cancellation • Domain Adaptation
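For context, a toy time-domain functional-link adaptive filter: the input is passed through a fixed trigonometric expansion (the functional links) and a linear combiner adapted with LMS. The expansion order and step size are illustrative; the paper's contribution concerns low-complexity expansions and frequency-domain adaptation, which this sketch does not implement.

```python
import numpy as np

def flaf_lms(x, d, order=3, mu=0.01):
    """Time-domain functional-link adaptive filter trained with LMS.

    x: input samples in [-1, 1]; d: desired signal (same length).
    Returns the filter output y and the adapted weights w.
    """
    def links(s):  # functional expansion: s plus trigonometric terms
        feats = [s]
        for p in range(1, order + 1):
            feats += [np.sin(np.pi * p * s), np.cos(np.pi * p * s)]
        return np.array(feats)

    w = np.zeros(1 + 2 * order)
    y = np.zeros(len(x))
    for n in range(len(x)):
        g = links(x[n])
        y[n] = w @ g
        w = w + mu * (d[n] - y[n]) * g            # LMS update
    return y, w
```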

Combined Sparse Regularization for Nonlinear Adaptive Filters

no code implementations • 24 Jul 2020 • Danilo Comminiello, Michele Scarpiniti, Simone Scardapane, Luis A. Azpicueta-Ruiz, Aurelio Uncini

Nonlinear adaptive filters often show some sparse behavior, because not all coefficients are equally useful for modeling a given nonlinearity.

Distributed Training of Graph Convolutional Networks

no code implementations • 13 Jul 2020 • Simone Scardapane, Indro Spinelli, Paolo Di Lorenzo

After formulating the centralized GCN training problem, we first show how to make inference in a distributed scenario where the underlying data graph is split among different agents.

Distributed Optimization

Pseudo-Rehearsal for Continual Learning with Normalizing Flows

1 code implementation • ICML Workshop LifelongML 2020 • Jary Pomponi, Simone Scardapane, Aurelio Uncini

We show that our method performs favorably with respect to state-of-the-art approaches in the literature, with bounded computational power and memory overheads.

Continual Learning

Why should we add early exits to neural networks?

no code implementations • 27 Apr 2020 • Simone Scardapane, Michele Scarpiniti, Enzo Baccarelli, Aurelio Uncini

Deep neural networks are generally designed as a stack of differentiable layers, in which a prediction is obtained only after running the full stack.
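A minimal sketch of the early-exit pattern the paper discusses: auxiliary classifiers attached to intermediate layers return a prediction as soon as their confidence clears a threshold, so easy inputs skip the rest of the stack. The confidence rule and threshold below are assumptions for illustration.

```python
import torch

def early_exit_forward(blocks, exit_heads, x, threshold=0.9):
    """Forward pass with early exits: return from the first auxiliary
    classifier whose softmax confidence clears `threshold`.

    blocks, exit_heads: equal-length lists of modules; x: a single input
    with batch size 1.
    """
    for block, head in zip(blocks, exit_heads):
        x = block(x)
        logits = head(x)
        confidence = logits.softmax(dim=-1).max().item()
        if confidence >= threshold:               # easy input: stop here
            return logits
    return logits                                  # hardest case: full stack
```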

Bayesian Neural Networks With Maximum Mean Discrepancy Regularization

4 code implementations • 2 Mar 2020 • Jary Pomponi, Simone Scardapane, Aurelio Uncini

Bayesian Neural Networks (BNNs) are trained to optimize an entire distribution over their weights instead of a single set, having significant advantages in terms of, e.g., interpretability, multi-task learning, and calibration.

Image Classification • Multi-Task Learning • +1
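For reference, a generic biased estimator of the squared maximum mean discrepancy with an RBF kernel, the kind of quantity the paper uses to pull the weight distribution toward the prior. The kernel and bandwidth choices here are illustrative, not the paper's exact configuration.

```python
import torch

def mmd2_rbf(x, y, sigma=1.0):
    """Biased estimator of squared MMD between samples x and y (2-D tensors)
    under an RBF kernel with bandwidth sigma."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()
```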

Deep Randomized Neural Networks

no code implementations • 27 Feb 2020 • Claudio Gallicchio, Simone Scardapane

For both, we focus specifically on recent results in the domain of deep randomized systems, and (for recurrent models) their application to structured domains.

Adaptive Propagation Graph Convolutional Network

1 code implementation • 24 Feb 2020 • Indro Spinelli, Simone Scardapane, Aurelio Uncini

Graph convolutional networks (GCNs) are a family of neural network models that perform inference on graph data by interleaving vertex-wise operations and message-passing exchanges across nodes.
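A minimal dense GCN layer showing the interleaving described above: message passing with a normalized adjacency, followed by a vertex-wise linear map and nonlinearity. This is the standard propagation rule, not the paper's adaptive-propagation scheme.

```python
import torch

def gcn_layer(A, X, W):
    """One graph-convolution step: message passing with the symmetrically
    normalized adjacency, then a vertex-wise linear map and ReLU.

    A: (n, n) dense adjacency without self-loops; X: (n, d_in); W: (d_in, d_out).
    """
    A_hat = A + torch.eye(A.shape[0], device=A.device)          # add self-loops
    d_inv_sqrt = A_hat.sum(dim=1).pow(-0.5)
    A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]  # D^-1/2 A_hat D^-1/2
    return torch.relu(A_norm @ X @ W)
```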

Efficient Continual Learning in Neural Networks with Embedding Regularization

1 code implementation • 9 Sep 2019 • Jary Pomponi, Simone Scardapane, Vincenzo Lomonaco, Aurelio Uncini

Continual learning of deep neural networks is a key requirement for scaling them up to more complex applicative scenarios and for achieving real lifelong learning of these architectures.

Continual Learning

A Multimodal Deep Network for the Reconstruction of T2W MR Images

no code implementations • 8 Aug 2019 • Antonio Falvo, Danilo Comminiello, Simone Scardapane, Michele Scarpiniti, Aurelio Uncini

In this paper, we present a deep learning method that is able to reconstruct subsampled MR images obtained by reducing the k-space data, while maintaining a high image quality that can be used to observe brain lesions.

Compressing deep quaternion neural networks with targeted regularization

no code implementations • 26 Jul 2019 • Riccardo Vecchi, Simone Scardapane, Danilo Comminiello, Aurelio Uncini

To this end, we investigate two extensions of l1 and structured regularization to the quaternion domain.

Image Reconstruction

Efficient data augmentation using graph imputation neural networks

no code implementations • 20 Jun 2019 • Indro Spinelli, Simone Scardapane, Michele Scarpiniti, Aurelio Uncini

Recently, data augmentation in the semi-supervised regime, where unlabeled data vastly outnumbers labeled data, has received considerable attention.

Data Augmentation • Imputation

Missing Data Imputation with Adversarially-trained Graph Convolutional Networks

1 code implementation • 6 May 2019 • Indro Spinelli, Simone Scardapane, Aurelio Uncini

We also explore a few extensions to the basic architecture involving the use of residual connections between layers, and of global statistics computed from the data set to improve the accuracy.

Denoising • Imputation • +1

On the Stability and Generalization of Learning with Kernel Activation Functions

no code implementations • 28 Mar 2019 • Michele Cirillo, Simone Scardapane, Steven Van Vaerenbergh, Aurelio Uncini

In this brief we investigate the generalization properties of a recently-proposed class of non-parametric activation functions, the kernel activation functions (KAFs).

Widely Linear Kernels for Complex-Valued Kernel Activation Functions

no code implementations • 6 Feb 2019 • Simone Scardapane, Steven Van Vaerenbergh, Danilo Comminiello, Aurelio Uncini

Complex-valued neural networks (CVNNs) have been shown to be powerful nonlinear approximators when the input data can be properly modeled in the complex domain.

Image Classification

Multikernel activation functions: formulation and a case study

no code implementations • 29 Jan 2019 • Simone Scardapane, Elena Nieddu, Donatella Firmani, Paolo Merialdo

In this paper we focus on the kernel activation function (KAF), a recently proposed framework wherein each function is modeled as a one-dimensional kernel model, whose weights are adapted through standard backpropagation-based optimization.

Optical Character Recognition (OCR)

Quaternion Convolutional Neural Networks for Detection and Localization of 3D Sound Events

no code implementations • 17 Dec 2018 • Danilo Comminiello, Marco Lella, Simone Scardapane, Aurelio Uncini

Learning from data in the quaternion domain enables us to exploit internal dependencies of 4D signals and to treat them as a single entity.

Event Detection • Sound Event Detection
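The algebraic core of treating 4D signals as single entities is the Hamilton product, which mixes all four components at once. Below is a plain reference implementation of the product itself, not the paper's convolutional layer.

```python
import numpy as np

def hamilton_product(p, q):
    """Product of quaternions p = (w1, x1, y1, z1) and q = (w2, x2, y2, z2)."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])
```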

Improving Graph Convolutional Networks with Non-Parametric Activation Functions

no code implementations • 26 Feb 2018 • Simone Scardapane, Steven Van Vaerenbergh, Danilo Comminiello, Aurelio Uncini

Graph neural networks (GNNs) are a class of neural networks that allow efficient inference on data associated with a graph structure, such as, e.g., citation networks or knowledge graphs.

Knowledge Graphs

Complex-valued Neural Networks with Non-parametric Activation Functions

2 code implementations • 22 Feb 2018 • Simone Scardapane, Steven Van Vaerenbergh, Amir Hussain, Aurelio Uncini

Complex-valued neural networks (CVNNs) are a powerful modeling tool for domains where data can be naturally interpreted in terms of complex numbers.

Kafnets: kernel-based non-parametric activation functions for neural networks

2 code implementations • 13 Jul 2017 • Simone Scardapane, Steven Van Vaerenbergh, Simone Totaro, Aurelio Uncini

Neural networks are generally built by interleaving (adaptable) linear layers with (fixed) nonlinear activation functions.
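A sketch of the kernel activation function (KAF) idea behind Kafnets: each activation becomes a one-dimensional Gaussian-kernel expansion over a fixed dictionary with learnable mixing coefficients. Dictionary size, boundary, and the bandwidth rule below are illustrative assumptions.

```python
import torch
from torch import nn

class KAF(nn.Module):
    """Kernel activation function: each feature gets a learnable 1-D
    Gaussian-kernel expansion over a fixed, shared dictionary."""

    def __init__(self, num_features, dict_size=20, boundary=3.0):
        super().__init__()
        d = torch.linspace(-boundary, boundary, dict_size)
        self.register_buffer("d", d.view(1, 1, -1))    # fixed dictionary
        step = (2 * boundary) / (dict_size - 1)
        self.gamma = 1.0 / (2 * step ** 2)             # bandwidth (assumption)
        self.alpha = nn.Parameter(0.1 * torch.randn(1, num_features, dict_size))

    def forward(self, x):
        # x: (batch, num_features) -> same shape.
        K = torch.exp(-self.gamma * (x.unsqueeze(-1) - self.d) ** 2)
        return (K * self.alpha).sum(dim=-1)
```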

Stochastic Training of Neural Networks via Successive Convex Approximations

1 code implementation • 15 Jun 2017 • Simone Scardapane, Paolo Di Lorenzo

Additionally, we show how the algorithm can be easily parallelized over multiple computational units without hindering its performance.

Recursive Multikernel Filters Exploiting Nonlinear Temporal Structure

no code implementations • 12 Jun 2017 • Steven Van Vaerenbergh, Simone Scardapane, Ignacio Santamaria

In kernel methods, temporal information on the data is commonly included by using time-delayed embeddings as inputs.
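A small helper showing what a time-delayed embedding looks like in practice; the paper's contribution is to exploit such temporal structure recursively with multiple kernels, which this snippet does not cover.

```python
import numpy as np

def time_delay_embedding(x, order):
    """Turn a 1-D signal into rows [x[n], x[n-1], ..., x[n-order+1]] so a
    kernel method can see a window of past samples as one input vector."""
    n = len(x) - order + 1
    return np.stack([x[i:i + n] for i in range(order)][::-1], axis=1)

# time_delay_embedding(np.arange(5), 3) ->
# [[2, 1, 0], [3, 2, 1], [4, 3, 2]]
```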

Adaptation and learning over networks for nonlinear system modeling

no code implementations • 28 Apr 2017 • Simone Scardapane, Jie Chen, Cédric Richard

In this chapter, we analyze nonlinear filtering problems in distributed environments, e.g., sensor networks or peer-to-peer protocols.

A Framework for Parallel and Distributed Training of Neural Networks

1 code implementation • 24 Oct 2016 • Simone Scardapane, Paolo Di Lorenzo

The aim of this paper is to develop a general framework for training neural networks (NNs) in a distributed environment, where training data is partitioned over a set of agents that communicate with each other through a sparse, possibly time-varying, connectivity pattern.
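The communication primitive underlying this kind of decentralized training can be sketched as a consensus (gossip) step, in which each agent averages its parameters with those of its neighbours through a mixing matrix. This is a generic illustration, not the paper's full algorithm.

```python
import numpy as np

def gossip_step(params, mixing):
    """One consensus round: each agent replaces its parameter vector with a
    weighted average of its neighbours'.

    params: (n_agents, dim); mixing: (n_agents, n_agents), doubly stochastic,
    with zeros wherever two agents are not connected.
    """
    return mixing @ params

# In decentralized training, each agent would alternate a local gradient
# step on its own data with one or more gossip_step rounds.
```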

Distributed Supervised Learning using Neural Networks

no code implementations • 21 Jul 2016 • Simone Scardapane

Distributed learning is the problem of inferring a function in the case where training data is distributed among multiple geographically separated sources.

Time Series Analysis

Group Sparse Regularization for Deep Neural Networks

1 code implementation • 2 Jul 2016 • Simone Scardapane, Danilo Comminiello, Amir Hussain, Aurelio Uncini

In this paper, we consider the joint task of simultaneously optimizing (i) the weights of a deep neural network, (ii) the number of neurons for each hidden layer, and (iii) the subset of active input features (i.e., feature selection).

feature selection • Handwritten Digit Recognition
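The penalty at the heart of this kind of method is a group-lasso term: L2 norms of whole groups of weights (e.g., all outgoing weights of a neuron) are summed, so entire neurons or input features can be zeroed out. A minimal sketch with row-wise neuron groups only; the paper also defines input-feature and bias groups.

```python
import math
import torch

def group_sparse_penalty(weight):
    """Group lasso over rows of an (out_features, in_features) matrix: the
    sum of per-row L2 norms, scaled by the square root of the group size,
    drives entire rows (neurons) to exactly zero."""
    return math.sqrt(weight.shape[1]) * weight.norm(dim=1).sum()

# Usage sketch: total loss = task_loss + lam * sum of penalties over layers.
```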

Effective Blind Source Separation Based on the Adam Algorithm

no code implementations • 25 May 2016 • Michele Scarpiniti, Simone Scardapane, Danilo Comminiello, Raffaele Parisi, Aurelio Uncini

In this paper, we derive a modified InfoMax algorithm for the solution of Blind Signal Separation (BSS) problems by using advanced stochastic methods.

blind source separation • Stochastic Optimization
