1 code implementation • 19 Feb 2024 • Giovanni De Felice, Andrea Cini, Daniele Zambon, Vladimir V. Gusev, Cesare Alippi
Virtual sensing techniques allow for inferring signals at new unmonitored locations by exploiting spatio-temporal measurements coming from physical sensors at different locations.
no code implementations • 16 Feb 2024 • Ivan Marisca, Cesare Alippi, Filippo Maria Bianchi
The input time series are progressively coarsened over time and space, obtaining a pool of representations that capture heterogeneous temporal and spatial dynamics.
no code implementations • 24 Oct 2023 • Andrea Cini, Ivan Marisca, Daniele Zambon, Cesare Alippi
The conditioning can take the form of an architectural inductive bias on the neural forecasting architecture, resulting in a family of deep learning models called spatiotemporal graph neural networks.
1 code implementation • 7 Jul 2023 • Ming Jin, Huan Yee Koh, Qingsong Wen, Daniele Zambon, Cesare Alippi, Geoffrey I. Webb, Irwin King, Shirui Pan
In this survey, we provide a comprehensive review of graph neural networks for time series analysis (GNN4TS), encompassing four fundamental dimensions: forecasting, classification, anomaly detection, and imputation.
no code implementations • 30 May 2023 • Andrea Cini, Danilo Mandic, Cesare Alippi
Existing relationships among time series can be exploited as inductive biases in learning effective forecasting models.
no code implementations • 11 Apr 2023 • Tommaso Marzi, Arshjot Khehra, Andrea Cini, Cesare Alippi
In this work, we propose a novel methodology, named Feudal Graph Reinforcement Learning (FGRL), that addresses such challenges by relying on hierarchical RL and a pyramidal message-passing architecture.
no code implementations • 26 Mar 2023 • Luca Butera, Andrea Cini, Alberto Ferrante, Cesare Alippi
Conditioning image generation on specific features of the desired output is a key ingredient of modern generative models.
no code implementations • 21 Mar 2023 • Cesare Alippi, Daniele Zambon
The well-known Kalman filter models dynamical systems through a state-space representation in which the next state is updated, and its uncertainty controlled, by fresh information associated with newly observed system outputs.
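As a refresher on the state-space machinery this line of work builds on, here is a minimal sketch of a scalar Kalman filter tracking a constant signal. All names and noise levels are illustrative, not taken from the paper.

```python
import numpy as np

def kalman_step(x, P, z, q=1e-3, r=1e-1):
    """One predict/update cycle for a 1-D constant-state model.

    q: process-noise variance, r: measurement-noise variance (assumed values).
    """
    # Predict: the state is modelled as constant, so only uncertainty grows.
    x_pred, P_pred = x, P + q
    # Update: the Kalman gain K weighs prediction vs. measurement uncertainty,
    # and the fresh observation z corrects the prediction.
    K = P_pred / (P_pred + r)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

# Track a constant true value of 5.0 from noisy readings.
rng = np.random.default_rng(0)
x, P = 0.0, 1.0
for _ in range(200):
    z = 5.0 + 0.3 * rng.standard_normal()
    x, P = kalman_step(x, P, z)
```

After 200 noisy observations the estimate settles near the true value and the posterior variance shrinks well below its prior.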
1 code implementation • NeurIPS 2023 • Andrea Cini, Ivan Marisca, Daniele Zambon, Cesare Alippi
Spatiotemporal graph neural networks have been shown to be effective in time series forecasting applications, achieving better performance than standard univariate predictors in several settings.
no code implementations • 3 Feb 2023 • Daniele Zambon, Cesare Alippi
The proposed AZ-analysis constitutes a valuable tool for discovering and highlighting the space-time regions where the model's performance can be improved.
no code implementations • 4 Jan 2023 • Daniele Zambon, Andrea Cini, Lorenzo Livi, Cesare Alippi
State-space models constitute an effective modeling tool to describe multivariate time series and operate by maintaining an updated representation of the system state from which predictions are made.
no code implementations • 10 Oct 2022 • Kleanthis Malialis, Manuel Roveri, Cesare Alippi, Christos G. Panayiotou, Marios M. Polycarpou
In real-world applications, the process generating the data might suffer from nonstationary effects (e.g., due to seasonality, faults affecting sensors or actuators, and changes in the users' behaviour).
1 code implementation • 14 Sep 2022 • Andrea Cini, Ivan Marisca, Filippo Maria Bianchi, Cesare Alippi
The training procedure can then be parallelized node-wise by sampling the node embeddings without breaking any dependency, thus enabling scalability to large networks.
2 code implementations • 26 May 2022 • Ivan Marisca, Andrea Cini, Cesare Alippi
In particular, we propose a novel class of attention-based architectures that, given a set of highly sparse discrete observations, learn a representation for points in time and space by exploiting a spatiotemporal propagation architecture aligned with the imputation task.
1 code implementation • NeurIPS 2023 • Andrea Cini, Daniele Zambon, Cesare Alippi
Outstanding achievements of graph neural networks for spatiotemporal time series analysis show that relational constraints introduce an effective inductive bias into neural forecasting architectures.
1 code implementation • 23 Apr 2022 • Daniele Zambon, Cesare Alippi
We present the first whiteness test for graphs, i.e., a whiteness test for multivariate time series associated with the nodes of a dynamic graph.
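For intuition on what a whiteness test checks, here is a sketch of the classical scalar version: a lag-1 autocorrelation z-test on model residuals. This is the textbook serial-correlation test, not the graph-level statistic proposed in the paper.

```python
import numpy as np

def lag1_whiteness_stat(res):
    """z-statistic for lag-1 serial correlation in a residual series.

    Under the white-noise null, sqrt(n) * r1 is approximately N(0, 1),
    so large |z| signals leftover temporal structure.
    """
    res = res - res.mean()
    r1 = np.dot(res[:-1], res[1:]) / np.dot(res, res)
    return np.sqrt(len(res)) * r1

rng = np.random.default_rng(1)
white = rng.standard_normal(2000)          # truly white residuals

# AR(1) residuals are serially correlated, so the statistic blows up.
corr = np.empty(2000)
corr[0] = rng.standard_normal()
for t in range(1, 2000):
    corr[t] = 0.8 * corr[t - 1] + rng.standard_normal()

z_white = lag1_whiteness_stat(white)
z_corr = lag1_whiteness_stat(corr)
```

A model whose residuals fail such a test has left predictable structure on the table, which is exactly the diagnostic role the graph whiteness test plays for spatiotemporal predictors.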
no code implementations • 29 Nov 2021 • Lorenzo Ferretti, Andrea Cini, Georgios Zacharopoulos, Cesare Alippi, Laura Pozzi
The design of efficient hardware accelerators for high-throughput data-processing applications, e.g., deep neural networks, is a challenging task in computer architecture design.
no code implementations • 16 Nov 2021 • Zhiwen Chen, Jiamin Xu, Cesare Alippi, Steven X. Ding, Yuri Shardt, Tao Peng, Chunhua Yang
Graph neural network (GNN)-based fault diagnosis (FD) has received increasing attention in recent years, since data from several application domains can be advantageously represented as graphs.
1 code implementation • NeurIPS 2021 • Daniele Grattarola, Lorenzo Livi, Cesare Alippi
Cellular automata (CA) are a class of computational models that exhibit rich dynamics emerging from the local interaction of cells arranged in a regular lattice.
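A one-dimensional elementary cellular automaton makes the "rich dynamics from local rules" point concrete in a few lines. The rule number (110, a classic choice) and grid size are illustrative.

```python
import numpy as np

def step(state, rule=110):
    """Advance a 1-D binary CA one step under the given Wolfram rule.

    Each (left, centre, right) neighbourhood is encoded as a 3-bit index,
    and the next state is looked up in the rule's binary expansion.
    """
    left, right = np.roll(state, 1), np.roll(state, -1)
    idx = 4 * left + 2 * state + right
    table = (rule >> np.arange(8)) & 1
    return table[idx]

# A single seeded cell on a 64-cell ring grows into a complex pattern.
state = np.zeros(64, dtype=int)
state[32] = 1
for _ in range(20):
    state = step(state)
```

Despite the purely local update, the activity spreads and self-organises — the emergent behaviour that makes CA interesting as computational models.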
2 code implementations • 11 Oct 2021 • Daniele Grattarola, Daniele Zambon, Filippo Maria Bianchi, Cesare Alippi
Inspired by the conventional pooling layers in convolutional neural networks, many recent works in the field of graph machine learning have introduced pooling operators to reduce the size of graphs.
2 code implementations • ICLR 2022 • Andrea Cini, Ivan Marisca, Cesare Alippi
In particular, we introduce a novel graph neural network architecture, named GRIN, which aims at reconstructing missing data in the different channels of a multivariate time series by learning spatio-temporal representations through message passing.
Ranked #1 on Traffic Data Imputation on PEMS-BAY (Point Missing)
1 code implementation • ICLR 2021 • Benjamin Paassen, Daniele Grattarola, Daniele Zambon, Cesare Alippi, Barbara Eva Hammer
With this result, we hope to provide a firm theoretical basis for a next generation of time series prediction models.
no code implementations • 6 Oct 2020 • Pietro Verzelli, Cesare Alippi, Lorenzo Livi
In recent years, the machine learning community has seen a continuous growing interest in research aimed at investigating dynamical aspects of both training procedures and machine learning models.
1 code implementation • 22 Jun 2020 • Daniele Grattarola, Cesare Alippi
In this paper we present Spektral, an open-source Python library for building graph neural networks with TensorFlow and the Keras application programming interface.
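To give a flavour of the graph-convolution building block that libraries like Spektral package as Keras layers, here is the standard GCN propagation rule, X' = D^{-1/2}(A + I)D^{-1/2} X W, sketched in plain NumPy. This illustrates the computation only; Spektral's actual layer classes and API are not reproduced here.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution step with symmetric normalisation and ReLU."""
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))      # D^{-1/2}
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

# Tiny 4-node path graph, 3 input features -> 2 output features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3))
W = rng.standard_normal((3, 2))
H = gcn_layer(A, X, W)
```

Each output row mixes a node's features with those of its neighbours, which is the per-layer operation GNN libraries stack and train end to end.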
no code implementations • 24 Mar 2020 • Pietro Verzelli, Cesare Alippi, Lorenzo Livi, Peter Tino
Reservoir computing is a popular approach for designing recurrent neural networks, due to its training simplicity and approximation performance.
no code implementations • 20 Mar 2020 • Andrea Cini, Carlo D'Eramo, Jan Peters, Cesare Alippi
In this regard, Weighted Q-Learning (WQL) effectively reduces bias and shows remarkable results in stochastic environments.
1 code implementation • 24 Oct 2019 • Filippo Maria Bianchi, Daniele Grattarola, Lorenzo Livi, Cesare Alippi
In graph neural networks (GNNs), pooling operators compute local summaries of input graphs to capture their global properties, and they are fundamental for building deep GNNs that learn hierarchical representations.
Ranked #1 on Graph Classification on Bench-hard
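To illustrate what a pooling operator does, here is a Top-K-style step sketched in NumPy: score nodes against a learnable vector, keep the highest-scoring ones, and take the induced subgraph. This is one simple member of the family; operators such as DiffPool or minCut pooling differ substantially in the details.

```python
import numpy as np

def topk_pool(A, X, p, k):
    """Keep the k nodes with the highest projection score on p."""
    scores = X @ p / np.linalg.norm(p)
    keep = np.argsort(scores)[-k:]              # indices of top-k nodes
    # Gate kept features by their squashed scores, so that in a trained
    # setting the score vector p would receive gradient signal.
    X_pool = X[keep] * np.tanh(scores[keep])[:, None]
    A_pool = A[np.ix_(keep, keep)]              # induced subgraph
    return A_pool, X_pool

rng = np.random.default_rng(0)
A = (rng.random((6, 6)) < 0.4).astype(float)
A = np.triu(A, 1); A = A + A.T                  # symmetric, no self-loops
X = rng.standard_normal((6, 3))
p = rng.standard_normal(3)
A_pool, X_pool = topk_pool(A, X, p, k=3)
```

Stacking such steps between message-passing layers is what lets a GNN build the hierarchical, coarse-to-fine representations mentioned above.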
no code implementations • 25 Sep 2019 • Filippo Maria Bianchi, Daniele Grattarola, Cesare Alippi
For each node, our method learns a soft cluster assignment vector that depends on the node features, the target inference task (e.g., a graph classification loss), and, thanks to the minCut objective, also on the connectivity structure of the graph.
1 code implementation • ICML 2020 • Daniele Zambon, Cesare Alippi, Lorenzo Livi
We present Graph Random Neural Features (GRNF), a novel embedding method from graph-structured data to real vectors based on a family of graph neural networks.
no code implementations • 2 Aug 2019 • Simone Disabato, Manuel Roveri, Cesare Alippi
Severe constraints on memory and computation characterizing the Internet-of-Things (IoT) units may prevent the execution of Deep Learning (DL)-based solutions, which typically demand large memory and high processing load.
2 code implementations • 22 Jul 2019 • Alberto Gasparin, Slobodan Lukovic, Cesare Alippi
Management and efficient operation of critical infrastructure such as smart grids benefit greatly from accurate power load forecasting, which, due to its nonlinear nature, remains a challenging task.
4 code implementations • ICML 2020 • Filippo Maria Bianchi, Daniele Grattarola, Cesare Alippi
Spectral clustering (SC) is a popular clustering technique to find strongly connected communities on a graph.
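The core of spectral clustering fits in a few lines: for a two-way split, compute the Fiedler vector (the Laplacian eigenvector with the second-smallest eigenvalue) and partition nodes by its sign. A minimal NumPy sketch on a toy graph:

```python
import numpy as np

def spectral_bipartition(A):
    """Split a connected graph into two communities via the Fiedler vector."""
    D = np.diag(A.sum(axis=1))
    L = D - A                              # unnormalised graph Laplacian
    vals, vecs = np.linalg.eigh(L)         # eigenvalues in ascending order
    fiedler = vecs[:, 1]                   # second-smallest eigenvalue
    return (fiedler > 0).astype(int)

# Two 3-node cliques joined by a single bridging edge.
A = np.zeros((6, 6))
for i in range(3):
    for j in range(3):
        if i != j:
            A[i, j] = A[i + 3, j + 3] = 1.0
A[2, 3] = A[3, 2] = 1.0
labels = spectral_bipartition(A)
```

The sign split recovers the two strongly connected cliques while cutting only the bridge — the behaviour SC generalises to k communities via k-means on several eigenvectors.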
1 code implementation • 27 Mar 2019 • Pietro Verzelli, Cesare Alippi, Lorenzo Livi
Finding such a region requires searching in hyper-parameter space in a sensible way: hyper-parameter configurations marginally outside such a region might yield networks exhibiting fully developed chaos, hence producing unreliable computations.
2 code implementations • 18 Mar 2019 • Daniele Zambon, Daniele Grattarola, Lorenzo Livi, Cesare Alippi
This paper proposes an autoregressive (AR) model for sequences of graphs, which generalises traditional AR models.
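As a reminder of the classical setting being generalised, here is a scalar AR(p) model fit by least squares: each value is regressed on its p predecessors. Coefficients and noise level in the synthetic example are illustrative.

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares fit of an AR(p) model: x[t] ~ sum_k a_k * x[t-k]."""
    # Column k holds the lag-(k+1) values aligned with targets x[p:].
    X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
    y = x[p:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

# Simulate an AR(2) process with known coefficients (0.6, -0.2).
rng = np.random.default_rng(0)
x = np.empty(5000)
x[0] = x[1] = 0.0
for t in range(2, 5000):
    x[t] = 0.6 * x[t - 1] - 0.2 * x[t - 2] + 0.1 * rng.standard_normal()

coeffs = fit_ar(x, p=2)
```

Recovering the generating coefficients from data is straightforward for vector-valued series; the paper's contribution is making the same autoregressive idea work when each observation is an entire graph.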
1 code implementation • 5 Jan 2019 • Filippo Maria Bianchi, Daniele Grattarola, Lorenzo Livi, Cesare Alippi
Popular graph neural networks implement convolution operations on graphs based on polynomial spectral filters.
Ranked #4 on Skeleton Based Action Recognition on SBU
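A polynomial spectral filter applies h(L)x = Σ_k θ_k L^k x, which is strictly K-hop localised and needs no eigendecomposition at inference time. A NumPy sketch on a path graph makes the localisation visible (the filter order and coefficients are arbitrary illustrative choices):

```python
import numpy as np

def poly_filter(L, x, theta):
    """Apply a polynomial filter sum_k theta[k] * L^k to signal x."""
    out = np.zeros_like(x)
    Lx = x.copy()
    for t in theta:            # accumulate theta_k * L^k x term by term
        out += t * Lx
        Lx = L @ Lx
    return out

# Path graph on 5 nodes; an impulse at node 0 can reach at most K hops.
A = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
L = np.diag(A.sum(axis=1)) - A
x = np.zeros(5); x[0] = 1.0
y = poly_filter(L, x, theta=[0.5, 0.3, 0.1])   # order K = 2
```

With a second-order filter, nodes more than two hops from the impulse receive exactly zero response — the locality property that makes these filters attractive for graph convolutions.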
1 code implementation • 11 Dec 2018 • Daniele Grattarola, Lorenzo Livi, Cesare Alippi
Constant-curvature Riemannian manifolds (CCMs) have been shown to be ideal embedding spaces in many application domains, as their non-Euclidean geometry can naturally account for some relevant properties of data, like hierarchy and circularity.
no code implementations • 3 Oct 2018 • Pietro Verzelli, Lorenzo Livi, Cesare Alippi
Echo State Networks (ESNs) are simplified recurrent neural network models composed of a reservoir and a linear, trainable readout layer.
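The ESN recipe — a fixed random reservoir scaled below unit spectral radius, plus a trained linear readout — can be sketched end to end in NumPy. Reservoir size, spectral radius, and the sine-prediction task below are illustrative choices, not settings from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100
W = rng.standard_normal((N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius 0.9
w_in = rng.standard_normal(N)

def run_reservoir(u):
    """Drive the fixed random reservoir with input sequence u."""
    states = np.zeros((len(u), N))
    h = np.zeros(N)
    for t, ut in enumerate(u):
        h = np.tanh(W @ h + w_in * ut)
        states[t] = h
    return states

# One-step-ahead prediction of a sine wave; only the readout is trained.
u = np.sin(0.2 * np.arange(600))
H = run_reservoir(u[:-1])[100:]        # drop the washout transient
y = u[101:]                            # targets aligned one step ahead
# Ridge-regression readout: w = (H^T H + a I)^{-1} H^T y
w_out = np.linalg.solve(H.T @ H + 1e-6 * np.eye(N), H.T @ y)
pred = H @ w_out
```

Because the recurrent weights are never trained, fitting reduces to a single linear solve — the training simplicity that makes reservoir computing popular.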
no code implementations • 18 May 2018 • Daniele Zambon, Cesare Alippi, Lorenzo Livi
Given a finite sequence of graphs, e.g., coming from technological, biological, and social networks, the paper proposes a methodology to identify possible changes in stationarity in the stochastic process generating the graphs.
1 code implementation • 16 May 2018 • Daniele Grattarola, Daniele Zambon, Cesare Alippi, Lorenzo Livi
A common approach is to use embedding techniques to represent graphs as points in a conventional Euclidean space, but non-Euclidean spaces have often been shown to be better suited for embedding graphs.
no code implementations • 3 May 2018 • Daniele Zambon, Lorenzo Livi, Cesare Alippi
The proposed methodology consists of embedding graphs into a geometric space and performing change detection there by means of conventional methods for numerical streams.
1 code implementation • 21 Jun 2017 • Daniele Zambon, Cesare Alippi, Lorenzo Livi
Graph representations offer powerful and intuitive ways to describe data in a multitude of application domains.
no code implementations • 10 Sep 2016 • Filippo Maria Bianchi, Lorenzo Livi, Cesare Alippi, Robert Jenssen
We show that topological properties of such a multiplex reflect important features of RNN dynamics and are used to guide the tuning procedure.
no code implementations • 8 Apr 2016 • Lorenzo Livi, Cesare Alippi
The final partition is derived by exploiting a criterion based on mutual information minimization.
no code implementations • 11 Mar 2016 • Lorenzo Livi, Filippo Maria Bianchi, Cesare Alippi
In this paper, we aim at addressing this issue by proposing a theoretically motivated, unsupervised method based on Fisher information for determining the edge of criticality in recurrent neural networks.
no code implementations • 26 Jan 2016 • Filippo Maria Bianchi, Lorenzo Livi, Cesare Alippi
We verify that the determination of the edge of stability provided by such RQA measures is more accurate than two well-known criteria based on the Jacobian matrix of the reservoir.
no code implementations • 16 Oct 2015 • Cesare Alippi, Giacomo Boracchi, Diego Carrera, Manuel Roveri
We address the problem of detecting changes in multivariate datastreams, and we investigate the intrinsic difficulty that change-detection methods have to face when the data dimension scales.