no code implementations • 2 May 2024 • Alessio Gravina, Moshe Eliasof, Claudio Gallicchio, Davide Bacciu, Carola-Bibiane Schönlieb

A common problem in Message-Passing Neural Networks is oversquashing -- the limited ability to facilitate effective information flow between distant nodes.

1 code implementation • 30 Apr 2024 • Alessio Gravina, Daniele Zambon, Davide Bacciu, Cesare Alippi

Modern graph representation learning works mostly under the assumption of dealing with regularly sampled temporal graph snapshots, which is far from realistic, e.g., social networks and physical systems are characterized by continuous dynamics and sporadic observations.

no code implementations • 23 Apr 2024 • Alessandro Trenta, Davide Bacciu, Andrea Cossu, Pietro Ferrero

We develop MultiSTOP, a Reinforcement Learning framework for solving functional equations in physics.

2 code implementations • 11 Apr 2024 • Lanpei Li, Elia Piccoli, Andrea Cossu, Davide Bacciu, Vincenzo Lomonaco

Continual Learning (CL) focuses on maximizing the predictive performance of a model across a non-stationary stream of data.

1 code implementation • 19 Mar 2024 • Michele Resta, Davide Bacciu

In this work, we leverage a key property of encoder-decoder Transformers, i.e., their generative ability, to propose a novel approach to continually learning Neural Machine Translation systems.

no code implementations • 17 Mar 2024 • Asma Sattar, Georgios Deligiorgis, Marco Trincavelli, Davide Bacciu

Dynamic multi-relational graphs are an expressive relational representation for data enclosing entities and relations of different types, and where relationships are allowed to vary in time.

no code implementations • 9 Mar 2024 • Rudy Semola, Julio Hurtado, Vincenzo Lomonaco, Davide Bacciu

This paper aims to explore the role of hyperparameter selection in continual learning and the necessity of continually and automatically tuning them according to the complexity of the task at hand.

no code implementations • 28 Dec 2023 • Matteo Ninniri, Marco Podda, Davide Bacciu

This work focuses on the task of property targeting: that is, generating molecules conditioned on target chemical properties to expedite candidate screening for novel drug and materials development.

no code implementations • 11 Dec 2023 • Marco Lepri, Davide Bacciu, Cosimo Della Santina

This work concerns control-oriented and structure-preserving learning of low-dimensional approximations of high-dimensional physical systems, with a focus on mechanical systems.

no code implementations • 15 Sep 2023 • Riccardo Massidda, Francesco Landolfi, Martina Cinquini, Davide Bacciu

The structure learning problem consists of fitting data generated by a Directed Acyclic Graph (DAG) to correctly reconstruct its arcs.

1 code implementation • 17 Aug 2023 • Daniele Atzeni, Federico Errica, Davide Bacciu, Alessio Micheli

We propose an extension of the Contextual Graph Markov Model, a deep and probabilistic machine learning model for graphs, to model the distribution of edge features.

1 code implementation • 27 Jul 2023 • Emanuele Cosenza, Andrea Valenti, Davide Bacciu

Graphs can be leveraged to model polyphonic multitrack symbolic music, where notes, chords and entire sections may be linked at different levels of the musical hierarchy by tonal and rhythmic relationships.

1 code implementation • 12 Jul 2023 • Alessio Gravina, Davide Bacciu

Recent progress in research on Deep Graph Networks (DGNs) has led to a maturation of the domain of learning on graphs.

1 code implementation • 19 Jun 2023 • Hamed Hemati, Vincenzo Lomonaco, Davide Bacciu, Damian Borth

Inspired by latent replay methods in CL, we propose partial weight generation for the final layers of a model using hypernetworks while freezing the initial layers.

1 code implementation • 12 Jun 2023 • Andrea Cossu, Francesco Spinnato, Riccardo Guidotti, Davide Bacciu

Continual Learning trains models on a stream of data, with the aim of learning new information without forgetting previous knowledge.

no code implementations • 25 May 2023 • Dario Balboni, Davide Bacciu

We also compare the novel approximation with the Gauss-Newton approximation.

1 code implementation • 18 May 2023 • Dobrik Georgiev, Danilo Numeroso, Davide Bacciu, Pietro Liò

Solving NP-hard/complete combinatorial problems with neural networks is a challenging research area that aims to surpass classical approximate algorithms.

1 code implementation • 28 Mar 2023 • Antonio Carta, Andrea Cossu, Vincenzo Lomonaco, Davide Bacciu, Joost Van de Weijer

We formalize this problem as a Distributed Continual Learning scenario, where SCDs adapt to local tasks and a CL model consolidates the knowledge from the resulting stream of models without accessing the SCDs' private data.

no code implementations • 9 Feb 2023 • Danilo Numeroso, Davide Bacciu, Petar Veličković

We demonstrate that simultaneously learning the dual definition of these optimisation problems in algorithmic learning allows for better learning and qualitatively better solutions.

1 code implementation • 26 Jan 2023 • Hamed Hemati, Andrea Cossu, Antonio Carta, Julio Hurtado, Lorenzo Pellegrini, Davide Bacciu, Vincenzo Lomonaco, Damian Borth

We propose two stochastic stream generators that produce a wide range of CIR streams starting from a single dataset and a few interpretable control parameters.

no code implementations • 23 Jan 2023 • Lorenzo Simone, Davide Bacciu

High-quality synthetic data can support the development of effective predictive models for biomedical tasks, especially in rare diseases or when subject to compelling privacy constraints.

no code implementations • 22 Nov 2022 • Riccardo Massidda, Atticus Geiger, Thomas Icard, Davide Bacciu

Causal abstraction provides a theory describing how several causal models can represent the same system at different levels of detail.

1 code implementation • 18 Oct 2022 • Alessio Gravina, Davide Bacciu, Claudio Gallicchio

Deep Graph Networks (DGNs) currently dominate the research landscape of learning from graphs, due to their efficiency and ability to implement an adaptive message-passing scheme between the nodes.

no code implementations • 5 Oct 2022 • Andrea Valenti, Davide Bacciu, Antonio Vergari

Measuring the robustness of reasoning in machine learning models is challenging as one needs to provide a task that cannot be easily shortcut by exploiting spurious statistical correlations in the data, while operating on complex objects and constraints.

no code implementations • 12 Sep 2022 • Andrea Valenti, Davide Bacciu

However, at the moment, weak disentanglement can only be achieved by increasing the amount of supervision as the number of factors of variation of the data increases.

1 code implementation • 6 Aug 2022 • Davide Bacciu, Alessio Conte, Francesco Landolfi

Downsampling produces coarsened, multi-resolution representations of data and is used, for example, to produce lossy compression and visualization of large images, reduce computational costs, and boost deep neural representation learning.

1 code implementation • 4 Jul 2022 • Julio Hurtado, Alain Raymond-Saez, Vladimir Araujo, Vincenzo Lomonaco, Alvaro Soto, Davide Bacciu

This paper introduces Memory Outlier Elimination (MOE), a method for identifying and eliminating outliers in the memory buffer by choosing samples from label-homogeneous subpopulations.

1 code implementation • 1 Jul 2022 • Francesco Corti, Rahim Entezari, Sara Hooker, Davide Bacciu, Olga Saukh

We study the impact of different pruning techniques on the representation learned by deep neural networks trained with contrastive loss functions.

1 code implementation • 29 Jun 2022 • Federico Matteoni, Andrea Cossu, Claudio Gallicchio, Vincenzo Lomonaco, Davide Bacciu

Continual Learning (CL) on time series data represents a promising but under-studied avenue for real-world applications.

1 code implementation • 23 Jun 2022 • Mattia Sangermano, Antonio Carta, Andrea Cossu, Davide Bacciu

A popular solution in these scenarios is to use a small memory to retain old data and rehearse it over time.

no code implementations • 14 Jun 2022 • Rudy Semola, Vincenzo Lomonaco, Davide Bacciu

The two main future trends for companies that want to build machine learning-based applications and systems are real-time inference and continual updating.

no code implementations • 25 May 2022 • Valerio De Caro, Claudio Gallicchio, Davide Bacciu

We propose a novel algorithm for performing federated learning with Echo State Networks (ESNs) in a client-server scenario.

1 code implementation • 20 May 2022 • Andrea Valenti, Davide Bacciu

This might be due, in part, to a formalization of the disentanglement problem that focuses too heavily on separating relevant factors of variation of the data in single isolated dimensions of the neural representation.

1 code implementation • 19 May 2022 • Andrea Cossu, Tinne Tuytelaars, Antonio Carta, Lucia Passaro, Vincenzo Lomonaco, Davide Bacciu

We formalize and investigate the characteristics of the continual pre-training scenario in both language and vision environments, where a model is continually pre-trained on a stream of incoming data and only later fine-tuned to different downstream tasks.

no code implementations • 18 May 2022 • Gabriele Lagani, Davide Bacciu, Claudio Gallicchio, Fabrizio Falchi, Claudio Gennaro, Giuseppe Amato

Features extracted from Deep Neural Networks (DNNs) have proven to be very effective in the context of Content Based Image Retrieval (CBIR).

no code implementations • 11 Apr 2022 • Danilo Numeroso, Davide Bacciu, Petar Veličković

At training time, we exploit multi-task learning to jointly learn Dijkstra's algorithm and a consistent heuristic function for the A* search algorithm.
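As background for the snippet above, the classical relationship it builds on is that A* with a zero heuristic reduces to Dijkstra's algorithm. A toy sketch (not the paper's learned model; graph and weights are illustrative):

```python
import heapq

def a_star(adj, start, goal, h):
    """A* over a weighted digraph given as {node: [(neighbour, weight), ...]}.
    With h(v) == 0 for all v, this is exactly Dijkstra's algorithm."""
    dist = {start: 0}
    pq = [(h(start), start)]
    while pq:
        _, u = heapq.heappop(pq)
        if u == goal:
            return dist[u]
        for v, w in adj.get(u, []):
            nd = dist[u] + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                # A consistent heuristic keeps priorities monotone along edges.
                heapq.heappush(pq, (nd + h(v), v))
    return float("inf")

adj = {"a": [("b", 1), ("c", 4)], "b": [("c", 1)]}
print(a_star(adj, "a", "c", lambda v: 0))  # Dijkstra: shortest-path cost 2
```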

no code implementations • 19 Mar 2022 • Gabriele Merlin, Vincenzo Lomonaco, Andrea Cossu, Antonio Carta, Davide Bacciu

Continual Learning requires the model to learn from a stream of dynamic, non-stationary data without forgetting previous knowledge.

1 code implementation • 28 Feb 2022 • Nicolò Lucchesi, Antonio Carta, Vincenzo Lomonaco, Davide Bacciu

Continual Reinforcement Learning (CRL) is a challenging setting where an agent learns to interact with an environment that is constantly changing over time (the stream of experiences).

no code implementations • 3 Feb 2022 • Valerio De Caro, Saira Bano, Achilles Machumilane, Alberto Gotta, Pietro Cassará, Antonio Carta, Rudy Semola, Christos Sardianos, Christos Chronis, Iraklis Varlamis, Konstantinos Tserpes, Vincenzo Lomonaco, Claudio Gallicchio, Davide Bacciu

This paper presents a proof-of-concept implementation of the AI-as-a-Service toolkit developed within the H2020 TEACHING project and designed to implement an autonomous driving personalization system according to the output of an automatic driver's stress recognition algorithm, both of them realizing a Cyber-Physical System of Systems.

1 code implementation • 13 Dec 2021 • Antonio Carta, Andrea Cossu, Vincenzo Lomonaco, Davide Bacciu

Learning continually from non-stationary data streams is a challenging research topic of growing popularity in the last few years.

no code implementations • 6 Dec 2021 • Andrea Cossu, Gabriele Graffieti, Lorenzo Pellegrini, Davide Maltoni, Davide Bacciu, Antonio Carta, Vincenzo Lomonaco

The ability of a model to learn continually can be empirically assessed in different continual learning scenarios.

1 code implementation • 3 Nov 2021 • Giacomo Lanciano, Filippo Galli, Tommaso Cucinotta, Davide Bacciu, Andrea Passarella

Cloud auto-scaling mechanisms are typically based on reactive automation rules that scale a cluster whenever some metric, e.g., the average CPU usage among instances, exceeds a predefined threshold.
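A reactive rule of the kind described above can be sketched in a few lines (the thresholds, step sizes, and bounds here are illustrative placeholders, not values from the paper):

```python
def reactive_scaler(avg_cpu, n_instances, upper=0.8, lower=0.3,
                    min_instances=1, max_instances=32):
    """Threshold-based auto-scaling rule: scale out by one instance when the
    average CPU load exceeds `upper`, scale in when it drops below `lower`."""
    if avg_cpu > upper:
        return min(n_instances + 1, max_instances)
    if avg_cpu < lower:
        return max(n_instances - 1, min_instances)
    return n_instances  # load within the band: keep the cluster as-is

print(reactive_scaler(0.92, 4))  # overloaded: 4 -> 5
print(reactive_scaler(0.10, 4))  # underloaded: 4 -> 3
```

Predictive approaches such as the one proposed in the entry above replace this fixed-threshold feedback loop with a forecast of future load.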

no code implementations • 4 Oct 2021 • Haris Dukic, Georgios Deligiorgis, Pierpaolo Sepe, Davide Bacciu, Marco Trincavelli

Global retailers have assortments that contain hundreds of thousands of products that can be linked by several types of relationships like style compatibility, "bought together", "watched together", etc.

no code implementations • 29 Sep 2021 • Dario Balboni, Davide Bacciu

We present a framework based on the theory of Polyak-Łojasiewicz functions to explain the properties of convergence and generalization of overparameterized feed-forward neural networks.

no code implementations • 29 Sep 2021 • Riccardo Massidda, Davide Bacciu

Semantic alignment methods attempt to establish a link between human-level concepts and the units of an artificial neural network.

no code implementations • 29 Sep 2021 • Daniele Castellana, Federico Errica, Davide Bacciu, Alessio Micheli

The Contextual Graph Markov Model is a deep, unsupervised, and probabilistic model for graphs that is trained incrementally on a layer-by-layer basis.

1 code implementation • 18 Jul 2021 • Marco Podda, Davide Bacciu

Several approaches have been proposed in the literature, most of which require to transform the graphs into sequences that encode their structure and labels and to learn the distribution of such sequences through an auto-regressive generative model.

no code implementations • 14 Jul 2021 • Davide Bacciu, Siranush Akarmazyan, Eric Armengaud, Manlio Bacco, George Bravos, Calogero Calandra, Emanuele Carlini, Antonio Carta, Pietro Cassara, Massimo Coppola, Charalampos Davalas, Patrizio Dazzi, Maria Carmela Degennaro, Daniele Di Sarli, Jürgen Dobaj, Claudio Gallicchio, Sylvain Girbal, Alberto Gotta, Riccardo Groppo, Vincenzo Lomonaco, Georg Macher, Daniele Mazzei, Gabriele Mencagli, Dimitrios Michail, Alessio Micheli, Roberta Peroglio, Salvatore Petroni, Rosaria Potenza, Farank Pourdanesh, Christos Sardianos, Konstantinos Tserpes, Fulvio Tagliabò, Jakob Valtl, Iraklis Varlamis, Omar Veledar

This paper discusses the perspective of the H2020 TEACHING project on the next generation of autonomous applications running in a distributed and highly heterogeneous environment comprising both virtual and physical resources spanning the edge-cloud continuum.

no code implementations • 8 Jul 2021 • Andrea Valenti, Stefano Berti, Davide Bacciu

The polyphonic nature of music makes the application of deep learning to music modelling a challenging task.

1 code implementation • 17 May 2021 • Andrea Cossu, Davide Bacciu, Antonio Carta, Claudio Gallicchio, Vincenzo Lomonaco

Continual Learning (CL) refers to a learning setup where data is non-stationary and the model has to learn without forgetting existing knowledge.

no code implementations • 14 May 2021 • Elisa Ferrari, Luna Gargani, Greta Barbieri, Lorenzo Ghiadoni, Francesco Faita, Davide Bacciu

We present a workflow for clinical data analysis that relies on Bayesian Structure Learning (BSL), an unsupervised learning approach that is robust to noise and biases, allows incorporating prior medical knowledge into the learning process, and provides explainable results in the form of a graph showing the causal connections among the analyzed features.

1 code implementation • 13 May 2021 • Elisa Ferrari, Davide Bacciu

Resilience to class imbalance and confounding biases, together with the assurance of fairness guarantees, are highly desirable properties of autonomous decision-making systems with real-life impact.

1 code implementation • 16 Apr 2021 • Danilo Numeroso, Davide Bacciu

Explainable AI (XAI) is a research area whose objective is to increase trustworthiness and to enlighten the hidden mechanism of opaque machine learning techniques.

4 code implementations • 1 Apr 2021 • Vincenzo Lomonaco, Lorenzo Pellegrini, Andrea Cossu, Antonio Carta, Gabriele Graffieti, Tyler L. Hayes, Matthias De Lange, Marc Masana, Jary Pomponi, Gido van de Ven, Martin Mundt, Qi She, Keiland Cooper, Jeremy Forest, Eden Belouadah, Simone Calderara, German I. Parisi, Fabio Cuzzolin, Andreas Tolias, Simone Scardapane, Luca Antiga, Subutai Amhad, Adrian Popescu, Christopher Kanan, Joost Van de Weijer, Tinne Tuytelaars, Davide Bacciu, Davide Maltoni

Learning continually from non-stationary data streams is a long-standing goal and a challenging problem in machine learning.

2 code implementations • 29 Mar 2021 • Andrea Rosasco, Antonio Carta, Andrea Cossu, Vincenzo Lomonaco, Davide Bacciu

Replay strategies are Continual Learning techniques which mitigate catastrophic forgetting by keeping a buffer of patterns from previous experiences, which are interleaved with new data during training.
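The rehearsal mechanism described above can be sketched as a generic replay buffer. This is an illustrative reservoir-sampling variant, not the specific distilled-replay strategy of the paper:

```python
import random

class ReplayBuffer:
    """Bounded buffer of past patterns, interleaved with new data at training."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, pattern):
        # Reservoir sampling: every pattern seen so far is retained
        # with equal probability capacity / seen.
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(pattern)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = pattern

    def sample(self, k):
        # Old patterns drawn here are mixed into the current minibatch.
        return self.rng.sample(self.buffer, min(k, len(self.buffer)))

buf = ReplayBuffer(capacity=10)
for x in range(100):
    buf.add(x)
minibatch = [100, 101] + buf.sample(2)  # new data + rehearsed old data
```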

1 code implementation • 22 Mar 2021 • Antonio Carta, Andrea Cossu, Federico Errica, Davide Bacciu

In this work, we study the phenomenon of catastrophic forgetting in the graph representation learning scenario.

no code implementations • 12 Mar 2021 • Andrea Cossu, Antonio Carta, Vincenzo Lomonaco, Davide Bacciu

We propose two new benchmarks for CL with sequential data based on existing datasets, whose characteristics resemble real-world applications.

1 code implementation • 5 Dec 2020 • Federico Errica, Davide Bacciu, Alessio Micheli

We introduce the Graph Mixture Density Networks, a new family of machine learning models that can fit multimodal output distributions conditioned on graphs of arbitrary topology.

1 code implementation • 9 Nov 2020 • Danilo Numeroso, Davide Bacciu

We present a novel approach to tackle explainability of deep graph networks in the context of molecule property prediction tasks, named MEG (Molecular Explanation Generator).

1 code implementation • 5 Nov 2020 • Antonio Carta, Alessandro Sperduti, Davide Bacciu

Training RNNs to learn long-term dependencies is difficult due to vanishing gradients.
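The vanishing-gradient effect mentioned above can be illustrated numerically with a toy linear recurrence (this is a generic demonstration, not the paper's model): in h_t = W h_{t-1}, the Jacobian of h_T with respect to h_0 is the T-fold product of W, so its norm decays geometrically whenever the spectral radius of W is below one.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))
W *= 0.5 / np.abs(np.linalg.eigvals(W)).max()  # rescale to spectral radius 0.5

grad = np.eye(8)
norms = []
for t in range(50):
    grad = W @ grad  # accumulate the Jacobian across 50 time steps
    norms.append(np.linalg.norm(grad))

# The gradient signal reaching the distant past is vanishingly small.
print(norms[0], norms[-1])
```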

1 code implementation • COLING 2020 • Daniele Castellana, Davide Bacciu

Finally, we introduce a Tree-LSTM model which takes advantage of this composition function and we experimentally assess its performance on different NLP tasks.

no code implementations • 26 Oct 2020 • Matteo Ronchetti, Davide Bacciu

We propose an end-to-end differentiable architecture for tomography reconstruction that directly maps a noisy sinogram into a denoised reconstruction.

no code implementations • 18 Oct 2020 • Francesco Crecchi, Marco Melis, Angelo Sotgiu, Davide Bacciu, Battista Biggio

As a second main contribution of this work, we introduce FADER, a novel technique for speeding up detection-based methods.

no code implementations • NeurIPS Workshop LMCA 2020 • Davide Bacciu, Alessio Conte, Roberto Grossi, Francesco Landolfi, Andrea Marino

We introduce a novel pooling technique, borrowing from classical results in graph theory, that is non-parametric and generalizes well to graphs of different nature and connectivity patterns.

1 code implementation • 3 Oct 2020 • Francesco Crecchi, Cyril de Bodt, Michel Verleysen, John A. Lee, Davide Bacciu

The t-distributed Stochastic Neighbor Embedding (t-SNE) algorithm is a ubiquitously employed dimensionality reduction (DR) method.
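For context, a minimal NumPy sketch of the high-dimensional affinities t-SNE is built on. A single fixed bandwidth is assumed here for simplicity; the actual algorithm tunes a per-point sigma via a perplexity binary search, which this sketch omits:

```python
import numpy as np

def tsne_affinities(X, sigma=1.0):
    """Conditional similarities p_{j|i} proportional to
    exp(-||x_i - x_j||^2 / (2 sigma^2)), row-normalized."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
    P = np.exp(-sq / (2 * sigma ** 2))
    np.fill_diagonal(P, 0.0)  # a point is not its own neighbour
    return P / P.sum(axis=1, keepdims=True)

X = np.random.default_rng(0).standard_normal((6, 4))
P = tsne_affinities(X)
print(P.shape)  # (6, 6); each row sums to 1
```

t-SNE then seeks a low-dimensional embedding whose affinities match these, which is the costly step that acceleration methods target.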

no code implementations • 31 Aug 2020 • Andrea Valenti, Michele Barsotti, Raffaello Brondi, Davide Bacciu, Luca Ascari

Typical EEG-based BCI applications require the computation of complex functions over the noisy EEG channels to be carried out in an efficient way.

no code implementations • 14 Jul 2020 • Federico Errica, Marco Giulini, Davide Bacciu, Roberto Menichetti, Alessio Micheli, Raffaello Potestio

The method relies on deep graph networks, which provide extreme flexibility in the input format.

1 code implementation • 29 Jun 2020 • Antonio Carta, Alessandro Sperduti, Davide Bacciu

The effectiveness of recurrent neural networks can be largely influenced by their ability to store into their dynamical memory information extracted from input sequences at different frequencies and timescales.

1 code implementation • 18 Jun 2020 • Daniele Castellana, Davide Bacciu

The paper introduces two new aggregation functions to encode structural knowledge from tree-structured data.

1 code implementation • 17 Jun 2020 • Daniele Castellana, Davide Bacciu

This approximation allows limiting the parameters space size, decoupling it from its strict relation with the size of the hidden encoding space.

1 code implementation • 8 Apr 2020 • Andrea Cossu, Antonio Carta, Davide Bacciu

The ability to learn in dynamic, nonstationary environments without forgetting previous knowledge, also known as Continual Learning (CL), is a key enabler for scalable and trustworthy deployments of adaptive solutions.

1 code implementation • 28 Feb 2020 • Marco Podda, Davide Bacciu, Alessio Micheli

Molecule generation is a challenging open problem in cheminformatics.

no code implementations • 26 Feb 2020 • Davide Bacciu, Danilo P. Mandic

The paper surveys the topic of tensor decompositions in modern machine learning applications.

1 code implementation • 31 Jan 2020 • Davide Bacciu, Alessio Micheli, Marco Podda

Graph generation with Machine Learning is an open problem with applications in various research fields.

no code implementations • 31 Jan 2020 • Antonio Carta, Alessandro Sperduti, Davide Bacciu

The experimental results on synthetic and real-world datasets show that specializing the training algorithm to train the memorization component always improves the final performance whenever the memorization of long sequences is necessary to solve the problem.

no code implementations • 24 Jan 2020 • Federico Errica, Davide Bacciu, Alessio Micheli

We propose a new Graph Neural Network that combines recent advancements in the field.

2 code implementations • 15 Jan 2020 • Andrea Valenti, Antonio Carta, Davide Bacciu

Throughout the paper, we show how Gaussian mixtures that take into account music metadata information can be used as an effective prior for the autoencoder latent space, introducing the first Music Adversarial Autoencoder (MusAE).

2 code implementations • 29 Dec 2019 • Davide Bacciu, Federico Errica, Alessio Micheli, Marco Podda

The adaptive processing of graph data is a long-standing research topic which has been lately consolidated as a theme of major interest in the deep learning community.

4 code implementations • ICLR 2020 • Federico Errica, Marco Podda, Davide Bacciu, Alessio Micheli

We believe that this work can contribute to the development of the graph learning field, by providing a much needed grounding for rigorous evaluations of graph classification models.

Ranked #1 on Graph Classification on REDDIT-MULTI-5k

no code implementations • 25 Sep 2019 • Antonio Carta, Alessandro Sperduti, Davide Bacciu

We propose an initialization schema that sets the weights of a recurrent architecture to approximate a linear autoencoder of the input sequences, which can be found with a closed-form solution.

no code implementations • 7 Sep 2019 • Davide Bacciu, Luigi Di Sotto

The paper discusses a pooling mechanism to induce subsampling in graph structured data and introduces it as a component of a graph convolutional neural network.

Ranked #33 on Graph Classification on COLLAB

no code implementations • 31 May 2019 • Daniele Castellana, Davide Bacciu

Bottom-Up Hidden Tree Markov Model is a highly expressive model for tree-structured data.

no code implementations • 21 May 2019 • Elisa Ferrari, Alessandra Retico, Davide Bacciu

Over the years, there has been growing interest in using Machine Learning techniques for biomedical data processing.

1 code implementation • 30 Apr 2019 • Francesco Crecchi, Davide Bacciu, Battista Biggio

Deep neural networks are vulnerable to adversarial examples, i.e., carefully perturbed inputs aimed at misleading classification.

no code implementations • 5 Feb 2019 • Davide Bacciu, Antonio Bruno

The paper surveys recent extensions of the Long-Short Term Memory networks to handle tree structures from the perspective of learning non-trivial forms of isomorph structured transductions.

no code implementations • 8 Nov 2018 • Davide Bacciu, Antonio Carta, Alessandro Sperduti

By building on such conceptualization, we introduce the Linear Memory Network, a recurrent model comprising a feedforward neural network, realizing the non-linear functional transformation, and a linear autoencoder for sequences, implementing the memory component.

no code implementations • 24 Sep 2018 • Davide Bacciu, Antonio Bruno

Extractive compression is a challenging natural language processing problem.

no code implementations • 31 May 2018 • Davide Bacciu, Daniele Castellana

Hidden tree Markov models allow learning distributions for tree structured data while being interpretable as nondeterministic automata.

1 code implementation • ICML 2018 • Davide Bacciu, Federico Errica, Alessio Micheli

We introduce the Contextual Graph Markov Model, an approach combining ideas from generative models and neural networks for the processing of graph data.

no code implementations • 23 May 2018 • Davide Bacciu, Andrea Bongiorno

The paper introduces the concentric Echo State Network, an approach to designing reservoir topologies that tries to bridge the gap between deterministically constructed simple cycle models and deep reservoir computing approaches.

no code implementations • 27 Feb 2018 • Davide Bacciu, Paulo J. G. Lisboa, José D. Martín, Ruxandra Stoean, Alfredo Vellido

Many of the current scientific advances in the life sciences have their origin in the intensive use of data for knowledge discovery.

no code implementations • 21 Nov 2017 • Davide Bacciu

The paper introduces the Hidden Tree Markov Network (HTN), a neuro-probabilistic hybrid fusing the representation power of generative models for trees with the incremental and discriminative learning capabilities of neural networks.

1 code implementation • 7 May 2017 • Davide Bacciu, Francesco Crecchi, Davide Morelli

The paper presents a novel, principled approach to train recurrent neural networks from the Reservoir Computing family that are robust to missing part of the input features at prediction time.

no code implementations • 17 Aug 2015 • Davide Bacciu, Stefania Gnesi, Laura Semini

Bike-sharing systems are a means of smart transportation in urban environments with the benefit of a positive impact on urban mobility.
