Search Results for author: Davide Bacciu

Found 75 papers, 32 papers with code

Class-Incremental Learning with Repetition

no code implementations26 Jan 2023 Hamed Hemati, Andrea Cossu, Antonio Carta, Julio Hurtado, Lorenzo Pellegrini, Davide Bacciu, Vincenzo Lomonaco, Damian Borth

We focus on the family of Class-Incremental with Repetition (CIR) scenarios, where repetition is embedded in the definition of the stream.

Class-Incremental Learning Incremental Learning

ECGAN: Self-supervised generative adversarial network for electrocardiography

no code implementations23 Jan 2023 Lorenzo Simone, Davide Bacciu

High-quality synthetic data can support the development of effective predictive models for biomedical tasks, especially in rare diseases or when subject to compelling privacy constraints.

Time Series

Causal Abstraction with Soft Interventions

no code implementations22 Nov 2022 Riccardo Massidda, Atticus Geiger, Thomas Icard, Davide Bacciu

Causal abstraction provides a theory describing how several causal models can represent the same system at different levels of detail.

Anti-Symmetric DGN: a stable architecture for Deep Graph Networks

no code implementations18 Oct 2022 Alessio Gravina, Davide Bacciu, Claudio Gallicchio

Deep Graph Networks (DGNs) currently dominate the research landscape of learning from graphs, due to their efficiency and ability to implement an adaptive message-passing scheme between the nodes.

ChemAlgebra: Algebraic Reasoning on Chemical Reactions

no code implementations5 Oct 2022 Andrea Valenti, Davide Bacciu, Antonio Vergari

Measuring the robustness of reasoning in machine learning models is challenging as one needs to provide a task that cannot be easily shortcut by exploiting spurious statistical correlations in the data, while operating on complex objects and constraints.

Modular Representations for Weak Disentanglement

no code implementations12 Sep 2022 Andrea Valenti, Davide Bacciu

However, at the moment, weak disentanglement can only be achieved by increasing the amount of supervision as the number of factors of variation in the data increases.

Disentanglement

Generalizing Downsampling from Regular Data to Graphs

no code implementations6 Aug 2022 Davide Bacciu, Alessio Conte, Francesco Landolfi

Downsampling produces coarsened, multi-resolution representations of data and it is used, for example, to produce lossy compression and visualization of large images, reduce computational costs, and boost deep neural representation learning.

Graph Classification Representation Learning

Populating Memory in Continual Learning with Consistency Aware Sampling

no code implementations4 Jul 2022 Julio Hurtado, Alain Raymond-Saez, Vladimir Araujo, Vincenzo Lomonaco, Alvaro Soto, Davide Bacciu

Based on these insights, we propose CAWS (Consistency AWare Sampling), an original storage policy that leverages a learning consistency score (C-Score) to populate the memory with elements that are easy to learn and representative of previous tasks.

Continual Learning
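The CAWS storage policy described above can be illustrated with a minimal sketch. The function name and toy C-Scores below are hypothetical; the idea is simply to fill the memory with the examples whose learning-consistency score is highest.

```python
import numpy as np

def populate_memory(examples, c_scores, capacity):
    """CAWS-style storage policy (sketch): keep the `capacity` examples
    with the highest learning-consistency score (C-Score), i.e. those
    that are easy to learn and representative of previous tasks."""
    order = np.argsort(c_scores)[::-1]   # highest C-Score first
    keep = order[:capacity]
    return [examples[i] for i in keep]

# Toy usage: four examples with made-up C-Scores, memory of size 2.
mem = populate_memory(["a", "b", "c", "d"],
                      np.array([0.2, 0.9, 0.5, 0.7]), capacity=2)
```

Real implementations would also balance the selection across tasks; this sketch only shows the score-based ranking.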

Studying the impact of magnitude pruning on contrastive learning methods

1 code implementation1 Jul 2022 Francesco Corti, Rahim Entezari, Sara Hooker, Davide Bacciu, Olga Saukh

We study the impact of different pruning techniques on the representation learned by deep neural networks trained with contrastive loss functions.

Contrastive Learning Network Pruning

Continual Learning for Human State Monitoring

1 code implementation29 Jun 2022 Federico Matteoni, Andrea Cossu, Claudio Gallicchio, Vincenzo Lomonaco, Davide Bacciu

Continual Learning (CL) on time series data represents a promising but under-studied avenue for real-world applications.

Continual Learning Time Series

Sample Condensation in Online Continual Learning

1 code implementation23 Jun 2022 Mattia Sangermano, Antonio Carta, Andrea Cossu, Davide Bacciu

A popular solution in these scenarios is to use a small memory to retain old data and rehearse it over time.

Continual Learning
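A common baseline for the small rehearsal memory mentioned above is a reservoir-sampled replay buffer, sketched below. This is a generic technique, not the sample-condensation method of the paper; every item in the stream gets an equal chance of surviving in the fixed-size buffer.

```python
import random

class ReplayBuffer:
    """Minimal rehearsal memory using reservoir sampling: each item seen
    so far has an equal probability of being in the buffer."""
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.data = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, item):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(item)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = item  # replace a random slot

# Stream 1000 items through a buffer of size 10.
buf = ReplayBuffer(capacity=10)
for x in range(1000):
    buf.add(x)
```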

Continual-Learning-as-a-Service (CLaaS): On-Demand Efficient Adaptation of Predictive Models

no code implementations14 Jun 2022 Rudy Semola, Vincenzo Lomonaco, Davide Bacciu

The two main future trends for companies that want to build machine learning-based applications and systems are real-time inference and continual updating.

BIG-bench Machine Learning Continual Learning +1

Federated Adaptation of Reservoirs via Intrinsic Plasticity

no code implementations25 May 2022 Valerio De Caro, Claudio Gallicchio, Davide Bacciu

We propose a novel algorithm for performing federated learning with Echo State Networks (ESNs) in a client-server scenario.

Federated Learning
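As a loose illustration of the client-server setup above, the sketch below federates the linear readout of Echo State Networks by weighted averaging. Note the paper federates the reservoir via intrinsic plasticity, not the readout; all function names and data here are hypothetical.

```python
import numpy as np

def local_readout(states, targets, reg=1e-2):
    """Ridge-regression readout of an ESN, trained on a client's data."""
    d = states.shape[1]
    return np.linalg.solve(states.T @ states + reg * np.eye(d),
                           states.T @ targets)

def federated_average(readouts, weights):
    """Server-side aggregation: weighted average of client readouts."""
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    return sum(w * r for w, r in zip(weights, readouts))

rng = np.random.default_rng(0)
# Three clients, each with 50 reservoir states (dim 8) and 1-D targets.
clients = [(rng.normal(size=(50, 8)), rng.normal(size=(50, 1)))
           for _ in range(3)]
readouts = [local_readout(s, t) for s, t in clients]
global_readout = federated_average(readouts, weights=[50, 50, 50])
```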

Leveraging Relational Information for Learning Weakly Disentangled Representations

1 code implementation20 May 2022 Andrea Valenti, Davide Bacciu

This might be due, in part, to a formalization of the disentanglement problem that focuses too heavily on separating relevant factors of variation of the data in single isolated dimensions of the neural representation.

Disentanglement Relational Reasoning

Continual Pre-Training Mitigates Forgetting in Language and Vision

1 code implementation19 May 2022 Andrea Cossu, Tinne Tuytelaars, Antonio Carta, Lucia Passaro, Vincenzo Lomonaco, Davide Bacciu

We formalize and investigate the characteristics of the continual pre-training scenario in both language and vision environments, where a model is continually pre-trained on a stream of incoming data and only later fine-tuned to different downstream tasks.

Continual Learning Continual Pretraining

Deep Features for CBIR with Scarce Data using Hebbian Learning

no code implementations18 May 2022 Gabriele Lagani, Davide Bacciu, Claudio Gallicchio, Fabrizio Falchi, Claudio Gennaro, Giuseppe Amato

Features extracted from Deep Neural Networks (DNNs) have proven to be very effective in the context of Content Based Image Retrieval (CBIR).

Content-Based Image Retrieval Retrieval +1

Learning heuristics for A*

no code implementations11 Apr 2022 Danilo Numeroso, Davide Bacciu, Petar Veličković

At training time, we exploit multi-task learning to jointly learn Dijkstra's algorithm and a consistent heuristic function for the A* search algorithm.

Multi-Task Learning
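The role of the learned heuristic can be seen in a plain A* implementation, sketched below. Here `h` is just a Python function; in the paper it would be the model's prediction, and with `h = 0` A* reduces to Dijkstra's algorithm. The toy graph is hypothetical.

```python
import heapq

def a_star(graph, start, goal, h):
    """A* search over a weighted digraph; `h` is any consistent heuristic."""
    frontier = [(h(start), 0, start, [start])]
    best = {start: 0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nxt, w in graph[node]:
            ng = g + w
            if ng < best.get(nxt, float("inf")):
                best[nxt] = ng
                heapq.heappush(frontier, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None

graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1)], "C": []}
cost, path = a_star(graph, "A", "C", h=lambda n: 0)  # h=0: plain Dijkstra
```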

Practical Recommendations for Replay-based Continual Learning Methods

no code implementations19 Mar 2022 Gabriele Merlin, Vincenzo Lomonaco, Andrea Cossu, Antonio Carta, Davide Bacciu

Continual Learning requires the model to learn from a stream of dynamic, non-stationary data without forgetting previous knowledge.

Continual Learning Data Augmentation

Avalanche RL: a Continual Reinforcement Learning Library

1 code implementation28 Feb 2022 Nicolò Lucchesi, Antonio Carta, Vincenzo Lomonaco, Davide Bacciu

Continual Reinforcement Learning (CRL) is a challenging setting where an agent learns to interact with an environment that is constantly changing over time (the stream of experiences).

Continual Learning OpenAI Gym +2

AI-as-a-Service Toolkit for Human-Centered Intelligence in Autonomous Driving

no code implementations3 Feb 2022 Valerio De Caro, Saira Bano, Achilles Machumilane, Alberto Gotta, Pietro Cassará, Antonio Carta, Rudy Semola, Christos Sardianos, Christos Chronis, Iraklis Varlamis, Konstantinos Tserpes, Vincenzo Lomonaco, Claudio Gallicchio, Davide Bacciu

This paper presents a proof-of-concept implementation of the AI-as-a-Service toolkit developed within the H2020 TEACHING project. The toolkit implements an autonomous driving personalization system driven by the output of an automatic driver stress-recognition algorithm, the two together realizing a Cyber-Physical System of Systems.

Autonomous Driving reinforcement-learning +1

Ex-Model: Continual Learning from a Stream of Trained Models

1 code implementation13 Dec 2021 Antonio Carta, Andrea Cossu, Vincenzo Lomonaco, Davide Bacciu

Learning continually from non-stationary data streams is a challenging research topic that has grown in popularity over the last few years.

Continual Learning

Is Class-Incremental Enough for Continual Learning?

no code implementations6 Dec 2021 Andrea Cossu, Gabriele Graffieti, Lorenzo Pellegrini, Davide Maltoni, Davide Bacciu, Antonio Carta, Vincenzo Lomonaco

The ability of a model to learn continually can be empirically assessed in different continual learning scenarios.

Continual Learning

Predictive Auto-scaling with OpenStack Monasca

1 code implementation3 Nov 2021 Giacomo Lanciano, Filippo Galli, Tommaso Cucinotta, Davide Bacciu, Andrea Passarella

Cloud auto-scaling mechanisms are typically based on reactive automation rules that scale a cluster whenever some metric, e.g., the average CPU usage among instances, exceeds a predefined threshold.

Time Series Forecasting
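The reactive rule that the paper contrasts with predictive auto-scaling looks roughly like the sketch below. The thresholds and function name are hypothetical, for illustration only.

```python
def reactive_scale(current_instances, avg_cpu, high=0.8, low=0.2,
                   min_n=1, max_n=10):
    """Reactive threshold rule: scale out when average CPU exceeds `high`,
    scale in when it drops below `low`, otherwise do nothing. A predictive
    policy would instead forecast the load and scale ahead of time."""
    if avg_cpu > high:
        return min(current_instances + 1, max_n)
    if avg_cpu < low:
        return max(current_instances - 1, min_n)
    return current_instances

n_up = reactive_scale(3, 0.9)    # overload: add an instance
n_down = reactive_scale(3, 0.1)  # underload: remove one
n_hold = reactive_scale(3, 0.5)  # in-range: keep the cluster as-is
```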

Inductive learning for product assortment graph completion

no code implementations4 Oct 2021 Haris Dukic, Georgios Deligiorgis, Pierpaolo Sepe, Davide Bacciu, Marco Trincavelli

Global retailers have assortments that contain hundreds of thousands of products that can be linked by several types of relationships like style compatibility, "bought together", "watched together", etc.

The Infinite Contextual Graph Markov Model

no code implementations29 Sep 2021 Daniele Castellana, Federico Errica, Davide Bacciu, Alessio Micheli

The Contextual Graph Markov Model is a deep, unsupervised, and probabilistic model for graphs that is trained incrementally on a layer-by-layer basis.

Graph Classification Model Selection

Ontology-Driven Semantic Alignment of Artificial Neurons and Visual Concepts

no code implementations29 Sep 2021 Riccardo Massidda, Davide Bacciu

Semantic alignment methods attempt to establish a link between human-level concepts and the units of an artificial neural network.

Image Classification

A partial theory of Wide Neural Networks using WC functions and its practical implications

no code implementations29 Sep 2021 Dario Balboni, Davide Bacciu

We present a framework based on the theory of Polyak-Łojasiewicz functions to explain the properties of convergence and generalization of overparameterized feed-forward neural networks.

GraphGen-Redux: a Fast and Lightweight Recurrent Model for labeled Graph Generation

1 code implementation18 Jul 2021 Marco Podda, Davide Bacciu

Several approaches have been proposed in the literature, most of which require transforming the graphs into sequences that encode their structure and labels, and learning the distribution of such sequences through an auto-regressive generative model.

Graph Generation

Calliope -- A Polyphonic Music Transformer

no code implementations8 Jul 2021 Andrea Valenti, Stefano Berti, Davide Bacciu

The polyphonic nature of music makes the application of deep learning to music modelling a challenging task.

Continual Learning with Echo State Networks

1 code implementation17 May 2021 Andrea Cossu, Davide Bacciu, Antonio Carta, Claudio Gallicchio, Vincenzo Lomonaco

Continual Learning (CL) refers to a learning setup where data is non-stationary and the model has to learn without forgetting existing knowledge.

Continual Learning

A causal learning framework for the analysis and interpretation of COVID-19 clinical data

no code implementations14 May 2021 Elisa Ferrari, Luna Gargani, Greta Barbieri, Lorenzo Ghiadoni, Francesco Faita, Davide Bacciu

We present a workflow for clinical data analysis that relies on Bayesian Structure Learning (BSL), an unsupervised learning approach, robust to noise and biases, that allows prior medical knowledge to be incorporated into the learning process and provides explainable results in the form of a graph showing the causal connections among the analyzed features.

Addressing Fairness, Bias and Class Imbalance in Machine Learning: the FBI-loss

1 code implementation13 May 2021 Elisa Ferrari, Davide Bacciu

Resilience to class imbalance and confounding biases, together with the assurance of fairness guarantees, are highly desirable properties of autonomous decision-making systems with real-life impact.

BIG-bench Machine Learning Decision Making +1

MEG: Generating Molecular Counterfactual Explanations for Deep Graph Networks

1 code implementation16 Apr 2021 Danilo Numeroso, Davide Bacciu

Explainable AI (XAI) is a research area whose objective is to increase trustworthiness and to enlighten the hidden mechanism of opaque machine learning techniques.

Distilled Replay: Overcoming Forgetting through Synthetic Samples

2 code implementations29 Mar 2021 Andrea Rosasco, Antonio Carta, Andrea Cossu, Vincenzo Lomonaco, Davide Bacciu

Replay strategies are Continual Learning techniques which mitigate catastrophic forgetting by keeping a buffer of patterns from previous experiences, which are interleaved with new data during training.

Continual Learning

Continual Learning for Recurrent Neural Networks: an Empirical Evaluation

no code implementations12 Mar 2021 Andrea Cossu, Antonio Carta, Vincenzo Lomonaco, Davide Bacciu

We propose two new benchmarks for CL with sequential data based on existing datasets, whose characteristics resemble real-world applications.

Continual Learning

Graph Mixture Density Networks

1 code implementation5 Dec 2020 Federico Errica, Davide Bacciu, Alessio Micheli

We introduce the Graph Mixture Density Networks, a new family of machine learning models that can fit multimodal output distributions conditioned on graphs of arbitrary topology.

Density Estimation Graph Representation Learning
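The multimodal output distribution mentioned above is the mixture-density idea; the sketch below shows the 1-D Gaussian-mixture negative log-likelihood such a model minimizes. In a Graph Mixture Density Network the parameters `pi`, `mu`, `sigma` would be produced by a graph encoder; here they are fixed toy values.

```python
import numpy as np

def mixture_nll(y, pi, mu, sigma):
    """Negative log-likelihood of a scalar y under a Gaussian mixture
    with mixing weights pi, means mu, and standard deviations sigma."""
    comp = pi * np.exp(-0.5 * ((y - mu) / sigma) ** 2) \
              / (sigma * np.sqrt(2 * np.pi))
    return -np.log(comp.sum())

# A bimodal target: two equally weighted components at -1 and +1.
nll = mixture_nll(0.0,
                  pi=np.array([0.5, 0.5]),
                  mu=np.array([-1.0, 1.0]),
                  sigma=np.array([1.0, 1.0]))
```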

Explaining Deep Graph Networks with Molecular Counterfactuals

1 code implementation9 Nov 2020 Danilo Numeroso, Davide Bacciu

We present a novel approach to tackle explainability of deep graph networks in the context of molecule property prediction tasks, named MEG (Molecular Explanation Generator).

Learning from Non-Binary Constituency Trees via Tensor Decomposition

1 code implementation COLING 2020 Daniele Castellana, Davide Bacciu

Finally, we introduce a Tree-LSTM model which takes advantage of this composition function and we experimentally assess its performance on different NLP tasks.

Tensor Decomposition

Generative Tomography Reconstruction

no code implementations26 Oct 2020 Matteo Ronchetti, Davide Bacciu

We propose an end-to-end differentiable architecture for tomography reconstruction that directly maps a noisy sinogram into a denoised reconstruction.

FADER: Fast Adversarial Example Rejection

no code implementations18 Oct 2020 Francesco Crecchi, Marco Melis, Angelo Sotgiu, Davide Bacciu, Battista Biggio

As a second main contribution of this work, we introduce FADER, a novel technique for speeding up detection-based methods.

Adversarial Robustness

K-plex Cover Pooling for Graph Neural Networks

no code implementations NeurIPS Workshop LMCA 2020 Davide Bacciu, Alessio Conte, Roberto Grossi, Francesco Landolfi, Andrea Marino

We introduce a novel non-parametric pooling technique, borrowing from classical results in graph theory, that generalizes well to graphs of different nature and connectivity patterns.

Graph Classification

Perplexity-free Parametric t-SNE

1 code implementation3 Oct 2020 Francesco Crecchi, Cyril de Bodt, Michel Verleysen, John A. Lee, Davide Bacciu

The t-distributed Stochastic Neighbor Embedding (t-SNE) algorithm is a ubiquitously employed dimensionality reduction (DR) method.

Dimensionality Reduction

ROS-Neuro Integration of Deep Convolutional Autoencoders for EEG Signal Compression in Real-time BCIs

no code implementations31 Aug 2020 Andrea Valenti, Michele Barsotti, Raffaello Brondi, Davide Bacciu, Luca Ascari

Typical EEG-based BCI applications require the computation of complex functions over the noisy EEG channels to be carried out in an efficient way.

EEG

Incremental Training of a Recurrent Neural Network Exploiting a Multi-Scale Dynamic Memory

1 code implementation29 Jun 2020 Antonio Carta, Alessandro Sperduti, Davide Bacciu

The effectiveness of recurrent neural networks can be largely influenced by their ability to store, in their dynamical memory, information extracted from input sequences at different frequencies and timescales.

speech-recognition Speech Recognition

Tensor Decompositions in Recursive Neural Networks for Tree-Structured Data

1 code implementation18 Jun 2020 Daniele Castellana, Davide Bacciu

The paper introduces two new aggregation functions to encode structural knowledge from tree-structured data.

General Classification

Generalising Recursive Neural Models by Tensor Decomposition

1 code implementation17 Jun 2020 Daniele Castellana, Davide Bacciu

This approximation allows limiting the parameters space size, decoupling it from its strict relation with the size of the hidden encoding space.

Tensor Decomposition

Continual Learning with Gated Incremental Memories for sequential data processing

1 code implementation8 Apr 2020 Andrea Cossu, Antonio Carta, Davide Bacciu

The ability to learn in dynamic, nonstationary environments without forgetting previous knowledge, also known as Continual Learning (CL), is a key enabler for scalable and trustworthy deployments of adaptive solutions.

Continual Learning Reinforcement Learning

Tensor Decompositions in Deep Learning

no code implementations26 Feb 2020 Davide Bacciu, Danilo P. Mandic

The paper surveys the topic of tensor decompositions in modern machine learning applications.

BIG-bench Machine Learning

Edge-based sequential graph generation with recurrent neural networks

1 code implementation31 Jan 2020 Davide Bacciu, Alessio Micheli, Marco Podda

Graph generation with Machine Learning is an open problem with applications in various research fields.

Graph Generation

Encoding-based Memory Modules for Recurrent Neural Networks

no code implementations31 Jan 2020 Antonio Carta, Alessandro Sperduti, Davide Bacciu

The experimental results on synthetic and real-world datasets show that specializing the training algorithm to train the memorization component always improves the final performance whenever the memorization of long sequences is necessary to solve the problem.

Memorization

Theoretically Expressive and Edge-aware Graph Learning

no code implementations24 Jan 2020 Federico Errica, Davide Bacciu, Alessio Micheli

We propose a new Graph Neural Network that combines recent advancements in the field.

Graph Learning

Learning Style-Aware Symbolic Music Representations by Adversarial Autoencoders

2 code implementations15 Jan 2020 Andrea Valenti, Antonio Carta, Davide Bacciu

Through the paper, we show how Gaussian mixtures taking into account music metadata information can be used as an effective prior for the autoencoder latent space, introducing the first Music Adversarial Autoencoder (MusAE).

Music Modeling

A Gentle Introduction to Deep Learning for Graphs

2 code implementations29 Dec 2019 Davide Bacciu, Federico Errica, Alessio Micheli, Marco Podda

The adaptive processing of graph data is a long-standing research topic which has been lately consolidated as a theme of major interest in the deep learning community.

Graph Representation Learning

A Fair Comparison of Graph Neural Networks for Graph Classification

3 code implementations ICLR 2020 Federico Errica, Marco Podda, Davide Bacciu, Alessio Micheli

We believe that this work can contribute to the development of the graph learning field, by providing a much needed grounding for rigorous evaluations of graph classification models.

General Classification Graph Classification +2

Autoencoder-based Initialization for Recurrent Neural Networks with a Linear Memory

no code implementations25 Sep 2019 Antonio Carta, Alessandro Sperduti, Davide Bacciu

We propose an initialization schema that sets the weights of a recurrent architecture to approximate a linear autoencoder of the input sequences, which can be found with a closed-form solution.

Memorization Permuted-MNIST
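The closed-form linear autoencoder at the heart of the scheme above can be sketched for the static case via truncated SVD; the paper derives the sequence version used to set the recurrent weights. The rank-5 toy data and function name are hypothetical.

```python
import numpy as np

def linear_autoencoder(X, k):
    """Closed-form rank-k linear autoencoder: the optimal linear
    encoder/decoder pair for reconstructing X, obtained from the
    truncated SVD (static sketch of the idea used for initialization)."""
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    enc = Vt[:k].T   # project inputs onto the top-k right singular vectors
    dec = Vt[:k]     # map the code back to input space
    return enc, dec

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5)) @ rng.normal(size=(5, 20))  # rank-5 data
enc, dec = linear_autoencoder(X, k=5)
recon_err = np.linalg.norm(X - X @ enc @ dec)  # exact for rank-5 data
```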

A Non-Negative Factorization approach to node pooling in Graph Convolutional Neural Networks

no code implementations7 Sep 2019 Davide Bacciu, Luigi Di Sotto

The paper discusses a pooling mechanism to induce subsampling in graph structured data and introduces it as a component of a graph convolutional neural network.

Graph Classification

Bayesian Tensor Factorisation for Bottom-up Hidden Tree Markov Models

no code implementations31 May 2019 Daniele Castellana, Davide Bacciu

Bottom-Up Hidden Tree Markov Model is a highly expressive model for tree-structured data.

Detecting Adversarial Examples through Nonlinear Dimensionality Reduction

1 code implementation30 Apr 2019 Francesco Crecchi, Davide Bacciu, Battista Biggio

Deep neural networks are vulnerable to adversarial examples, i.e., carefully perturbed inputs aimed at misleading classification.

Density Estimation Dimensionality Reduction +1

Deep Tree Transductions - A Short Survey

no code implementations5 Feb 2019 Davide Bacciu, Antonio Bruno

The paper surveys recent extensions of Long Short-Term Memory networks to handle tree structures, from the perspective of learning non-trivial forms of isomorph structured transductions.

Linear Memory Networks

no code implementations8 Nov 2018 Davide Bacciu, Antonio Carta, Alessandro Sperduti

By building on such conceptualization, we introduce the Linear Memory Network, a recurrent model comprising a feedforward neural network, realizing the non-linear functional transformation, and a linear autoencoder for sequences, implementing the memory component.
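The split into a non-linear functional component and a linear memory component described above can be sketched as a single recurrent step. Weight shapes and names below are hypothetical; the point is that only `h` passes through a nonlinearity, while the memory update is purely linear.

```python
import numpy as np

def lmn_step(x, m, Wxh, Wmh, Whm, Wmm):
    """One step of a Linear Memory Network (sketch): a feedforward
    nonlinearity computes the functional state h from the input and the
    previous memory; the memory m is updated by a linear recurrence."""
    h = np.tanh(Wxh @ x + Wmh @ m)   # non-linear functional transformation
    m = Whm @ h + Wmm @ m            # linear memory component
    return h, m

rng = np.random.default_rng(0)
dx, dh, dm = 3, 4, 5
Wxh, Wmh = rng.normal(size=(dh, dx)), rng.normal(size=(dh, dm))
Whm, Wmm = rng.normal(size=(dm, dh)), 0.1 * rng.normal(size=(dm, dm))
m = np.zeros(dm)
for t in range(10):                  # run over a toy input sequence
    h, m = lmn_step(rng.normal(size=dx), m, Wxh, Wmh, Whm, Wmm)
```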

Text Summarization as Tree Transduction by Top-Down TreeLSTM

no code implementations24 Sep 2018 Davide Bacciu, Antonio Bruno

Extractive compression is a challenging natural language processing problem.

Sentence Compression

Learning Tree Distributions by Hidden Markov Models

no code implementations31 May 2018 Davide Bacciu, Daniele Castellana

Hidden tree Markov models allow learning distributions for tree structured data while being interpretable as nondeterministic automata.

Contextual Graph Markov Model: A Deep and Generative Approach to Graph Processing

1 code implementation ICML 2018 Davide Bacciu, Federico Errica, Alessio Micheli

We introduce the Contextual Graph Markov Model, an approach combining ideas from generative models and neural networks for the processing of graph data.

General Classification

Concentric ESN: Assessing the Effect of Modularity in Cycle Reservoirs

no code implementations23 May 2018 Davide Bacciu, Andrea Bongiorno

The paper introduces concentric Echo State Network, an approach to design reservoir topologies that tries to bridge the gap between deterministically constructed simple cycle models and deep reservoir computing approaches.

Bioinformatics and Medicine in the Era of Deep Learning

no code implementations27 Feb 2018 Davide Bacciu, Paulo J. G. Lisboa, José D. Martín, Ruxandra Stoean, Alfredo Vellido

Many of the current scientific advances in the life sciences have their origin in the intensive use of data for knowledge discovery.

Hidden Tree Markov Networks: Deep and Wide Learning for Structured Data

no code implementations21 Nov 2017 Davide Bacciu

The paper introduces the Hidden Tree Markov Network (HTN), a neuro-probabilistic hybrid fusing the representation power of generative models for trees with the incremental and discriminative learning capabilities of neural networks.

DropIn: Making Reservoir Computing Neural Networks Robust to Missing Inputs by Dropout

1 code implementation7 May 2017 Davide Bacciu, Francesco Crecchi, Davide Morelli

The paper presents a novel, principled approach to train recurrent neural networks from the Reservoir Computing family that are robust to missing part of the input features at prediction time.
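The core training trick, applying dropout to the input features so the network never learns to depend on any single sensor, can be sketched as a masking step. Names and the missing-rate below are illustrative, not the paper's exact settings.

```python
import numpy as np

def dropin_mask(x, p_missing, rng):
    """DropIn-style training step (sketch): randomly zero out input
    features with probability `p_missing`, emulating sensors that may
    be absent at prediction time."""
    mask = rng.random(x.shape) >= p_missing
    return x * mask

rng = np.random.default_rng(0)
x = np.ones((1000, 8))                       # a toy batch of inputs
x_drop = dropin_mask(x, p_missing=0.25, rng=rng)
kept_fraction = x_drop.mean()                # ~0.75 of features survive
```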

Using a Machine Learning Approach to Implement and Evaluate Product Line Features

no code implementations17 Aug 2015 Davide Bacciu, Stefania Gnesi, Laura Semini

Bike-sharing systems are a means of smart transportation in urban environments with the benefit of a positive impact on urban mobility.

BIG-bench Machine Learning
