Search Results for author: Davide Bacciu

Found 94 papers, 46 papers with code

Tackling Graph Oversquashing by Global and Local Non-Dissipativity

no code implementations • 2 May 2024 • Alessio Gravina, Moshe Eliasof, Claudio Gallicchio, Davide Bacciu, Carola-Bibiane Schönlieb

A common problem in Message-Passing Neural Networks is oversquashing -- the limited ability to facilitate effective information flow between distant nodes.

Temporal Graph ODEs for Irregularly-Sampled Time Series

1 code implementation • 30 Apr 2024 • Alessio Gravina, Daniele Zambon, Davide Bacciu, Cesare Alippi

Modern graph representation learning works mostly under the assumption of dealing with regularly sampled temporal graph snapshots, which is far from realistic; e.g., social networks and physical systems are characterized by continuous dynamics and sporadic observations.

Graph Representation Learning • Time Series
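The key difficulty the snippet points at is integrating dynamics between observations that arrive at uneven intervals. A minimal sketch of how irregular sampling can be handled, using plain Euler integration with step sizes equal to the observation gaps (an illustration, not the paper's TG-ODE model):

```python
def euler_irregular(x0, times, deriv):
    """Integrate dx/dt = deriv(x, t) using steps equal to the
    (possibly irregular) gaps between observation times."""
    x, trajectory = x0, [x0]
    for t_prev, t_next in zip(times, times[1:]):
        dt = t_next - t_prev          # irregular step size
        x = x + dt * deriv(x, t_prev)
        trajectory.append(x)
    return trajectory

# Exponential decay dx/dt = -x observed at sporadic times.
traj = euler_irregular(1.0, [0.0, 0.1, 0.5, 2.0], lambda x, t: -x)
```

The same idea extends to node-level states on a graph, with `deriv` replaced by a message-passing update.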

MultiSTOP: Solving Functional Equations with Reinforcement Learning

no code implementations • 23 Apr 2024 • Alessandro Trenta, Davide Bacciu, Andrea Cossu, Pietro Ferrero

We develop MultiSTOP, a Reinforcement Learning framework for solving functional equations in physics.


Calibration of Continual Learning Models

2 code implementations • 11 Apr 2024 • Lanpei Li, Elia Piccoli, Andrea Cossu, Davide Bacciu, Vincenzo Lomonaco

Continual Learning (CL) focuses on maximizing the predictive performance of a model across a non-stationary stream of data.

Continual Learning

Self-generated Replay Memories for Continual Neural Machine Translation

1 code implementation • 19 Mar 2024 • Michele Resta, Davide Bacciu

In this work, we leverage a key property of encoder-decoder Transformers, i.e., their generative ability, to propose a novel approach to continually learning Neural Machine Translation systems.

Decoder • Machine Translation • +2

Multi-Relational Graph Neural Network for Out-of-Domain Link Prediction

no code implementations • 17 Mar 2024 • Asma Sattar, Georgios Deligiorgis, Marco Trincavelli, Davide Bacciu

Dynamic multi-relational graphs are an expressive relational representation for data enclosing entities and relations of different types, and where relationships are allowed to vary in time.

Domain Generalization • Link Prediction

Adaptive Hyperparameter Optimization for Continual Learning Scenarios

no code implementations • 9 Mar 2024 • Rudy Semola, Julio Hurtado, Vincenzo Lomonaco, Davide Bacciu

This paper aims to explore the role of hyperparameter selection in continual learning and the necessity of continually and automatically tuning them according to the complexity of the task at hand.

Continual Learning • Hyperparameter Optimization

Classifier-free graph diffusion for molecular property targeting

no code implementations • 28 Dec 2023 • Matteo Ninniri, Marco Podda, Davide Bacciu

This work focuses on the task of property targeting: that is, generating molecules conditioned on target chemical properties to expedite candidate screening for novel drug and materials development.

Neural Autoencoder-Based Structure-Preserving Model Order Reduction and Control Design for High-Dimensional Physical Systems

no code implementations • 11 Dec 2023 • Marco Lepri, Davide Bacciu, Cosimo Della Santina

This work concerns control-oriented and structure-preserving learning of low-dimensional approximations of high-dimensional physical systems, with a focus on mechanical systems.

Total Energy

Constraint-Free Structure Learning with Smooth Acyclic Orientations

no code implementations • 15 Sep 2023 • Riccardo Massidda, Francesco Landolfi, Martina Cinquini, Davide Bacciu

The structure learning problem consists of fitting data generated by a Directed Acyclic Graph (DAG) to correctly reconstruct its arcs.

Graph Reconstruction
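Whatever the learning procedure, a structure learner must ultimately output an acyclic orientation. A sketch of the acyclicity requirement itself, checked with Kahn's algorithm (a standard textbook check, not the paper's smooth-orientation method):

```python
from collections import deque

def is_acyclic(nodes, arcs):
    """Kahn's algorithm: a directed graph is acyclic iff all
    nodes can be removed in topological order."""
    indeg = {v: 0 for v in nodes}
    succ = {v: [] for v in nodes}
    for u, v in arcs:
        succ[u].append(v)
        indeg[v] += 1
    queue = deque(v for v in nodes if indeg[v] == 0)
    seen = 0
    while queue:
        u = queue.popleft()
        seen += 1
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return seen == len(nodes)

print(is_acyclic("abc", [("a", "b"), ("b", "c")]))  # True
print(is_acyclic("abc", [("a", "b"), ("b", "a")]))  # False
```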

Modeling Edge Features with Deep Bayesian Graph Networks

1 code implementation • 17 Aug 2023 • Daniele Atzeni, Federico Errica, Davide Bacciu, Alessio Micheli

We propose an extension of the Contextual Graph Markov Model, a deep and probabilistic machine learning model for graphs, to model the distribution of edge features.

Graph Classification • Graph Regression • +1

Graph-based Polyphonic Multitrack Music Generation

1 code implementation • 27 Jul 2023 • Emanuele Cosenza, Andrea Valenti, Davide Bacciu

Graphs can be leveraged to model polyphonic multitrack symbolic music, where notes, chords and entire sections may be linked at different levels of the musical hierarchy by tonal and rhythmic relationships.

Music Generation

Deep learning for dynamic graphs: models and benchmarks

1 code implementation • 12 Jul 2023 • Alessio Gravina, Davide Bacciu

Recent progress in research on Deep Graph Networks (DGNs) has led to a maturation of the domain of learning on graphs.

Model Selection • Representation Learning

Partial Hypernetworks for Continual Learning

1 code implementation • 19 Jun 2023 • Hamed Hemati, Vincenzo Lomonaco, Davide Bacciu, Damian Borth

Inspired by latent replay methods in CL, we propose partial weight generation for the final layers of a model using hypernetworks while freezing the initial layers.

Continual Learning

A Protocol for Continual Explanation of SHAP

1 code implementation • 12 Jun 2023 • Andrea Cossu, Francesco Spinnato, Riccardo Guidotti, Davide Bacciu

Continual Learning trains models on a stream of data, with the aim of learning new information without forgetting previous knowledge.

Continual Learning

ADLER -- An efficient Hessian-based strategy for adaptive learning rate

no code implementations • 25 May 2023 • Dario Balboni, Davide Bacciu

We also compare the novel approximation with the Gauss-Newton approximation.

Neural Algorithmic Reasoning for Combinatorial Optimisation

1 code implementation • 18 May 2023 • Dobrik Georgiev, Danilo Numeroso, Davide Bacciu, Pietro Liò

Solving NP-hard/complete combinatorial problems with neural networks is a challenging research area that aims to surpass classical approximate algorithms.

Projected Latent Distillation for Data-Agnostic Consolidation in Distributed Continual Learning

1 code implementation • 28 Mar 2023 • Antonio Carta, Andrea Cossu, Vincenzo Lomonaco, Davide Bacciu, Joost Van de Weijer

We formalize this problem as a Distributed Continual Learning scenario, where SCDs adapt to local tasks and a CL model consolidates the knowledge from the resulting stream of models without looking at the SCDs' private data.

Continual Learning • Knowledge Distillation

Dual Algorithmic Reasoning

no code implementations • 9 Feb 2023 • Danilo Numeroso, Davide Bacciu, Petar Veličković

We demonstrate that simultaneously learning the dual definition of these optimisation problems in algorithmic learning allows for better learning and qualitatively better solutions.

Class-Incremental Learning with Repetition

1 code implementation • 26 Jan 2023 • Hamed Hemati, Andrea Cossu, Antonio Carta, Julio Hurtado, Lorenzo Pellegrini, Davide Bacciu, Vincenzo Lomonaco, Damian Borth

We propose two stochastic stream generators that produce a wide range of CIR streams starting from a single dataset and a few interpretable control parameters.

Class Incremental Learning • Incremental Learning
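A hedged sketch of what such a stochastic stream generator might look like: each experience introduces one new class while previously seen classes re-appear with a single repetition-probability control parameter (names and mechanism are illustrative, not the paper's actual generators):

```python
import random

def cir_stream(classes, n_experiences, p_repeat, seed=0):
    """Yield per-experience class lists: each experience introduces
    one new class (while any remain) and re-draws previously seen
    classes with probability p_repeat."""
    rng = random.Random(seed)
    unseen, seen = list(classes), []
    stream = []
    for _ in range(n_experiences):
        exp = set()
        if unseen:
            exp.add(unseen.pop(0))       # first occurrence of a class
        exp.update(c for c in seen if rng.random() < p_repeat)
        seen.extend(exp - set(seen))
        stream.append(sorted(exp))
    return stream

stream = cir_stream(range(5), n_experiences=6, p_repeat=0.5)
```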

ECGAN: Self-supervised generative adversarial network for electrocardiography

no code implementations • 23 Jan 2023 • Lorenzo Simone, Davide Bacciu

High-quality synthetic data can support the development of effective predictive models for biomedical tasks, especially in rare diseases or when subject to compelling privacy constraints.

Audio Synthesis • Generative Adversarial Network • +2

Causal Abstraction with Soft Interventions

no code implementations • 22 Nov 2022 • Riccardo Massidda, Atticus Geiger, Thomas Icard, Davide Bacciu

Causal abstraction provides a theory describing how several causal models can represent the same system at different levels of detail.

Anti-Symmetric DGN: a stable architecture for Deep Graph Networks

1 code implementation • 18 Oct 2022 • Alessio Gravina, Davide Bacciu, Claudio Gallicchio

Deep Graph Networks (DGNs) currently dominate the research landscape of learning from graphs, due to their efficiency and ability to implement an adaptive message-passing scheme between the nodes.

ChemAlgebra: Algebraic Reasoning on Chemical Reactions

no code implementations • 5 Oct 2022 • Andrea Valenti, Davide Bacciu, Antonio Vergari

Measuring the robustness of reasoning in machine learning models is challenging as one needs to provide a task that cannot be easily shortcut by exploiting spurious statistical correlations in the data, while operating on complex objects and constraints.

Modular Representations for Weak Disentanglement

no code implementations • 12 Sep 2022 • Andrea Valenti, Davide Bacciu

However, at the moment, weak disentanglement can only be achieved by increasing the amount of supervision as the number of factors of variation of the data increases.


Generalizing Downsampling from Regular Data to Graphs

1 code implementation • 6 Aug 2022 • Davide Bacciu, Alessio Conte, Francesco Landolfi

Downsampling produces coarsened, multi-resolution representations of data and it is used, for example, to produce lossy compression and visualization of large images, reduce computational costs, and boost deep neural representation learning.

Graph Classification • Representation Learning

Memory Population in Continual Learning via Outlier Elimination

1 code implementation • 4 Jul 2022 • Julio Hurtado, Alain Raymond-Saez, Vladimir Araujo, Vincenzo Lomonaco, Alvaro Soto, Davide Bacciu

This paper introduces Memory Outlier Elimination (MOE), a method for identifying and eliminating outliers in the memory buffer by choosing samples from label-homogeneous subpopulations.

Continual Learning
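A sketch of the general idea of discarding outliers from label-homogeneous subpopulations, here as a simple distance-from-class-mean heuristic on scalar features (an illustration of the concept, not necessarily MOE's actual criterion):

```python
from collections import defaultdict

def populate_memory(samples, labels, per_class):
    """For each label, keep the per_class samples closest to the
    class mean, discarding the outliers."""
    groups = defaultdict(list)
    for x, y in zip(samples, labels):
        groups[y].append(x)
    memory = {}
    for y, xs in groups.items():
        mean = sum(xs) / len(xs)
        memory[y] = sorted(xs, key=lambda x: abs(x - mean))[:per_class]
    return memory

# 9.0 is far from the "a" cluster and is eliminated.
mem = populate_memory([1.0, 1.1, 9.0, 5.0, 5.2],
                      ["a", "a", "a", "b", "b"], per_class=2)
```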

Studying the impact of magnitude pruning on contrastive learning methods

1 code implementation • 1 Jul 2022 • Francesco Corti, Rahim Entezari, Sara Hooker, Davide Bacciu, Olga Saukh

We study the impact of different pruning techniques on the representation learned by deep neural networks trained with contrastive loss functions.

Contrastive Learning • Network Pruning

Continual Learning for Human State Monitoring

1 code implementation • 29 Jun 2022 • Federico Matteoni, Andrea Cossu, Claudio Gallicchio, Vincenzo Lomonaco, Davide Bacciu

Continual Learning (CL) on time series data represents a promising but under-studied avenue for real-world applications.

Continual Learning • Time Series • +1

Sample Condensation in Online Continual Learning

1 code implementation • 23 Jun 2022 • Mattia Sangermano, Antonio Carta, Andrea Cossu, Davide Bacciu

A popular solution in these scenarios is to use a small memory to retain old data and rehearse it over time.

Continual Learning

Continual-Learning-as-a-Service (CLaaS): On-Demand Efficient Adaptation of Predictive Models

no code implementations • 14 Jun 2022 • Rudy Semola, Vincenzo Lomonaco, Davide Bacciu

The two main future trends for companies that want to build machine learning-based applications and systems are real-time inference and continual updating.

Attribute • BIG-bench Machine Learning • +2

Federated Adaptation of Reservoirs via Intrinsic Plasticity

no code implementations • 25 May 2022 • Valerio De Caro, Claudio Gallicchio, Davide Bacciu

We propose a novel algorithm for performing federated learning with Echo State Networks (ESNs) in a client-server scenario.

Federated Learning

Leveraging Relational Information for Learning Weakly Disentangled Representations

1 code implementation • 20 May 2022 • Andrea Valenti, Davide Bacciu

This might be due, in part, to a formalization of the disentanglement problem that focuses too heavily on separating relevant factors of variation of the data in single isolated dimensions of the neural representation.

Disentanglement • Relational Reasoning

Continual Pre-Training Mitigates Forgetting in Language and Vision

1 code implementation • 19 May 2022 • Andrea Cossu, Tinne Tuytelaars, Antonio Carta, Lucia Passaro, Vincenzo Lomonaco, Davide Bacciu

We formalize and investigate the characteristics of the continual pre-training scenario in both language and vision environments, where a model is continually pre-trained on a stream of incoming data and only later fine-tuned to different downstream tasks.

Continual Learning • Continual Pretraining

Deep Features for CBIR with Scarce Data using Hebbian Learning

no code implementations • 18 May 2022 • Gabriele Lagani, Davide Bacciu, Claudio Gallicchio, Fabrizio Falchi, Claudio Gennaro, Giuseppe Amato

Features extracted from Deep Neural Networks (DNNs) have proven to be very effective in the context of Content Based Image Retrieval (CBIR).

Content-Based Image Retrieval • Retrieval • +1

Learning heuristics for A*

no code implementations • 11 Apr 2022 • Danilo Numeroso, Davide Bacciu, Petar Veličković

At training time, we exploit multi-task learning to jointly learn Dijkstra's algorithm and a consistent heuristic function for the A* search algorithm.

Multi-Task Learning
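For context, A* relies on a consistent heuristic to guarantee optimal solutions. A compact grid example using Manhattan distance, a classic consistent heuristic (an illustration, not the learned heuristic of the paper):

```python
import heapq

def a_star(start, goal, walls, width, height):
    """A* on a 4-connected grid. With a consistent heuristic
    (Manhattan distance), the first expansion of `goal` yields
    the optimal path length; returns None if unreachable."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start)]   # (f = g + h, g, node)
    best = {start: 0}
    while frontier:
        _, g, p = heapq.heappop(frontier)
        if p == goal:
            return g
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            q = (p[0] + dx, p[1] + dy)
            if 0 <= q[0] < width and 0 <= q[1] < height and q not in walls:
                if g + 1 < best.get(q, float("inf")):
                    best[q] = g + 1
                    heapq.heappush(frontier, (g + 1 + h(q), g + 1, q))
    return None

# 3x3 grid with a wall at (1, 1); shortest path still has length 4.
print(a_star((0, 0), (2, 2), {(1, 1)}, 3, 3))  # 4
```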

Practical Recommendations for Replay-based Continual Learning Methods

no code implementations • 19 Mar 2022 • Gabriele Merlin, Vincenzo Lomonaco, Andrea Cossu, Antonio Carta, Davide Bacciu

Continual Learning requires the model to learn from a stream of dynamic, non-stationary data without forgetting previous knowledge.

Continual Learning • Data Augmentation
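A common ingredient of replay-based methods is the buffer-population policy. A minimal sketch using reservoir sampling, a frequent default choice (an illustration, not a recommendation drawn from the paper):

```python
import random

class ReplayBuffer:
    """Fixed-size buffer filled by reservoir sampling, so every
    pattern seen so far is retained with equal probability."""
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.rng = random.Random(seed)
        self.data = []
        self.n_seen = 0

    def add(self, item):
        self.n_seen += 1
        if len(self.data) < self.capacity:
            self.data.append(item)
        else:
            # Replace a stored item with probability capacity / n_seen.
            j = self.rng.randrange(self.n_seen)
            if j < self.capacity:
                self.data[j] = item

    def sample(self, k):
        """Draw a rehearsal mini-batch from the buffer."""
        return self.rng.sample(self.data, min(k, len(self.data)))

buf = ReplayBuffer(capacity=10)
for x in range(1000):          # stream of 1000 patterns
    buf.add(x)
```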

Avalanche RL: a Continual Reinforcement Learning Library

1 code implementation • 28 Feb 2022 • Nicolò Lucchesi, Antonio Carta, Vincenzo Lomonaco, Davide Bacciu

Continual Reinforcement Learning (CRL) is a challenging setting where an agent learns to interact with an environment that is constantly changing over time (the stream of experiences).

Continual Learning • OpenAI Gym • +2

AI-as-a-Service Toolkit for Human-Centered Intelligence in Autonomous Driving

no code implementations • 3 Feb 2022 • Valerio De Caro, Saira Bano, Achilles Machumilane, Alberto Gotta, Pietro Cassará, Antonio Carta, Rudy Semola, Christos Sardianos, Christos Chronis, Iraklis Varlamis, Konstantinos Tserpes, Vincenzo Lomonaco, Claudio Gallicchio, Davide Bacciu

This paper presents a proof-of-concept implementation of the AI-as-a-Service toolkit developed within the H2020 TEACHING project. The toolkit implements an autonomous driving personalization system driven by the output of an automatic driver's stress recognition algorithm, the two together realizing a Cyber-Physical System of Systems.

Autonomous Driving • reinforcement-learning • +1

Ex-Model: Continual Learning from a Stream of Trained Models

1 code implementation • 13 Dec 2021 • Antonio Carta, Andrea Cossu, Vincenzo Lomonaco, Davide Bacciu

Learning continually from non-stationary data streams is a challenging research topic that has grown in popularity over the last few years.

Continual Learning

Predictive Auto-scaling with OpenStack Monasca

1 code implementation • 3 Nov 2021 • Giacomo Lanciano, Filippo Galli, Tommaso Cucinotta, Davide Bacciu, Andrea Passarella

Cloud auto-scaling mechanisms are typically based on reactive automation rules that scale a cluster whenever some metric, e.g., the average CPU usage among instances, exceeds a predefined threshold.

Time Series Forecasting
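For reference, such a reactive rule can be sketched in a few lines; the thresholds and limits below are illustrative, not OpenStack Monasca's actual configuration:

```python
def reactive_scale(n_instances, avg_cpu, upper=0.8, lower=0.3,
                   min_n=1, max_n=10):
    """Classic reactive auto-scaling rule: add an instance when the
    average CPU exceeds the upper threshold, remove one when it
    drops below the lower threshold, otherwise hold steady."""
    if avg_cpu > upper and n_instances < max_n:
        return n_instances + 1
    if avg_cpu < lower and n_instances > min_n:
        return n_instances - 1
    return n_instances

print(reactive_scale(3, 0.9))  # 4 (scale out)
print(reactive_scale(3, 0.1))  # 2 (scale in)
print(reactive_scale(3, 0.5))  # 3 (hold)
```

A predictive variant would feed a forecast of `avg_cpu` into the same rule instead of the current measurement.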

Inductive learning for product assortment graph completion

no code implementations • 4 Oct 2021 • Haris Dukic, Georgios Deligiorgis, Pierpaolo Sepe, Davide Bacciu, Marco Trincavelli

Global retailers have assortments that contain hundreds of thousands of products that can be linked by several types of relationships like style compatibility, "bought together", "watched together", etc.

A partial theory of Wide Neural Networks using WC functions and its practical implications

no code implementations • 29 Sep 2021 • Dario Balboni, Davide Bacciu

We present a framework based on the theory of Polyak-Łojasiewicz functions to explain the properties of convergence and generalization of overparameterized feed-forward neural networks.

Ontology-Driven Semantic Alignment of Artificial Neurons and Visual Concepts

no code implementations • 29 Sep 2021 • Riccardo Massidda, Davide Bacciu

Semantic alignment methods attempt to establish a link between human-level concepts and the units of an artificial neural network.

Image Classification

The Infinite Contextual Graph Markov Model

no code implementations • 29 Sep 2021 • Daniele Castellana, Federico Errica, Davide Bacciu, Alessio Micheli

The Contextual Graph Markov Model is a deep, unsupervised, and probabilistic model for graphs that is trained incrementally on a layer-by-layer basis.

Graph Classification • Model Selection

GraphGen-Redux: a Fast and Lightweight Recurrent Model for labeled Graph Generation

1 code implementation • 18 Jul 2021 • Marco Podda, Davide Bacciu

Several approaches have been proposed in the literature, most of which require transforming the graphs into sequences that encode their structure and labels, and learning the distribution of such sequences through an auto-regressive generative model.

Graph Generation

Calliope -- A Polyphonic Music Transformer

no code implementations • 8 Jul 2021 • Andrea Valenti, Stefano Berti, Davide Bacciu

The polyphonic nature of music makes the application of deep learning to music modelling a challenging task.

Continual Learning with Echo State Networks

1 code implementation • 17 May 2021 • Andrea Cossu, Davide Bacciu, Antonio Carta, Claudio Gallicchio, Vincenzo Lomonaco

Continual Learning (CL) refers to a learning setup where data is non-stationary and the model has to learn without forgetting existing knowledge.

Continual Learning

A causal learning framework for the analysis and interpretation of COVID-19 clinical data

no code implementations • 14 May 2021 • Elisa Ferrari, Luna Gargani, Greta Barbieri, Lorenzo Ghiadoni, Francesco Faita, Davide Bacciu

We present a workflow for clinical data analysis that relies on Bayesian Structure Learning (BSL), an unsupervised learning approach, robust to noise and biases, that allows prior medical knowledge to be incorporated into the learning process and provides explainable results in the form of a graph showing the causal connections among the analyzed features.

Addressing Fairness, Bias and Class Imbalance in Machine Learning: the FBI-loss

1 code implementation • 13 May 2021 • Elisa Ferrari, Davide Bacciu

Resilience to class imbalance and confounding biases, together with fairness guarantees, are highly desirable properties of autonomous decision-making systems with real-life impact.

BIG-bench Machine Learning • Decision Making • +1

MEG: Generating Molecular Counterfactual Explanations for Deep Graph Networks

1 code implementation • 16 Apr 2021 • Danilo Numeroso, Davide Bacciu

Explainable AI (XAI) is a research area whose objective is to increase trustworthiness and to enlighten the hidden mechanism of opaque machine learning techniques.

counterfactual • Explainable Artificial Intelligence (XAI) • +2

Distilled Replay: Overcoming Forgetting through Synthetic Samples

2 code implementations • 29 Mar 2021 • Andrea Rosasco, Antonio Carta, Andrea Cossu, Vincenzo Lomonaco, Davide Bacciu

Replay strategies are Continual Learning techniques which mitigate catastrophic forgetting by keeping a buffer of patterns from previous experiences, which are interleaved with new data during training.

Continual Learning

Continual Learning for Recurrent Neural Networks: an Empirical Evaluation

no code implementations • 12 Mar 2021 • Andrea Cossu, Antonio Carta, Vincenzo Lomonaco, Davide Bacciu

We propose two new benchmarks for CL with sequential data based on existing datasets, whose characteristics resemble real-world applications.

Continual Learning

Graph Mixture Density Networks

1 code implementation • 5 Dec 2020 • Federico Errica, Davide Bacciu, Alessio Micheli

We introduce the Graph Mixture Density Networks, a new family of machine learning models that can fit multimodal output distributions conditioned on graphs of arbitrary topology.

Density Estimation • Graph Representation Learning

Explaining Deep Graph Networks with Molecular Counterfactuals

1 code implementation • 9 Nov 2020 • Danilo Numeroso, Davide Bacciu

We present a novel approach to tackle explainability of deep graph networks in the context of molecule property prediction tasks, named MEG (Molecular Explanation Generator).

counterfactual • Property Prediction • +1

Learning from Non-Binary Constituency Trees via Tensor Decomposition

1 code implementation • COLING 2020 • Daniele Castellana, Davide Bacciu

Finally, we introduce a Tree-LSTM model which takes advantage of this composition function and we experimentally assess its performance on different NLP tasks.

Sentence • Tensor Decomposition

Generative Tomography Reconstruction

no code implementations • 26 Oct 2020 • Matteo Ronchetti, Davide Bacciu

We propose an end-to-end differentiable architecture for tomography reconstruction that directly maps a noisy sinogram into a denoised reconstruction.

FADER: Fast Adversarial Example Rejection

no code implementations • 18 Oct 2020 • Francesco Crecchi, Marco Melis, Angelo Sotgiu, Davide Bacciu, Battista Biggio

As a second main contribution of this work, we introduce FADER, a novel technique for speeding up detection-based methods.

Adversarial Robustness

K-plex Cover Pooling for Graph Neural Networks

no code implementations • NeurIPS Workshop LMCA 2020 • Davide Bacciu, Alessio Conte, Roberto Grossi, Francesco Landolfi, Andrea Marino

We introduce a novel pooling technique, borrowing from classical results in graph theory, that is non-parametric and generalizes well to graphs of different nature and connectivity patterns.

Graph Classification

Perplexity-free Parametric t-SNE

1 code implementation • 3 Oct 2020 • Francesco Crecchi, Cyril de Bodt, Michel Verleysen, John A. Lee, Davide Bacciu

The t-distributed Stochastic Neighbor Embedding (t-SNE) algorithm is a ubiquitously employed dimensionality reduction (DR) method.

Dimensionality Reduction

ROS-Neuro Integration of Deep Convolutional Autoencoders for EEG Signal Compression in Real-time BCIs

no code implementations • 31 Aug 2020 • Andrea Valenti, Michele Barsotti, Raffaello Brondi, Davide Bacciu, Luca Ascari

Typical EEG-based BCI applications require the computation of complex functions over the noisy EEG channels to be carried out in an efficient way.


Incremental Training of a Recurrent Neural Network Exploiting a Multi-Scale Dynamic Memory

1 code implementation • 29 Jun 2020 • Antonio Carta, Alessandro Sperduti, Davide Bacciu

The effectiveness of recurrent neural networks can be largely influenced by their ability to store into their dynamical memory information extracted from input sequences at different frequencies and timescales.

speech-recognition • Speech Recognition

Tensor Decompositions in Recursive Neural Networks for Tree-Structured Data

1 code implementation • 18 Jun 2020 • Daniele Castellana, Davide Bacciu

The paper introduces two new aggregation functions to encode structural knowledge from tree-structured data.

General Classification

Generalising Recursive Neural Models by Tensor Decomposition

1 code implementation • 17 Jun 2020 • Daniele Castellana, Davide Bacciu

This approximation allows limiting the parameters space size, decoupling it from its strict relation with the size of the hidden encoding space.

Tensor Decomposition

Continual Learning with Gated Incremental Memories for sequential data processing

1 code implementation • 8 Apr 2020 • Andrea Cossu, Antonio Carta, Davide Bacciu

The ability to learn in dynamic, nonstationary environments without forgetting previous knowledge, also known as Continual Learning (CL), is a key enabler for scalable and trustworthy deployments of adaptive solutions.

Continual Learning • Reinforcement Learning (RL)

Tensor Decompositions in Deep Learning

no code implementations • 26 Feb 2020 • Davide Bacciu, Danilo P. Mandic

The paper surveys the topic of tensor decompositions in modern machine learning applications.

BIG-bench Machine Learning

Edge-based sequential graph generation with recurrent neural networks

1 code implementation • 31 Jan 2020 • Davide Bacciu, Alessio Micheli, Marco Podda

Graph generation with Machine Learning is an open problem with applications in various research fields.

Graph Generation

Encoding-based Memory Modules for Recurrent Neural Networks

no code implementations • 31 Jan 2020 • Antonio Carta, Alessandro Sperduti, Davide Bacciu

The experimental results on synthetic and real-world datasets show that specializing the training algorithm to the memorization component always improves the final performance whenever memorizing long sequences is necessary to solve the problem.


Theoretically Expressive and Edge-aware Graph Learning

no code implementations • 24 Jan 2020 • Federico Errica, Davide Bacciu, Alessio Micheli

We propose a new Graph Neural Network that combines recent advancements in the field.

Graph Learning

Learning Style-Aware Symbolic Music Representations by Adversarial Autoencoders

2 code implementations • 15 Jan 2020 • Andrea Valenti, Antonio Carta, Davide Bacciu

Through the paper, we show how Gaussian mixtures taking into account music metadata information can be used as an effective prior for the autoencoder latent space, introducing the first Music Adversarial Autoencoder (MusAE).

Music Modeling

A Gentle Introduction to Deep Learning for Graphs

2 code implementations • 29 Dec 2019 • Davide Bacciu, Federico Errica, Alessio Micheli, Marco Podda

The adaptive processing of graph data is a long-standing research topic which has been lately consolidated as a theme of major interest in the deep learning community.

Graph Representation Learning

A Fair Comparison of Graph Neural Networks for Graph Classification

4 code implementations • ICLR 2020 • Federico Errica, Marco Podda, Davide Bacciu, Alessio Micheli

We believe that this work can contribute to the development of the graph learning field, by providing a much needed grounding for rigorous evaluations of graph classification models.

General Classification • Graph Classification • +2

Autoencoder-based Initialization for Recurrent Neural Networks with a Linear Memory

no code implementations • 25 Sep 2019 • Antonio Carta, Alessandro Sperduti, Davide Bacciu

We propose an initialization schema that sets the weights of a recurrent architecture to approximate a linear autoencoder of the input sequences, which can be found with a closed-form solution.

Memorization • Permuted-MNIST

A Non-Negative Factorization approach to node pooling in Graph Convolutional Neural Networks

no code implementations • 7 Sep 2019 • Davide Bacciu, Luigi Di Sotto

The paper discusses a pooling mechanism to induce subsampling in graph structured data and introduces it as a component of a graph convolutional neural network.

Graph Classification

Bayesian Tensor Factorisation for Bottom-up Hidden Tree Markov Models

no code implementations • 31 May 2019 • Daniele Castellana, Davide Bacciu

The Bottom-Up Hidden Tree Markov Model is a highly expressive model for tree-structured data.

Detecting Adversarial Examples through Nonlinear Dimensionality Reduction

1 code implementation • 30 Apr 2019 • Francesco Crecchi, Davide Bacciu, Battista Biggio

Deep neural networks are vulnerable to adversarial examples, i.e., carefully perturbed inputs aimed at misleading classification.

Density Estimation • Dimensionality Reduction • +1

Deep Tree Transductions - A Short Survey

no code implementations • 5 Feb 2019 • Davide Bacciu, Antonio Bruno

The paper surveys recent extensions of the Long-Short Term Memory networks to handle tree structures from the perspective of learning non-trivial forms of isomorph structured transductions.

Linear Memory Networks

no code implementations • 8 Nov 2018 • Davide Bacciu, Antonio Carta, Alessandro Sperduti

By building on such conceptualization, we introduce the Linear Memory Network, a recurrent model comprising a feedforward neural network, realizing the non-linear functional transformation, and a linear autoencoder for sequences, implementing the memory component.

Learning Tree Distributions by Hidden Markov Models

no code implementations • 31 May 2018 • Davide Bacciu, Daniele Castellana

Hidden tree Markov models allow learning distributions for tree structured data while being interpretable as nondeterministic automata.

Contextual Graph Markov Model: A Deep and Generative Approach to Graph Processing

1 code implementation • ICML 2018 • Davide Bacciu, Federico Errica, Alessio Micheli

We introduce the Contextual Graph Markov Model, an approach combining ideas from generative models and neural networks for the processing of graph data.

General Classification

Concentric ESN: Assessing the Effect of Modularity in Cycle Reservoirs

no code implementations • 23 May 2018 • Davide Bacciu, Andrea Bongiorno

The paper introduces the concentric Echo State Network, an approach to designing reservoir topologies that tries to bridge the gap between deterministically constructed simple cycle models and deep reservoir computing approaches.

Bioinformatics and Medicine in the Era of Deep Learning

no code implementations • 27 Feb 2018 • Davide Bacciu, Paulo J. G. Lisboa, José D. Martín, Ruxandra Stoean, Alfredo Vellido

Many of the current scientific advances in the life sciences have their origin in the intensive use of data for knowledge discovery.

Hidden Tree Markov Networks: Deep and Wide Learning for Structured Data

no code implementations • 21 Nov 2017 • Davide Bacciu

The paper introduces the Hidden Tree Markov Network (HTN), a neuro-probabilistic hybrid fusing the representation power of generative models for trees with the incremental and discriminative learning capabilities of neural networks.

DropIn: Making Reservoir Computing Neural Networks Robust to Missing Inputs by Dropout

1 code implementation • 7 May 2017 • Davide Bacciu, Francesco Crecchi, Davide Morelli

The paper presents a novel, principled approach to train recurrent neural networks from the Reservoir Computing family that are robust to missing part of the input features at prediction time.

Using a Machine Learning Approach to Implement and Evaluate Product Line Features

no code implementations • 17 Aug 2015 • Davide Bacciu, Stefania Gnesi, Laura Semini

Bike-sharing systems are a means of smart transportation in urban environments with the benefit of a positive impact on urban mobility.

BIG-bench Machine Learning
