1 code implementation • 23 Aug 2024 • Samuel Cognolato, Alessandro Sperduti, Luciano Serafini
An insertion process learns to reverse this removal process by inserting arcs and nodes according to the specified sequentiality degree.
no code implementations • 5 Jun 2024 • Flavio Petruzzellis, Alberto Testolin, Alessandro Sperduti
Large Language Models (LLMs) achieve impressive performance in a wide range of tasks, even though they are often trained with the sole objective of chatting fluently with users.
no code implementations • 27 Feb 2024 • Flavio Petruzzellis, Alberto Testolin, Alessandro Sperduti
Large Language Models (LLMs) have revolutionized the field of Natural Language Processing thanks to their ability to reuse knowledge acquired from massive text corpora across a wide variety of downstream tasks, with minimal (if any) tuning steps.
no code implementations • 27 Feb 2024 • Flavio Petruzzellis, Alberto Testolin, Alessandro Sperduti
In this work, we focus on formula simplification problems, a class of synthetic benchmarks used to study the systematic generalization capabilities of neural architectures.
1 code implementation • 29 Jun 2023 • Flavio Petruzzellis, Alberto Testolin, Alessandro Sperduti
Solving symbolic reasoning problems that require compositionality and systematicity is considered one of the key ingredients of human intelligence.
1 code implementation • 19 May 2023 • Davide Rigoni, Nicolò Navarin, Alessandro Sperduti
Identifying molecules that exhibit some pre-specified properties is a difficult problem to solve.
1 code implementation • 18 May 2023 • Davide Rigoni, Luca Parolari, Luciano Serafini, Alessandro Sperduti, Lamberto Ballan
The first untrained module aims to return a rough alignment between textual phrases and bounding boxes.
1 code implementation • 11 Aug 2021 • Davide Rigoni, Luciano Serafini, Alessandro Sperduti
Given a textual phrase and an image, the visual grounding problem is the task of locating the content of the image referenced by the phrase.
no code implementations • 10 Jun 2021 • Luca Pasa, Nicolò Navarin, Wolfgang Erb, Alessandro Sperduti
Many neural networks for graphs are based on the graph convolution operator, proposed more than a decade ago.
1 code implementation • ICCV 2021 • Yunrui Guo, Guglielmo Camporese, Wenjing Yang, Alessandro Sperduti, Lamberto Ballan
In this way, we are able to control the compactness of the features of the same class around the center of the Gaussians, thus controlling the classifier's ability to detect samples from unknown classes.
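The general idea of pulling same-class features toward a class-conditional Gaussian center can be sketched as a simple compactness penalty. This is an illustrative toy loss only, and the function name, weighting, and form are assumptions, not the paper's exact objective.

```python
import numpy as np

def compactness_loss(features, labels, centers, weight=0.1):
    """Toy center-based loss: pull each feature toward its class center.

    features: (n, d) array of embeddings
    labels:   (n,) integer class labels
    centers:  (k, d) array of per-class Gaussian means
    Returns the mean squared distance to the assigned center, scaled by
    `weight`. (Illustrative sketch; the paper's objective may differ.)
    """
    diffs = features - centers[labels]           # (n, d) residuals
    return weight * np.mean(np.sum(diffs ** 2, axis=1))
```

A smaller value of this penalty means tighter clusters around the Gaussian centers, which leaves more of the feature space free for rejecting unknown-class samples.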
no code implementations • 1 Jan 2021 • Luca Pasa, Nicolò Navarin, Alessandro Sperduti
In this paper, we propose a different strategy, considering a single graph convolution layer that independently exploits neighbouring nodes at different topological distances, generating decoupled representations for each of them.
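The decoupling idea described above — one convolution layer that builds a separate representation per topological distance and concatenates them — can be sketched as repeated propagation with per-hop weight matrices. Function and parameter names are illustrative assumptions, not the paper's exact operator.

```python
import numpy as np

def multi_hop_conv(A, X, weights):
    """Single-layer convolution with decoupled k-hop representations.

    A: (n, n) adjacency matrix; X: (n, d) node features;
    weights: list of (d, h) matrices, one per topological distance k.
    Returns the concatenation [X W_0, A X W_1, A^2 X W_2, ...], i.e.
    an independent representation for each neighbourhood radius.
    (A sketch of the decoupling idea, not the published operator.)
    """
    reps, P = [], X
    for W in weights:
        reps.append(P @ W)   # representation for the current hop
        P = A @ P            # propagate features one more hop
    return np.concatenate(reps, axis=1)
```

Because each hop gets its own weight matrix and output block, information from different distances is not mixed before the concatenation, unlike stacking several standard convolution layers.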
1 code implementation • 5 Nov 2020 • Antonio Carta, Alessandro Sperduti, Davide Bacciu
Training RNNs to learn long-term dependencies is difficult due to vanishing gradients.
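The vanishing-gradient effect mentioned above is easy to demonstrate numerically: in a linear recurrence the gradient through time scales with powers of the recurrent matrix, so it shrinks geometrically when the spectral radius is below one. This toy demonstration is not tied to any specific model in the paper.

```python
import numpy as np

# Toy illustration: in a linear RNN h_t = W h_{t-1}, the Jacobian of h_T
# with respect to h_0 is W^T (the T-th matrix power). With spectral
# radius below 1, its norm decays exponentially in T.
W = 0.5 * np.eye(4)                      # spectral radius 0.5
grad = np.eye(4)
norms = []
for t in range(20):
    grad = W @ grad                      # one step of backpropagation
    norms.append(np.linalg.norm(grad))
# each step multiplies the gradient norm by 0.5, so `norms`
# decays geometrically toward zero
```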
1 code implementation • 1 Sep 2020 • Davide Rigoni, Nicolò Navarin, Alessandro Sperduti
In recent years, deep generative models for graphs have been used to generate new molecules.
1 code implementation • 20 Aug 2020 • Davide Rigoni, Nicolò Navarin, Alessandro Sperduti
In recent years the scientific community has devoted much effort to the development of deep learning models for the generation of new molecules with desirable properties (i.e., drugs).
1 code implementation • 29 Jun 2020 • Antonio Carta, Alessandro Sperduti, Davide Bacciu
The effectiveness of recurrent neural networks can be largely influenced by their ability to store into their dynamical memory information extracted from input sequences at different frequencies and timescales.
no code implementations • 31 Jan 2020 • Antonio Carta, Alessandro Sperduti, Davide Bacciu
The experimental results on synthetic and real-world datasets show that specializing the training algorithm to train the memorization component always improves the final performance whenever the memorization of long sequences is necessary to solve the problem.
no code implementations • 25 Sep 2019 • Antonio Carta, Alessandro Sperduti, Davide Bacciu
We propose an initialization schema that sets the weights of a recurrent architecture to approximate a linear autoencoder of the input sequences, which can be found with a closed-form solution.
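The closed-form solution mentioned above can be illustrated with a simplified SVD-based construction: stack the (reversed, zero-padded) prefixes of every sequence into a data matrix and take its truncated SVD, whose right singular vectors give a linear encoding that can seed recurrent weights. This is a simplified sketch of the idea under those assumptions, not the exact construction in the paper.

```python
import numpy as np

def linear_autoencoder_init(sequences, hidden_size):
    """Fit a linear sequence encoder in closed form via truncated SVD.

    sequences: list of (t_i, d) arrays. Each prefix of each sequence is
    flattened (most recent input first) and zero-padded into one row of
    a data matrix; the top right singular vectors of that matrix form an
    orthonormal linear encoder. (Illustrative sketch only.)
    """
    T = max(len(s) for s in sequences)
    d = sequences[0].shape[1]
    rows = []
    for s in sequences:
        for t in range(1, len(s) + 1):
            prefix = np.zeros(T * d)
            flat = s[:t][::-1].reshape(-1)   # newest input first
            prefix[: flat.size] = flat
            rows.append(prefix)
    Xi = np.stack(rows)                       # (num_prefixes, T*d)
    U, S, Vt = np.linalg.svd(Xi, full_matrices=False)
    return Vt[:hidden_size]                   # (hidden, T*d) encoder
```

Because the encoder comes from an SVD rather than gradient descent, it is available before training starts and can be used to initialize the recurrent weights of the architecture.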
no code implementations • 15 May 2019 • Benjamin Paaßen, Claudio Gallicchio, Alessio Micheli, Alessandro Sperduti
Performing machine learning on structured data is complicated by the fact that such data does not have vectorial form.
1 code implementation • 23 Nov 2018 • Dinh Van Tran, Nicolò Navarin, Alessandro Sperduti
Recently, many researchers have been focusing on the definition of neural networks for graphs.
no code implementations • 16 Nov 2018 • Nicolò Navarin, Dinh V. Tran, Alessandro Sperduti
Many machine learning techniques have been proposed in the last few years to process data represented in graph-structured form.
no code implementations • 8 Nov 2018 • Davide Bacciu, Antonio Carta, Alessandro Sperduti
By building on such conceptualization, we introduce the Linear Memory Network, a recurrent model comprising a feedforward neural network, realizing the non-linear functional transformation, and a linear autoencoder for sequences, implementing the memory component.
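The two components named above — a nonlinear feedforward transformation and a linear memory — can be sketched as one recurrent step. The weight names and the use of tanh are illustrative assumptions; see the paper for the exact model.

```python
import numpy as np

def lmn_step(x_t, m_prev, params):
    """One step of a Linear Memory Network-style cell (sketch).

    Functional (nonlinear) part: h_t = tanh(Wxh x_t + Wmh m_{t-1})
    Memory (linear) part:        m_t = Whm h_t + Wmm m_{t-1}
    params: tuple (Wxh, Wmh, Whm, Wmm) of weight matrices.
    """
    Wxh, Wmh, Whm, Wmm = params
    h_t = np.tanh(Wxh @ x_t + Wmh @ m_prev)   # nonlinear transformation
    m_t = Whm @ h_t + Wmm @ m_prev            # linear memory update
    return h_t, m_t
```

Keeping the memory update strictly linear is what allows it to be analyzed, and pretrained, as a linear autoencoder for sequences, separately from the nonlinear feedforward part.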
1 code implementation • 10 Nov 2017 • Nicolò Navarin, Beatrice Vincenzi, Mirko Polato, Alessandro Sperduti
Predicting the completion time of business process instances would be of great help when managing processes under service-level-agreement constraints.
no code implementations • 24 Feb 2016 • Mirko Polato, Alessandro Sperduti, Andrea Burattin, Massimiliano de Leoni
However, in real cases this assumption is not always true.
no code implementations • 22 Sep 2015 • Giovanni Da San Martino, Nicolò Navarin, Alessandro Sperduti
In this paper we present a novel graph kernel framework inspired by the Weisfeiler-Lehman (WL) isomorphism tests.
no code implementations • 3 Sep 2015 • Giovanni Da San Martino, Nicolò Navarin, Alessandro Sperduti
While existing kernel methods are effective techniques for dealing with graphs having discrete node labels, their adaptation to non-discrete or continuous node attributes has been limited, mainly due to computational issues.
no code implementations • 13 Jul 2015 • Giovanni Da San Martino, Nicolò Navarin, Alessandro Sperduti
In this paper, we show how the Ordered Decomposition DAGs (ODD) kernel framework, a framework that allows the definition of graph kernels from tree kernels, makes it easy to define new state-of-the-art graph kernels.
no code implementations • 8 Jul 2015 • Giovanni Da San Martino, Nicolò Navarin, Alessandro Sperduti
It turns out that, when strict memory budget constraints have to be enforced, working in feature space, given the current state of the art on graph kernels, is more than a viable alternative to dual approaches, in terms of both speed and classification performance.
no code implementations • 8 Jul 2015 • Nicolò Navarin, Alessandro Sperduti, Riccardo Tesselli
Different kernels consider different types of substructures.
no code implementations • NeurIPS 2014 • Luca Pasa, Alessandro Sperduti
We propose a pre-training technique for recurrent neural networks based on linear autoencoder networks for sequences, i.e., linear dynamical systems modelling the target sequences.