Search Results for author: Alessandro Sperduti

Found 27 papers, 11 papers with code

Benchmarking GPT-4 on Algorithmic Problems: A Systematic Evaluation of Prompting Strategies

no code implementations · 27 Feb 2024 · Flavio Petruzzellis, Alberto Testolin, Alessandro Sperduti

Large Language Models (LLMs) have revolutionized the field of Natural Language Processing thanks to their ability to reuse knowledge acquired from massive text corpora across a wide variety of downstream tasks, with minimal (if any) tuning steps.

Benchmarking · Systematic Generalization

A Neural Rewriting System to Solve Algorithmic Problems

no code implementations · 27 Feb 2024 · Flavio Petruzzellis, Alberto Testolin, Alessandro Sperduti

Modern neural network architectures still struggle to learn algorithmic procedures that require the systematic application of compositional rules to solve out-of-distribution problem instances.

Language Modelling · Large Language Model +1

A Hybrid System for Systematic Generalization in Simple Arithmetic Problems

1 code implementation · 29 Jun 2023 · Flavio Petruzzellis, Alberto Testolin, Alessandro Sperduti

Solving symbolic reasoning problems that require compositionality and systematicity is considered one of the key ingredients of human intelligence.

Language Modelling · Large Language Model +1

RGCVAE: Relational Graph Conditioned Variational Autoencoder for Molecule Design

1 code implementation · 19 May 2023 · Davide Rigoni, Nicolò Navarin, Alessandro Sperduti

Identifying molecules that exhibit some pre-specified properties is a difficult problem to solve.

Weakly-Supervised Visual-Textual Grounding with Semantic Prior Refinement

1 code implementation · 18 May 2023 · Davide Rigoni, Luca Parolari, Luciano Serafini, Alessandro Sperduti, Lamberto Ballan

The first untrained module aims to return a rough alignment between textual phrases and bounding boxes.

Sentence

A Better Loss for Visual-Textual Grounding

1 code implementation · 11 Aug 2021 · Davide Rigoni, Luciano Serafini, Alessandro Sperduti

Given a textual phrase and an image, the visual grounding problem is the task of locating the content of the image referenced by the sentence.

Sentence · Visual Grounding

Simple Graph Convolutional Networks

no code implementations · 10 Jun 2021 · Luca Pasa, Nicolò Navarin, Wolfgang Erb, Alessandro Sperduti

Many neural networks for graphs are based on the graph convolution operator, proposed more than a decade ago.

Conditional Variational Capsule Network for Open Set Recognition

1 code implementation · ICCV 2021 · Yunrui Guo, Guglielmo Camporese, Wenjing Yang, Alessandro Sperduti, Lamberto Ballan

In this way, we are able to control the compactness of the features of the same class around the centers of the Gaussians, thus controlling the classifier's ability to detect samples from unknown classes.

Open Set Learning
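
The compactness mechanism described in this entry can be illustrated with a simple per-class center loss: pull the features of each known class toward a learned Gaussian mean, so that samples far from every center are easier to flag as unknown. The sketch below is a rough approximation of that idea, not the paper's implementation; the class name and hyper-parameters are hypothetical.

```python
import torch
import torch.nn as nn

class CenterCompactnessLoss(nn.Module):
    """Illustrative loss: pull features of each class toward a learned
    per-class center (a Gaussian mean), making same-class features compact
    and out-of-distribution samples easier to reject. Hypothetical sketch,
    not the paper's code."""

    def __init__(self, num_classes: int, feat_dim: int):
        super().__init__()
        # One learnable center (Gaussian mean) per known class.
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # features: (batch, feat_dim), labels: (batch,)
        picked = self.centers[labels]                       # center of each sample's class
        return ((features - picked) ** 2).sum(dim=1).mean()

# Usage sketch: add to the usual classification loss with a weighting factor,
# e.g. loss = ce_loss + lambda_compact * compact_loss(features, labels).
```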

Polynomial Graph Convolutional Networks

no code implementations · 1 Jan 2021 · Luca Pasa, Nicolò Navarin, Alessandro Sperduti

In this paper, we propose a different strategy, considering a single graph convolution layer that independently exploits neighbouring nodes at different topological distances, generating decoupled representations for each of them.

Graph Classification
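
Assuming that "different topological distances" corresponds to successive powers of a normalized adjacency matrix, each hop getting its own weight matrix, a single decoupled layer could be sketched as below. Names and defaults are hypothetical; this is not the paper's code.

```python
import torch
import torch.nn as nn

class PolynomialGraphConv(nn.Module):
    """Single layer producing one decoupled representation per hop:
    roughly [X W_0, A X W_1, ..., A^K X W_K]. Hypothetical sketch."""

    def __init__(self, in_dim: int, out_dim: int, num_hops: int = 3):
        super().__init__()
        self.hop_weights = nn.ModuleList(
            nn.Linear(in_dim, out_dim, bias=False) for _ in range(num_hops + 1)
        )

    def forward(self, adj: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # adj: (n, n) normalized adjacency, x: (n, in_dim) node features
        outs, prop = [], x
        for lin in self.hop_weights:
            outs.append(lin(prop))        # representation for the current hop
            prop = adj @ prop             # propagate to the next topological distance
        return torch.cat(outs, dim=-1)    # keep the per-hop representations decoupled
```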

Conditional Constrained Graph Variational Autoencoders for Molecule Design

1 code implementation · 1 Sep 2020 · Davide Rigoni, Nicolò Navarin, Alessandro Sperduti

In recent years, deep generative models for graphs have been used to generate new molecules.

A Systematic Assessment of Deep Learning Models for Molecule Generation

1 code implementation · 20 Aug 2020 · Davide Rigoni, Nicolò Navarin, Alessandro Sperduti

In recent years the scientific community has devoted much effort to the development of deep learning models for the generation of new molecules with desirable properties (i.e. drugs).

Drug Discovery

Incremental Training of a Recurrent Neural Network Exploiting a Multi-Scale Dynamic Memory

1 code implementation · 29 Jun 2020 · Antonio Carta, Alessandro Sperduti, Davide Bacciu

The effectiveness of recurrent neural networks can be largely influenced by their ability to store into their dynamical memory information extracted from input sequences at different frequencies and timescales.

Speech Recognition

Encoding-based Memory Modules for Recurrent Neural Networks

no code implementations · 31 Jan 2020 · Antonio Carta, Alessandro Sperduti, Davide Bacciu

The experimental results on synthetic and real-world datasets show that specializing the training algorithm to train the memorization component always improves the final performance whenever the memorization of long sequences is necessary to solve the problem.

Memorization

Autoencoder-based Initialization for Recurrent Neural Networks with a Linear Memory

no code implementations · 25 Sep 2019 · Antonio Carta, Alessandro Sperduti, Davide Bacciu

We propose an initialization schema that sets the weights of a recurrent architecture to approximate a linear autoencoder of the input sequences, which can be found with a closed-form solution.

Memorization · Permuted-MNIST
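
As an illustration of the closed-form step, a linear autoencoder of fixed-length (padded) input prefixes can be read off a truncated SVD of the data matrix: encoding with the top right singular vectors gives the optimal linear reconstruction. The sketch below shows only this simplified reduction; the exact recurrent factorization used to set the RNN weights is derived in the paper and not reproduced here, and the function name is hypothetical.

```python
import numpy as np

def linear_autoencoder_init(prefixes: np.ndarray, hidden_size: int):
    """Closed-form linear autoencoder via truncated SVD.

    prefixes: (num_prefixes, max_len * input_dim) matrix whose rows are
    padded input prefixes. Returns an encoder that projects onto the top
    right singular vectors and its transpose as the optimal linear decoder."""
    _, _, vt = np.linalg.svd(prefixes, full_matrices=False)
    encoder = vt[:hidden_size]     # (hidden_size, max_len * input_dim)
    decoder = encoder.T            # optimal linear reconstruction map
    return encoder, decoder
```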

Embeddings and Representation Learning for Structured Data

no code implementations · 15 May 2019 · Benjamin Paaßen, Claudio Gallicchio, Alessio Micheli, Alessandro Sperduti

Performing machine learning on structured data is complicated by the fact that such data does not have vectorial form.

BIG-bench Machine Learning · Metric Learning +1

On Filter Size in Graph Convolutional Networks

1 code implementation · 23 Nov 2018 · Dinh Van Tran, Nicolò Navarin, Alessandro Sperduti

Recently, many researchers have been focusing on the definition of neural networks for graphs.

Pre-training Graph Neural Networks with Kernels

no code implementations · 16 Nov 2018 · Nicolò Navarin, Dinh V. Tran, Alessandro Sperduti

Many machine learning techniques have been proposed in the last few years to process data represented in graph-structured form.

Linear Memory Networks

no code implementations · 8 Nov 2018 · Davide Bacciu, Antonio Carta, Alessandro Sperduti

By building on such conceptualization, we introduce the Linear Memory Network, a recurrent model comprising a feedforward neural network, realizing the non-linear functional transformation, and a linear autoencoder for sequences, implementing the memory component.
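
A minimal sketch of such a cell, assuming a tanh feedforward block for the functional component and an unconstrained linear recurrence for the memory (the actual memory component is a linear autoencoder for sequences, which this sketch does not enforce):

```python
import torch
import torch.nn as nn

class LinearMemoryCell(nn.Module):
    """Sketch of the decomposition described above: a feedforward block computes
    the non-linear functional transformation, while the memory state follows a
    purely linear recurrence. Illustrative only, not the authors' code."""

    def __init__(self, input_size: int, hidden_size: int, memory_size: int):
        super().__init__()
        self.functional = nn.Linear(input_size + memory_size, hidden_size)  # non-linear part
        self.h2m = nn.Linear(hidden_size, memory_size, bias=False)          # linear memory update
        self.m2m = nn.Linear(memory_size, memory_size, bias=False)

    def forward(self, x_t: torch.Tensor, m_prev: torch.Tensor):
        h_t = torch.tanh(self.functional(torch.cat([x_t, m_prev], dim=-1)))
        m_t = self.h2m(h_t) + self.m2m(m_prev)   # no non-linearity on the memory path
        return h_t, m_t
```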

LSTM Networks for Data-Aware Remaining Time Prediction of Business Process Instances

1 code implementation · 10 Nov 2017 · Nicolò Navarin, Beatrice Vincenzi, Mirko Polato, Alessandro Sperduti

Predicting the completion time of business process instances would be very helpful when managing processes under service level agreement constraints.

Graph Kernels exploiting Weisfeiler-Lehman Graph Isomorphism Test Extensions

no code implementations · 22 Sep 2015 · Giovanni Da San Martino, Nicolò Navarin, Alessandro Sperduti

In this paper we present a novel graph kernel framework inspired by the Weisfeiler-Lehman (WL) isomorphism tests.
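
For context, kernels in this family build on the standard WL color-refinement step: in each round, every node's label is replaced by a compressed label of its own label and the multiset of its neighbors' labels, and graphs are compared through the resulting label histograms. The sketch below shows only this textbook step, not the specific extensions proposed in the paper.

```python
def wl_refine(labels, adjacency, iterations=3):
    """Weisfeiler-Lehman color refinement on a single graph.

    labels: dict node -> initial label; adjacency: dict node -> list of neighbors.
    Returns the refined labels after the given number of iterations."""
    for _ in range(iterations):
        new_labels = {}
        for node, neighbors in adjacency.items():
            signature = (labels[node], tuple(sorted(labels[n] for n in neighbors)))
            new_labels[node] = hash(signature)   # compressed relabeling
        labels = new_labels
    return labels
```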

A tree-based kernel for graphs with continuous attributes

no code implementations · 3 Sep 2015 · Giovanni Da San Martino, Nicolò Navarin, Alessandro Sperduti

While existing kernel methods are effective techniques for dealing with graphs having discrete node labels, their adaptation to non-discrete or continuous node attributes has been limited, mainly due to computational issues.

Computational Efficiency

Ordered Decompositional DAG Kernels Enhancements

no code implementations · 13 Jul 2015 · Giovanni Da San Martino, Nicolò Navarin, Alessandro Sperduti

In this paper, we show how the Ordered Decomposition DAGs (ODD) kernel framework, which allows the definition of graph kernels from tree kernels, makes it easy to define new state-of-the-art graph kernels.

General Classification

An Empirical Study on Budget-Aware Online Kernel Algorithms for Streams of Graphs

no code implementations · 8 Jul 2015 · Giovanni Da San Martino, Nicolò Navarin, Alessandro Sperduti

It turns out that, when strict memory budget constraints have to be enforced, working in feature space is, given the current state of the art on graph kernels, more than a viable alternative to dual approaches, both in terms of speed and classification performance.

Pre-training of Recurrent Neural Networks via Linear Autoencoders

no code implementations · NeurIPS 2014 · Luca Pasa, Alessandro Sperduti

We propose a pre-training technique for recurrent neural networks based on linear autoencoder networks for sequences, i.e. linear dynamical systems modelling the target sequences.
