Search Results for author: Davide Mottin

Found 19 papers, 9 papers with code

Benchmarking Large Language Models for Molecule Prediction Tasks

1 code implementation • 8 Mar 2024 • Zhiqiang Zhong, Kuangyu Zhou, Davide Mottin

Our investigation reveals several key insights. First, LLMs generally lag behind ML models on molecule tasks, particularly models adept at capturing the geometric structure of molecules, which highlights the limited ability of LLMs to comprehend graph data.

Benchmarking
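
To make the setup concrete, below is a minimal sketch of prompting an LLM for a molecule prediction task from a SMILES string. The prompt wording, the HIV-inhibition task, and the query_llm client are illustrative assumptions, not the paper's actual protocol.

# Minimal sketch of LLM-based molecule property prediction.
# `query_llm` is a hypothetical stand-in for any chat-completion client;
# the prompt and the HIV task are illustrative, not the paper's template.

def build_prompt(smiles: str) -> str:
    return (
        "You are an expert chemist. Given the SMILES string of a molecule, "
        "answer whether it is likely to inhibit HIV replication.\n"
        f"SMILES: {smiles}\n"
        "Answer with 'yes' or 'no' only."
    )

def predict(smiles: str, query_llm) -> bool:
    answer = query_llm(build_prompt(smiles))
    return answer.strip().lower().startswith("yes")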

EvolMPNN: Predicting Mutational Effect on Homologous Proteins by Evolution Encoding

no code implementations • 20 Feb 2024 • Zhiqiang Zhong, Davide Mottin

Predicting protein properties is paramount for biological and medical advancements.

Harnessing Large Language Models as Post-hoc Correctors

no code implementations • 20 Feb 2024 • Zhiqiang Zhong, Kuangyu Zhou, Davide Mottin

We show that, with our proposed training-free framework LlmCorr, an LLM can act as a post-hoc corrector, proposing corrections to the predictions of an arbitrary ML model.

In-Context Learning
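
A minimal sketch of the post-hoc corrector idea: give the LLM a few solved in-context examples alongside the ML model's prediction and ask for a correction. The prompt wording and the query_llm stand-in are assumptions for illustration, not LlmCorr's actual template.

# Hypothetical LLM-as-corrector loop; `query_llm` is a stand-in client.

def correct_prediction(x_description, ml_prediction, context_examples, query_llm):
    # In-context examples: (input description, ML prediction, true label).
    shots = "\n".join(
        f"Input: {d}\nModel predicted: {p}\nTrue label: {y}"
        for d, p, y in context_examples
    )
    prompt = (
        "An ML model made the predictions below; some are wrong. "
        "Using the solved examples, correct the final prediction if needed.\n\n"
        f"{shots}\n\nInput: {x_description}\n"
        f"Model predicted: {ml_prediction}\nCorrected label:"
    )
    return query_llm(prompt).strip()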

Autonomous microARPES

no code implementations • 16 Feb 2024 • Steinn Ymir Agustsson, Alfred J. H. Jones, Davide Curcio, Søren Ulstrup, Jill Miwa, Davide Mottin, Panagiotis Karras, Philip Hofmann

Angle-resolved photoemission spectroscopy (ARPES) is a technique used to map the occupied electronic structure of solids.

On the Robustness of Post-hoc GNN Explainers to Label Noise

no code implementations • 4 Sep 2023 • Zhiqiang Zhong, Yangqianzi Jiang, Davide Mottin

Proposed as a solution to the inherent black-box limitations of graph neural networks (GNNs), post-hoc GNN explainers aim to provide precise and insightful explanations of the behaviours exhibited by trained GNNs.

How Faithful are Self-Explainable GNNs?

no code implementations • 29 Aug 2023 • Marc Christiansen, Lea Villadsen, Zhiqiang Zhong, Stefano Teso, Davide Mottin

Self-explainable deep neural networks are a recent class of models that output ante-hoc local explanations faithful to the model's reasoning; as such, they represent a step toward closing the gap between expressiveness and interpretability.

ActUp: Analyzing and Consolidating tSNE and UMAP

1 code implementation • 12 May 2023 • Andrew Draganov, Jakob Rødsgaard Jørgensen, Katrine Scheel Nellemann, Davide Mottin, Ira Assent, Tyrus Berry, Cigdem Aslay

tSNE and UMAP are popular dimensionality reduction algorithms due to their speed and interpretable low-dimensional embeddings.

Dimensionality Reduction
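
Both algorithms are readily available off the shelf; a minimal side-by-side usage sketch, assuming scikit-learn and the umap-learn package are installed:

# Embed the same data with tSNE and UMAP for comparison.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE
import umap  # pip install umap-learn

X = load_digits().data  # (1797, 64)

emb_tsne = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
emb_umap = umap.UMAP(n_components=2, n_neighbors=15, random_state=0).fit_transform(X)
print(emb_tsne.shape, emb_umap.shape)  # (1797, 2) (1797, 2)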

Knowledge-augmented Graph Machine Learning for Drug Discovery: A Survey from Precision to Interpretability

1 code implementation • 16 Feb 2023 • Zhiqiang Zhong, Anastasia Barkova, Davide Mottin

The integration of Artificial Intelligence (AI) into the field of drug discovery has been a growing area of interdisciplinary scientific research.

Drug Discovery

GRASP: Graph Alignment through Spectral Signatures

no code implementations • 10 Jun 2021 • Judith Hermanns, Anton Tsitsulin, Marina Munkhoeva, Alex Bronstein, Davide Mottin, Panagiotis Karras

In this paper, we transfer the shape-analysis concept of functional maps from the continuous to the discrete case, and treat graph alignment as a special case of finding a mapping between functions on graphs.
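
A heavily simplified spectral-alignment baseline in that spirit (not GRASP's actual functional-maps pipeline): describe each node by multi-scale heat-kernel signatures derived from the Laplacian spectrum, then match nodes across the two graphs by nearest signatures.

# Simplified spectral node alignment; L1, L2 are dense graph Laplacians.
import numpy as np
from scipy.linalg import eigh
from scipy.optimize import linear_sum_assignment

def heat_signatures(L, times=(0.1, 1.0, 10.0)):
    # Heat-kernel signature: HKS_i(t) = sum_k exp(-t*lam_k) * phi_k[i]^2
    lam, phi = eigh(L)
    return np.stack([(np.exp(-t * lam) * phi**2).sum(axis=1) for t in times], axis=1)

def align(L1, L2):
    s1, s2 = heat_signatures(L1), heat_signatures(L2)
    cost = np.linalg.norm(s1[:, None, :] - s2[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return dict(zip(rows, cols))  # node i in G1 -> node cols[i] in G2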

SOFOS: Demonstrating the Challenges of Materialized View Selection on Knowledge Graphs

no code implementations • 11 Mar 2021 • Georgia Troullinou, Haridimos Kondylakis, Matteo Lissandrini, Davide Mottin

Although popular in relational databases, view materialization for RDF and SPARQL has not yet transitioned into practice, because applying it to the RDF graph model is non-trivial.

Knowledge Graphs • Databases
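
For readers unfamiliar with the idea, a materialized view over RDF amounts to caching the result of a CONSTRUCT query and answering later queries against it. A minimal sketch with rdflib, using a made-up schema and file name:

# Materialize one view once, then query the (smaller) view graph.
from rdflib import Graph

data = Graph().parse("data.ttl")  # hypothetical RDF dataset

VIEW = """
CONSTRUCT { ?p <http://ex.org/author> ?a }
WHERE     { ?p <http://ex.org/author> ?a }
"""
view = Graph()
for triple in data.query(VIEW):   # materialize the view's triples
    view.add(triple)

# Later queries hit the view instead of the full dataset.
authors = view.query("SELECT ?a WHERE { ?p <http://ex.org/author> ?a }")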

Bandits for Learning to Explain from Explanations

no code implementations • 7 Feb 2021 • Freya Behrens, Stefano Teso, Davide Mottin

We introduce Explearn, an online algorithm that learns to jointly output predictions and explanations for those predictions.

Gaussian Processes • Multi-Armed Bandits
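
As background on the Gaussian-process-plus-bandits combination, here is a generic GP-UCB arm-selection sketch (not Explearn itself): fit a GP to the rewards observed so far and pick the arm maximizing mean plus scaled uncertainty.

# Generic GP-UCB step; assumes at least one (arm, reward) observation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def gp_ucb_choose(arm_features, X_seen, y_seen, beta=2.0):
    # arm_features: (n_arms, d) candidate arms; X_seen/y_seen: history.
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0))
    gp.fit(np.asarray(X_seen), np.asarray(y_seen))
    mu, sigma = gp.predict(arm_features, return_std=True)
    return int(np.argmax(mu + beta * sigma))  # optimism under uncertainty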

FREDE: Anytime Graph Embeddings

no code implementations • 8 Jun 2020 • Anton Tsitsulin, Marina Munkhoeva, Davide Mottin, Panagiotis Karras, Ivan Oseledets, Emmanuel Müller

Low-dimensional representations, or embeddings, of a graph's nodes facilitate several practical data science and data engineering tasks.

Graph Embedding
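
FREDE's anytime behaviour comes from processing a node-similarity matrix row by row with a matrix sketch. Below is the generic Frequent Directions primitive in that spirit, not FREDE's full pipeline.

# Frequent Directions: a streaming low-rank sketch of a tall matrix.
import numpy as np

def frequent_directions(rows, d, ell):
    # rows: iterable of length-d vectors; ell: sketch size (assumes d >= 2*ell).
    B = np.zeros((2 * ell, d))
    next_free = 0
    for row in rows:
        B[next_free] = row
        next_free += 1
        if next_free == 2 * ell:              # sketch full: shrink it
            _, s, Vt = np.linalg.svd(B, full_matrices=False)
            s = np.sqrt(np.maximum(s**2 - s[ell - 1] ** 2, 0.0))
            B = s[:, None] * Vt
            next_free = ell                   # bottom half is now zero
    return B[:ell]  # B^T B approximates A^T A for the rows seen so far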

The Shape of Data: Intrinsic Distance for Data Distributions

2 code implementations • ICLR 2020 • Anton Tsitsulin, Marina Munkhoeva, Davide Mottin, Panagiotis Karras, Alex Bronstein, Ivan Oseledets, Emmanuel Müller

The ability to represent and compare machine learning models is crucial in order to quantify subtle model changes, evaluate generative models, and gather insights on neural network architectures.

SGR: Self-Supervised Spectral Graph Representation Learning

no code implementations • 15 Nov 2018 • Anton Tsitsulin, Davide Mottin, Panagiotis Karras, Alex Bronstein, Emmanuel Müller

Representing a graph as a vector is a challenging task; ideally, the representation should be easily computable and conducive to efficient comparisons among graphs, tailored to the particular data and analytical task at hand.

Graph Representation Learning
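
One simple illustration of a spectral graph-to-vector map (not SGR's actual construction): histogram the normalized Laplacian spectrum, which is cheap to compute and directly comparable across graphs of different sizes.

# Fixed-length spectral vector from the normalized Laplacian spectrum.
import numpy as np
import networkx as nx

def spectral_vector(G: nx.Graph, bins: int = 32) -> np.ndarray:
    lam = nx.normalized_laplacian_spectrum(G)   # eigenvalues lie in [0, 2]
    hist, _ = np.histogram(lam, bins=bins, range=(0.0, 2.0))
    return hist / hist.sum()  # size-normalized for cross-graph comparison

d = np.linalg.norm(spectral_vector(nx.cycle_graph(50)) -
                   spectral_vector(nx.path_graph(50)))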

NetLSD: Hearing the Shape of a Graph

1 code implementation • 27 May 2018 • Anton Tsitsulin, Davide Mottin, Panagiotis Karras, Alex Bronstein, Emmanuel Müller

Comparing graphs, however, is a hard task in terms of the expressiveness of the employed similarity measure and the efficiency of its computation.

Social and Information Networks
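
NetLSD's signature is the Laplacian heat trace h(t) = tr(exp(-tL)) = sum_k exp(-t * lambda_k), evaluated at several scales t. A minimal dense version via full eigendecomposition (the paper also develops scalable approximations):

# Heat trace signature of a graph's normalized Laplacian.
import numpy as np
import networkx as nx

def heat_trace(G, times=np.logspace(-2, 2, 64)):
    lam = nx.normalized_laplacian_spectrum(G)
    return np.array([np.exp(-t * lam).sum() for t in times])

# Graphs are compared via a distance between their signatures.
d = np.linalg.norm(heat_trace(nx.karate_club_graph()) -
                   heat_trace(nx.erdos_renyi_graph(34, 0.14, seed=0)))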

VERSE: Versatile Graph Embeddings from Similarity Measures

2 code implementations • 13 Mar 2018 • Anton Tsitsulin, Davide Mottin, Panagiotis Karras, Emmanuel Müller

Embedding a web-scale information network into a low-dimensional vector space facilitates tasks such as link prediction, classification, and visualization.

Link Prediction
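
A deliberately simplified take on "embeddings from a similarity measure": materialize the personalized PageRank (PPR) similarity matrix and factorize it. VERSE itself never materializes this matrix and instead trains embeddings with sampling, so treat this dense version purely as an illustration of the objective on small graphs.

# PPR similarity + SVD as a toy stand-in for similarity-based embeddings.
import numpy as np
import networkx as nx

def ppr_svd_embeddings(G, dim=32, alpha=0.85):
    A = nx.to_numpy_array(G)                         # assumes no isolated nodes
    P = A / A.sum(axis=1, keepdims=True)             # row-stochastic transitions
    n = len(G)
    S = (1 - alpha) * np.linalg.inv(np.eye(n) - alpha * P)  # rows = PPR vectors
    U, s, _ = np.linalg.svd(S)
    return U[:, :dim] * np.sqrt(s[:dim])

emb = ppr_svd_embeddings(nx.karate_club_graph())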
