1 code implementation • European Conference on Information Retrieval 2024 • Juan Manuel Rodriguez, Nima Tavassoli, Eliezer Levy, Gil Lederman, Dima Sivov, Matteo Lissandrini, Davide Mottin
ConQA comprises 30 descriptive and 50 conceptual queries on 43k images with more than 100 manually annotated images per query.
Ranked #1 on Image Retrieval on ConQA Conceptual
1 code implementation • 8 Mar 2024 • Zhiqiang Zhong, Kuangyu Zhou, Davide Mottin
Our investigation reveals several key insights. First, LLMs generally lag behind ML models on molecule tasks, particularly compared with models adept at capturing the geometric structure of molecules, highlighting the constrained ability of LLMs to comprehend graph data.
no code implementations • 20 Feb 2024 • Zhiqiang Zhong, Kuangyu Zhou, Davide Mottin
We show that, through our proposed training-free framework LlmCorr, an LLM can serve as a post-hoc corrector, proposing corrections to the predictions of an arbitrary ML model.
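The corrector pattern described above can be sketched generically: pack the instance, the ML model's prediction, and the label space into a prompt, ask an LLM to confirm or revise, and only accept answers inside the known label space. The prompt format and the `query_llm` stub below are illustrative assumptions, not the paper's actual interface.

```python
# Sketch of a training-free post-hoc corrector loop. `query_llm` is a
# stand-in for any chat-style LLM call; here a trivial stub is used.

def build_correction_prompt(instance, prediction, candidates):
    """Format the instance, the model's prediction, and the label space."""
    return (
        f"Instance: {instance}\n"
        f"Model prediction: {prediction}\n"
        f"Candidate labels: {', '.join(candidates)}\n"
        "If the prediction looks wrong, answer with a better label; "
        "otherwise repeat the prediction."
    )

def correct_prediction(instance, prediction, candidates, query_llm):
    answer = query_llm(build_correction_prompt(instance, prediction, candidates))
    # Only accept corrections that stay inside the known label space;
    # otherwise keep the ML model's original prediction.
    return answer if answer in candidates else prediction

# Usage with a stub in place of a real LLM:
stub = lambda prompt: "toxic"
corrected = correct_prediction("molecule C1=CC=CC=C1", "non-toxic",
                               ["toxic", "non-toxic"], stub)
```

The guard against out-of-vocabulary answers matters in practice: free-form LLM output is not guaranteed to land in the task's label set.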
no code implementations • 20 Feb 2024 • Zhiqiang Zhong, Davide Mottin
Predicting protein properties is paramount for biological and medical advancements.
no code implementations • 16 Feb 2024 • Steinn Ymir Agustsson, Alfred J. H. Jones, Davide Curcio, Søren Ulstrup, Jill Miwa, Davide Mottin, Panagiotis Karras, Philip Hofmann
Angle-resolved photoemission spectroscopy (ARPES) is a technique used to map the occupied electronic structure of solids.
no code implementations • 4 Sep 2023 • Zhiqiang Zhong, Yangqianzi Jiang, Davide Mottin
Proposed as a solution to the inherent black-box limitations of graph neural networks (GNNs), post-hoc GNN explainers aim to provide precise and insightful explanations of the behaviours exhibited by trained GNNs.
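A common baseline for the kind of post-hoc explainer discussed above is perturbation-based: occlude each edge in turn and rank edges by how much the model's output changes. The sketch below uses a toy scoring function as a stand-in for a trained GNN; it illustrates the occlusion idea only, not the paper's evaluation of explainers.

```python
import numpy as np

# Minimal occlusion-style post-hoc explainer: drop each edge in turn and
# measure how much the model's output changes. The "model" is a toy
# stand-in for a trained GNN (it scores node 0 by summed 2-hop reach).

def toy_gnn(A):
    reach = A + A @ A          # crude 2-hop aggregation
    return reach[0].sum()      # scalar "prediction" for node 0

def edge_importance(A, model):
    base = model(A)
    scores = {}
    edges = [(i, j) for i in range(len(A))
             for j in range(i + 1, len(A)) if A[i, j]]
    for i, j in edges:
        Ap = A.copy()
        Ap[i, j] = Ap[j, i] = 0.0   # occlude one undirected edge
        scores[(i, j)] = abs(base - model(Ap))
    return scores

A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]], float)
scores = edge_importance(A, toy_gnn)
top_edge = max(scores, key=scores.get)   # edge most important to node 0's score
```

Occlusion is the simplest member of the family; learned explainers replace the exhaustive edge loop with an optimized soft mask over edges.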
no code implementations • 29 Aug 2023 • Marc Christiansen, Lea Villadsen, Zhiqiang Zhong, Stefano Teso, Davide Mottin
Self-explainable deep neural networks are a recent class of models that output ante-hoc local explanations faithful to the model's reasoning, and as such represent a step toward closing the gap between expressiveness and interpretability.
1 code implementation • 12 May 2023 • Andrew Draganov, Jakob Rødsgaard Jørgensen, Katrine Scheel Nellemann, Davide Mottin, Ira Assent, Tyrus Berry, Cigdem Aslay
tSNE and UMAP are popular dimensionality reduction algorithms due to their speed and interpretable low-dimensional embeddings.
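Both methods share the same mechanics: points that are neighbors in high dimension attract their low-dimensional images, while all pairs repel. The toy numpy loop below follows the tSNE KL-divergence gradient to make that attraction/repulsion structure concrete; it is a didactic sketch, not either library's implementation and not this paper's method.

```python
import numpy as np

# Toy tSNE-style embedding: attraction between high-dimensional
# neighbors (via P), repulsion between all low-dimensional pairs (via Q).

def toy_tsne(X, dim=2, n_iter=50, lr=10.0, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # High-dimensional affinities: Gaussian kernel, globally normalized.
    D2 = ((X[:, None] - X[None, :]) ** 2).sum(-1)
    P = np.exp(-D2)
    np.fill_diagonal(P, 0.0)
    P /= P.sum()
    Y = rng.normal(scale=1e-2, size=(n, dim))
    for _ in range(n_iter):
        diff = Y[:, None] - Y[None, :]                # (n, n, dim)
        W = 1.0 / (1.0 + (diff ** 2).sum(-1))         # Student-t kernel
        np.fill_diagonal(W, 0.0)
        Q = W / W.sum()
        # Gradient of KL(P || Q): attraction from P, repulsion from Q.
        G = 4.0 * ((P - Q) * W)[..., None] * diff
        Y -= lr * G.sum(axis=1)
    return Y

# Two well-separated clusters in 4-D collapse to 2-D:
X = np.vstack([np.zeros((5, 4)), np.ones((5, 4)) * 5.0])
Y = toy_tsne(X)
```

UMAP differs mainly in how the affinities and the repulsive term are defined and sampled, which is what makes a unified treatment of the two possible.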
1 code implementation • 16 Feb 2023 • Zhiqiang Zhong, Anastasia Barkova, Davide Mottin
The integration of Artificial Intelligence (AI) into the field of drug discovery has been a growing area of interdisciplinary scientific research.
1 code implementation • 20 Jun 2022 • Andrew Draganov, Tyrus Berry, Jakob Rødsgaard Jørgensen, Katrine Scheel Nellemann, Ira Assent, Davide Mottin
In this work, we show that this is indeed possible by combining the two approaches into a single method.
no code implementations • 10 Jun 2021 • Judith Hermanns, Anton Tsitsulin, Marina Munkhoeva, Alex Bronstein, Davide Mottin, Panagiotis Karras
In this paper, we transfer the shape-analysis concept of functional maps from the continuous to the discrete case, and treat the graph alignment problem as a special case of the problem of finding a mapping between functions on graphs.
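The functional-map idea can be illustrated concretely: represent node functions in each graph's Laplacian eigenbasis, then solve for a small linear map C between the two coefficient spaces using descriptor functions assumed to correspond. The toy below (hand-made graphs, degree-based descriptors) is a didactic sketch of that idea, not the paper's full alignment pipeline.

```python
import numpy as np

# Functional maps on graphs, in miniature: functions on nodes are
# expressed in the Laplacian eigenbasis, and alignment becomes finding
# a k x k map C between spectral coefficients.

def laplacian_basis(A, k):
    L = np.diag(A.sum(1)) - A
    _, V = np.linalg.eigh(L)
    return V[:, :k]               # first k eigenvectors (low frequencies)

A1 = np.array([[0, 1, 1, 0],
               [1, 0, 1, 0],
               [1, 1, 0, 1],
               [0, 0, 1, 0]], float)
perm = [2, 0, 3, 1]               # G2 is a relabeled copy of G1
A2 = A1[np.ix_(perm, perm)]

k = 3
Phi1, Phi2 = laplacian_basis(A1, k), laplacian_basis(A2, k)

# Descriptor functions on nodes, assumed to correspond across graphs:
# degree and aggregated neighbor degree.
deg1, deg2 = A1.sum(1), A2.sum(1)
F1 = np.column_stack([deg1, A1 @ deg1])
F2 = np.column_stack([deg2, A2 @ deg2])

# Spectral coefficients of the descriptors, and a least-squares map C
# with C @ coeffs1 ~= coeffs2.
a, b = Phi1.T @ F1, Phi2.T @ F2
C = np.linalg.lstsq(a.T, b.T, rcond=None)[0].T
```

Because G2 is an exact relabeling of G1 and the Laplacian eigenvalues here are simple, the recovered C maps the descriptors' coefficients exactly; on real, non-isomorphic graphs C is only approximate and is typically regularized.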
no code implementations • 11 Mar 2021 • Georgia Troullinou, Haridimos Kondylakis, Matteo Lissandrini, Davide Mottin
Although popular in relational databases, view materialization for RDF and SPARQL has not yet transitioned into practice, due to the non-trivial application to the RDF graph model.
Knowledge Graphs • Databases
no code implementations • 7 Feb 2021 • Freya Behrens, Stefano Teso, Davide Mottin
We introduce Explearn, an online algorithm that learns to jointly output predictions and explanations for those predictions.
1 code implementation • NeurIPS 2020 • Alexander Mathiasen, Frederik Hvilshøj, Jakob Rødsgaard Jørgensen, Anshul Nasery, Davide Mottin
We present an algorithm fast enough to accelerate several common matrix operations.
no code implementations • 8 Jun 2020 • Anton Tsitsulin, Marina Munkhoeva, Davide Mottin, Panagiotis Karras, Ivan Oseledets, Emmanuel Müller
Low-dimensional representations, or embeddings, of a graph's nodes facilitate several practical data science and data engineering tasks.
2 code implementations • ICLR 2020 • Anton Tsitsulin, Marina Munkhoeva, Davide Mottin, Panagiotis Karras, Alex Bronstein, Ivan Oseledets, Emmanuel Müller
The ability to represent and compare machine learning models is crucial in order to quantify subtle model changes, evaluate generative models, and gather insights on neural network architectures.
no code implementations • 15 Nov 2018 • Anton Tsitsulin, Davide Mottin, Panagiotis Karras, Alex Bronstein, Emmanuel Müller
Representing a graph as a vector is a challenging task; ideally, the representation should be easily computable and conducive to efficient comparisons among graphs, tailored to the particular data and analytical task at hand.
1 code implementation • 27 May 2018 • Anton Tsitsulin, Davide Mottin, Panagiotis Karras, Alex Bronstein, Emmanuel Müller
Comparing graphs, however, is a hard task in terms of both the expressiveness of the employed similarity measure and the efficiency of its computation.
Social and Information Networks
2 code implementations • 13 Mar 2018 • Anton Tsitsulin, Davide Mottin, Panagiotis Karras, Emmanuel Müller
Embedding a web-scale information network into a low-dimensional vector space facilitates tasks such as link prediction, classification, and visualization.
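Once nodes live in a low-dimensional vector space, link prediction commonly reduces to scoring node pairs, for example with the sigmoid of an embedding dot product. The embeddings below are hand-made toys for illustration, not the output of this paper's method or any real embedding algorithm.

```python
import numpy as np

# Link prediction from node embeddings: score a candidate edge (u, v)
# as the sigmoid of the dot product of the two node vectors.

def link_score(emb, u, v):
    return 1.0 / (1.0 + np.exp(-emb[u] @ emb[v]))

emb = {
    "a": np.array([1.0, 0.5]),
    "b": np.array([0.9, 0.6]),   # similar direction to "a" -> high score
    "c": np.array([-1.0, 0.4]),  # opposite direction to "a" -> low score
}
s_ab = link_score(emb, "a", "b")
s_ac = link_score(emb, "a", "c")
```

Ranking unobserved pairs by this score, then checking held-out edges against the ranking, is the standard evaluation protocol for embedding-based link prediction.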