Search Results for author: Petar Veličković

Found 40 papers, 16 papers with code

Learning heuristics for A*

no code implementations 11 Apr 2022 Danilo Numeroso, Davide Bacciu, Petar Veličković

At training time, we exploit multi-task learning to jointly learn Dijkstra's algorithm and a consistent heuristic function for the A* search algorithm.

Multi-Task Learning
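The multi-task setup quoted in this entry can be pictured as a shared encoder with two supervised heads: one regressing Dijkstra shortest-path distances, the other regressing the heuristic values later plugged into A*. The sketch below is only an illustration of that idea in PyTorch; the MLP encoder, equal loss weighting, and synthetic targets are assumptions, not the paper's processor architecture.

```python
import torch
import torch.nn as nn

class MultiTaskReasoner(nn.Module):
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.dijkstra_head = nn.Linear(hidden, 1)   # per-node shortest-path distance
        self.heuristic_head = nn.Linear(hidden, 1)  # per-node heuristic estimate for A*

    def forward(self, node_feats):
        h = self.shared(node_feats)
        return self.dijkstra_head(h).squeeze(-1), self.heuristic_head(h).squeeze(-1)

# Toy usage with synthetic supervision standing in for targets obtained by
# actually running Dijkstra's algorithm on training graphs.
model = MultiTaskReasoner(in_dim=8)
feats = torch.randn(10, 8)
dist_true, heur_true = torch.rand(10) * 5, torch.rand(10) * 5
d_pred, h_pred = model(feats)
loss = nn.functional.mse_loss(d_pred, dist_true) + nn.functional.mse_loss(h_pred, heur_true)
loss.backward()
```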

Graph Neural Networks are Dynamic Programmers

no code implementations 29 Mar 2022 Andrew Dudzik, Petar Veličković

Recent advances in neural algorithmic reasoning with graph neural networks (GNNs) are propped up by the notion of algorithmic alignment.

Learning to Execute

Message passing all the way up

no code implementations 22 Feb 2022 Petar Veličković

The message passing framework is the foundation of the immense success enjoyed by graph neural networks (GNNs) in recent years.

Graph Representation Learning

Neural Algorithmic Reasoners are Implicit Planners

no code implementations NeurIPS 2021 Andreea Deac, Petar Veličković, Ognjen Milinković, Pierre-Luc Bacon, Jian Tang, Mladen Nikolić

We find that prior approaches either assume that the environment is provided in such a tabular form -- which is highly restrictive -- or infer "local neighbourhoods" of states to run value iteration over -- for which we discover an algorithmic bottleneck effect.

Self-Supervised Learning

Simple GNN Regularisation for 3D Molecular Property Prediction and Beyond

no code implementations ICLR 2022 Jonathan Godwin, Michael Schaarschmidt, Alexander L Gaunt, Alvaro Sanchez-Gonzalez, Yulia Rubanova, Petar Veličković, James Kirkpatrick, Peter Battaglia

We introduce “Noisy Nodes”, a very simple technique for improved training of GNNs, in which we corrupt the input graph with noise, and add a noise correcting node-level loss.

Molecular Property Prediction
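A minimal sketch of the Noisy Nodes idea quoted above: perturb the input node features, then train with the main graph-level objective plus an auxiliary node-level loss that reconstructs the clean inputs. PyTorch and the toy mean-pooling encoder are assumptions for illustration; the paper applies the technique to much larger GNNs.

```python
import torch
import torch.nn as nn

class TinyGNN(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.graph_head = nn.Linear(dim, 1)   # predicts the graph-level property
        self.node_head = nn.Linear(dim, dim)  # predicts the clean node features (denoising)

    def forward(self, x):
        h = self.encoder(x)                   # per-node embeddings [N, dim]
        return self.graph_head(h.mean(0)), self.node_head(h)

def noisy_nodes_loss(model, x_clean, y_graph, sigma=0.1):
    noise = sigma * torch.randn_like(x_clean)
    y_pred, x_denoised = model(x_clean + noise)        # run on the corrupted graph
    main = nn.functional.mse_loss(y_pred.squeeze(), y_graph)
    aux = nn.functional.mse_loss(x_denoised, x_clean)  # noise-correcting node-level loss
    return main + aux

# Toy usage: one "molecule" with 5 nodes and 16-dimensional features
model = TinyGNN(16)
loss = noisy_nodes_loss(model, torch.randn(5, 16), torch.tensor(0.7))
loss.backward()
```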

Neural Distance Embeddings for Biological Sequences

1 code implementation NeurIPS 2021 Gabriele Corso, Rex Ying, Michal Pándy, Petar Veličković, Jure Leskovec, Pietro Liò

The development of data-dependent heuristics and representations for biological sequences that reflect their evolutionary distance is critical for large-scale biological research.

Multiple Sequence Alignment

Relating Graph Neural Networks to Structural Causal Models

no code implementations 9 Sep 2021 Matej Zečević, Devendra Singh Dhami, Petar Veličković, Kristian Kersting

Causality can be described in terms of a structural causal model (SCM) that carries information on the variables of interest and their mechanistic relations.

Causal Inference
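As a concrete illustration of "variables of interest and their mechanistic relations": a structural causal model assigns each variable a structural equation of its parents plus exogenous noise. The particular toy equations below are an invented example for illustration, not anything from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_toy_scm(n=1000):
    # Exogenous noise variables
    u_x, u_z, u_y = rng.normal(size=(3, n))
    # Structural equations: each variable is a function of its parents plus noise
    x = u_x                      # X := U_X
    z = 2.0 * x + u_z            # Z := f_Z(X, U_Z)
    y = 0.5 * x - z + u_y        # Y := f_Y(X, Z, U_Y)
    return x, z, y

x, z, y = sample_toy_scm()
```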

ETA Prediction with Graph Neural Networks in Google Maps

no code implementations 25 Aug 2021 Austin Derrow-Pinion, Jennifer She, David Wong, Oliver Lange, Todd Hester, Luis Perez, Marc Nunkesser, Seongjae Lee, Xueying Guo, Brett Wiltshire, Peter W. Battaglia, Vishal Gupta, Ang Li, Zhongwen Xu, Alvaro Sanchez-Gonzalez, Yujia Li, Petar Veličković

Travel-time prediction constitutes a task of high importance in transportation networks, with web mapping services like Google Maps regularly serving vast quantities of travel time queries from users and enterprises alike.

Graph Representation Learning

Algorithmic Concept-based Explainable Reasoning

no code implementations 15 Jul 2021 Dobrik Georgiev, Pietro Barbiero, Dmitry Kazhdan, Petar Veličković, Pietro Liò

Recent research on graph neural network (GNN) models successfully applied GNNs to classical graph algorithms and combinatorial optimisation problems.

Simple GNN Regularisation for 3D Molecular Property Prediction & Beyond

1 code implementation 15 Jun 2021 Jonathan Godwin, Michael Schaarschmidt, Alexander Gaunt, Alvaro Sanchez-Gonzalez, Yulia Rubanova, Petar Veličković, James Kirkpatrick, Peter Battaglia

From this observation we derive "Noisy Nodes", a simple technique in which we corrupt the input graph with noise, and add a noise correcting node-level loss.

Denoising Graph Property Prediction +2

Neural message passing for joint paratope-epitope prediction

no code implementations 31 May 2021 Alice Del Vecchio, Andreea Deac, Pietro Liò, Petar Veličković

Antibodies are proteins in the immune system which bind to antigens to detect and neutralise them.

Neural Algorithmic Reasoning

no code implementations 6 May 2021 Petar Veličković, Charles Blundell

Algorithms have been fundamental to recent global technological advances and, in particular, they have been the cornerstone of technical advances in one field rapidly being applied to another.

Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges

5 code implementations 27 Apr 2021 Michael M. Bronstein, Joan Bruna, Taco Cohen, Petar Veličković

The last decade has witnessed an experimental revolution in data science and machine learning, epitomised by deep learning methods.

Protein Folding

Persistent Message Passing

no code implementations ICLR Workshop GTRL 2021 Heiko Strathmann, Mohammadamin Barekatain, Charles Blundell, Petar Veličković

Graph neural networks (GNNs) are a powerful inductive bias for modelling algorithmic reasoning procedures and data structures.

Large-Scale Representation Learning on Graphs via Bootstrapping

3 code implementations ICLR 2022 Shantanu Thakoor, Corentin Tallec, Mohammad Gheshlaghi Azar, Mehdi Azabou, Eva L. Dyer, Rémi Munos, Petar Veličković, Michal Valko

To address these challenges, we introduce Bootstrapped Graph Latents (BGRL) - a graph representation learning method that learns by predicting alternative augmentations of the input.

Contrastive Learning Graph Representation Learning +1
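The bootstrapping described above can be sketched roughly as follows: an online encoder plus predictor is trained to match a slowly updated target encoder applied to a different augmentation of the same input, with no negative samples. PyTorch, the MLP "encoders", and feature masking as the augmentation are simplifying assumptions standing in for the paper's GNN encoders and graph augmentations.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def augment(x, drop_prob=0.2):
    # Random feature masking as a stand-in graph augmentation
    return x * (torch.rand_like(x) > drop_prob).float()

encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64))
predictor = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 64))
target = copy.deepcopy(encoder)
for p in target.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(list(encoder.parameters()) + list(predictor.parameters()), lr=1e-3)
x = torch.randn(100, 32)  # toy node features

for step in range(10):
    v1, v2 = augment(x), augment(x)               # two augmented views
    online = predictor(encoder(v1))               # online branch predicts ...
    with torch.no_grad():
        tgt = target(v2)                          # ... the target branch's embeddings
    loss = -F.cosine_similarity(online, tgt, dim=-1).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    # Exponential moving average update of the target encoder
    with torch.no_grad():
        for pt, po in zip(target.parameters(), encoder.parameters()):
            pt.mul_(0.99).add_(0.01 * po)
```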

Predicting Patient Outcomes with Graph Representation Learning

1 code implementation 11 Jan 2021 Emma Rocheteau, Catherine Tong, Petar Veličković, Nicholas Lane, Pietro Liò

Recent work on predicting patient outcomes in the Intensive Care Unit (ICU) has focused heavily on the physiological time series data, largely ignoring sparse data such as diagnoses and medications.

Graph Representation Learning Length-of-Stay prediction +2

A step towards neural genome assembly

1 code implementation NeurIPS Workshop LMCA 2020 Lovro Vrček, Petar Veličković, Mile Šikić

De novo genome assembly focuses on finding connections between a vast number of short sequences in order to reconstruct the original genome.

Graph Representation Learning

XLVIN: eXecuted Latent Value Iteration Nets

no code implementations 25 Oct 2020 Andreea Deac, Petar Veličković, Ognjen Milinković, Pierre-Luc Bacon, Jian Tang, Mladen Nikolić

Value Iteration Networks (VINs) have emerged as a popular method to incorporate planning algorithms within deep reinforcement learning, enabling performance improvements on tasks requiring long-range reasoning and understanding of environment dynamics.

Graph Representation Learning reinforcement-learning +1

Hierarchical Protein Function Prediction with Tails-GNNs

no code implementations 24 Jul 2020 Stefan Spalević, Petar Veličković, Jovana Kovačević, Mladen Nikolić

Protein function prediction may be framed as predicting subgraphs (with certain closure properties) of a directed acyclic graph describing the hierarchy of protein functions.

Protein Function Prediction
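One reading of the "closure properties" mentioned above (a common convention in hierarchical function prediction, assumed here rather than taken from the paper): whenever a function term is predicted, every ancestor term in the DAG should be predicted as well. The toy hierarchy and max-propagation rule below are purely illustrative.

```python
parents = {            # child -> parents in a toy function hierarchy
    "kinase_activity": ["catalytic_activity"],
    "catalytic_activity": ["molecular_function"],
    "molecular_function": [],
}

def close_upwards(scores):
    # Propagate each predicted score to its ancestors (max rule preserves closure)
    closed = dict(scores)
    for term in scores:
        stack = list(parents[term])
        while stack:
            p = stack.pop()
            closed[p] = max(closed.get(p, 0.0), scores[term])
            stack.extend(parents[p])
    return closed

print(close_upwards({"kinase_activity": 0.9, "catalytic_activity": 0.2, "molecular_function": 0.1}))
```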

Pointer Graph Networks

no code implementations NeurIPS 2020 Petar Veličković, Lars Buesing, Matthew C. Overlan, Razvan Pascanu, Oriol Vinyals, Charles Blundell

This static input structure is often informed purely by insight of the machine learning practitioner, and might not be optimal for the actual task the GNN is solving.

The PlayStation Reinforcement Learning Environment (PSXLE)

1 code implementation 12 Dec 2019 Carlos Purves, Cătălina Cangea, Petar Veličković

We propose a new benchmark environment for evaluating Reinforcement Learning (RL) algorithms: the PlayStation Learning Environment (PSXLE), a PlayStation emulator modified to expose a simple control API that enables rich game-state representations.

OpenAI Gym reinforcement-learning

Neural Execution of Graph Algorithms

no code implementations ICLR 2020 Petar Veličković, Rex Ying, Matilde Padovano, Raia Hadsell, Charles Blundell

Graph Neural Networks (GNNs) are a powerful representational tool for solving problems on graph-structured inputs.

Drug-Drug Adverse Effect Prediction with Graph Co-Attention

1 code implementation 2 May 2019 Andreea Deac, Yu-Hsiang Huang, Petar Veličković, Pietro Liò, Jian Tang

Complex or co-existing diseases are commonly treated using drug combinations, which can lead to higher risk of adverse side effects.

Spatio-Temporal Deep Graph Infomax

no code implementations 12 Apr 2019 Felix L. Opolka, Aaron Solomon, Cătălina Cangea, Petar Veličković, Pietro Liò, R. Devon Hjelm

Spatio-temporal graphs such as traffic networks or gene regulatory systems present challenges for the existing deep learning methods due to the complexity of structural changes over time.

Representation Learning Traffic Prediction

ChronoMID - Cross-Modal Neural Networks for 3-D Temporal Medical Imaging Data

no code implementations 12 Jan 2019 Alexander G. Rakowski, Petar Veličković, Enrico Dall'Ara, Pietro Liò

ChronoMID builds on the success of cross-modal convolutional neural networks (X-CNNs), making the novel application of the technique to medical imaging data.

General Classification

Towards Sparse Hierarchical Graph Classifiers

1 code implementation 3 Nov 2018 Cătălina Cangea, Petar Veličković, Nikola Jovanović, Thomas Kipf, Pietro Liò

Recent advances in representation learning on graphs, mainly leveraging graph convolutional networks, have brought a substantial improvement on many graph-based benchmark tasks.

Classification General Classification +4

Deep Graph Infomax

11 code implementations ICLR 2019 Petar Veličković, William Fedus, William L. Hamilton, Pietro Liò, Yoshua Bengio, R. Devon Hjelm

We present Deep Graph Infomax (DGI), a general approach for learning node representations within graph-structured data in an unsupervised manner.

General Classification Node Classification

Attentive cross-modal paratope prediction

no code implementations 12 Jun 2018 Andreea Deac, Petar Veličković, Pietro Sormanni

Antibodies are a critical part of the immune system, having the function of directly neutralising or tagging undesirable objects (the antigens) for future destruction.

Automatic Inference of Cross-modal Connection Topologies for X-CNNs

1 code implementation 2 May 2018 Laurynas Karazija, Petar Veličković, Pietro Liò

The base approach learns the topology in a data-driven manner, by using measurements performed on the base CNN and supplied data.

Quantifying the Effects of Enforcing Disentanglement on Variational Autoencoders

1 code implementation 24 Nov 2017 Momchil Peychev, Petar Veličković, Pietro Liò

In this paper we quantify the effects of the parameter $\beta$ on the model performance and disentanglement.

Disentanglement
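For context on the parameter $\beta$ above: in the standard beta-VAE formulation it weights the KL term of the variational objective. The sketch below is a generic beta-weighted VAE loss in PyTorch (an assumption, not taken from the paper's code), with a Gaussian encoder and a standard-normal prior.

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    # Reconstruction term plus beta-weighted KL(q(z|x) || N(0, I))
    recon = F.mse_loss(x_recon, x, reduction='sum')
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return recon + beta * kl

# Toy usage with random tensors standing in for encoder/decoder outputs
x = torch.rand(8, 784)
loss = beta_vae_loss(x, torch.rand(8, 784), torch.zeros(8, 10), torch.zeros(8, 10), beta=4.0)
```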

Graph Attention Networks

73 code implementations ICLR 2018 Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, Yoshua Bengio

We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations.

Document Classification Graph Attention +8
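A minimal single-head version of the masked self-attention described above, written as a dense PyTorch layer for clarity. This is an illustrative sketch only; the released implementations linked from this entry use multi-head attention and sparse neighbourhood operations.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGATLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)   # shared linear transform
        self.a = nn.Linear(2 * out_dim, 1, bias=False)    # attention scoring vector

    def forward(self, x, adj):
        # x: [N, in_dim] node features, adj: [N, N] binary adjacency (with self-loops)
        h = self.W(x)                                      # [N, out_dim]
        N = h.size(0)
        # Pairwise concatenation [h_i || h_j] for every (i, j)
        h_i = h.unsqueeze(1).expand(N, N, -1)
        h_j = h.unsqueeze(0).expand(N, N, -1)
        e = F.leaky_relu(self.a(torch.cat([h_i, h_j], dim=-1)).squeeze(-1), 0.2)
        # Masked softmax: attend only over each node's neighbourhood
        e = e.masked_fill(adj == 0, float('-inf'))
        alpha = torch.softmax(e, dim=-1)                   # [N, N] attention coefficients
        return F.elu(alpha @ h)                            # aggregate neighbours

# Toy usage: 4 nodes, 3 features, a small ring graph with self-loops
x = torch.randn(4, 3)
adj = torch.eye(4) + torch.roll(torch.eye(4), 1, dims=1) + torch.roll(torch.eye(4), -1, dims=1)
out = SimpleGATLayer(3, 8)(x, (adj > 0).float())
```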

XFlow: Cross-modal Deep Neural Networks for Audiovisual Classification

1 code implementation 2 Sep 2017 Cătălina Cangea, Petar Veličković, Pietro Liò

Our work improves on existing multimodal deep learning algorithms in two essential ways: (1) it presents a novel method for performing cross-modality (before features are learned from individual modalities) and (2) extends the previously proposed cross-connections which only transfer information between streams that process compatible data.

Classification General Classification +2

X-CNN: Cross-modal Convolutional Neural Networks for Sparse Datasets

no code implementations 1 Oct 2016 Petar Veličković, Duo Wang, Nicholas D. Lane, Pietro Liò

In this paper we propose cross-modal convolutional neural networks (X-CNNs), a novel biologically inspired type of CNN architectures, treating gradient descent-specialised CNNs as individual units of processing in a larger-scale network topology, while allowing for unconstrained information flow and/or weight sharing between analogous hidden layers of the network---thus generalising the already well-established concept of neural network ensembles (where information typically may flow only between the output layers of the individual networks).

Data Augmentation
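A rough sketch of the cross-connection idea described above: two small convolutional streams process different modalities and exchange intermediate feature maps at analogous hidden layers. PyTorch, the layer sizes, and the specific channel split are illustrative assumptions, not the X-CNN architecture from the paper.

```python
import torch
import torch.nn as nn

class TwoStreamXCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv_a1 = nn.Conv2d(1, 8, 3, padding=1)   # stream A, e.g. luminance
        self.conv_b1 = nn.Conv2d(2, 8, 3, padding=1)   # stream B, e.g. chrominance
        self.cross_ab = nn.Conv2d(8, 8, 1)             # cross-connection A -> B
        self.cross_ba = nn.Conv2d(8, 8, 1)             # cross-connection B -> A
        self.conv_a2 = nn.Conv2d(16, 8, 3, padding=1)
        self.conv_b2 = nn.Conv2d(16, 8, 3, padding=1)
        self.classifier = nn.Linear(16, 10)

    def forward(self, xa, xb):
        ha, hb = torch.relu(self.conv_a1(xa)), torch.relu(self.conv_b1(xb))
        # Exchange information between the analogous hidden layers
        ha2 = torch.relu(self.conv_a2(torch.cat([ha, self.cross_ba(hb)], dim=1)))
        hb2 = torch.relu(self.conv_b2(torch.cat([hb, self.cross_ab(ha)], dim=1)))
        pooled = torch.cat([ha2.mean(dim=(2, 3)), hb2.mean(dim=(2, 3))], dim=1)
        return self.classifier(pooled)

# Toy usage: batch of 4 images split into two modality streams
logits = TwoStreamXCNN()(torch.randn(4, 1, 32, 32), torch.randn(4, 2, 32, 32))
```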
