
no code implementations • 26 May 2022 • Peter Müller, Lukas Faber, Karolis Martinkus, Roger Wattenhofer

We propose the fully explainable Decision Tree Graph Neural Network (DT+GNN) architecture.

no code implementations • 24 May 2022 • Lukas Faber, Roger Wattenhofer

This paper studies asynchronous message passing (AMP), a new paradigm for applying neural-network-based learning to graphs.

1 code implementation • NeurIPS 2021 • Pál András Papp, Karolis Martinkus, Lukas Faber, Roger Wattenhofer

In DropGNNs, we execute multiple runs of a GNN on the input graph, with some of the nodes randomly and independently dropped in each of these runs.
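The run-and-drop idea above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it uses a single mean-neighbor aggregation step as a stand-in for a full GNN, drops each node independently with probability `p_drop`, and averages the outputs over runs. All function names and parameters here are illustrative assumptions.

```python
import numpy as np

def message_passing(adj, feats):
    """One round of mean-neighbor aggregation (a stand-in for a GNN layer)."""
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1  # avoid division by zero for isolated nodes
    return adj @ feats / deg

def drop_gnn(adj, feats, num_runs=10, p_drop=0.2, seed=0):
    """Sketch of the DropGNN idea: average outputs over runs with random node drops."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    outputs = []
    for _ in range(num_runs):
        keep = rng.random(n) >= p_drop      # each node kept independently this run
        mask = np.outer(keep, keep)         # remove all edges touching dropped nodes
        run_adj = adj * mask
        run_feats = feats * keep[:, None]   # zero out features of dropped nodes
        outputs.append(message_passing(run_adj, run_feats))
    return np.mean(outputs, axis=0)         # aggregate predictions across runs

# Toy example: 4-node cycle graph with scalar node features
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
feats = np.arange(4, dtype=float).reshape(-1, 1)
out = drop_gnn(adj, feats)
print(out.shape)  # (4, 1)
```

Averaging over runs with different dropped nodes lets the model observe many perturbed neighborhoods of the same graph, which is what gives DropGNNs their added distinguishing power.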

Ranked #8 on Graph Classification on IMDb-B

no code implementations • 11 Mar 2021 • Lukas Faber, Yifan Lu, Roger Wattenhofer

We find that for graph classification, a GNN is not more than the sum of its parts.

no code implementations • 25 Feb 2021 • Nikola Jovanović, Zhao Meng, Lukas Faber, Roger Wattenhofer

We study the problem of adversarially robust self-supervised learning on graphs.

1 code implementation • 26 Oct 2020 • Lukas Faber, Amin K. Moghaddam, Roger Wattenhofer

Graph Neural Networks achieve remarkable results on problems with structured data but come as black-box predictors.

no code implementations • NeurIPS Workshop LMCA 2020 • Jorel Elmiger, Lukas Faber, Pankaj Khanchandani, Oliver Paul Richter, Roger Wattenhofer

Since a graph on n nodes has quadratically many possible edges and every subset of edges is a candidate solution, the search space becomes infeasibly large even for graphs with few nodes.
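The combinatorial blow-up described above is easy to make concrete: an undirected graph on n nodes has n(n-1)/2 possible edges, so there are 2^(n(n-1)/2) edge subsets. A quick illustrative calculation (the function name is ours, not the paper's):

```python
def num_edge_subsets(n):
    """Number of candidate edge sets for an undirected graph on n nodes:
    n*(n-1)/2 possible edges, and every subset of them is a candidate."""
    possible_edges = n * (n - 1) // 2
    return 2 ** possible_edges

for n in (5, 10, 20):
    print(n, num_edge_subsets(n))
# Already at n=10 there are 2^45 (about 3.5e13) candidate solutions.
```

This is why exhaustive search over edge sets is hopeless and a learned search strategy is attractive.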

no code implementations • 25 Aug 2020 • Lukas Faber, Sandro Luck, Damian Pascual, Andreas Roth, Gino Brunner, Roger Wattenhofer

The automatic generation of medleys, i.e., musical pieces formed by concatenating different songs via smooth transitions, is not well studied in the current literature.

no code implementations • 15 Apr 2020 • Lukas Faber, Roger Wattenhofer

Standard Neural Networks can learn mathematical operations, but they do not extrapolate.

Papers With Code is a free resource with all data licensed under CC-BY-SA.