no code implementations • 5 Feb 2024 • Arthur da Cunha, Kasper Green Larsen, Martin Ritzert
At the center of this paradigm lies the concept of building the strong learner as a voting classifier, which outputs a weighted majority vote of the weak learners.
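The voting-classifier form described above can be sketched in a few lines. This is an illustration only, not the paper's code: the weak hypotheses and weights below are made-up placeholders.

```python
# Sketch of a voting classifier: the strong learner outputs the weighted
# majority vote sign(sum_t alpha_t * h_t(x)) over weak hypotheses h_t.
# The stumps and weights below are illustrative placeholders.

def weighted_majority_vote(weak_learners, weights, x):
    score = sum(alpha * h(x) for h, alpha in zip(weak_learners, weights))
    return 1 if score >= 0 else -1

# Three toy weak hypotheses on the real line, each predicting in {-1, +1}.
hs = [lambda x: 1 if x > 0 else -1,
      lambda x: 1 if x > 2 else -1,
      lambda x: 1 if x < 5 else -1]
alphas = [0.5, 0.3, 0.4]

print(weighted_majority_vote(hs, alphas, 1.0))   # votes +1, -1, +1 -> +1
print(weighted_majority_vote(hs, alphas, -1.0))  # votes -1, -1, +1 -> -1
```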
1 code implementation • 1 Sep 2023 • Jan Tönshoff, Martin Ritzert, Eran Rosenbluth, Martin Grohe
The recent Long-Range Graph Benchmark (LRGB, Dwivedi et al. 2022) introduced a set of graph learning tasks strongly dependent on long-range interaction between vertices.
Ranked #1 on Link Prediction on PCQM-Contact (MRR-ext-filtered metric)
no code implementations • 27 Jan 2023 • Mikael Møller Høgsgaard, Kasper Green Larsen, Martin Ritzert
AdaBoost is a classic boosting algorithm that combines multiple inaccurate classifiers produced by a weak learner into a strong learner with arbitrarily high accuracy, given enough training data.
no code implementations • 3 Jun 2022 • Kasper Green Larsen, Martin Ritzert
The classic AdaBoost algorithm converts a weak learner, i.e., an algorithm that produces hypotheses only slightly better than random guessing, into a strong learner that achieves arbitrarily high accuracy given enough training data.
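The weak-to-strong conversion can be illustrated with a minimal AdaBoost sketch. This is not the paper's code: weak learners are 1-D decision stumps, and the dataset and round count are made-up toy values.

```python
import math

# Minimal AdaBoost sketch: weak learners are 1-D decision stumps; the strong
# learner is their weighted majority vote. Illustrative values only.

def train_stump(X, y, w):
    """Pick the threshold/sign stump with the smallest weighted error."""
    best = None
    for thresh in sorted(set(X)):
        for sign in (1, -1):
            preds = [sign if x > thresh else -sign for x in X]
            err = sum(wi for wi, p, yi in zip(w, preds, y) if p != yi)
            if best is None or err < best[0]:
                best = (err, thresh, sign)
    err, thresh, sign = best
    return (lambda x, t=thresh, s=sign: s if x > t else -s), err

def adaboost(X, y, rounds):
    n = len(X)
    w = [1.0 / n] * n                      # start with uniform weights
    hyps, alphas = [], []
    for _ in range(rounds):
        h, err = train_stump(X, y, w)
        if err >= 0.5:                     # weak learning assumption violated
            break
        alpha = 0.5 * math.log((1 - err) / max(err, 1e-12))
        hyps.append(h)
        alphas.append(alpha)
        # Reweight: misclassified points gain weight, then renormalise.
        w = [wi * math.exp(-alpha * yi * h(xi)) for wi, xi, yi in zip(w, X, y)]
        z = sum(w)
        w = [wi / z for wi in w]
    return lambda x: 1 if sum(a * h(x) for a, h in zip(alphas, hyps)) >= 0 else -1

# A toy labelling that no single stump can fit; three rounds of boosting do.
X = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
y = [-1, -1, 1, 1, -1, -1]
strong = adaboost(X, y, rounds=3)
print([strong(x) for x in X])  # -> [-1, -1, 1, 1, -1, -1]
```

After three rounds the weighted vote of the three chosen stumps classifies all six points correctly, even though each individual stump misclassifies at least two of them.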
no code implementations • 1 Jun 2022 • Jan G. Rittig, Martin Ritzert, Artur M. Schweidtmann, Stefanie Winkler, Jana M. Weber, Philipp Morsch, K. Alexander Heufer, Martin Grohe, Alexander Mitsos, Manuel Dahmen
We propose a modular graph-ML CAMD framework that integrates generative graph-ML models with graph neural networks and optimization, enabling the design of molecules with desired ignition properties in a continuous molecular space.
no code implementations • 24 Feb 2021 • Steffen van Bergerem, Martin Grohe, Martin Ritzert
We analyse the complexity of learning first-order queries in a model-theoretic framework for supervised learning introduced by Grohe and Turán (TOCS 2004).
Logic in Computer Science
1 code implementation • 17 Feb 2021 • Jan Tönshoff, Martin Ritzert, Hinrikus Wolf, Martin Grohe
As the theoretical basis for our approach, we prove a theorem stating that the expressiveness of CRaWl is incomparable with that of the Weisfeiler-Leman algorithm, and hence with that of graph neural networks.
Ranked #1 on Graph Classification on REDDIT-B
2 code implementations • 20 May 2020 • Tobias Schumacher, Hinrikus Wolf, Martin Ritzert, Florian Lemmerich, Jan Bachmann, Florian Frantzen, Max Klabunde, Martin Grohe, Markus Strohmaier
We systematically evaluate the (in-)stability of state-of-the-art node embedding algorithms due to randomness, i.e., the random variation of their outcomes given identical algorithms and graphs.
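One common way to quantify such run-to-run variation is to compare the k-nearest-neighbour structure of embeddings produced by two runs on the same graph. The sketch below is a hedged stand-in: the "embedding" is hypothetical seeded random coordinates, not an actual node embedding algorithm such as node2vec or DeepWalk, and the overlap measure is one illustrative choice among several used in the literature.

```python
import random

# Hedged sketch: measure run-to-run stability of a randomised node embedding
# via mean k-nearest-neighbour overlap between two runs. The embedding below
# is a hypothetical placeholder (seeded random 2-D coordinates).

def random_embedding(nodes, seed):
    rng = random.Random(seed)
    return {v: (rng.random(), rng.random()) for v in nodes}

def knn(emb, v, k):
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    others = [u for u in emb if u != v]
    return set(sorted(others, key=lambda u: dist(emb[u], emb[v]))[:k])

def knn_overlap(emb1, emb2, k=3):
    nodes = list(emb1)
    overlaps = [len(knn(emb1, v, k) & knn(emb2, v, k)) / k for v in nodes]
    return sum(overlaps) / len(overlaps)

nodes = range(20)
e1 = random_embedding(nodes, seed=1)
e2 = random_embedding(nodes, seed=2)
print(f"mean 3-NN overlap across runs: {knn_overlap(e1, e2):.2f}")
```

An overlap near 1.0 indicates a stable method (identical runs score exactly 1.0); values near 0 indicate that the neighbourhood structure of the embedding is dominated by randomness.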
no code implementations • 24 Sep 2019 • Emilie Grienenberger, Martin Ritzert
We study the problem of learning properties of nodes in tree structures.
1 code implementation • 18 Sep 2019 • Jan Toenshoff, Martin Ritzert, Hinrikus Wolf, Martin Grohe
Many combinatorial optimization problems can be phrased in the language of constraint satisfaction problems.
1 code implementation • 4 Oct 2018 • Christopher Morris, Martin Ritzert, Matthias Fey, William L. Hamilton, Jan Eric Lenssen, Gaurav Rattan, Martin Grohe
We show that GNNs have the same expressiveness as the $1$-WL in terms of distinguishing non-isomorphic (sub-)graphs.
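The $1$-WL test referenced above is colour refinement: each round, a node's colour is updated from its own colour together with the multiset of its neighbours' colours. A minimal sketch (an illustration, not the paper's code) also shows the classic pair of non-isomorphic graphs that $1$-WL, and hence any message-passing GNN, cannot distinguish.

```python
# Minimal sketch of 1-dimensional Weisfeiler-Leman (colour refinement).
# Each round, a node's colour becomes the pair (own colour, sorted multiset
# of neighbour colours); the sorted final colours form the graph's signature.

def wl_signature(adj, rounds=3):
    colors = {v: 0 for v in adj}
    for _ in range(rounds):
        colors = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                  for v in adj}
    return sorted(colors.values())

# Classic failure case: a 6-cycle vs. two disjoint triangles. Both graphs are
# 2-regular, so 1-WL assigns every node the same colour in every round.
cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
                 3: [4, 5], 4: [3, 5], 5: [3, 4]}
print(wl_signature(cycle6) == wl_signature(two_triangles))  # True

# A pair 1-WL does distinguish: a 3-path vs. a triangle (degrees differ).
path3 = {0: [1], 1: [0, 2], 2: [1]}
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(wl_signature(path3) == wl_signature(triangle))  # False
```

Using the full colour history as the colour itself keeps signatures canonical, so they can be compared across graphs without maintaining a shared relabelling palette.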
Ranked #4 on Graph Classification on NCI1
no code implementations • 27 Aug 2017 • Martin Grohe, Christof Löding, Martin Ritzert
We study classification problems over string data for hypotheses specified by formulas of monadic second-order logic (MSO).
no code implementations • 19 Jan 2017 • Martin Grohe, Martin Ritzert
We consider a declarative framework for machine learning where concepts and hypotheses are defined by formulas of a logic over some background structure.