1 code implementation • 26 Jan 2022 • Nitesh Kumar, Ondrej Kuzelka, Luc De Raedt

Three types of independencies are important to represent and exploit for scalable inference in hybrid models: conditional independencies elegantly modeled in Bayesian networks, context-specific independencies naturally represented by logical rules, and independencies amongst attributes of related objects in relational models succinctly expressed by combining rules.
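Of the three, context-specific independence is perhaps the least familiar: a variable can be independent of another only within certain contexts, which a logical rule captures directly. A minimal hypothetical sketch (the variable names and probabilities are illustrative, not from the paper):

```python
# Context-specific independence (toy example): Y depends on X only
# in the context Z = 1, which the rule "if Z = 0 then P(Y=1) = 0.2
# regardless of X" expresses compactly.
def p_y_given(x, z):
    if z == 0:
        return 0.2                    # X is irrelevant in this context
    return 0.9 if x == 1 else 0.3     # X matters only when Z = 1

print(p_y_given(0, 0) == p_y_given(1, 0))  # True: independent given Z=0
print(p_y_given(0, 1) == p_y_given(1, 1))  # False: dependent given Z=1
```

A plain conditional probability table over (X, Z) could not express this saving; the rule collapses two table rows into one.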

1 code implementation • 6 Nov 2020 • Gustav Sourek, Filip Zelezny, Ondrej Kuzelka

We demonstrate a deep learning framework that is inherently based in the highly expressive language of relational logic, enabling it to capture, among other things, arbitrarily complex graph structures.

2 code implementations • ICLR 2021 • Gustav Sourek, Filip Zelezny, Ondrej Kuzelka

The computation graphs themselves then reflect the symmetries of the underlying data, similarly to the lifted graphical models.

2 code implementations • 13 Jul 2020 • Gustav Sourek, Filip Zelezny, Ondrej Kuzelka

We demonstrate a declarative differentiable programming framework based on the language of Lifted Relational Neural Networks, where small parameterized logic programs are used to encode relational learning scenarios.

no code implementations • 10 Jul 2020 • Ondrej Kuzelka

It is known, from the work of Van den Broeck et al. [KR, 2014], that weighted first-order model counting (WFOMC) in the two-variable fragment of first-order logic can be solved in time polynomial in the number of domain elements.
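The contrast with naive enumeration can be seen even in a degenerate toy case (hypothetical example, not the two-variable algorithm from the paper): for a single unary predicate with symmetric atom weights, the brute-force sum over all 2^n interpretations collapses to a closed form that is trivially polynomial in the domain size n.

```python
from itertools import product

def wfomc_brute_force(n, w, w_bar):
    """Sum of world weights over all 2^n interpretations of a single
    unary predicate S: weight w per true ground atom, w_bar per false one."""
    total = 0.0
    for world in product([True, False], repeat=n):
        weight = 1.0
        for atom in world:
            weight *= w if atom else w_bar
        total += weight
    return total

def wfomc_closed_form(n, w, w_bar):
    # Each domain element contributes independently, giving (w + w_bar)^n.
    return (w + w_bar) ** n

print(wfomc_brute_force(5, 2.0, 1.0))   # 243.0
print(wfomc_closed_form(5, 2.0, 1.0))   # 243.0
```

Lifted inference algorithms generalize this kind of collapse to far richer formulas, which is what makes domain-lifted counting possible.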

no code implementations • 4 Jun 2020 • Ondrej Kuzelka

In this paper we show that inference in 2-variable Markov logic networks (MLNs) with cardinality and function constraints is domain-liftable.

no code implementations • 24 Feb 2020 • Ondrej Kuzelka

We study expressivity of Markov logic networks (MLNs).

no code implementations • 15 Jan 2020 • Timothy van Bremen, Ondrej Kuzelka

We study the symmetric weighted first-order model counting task and present ApproxWFOMC, a novel anytime method for efficiently bounding the weighted first-order model count in the presence of an unweighted first-order model counting oracle.

no code implementations • 15 Jan 2020 • Ondrej Kuzelka, Yuyi Wang

We study computational aspects of relational marginal polytopes which are statistical relational learning counterparts of marginal polytopes, well-known from probabilistic graphical models.

no code implementations • 25 Sep 2019 • Ondrej Kuzelka, Yuyi Wang

We study theoretical properties of embedding methods for knowledge graph completion under the missing completely at random assumption.

no code implementations • 7 Mar 2019 • Ondrej Kuzelka, Vyacheslav Kungurtsev

We study lifted weight learning of Markov logic networks.

1 code implementation • AKBC 2019 • Arcchit Jain, Tal Friedman, Ondrej Kuzelka, Guy Van Den Broeck, Luc De Raedt

In this paper, we present SafeLearner -- a scalable solution to probabilistic KB completion that performs probabilistic rule learning using lifted probabilistic inference -- as a faster alternative to grounding.

no code implementations • 3 Jul 2018 • Víctor Gutiérrez-Basulto, Jean Christoph Jung, Ondrej Kuzelka

Markov Logic Networks (MLNs) are well-suited for expressing statistics such as "with high probability a smoker knows another smoker" but not for expressing statements such as "there is a smoker who knows most other smokers", which is necessary for modeling, e.g., influencers in social networks.

no code implementations • 17 Apr 2018 • Ondrej Kuzelka, Yuyi Wang, Steven Schockaert

In many applications of relational learning, the available data can be seen as a sample from a larger relational structure (e.g., we may be given a small fragment from some social network).

no code implementations • 15 Mar 2018 • Ondrej Kuzelka, Yuyi Wang, Jesse Davis, Steven Schockaert

We consider the problem of predicting plausible missing facts in relational data, given a set of imperfect logical rules.

no code implementations • 5 Oct 2017 • Gustav Sourek, Martin Svatos, Filip Zelezny, Steven Schockaert, Ondrej Kuzelka

Lifted Relational Neural Networks (LRNNs) describe relational domains using weighted first-order rules which act as templates for constructing feed-forward neural networks.

no code implementations • 18 Sep 2017 • Ondrej Kuzelka, Yuyi Wang, Jesse Davis, Steven Schockaert

In the propositional setting, the marginal problem is to find a (maximum-entropy) distribution that has some given marginals.
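For two binary variables with only single-variable marginal constraints, the maximum-entropy joint is the product of the marginals; any correlated joint with the same marginals has strictly lower entropy. A hypothetical numeric sketch (the marginal values are illustrative):

```python
import math

def entropy(p):
    """Shannon entropy (nats) of a probability vector."""
    return -sum(x * math.log(x) for x in p if x > 0)

a, b = 0.7, 0.4  # given marginals P(A=1), P(B=1)

# Cells ordered (1,1), (1,0), (0,1), (0,0).
product_joint = [a * b, a * (1 - b), (1 - a) * b, (1 - a) * (1 - b)]

# A correlated joint with the SAME marginals: shift mass eps between
# the (1,1)/(0,0) cells and the (1,0)/(0,1) cells.
eps = 0.05
correlated = [a * b + eps, a * (1 - b) - eps,
              (1 - a) * b - eps, (1 - a) * (1 - b) + eps]

print(entropy(product_joint) > entropy(correlated))  # True
```

The relational version of the problem studied in the paper asks the analogous question when the "marginals" are statistics of a relational structure rather than single-variable probabilities.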

no code implementations • 19 May 2017 • Ondrej Kuzelka, Jesse Davis, Steven Schockaert

Compared to Markov Logic Networks (MLNs), our method is faster and produces considerably more interpretable models.

no code implementations • 18 Nov 2016 • Ondrej Kuzelka, Jesse Davis, Steven Schockaert

In this paper, we advocate the use of stratified logical theories for representing probabilistic models.

no code implementations • 18 Apr 2016 • Ondrej Kuzelka, Jesse Davis, Steven Schockaert

We introduce a setting for learning possibilistic logic theories from defaults of the form "if alpha then typically beta".

1 code implementation • 20 Aug 2015 • Gustav Sourek, Vojtech Aschenbrenner, Filip Zelezny, Ondrej Kuzelka

We propose a method combining relational-logic representations with neural network learning.

1 code implementation • 3 Jun 2015 • Ondrej Kuzelka, Jesse Davis, Steven Schockaert

Markov logic uses weighted formulas to compactly encode a probability distribution over possible worlds.
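The semantics can be checked by brute force on a tiny domain (hypothetical toy example, not code from the paper): a world's weight is the exponential of the formula weight times its number of satisfied groundings, normalized by the partition function.

```python
import math
from itertools import product

# Tiny MLN: domain {a, b}, one weighted formula "Smokes(x)" with weight w.
domain = ["a", "b"]
w = 1.5

# Each world fixes the truth value of Smokes(a) and Smokes(b).
worlds = list(product([True, False], repeat=len(domain)))

def world_weight(world):
    n_satisfied = sum(world)  # groundings of Smokes(x) that hold
    return math.exp(w * n_satisfied)

Z = sum(world_weight(wld) for wld in worlds)  # partition function
p_smokes_a = sum(world_weight(wld) for wld in worlds if wld[0]) / Z

# With a single unary formula the groundings decouple, so the marginal
# equals the logistic function of the weight.
print(abs(p_smokes_a - math.exp(w) / (1 + math.exp(w))) < 1e-12)  # True
```

Enumerating worlds is exponential in the number of ground atoms, which is precisely what motivates the lifted and approximate inference methods in the papers above.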
