Search Results for author: Ondrej Kuzelka

Found 22 papers, 7 papers with code

First-Order Context-Specific Likelihood Weighting in Hybrid Probabilistic Logic Programs

1 code implementation · 26 Jan 2022 · Nitesh Kumar, Ondrej Kuzelka, Luc De Raedt

Three types of independencies are important to represent and exploit for scalable inference in hybrid models: conditional independencies elegantly modeled in Bayesian networks, context-specific independencies naturally represented by logical rules, and independencies amongst attributes of related objects in relational models succinctly expressed by combining rules.

Learning with Molecules beyond Graph Neural Networks

1 code implementation · 6 Nov 2020 · Gustav Sourek, Filip Zelezny, Ondrej Kuzelka

We demonstrate a deep learning framework inherently based in the highly expressive language of relational logic, enabling it, among other things, to capture arbitrarily complex graph structures.

Lossless Compression of Structured Convolutional Models via Lifting

2 code implementations · ICLR 2021 · Gustav Sourek, Filip Zelezny, Ondrej Kuzelka

The computation graphs themselves then reflect the symmetries of the underlying data, similarly to the lifted graphical models.

Knowledge Base Completion

Beyond Graph Neural Networks with Lifted Relational Neural Networks

2 code implementations · 13 Jul 2020 · Gustav Sourek, Filip Zelezny, Ondrej Kuzelka

We demonstrate a declarative differentiable programming framework based on the language of Lifted Relational Neural Networks, where small parameterized logic programs are used to encode relational learning scenarios.

Relational Reasoning

Weighted First-Order Model Counting in the Two-Variable Fragment With Counting Quantifiers

no code implementations · 10 Jul 2020 · Ondrej Kuzelka

It is known, due to the work of Van den Broeck et al. [KR, 2014], that weighted first-order model counting (WFOMC) in the two-variable fragment of first-order logic can be solved in time polynomial in the number of domain elements.
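To make the task concrete, the following is a brute-force sketch of the WFOMC task itself (not the polynomial-time lifted algorithm the paper studies): over a tiny two-element domain, sum the weights of all interpretations that satisfy a fixed FO2 sentence. The sentence, predicates, and weights below are made up for illustration.

```python
from itertools import product

# Hypothetical toy instance: WFOMC by exhaustive enumeration.
# Sentence: forall x, y: Friends(x, y) & Smokes(x) -> Smokes(y)
domain = [0, 1]
atoms = ([("S", (x,)) for x in domain] +                     # Smokes(x)
         [("F", (x, y)) for x in domain for y in domain])    # Friends(x, y)

# per-predicate weights: (weight if atom is true, weight if atom is false)
w = {"S": (2.0, 1.0), "F": (1.0, 1.0)}

def wfomc():
    total = 0.0
    for bits in product([False, True], repeat=len(atoms)):
        interp = dict(zip(atoms, bits))
        # check the universally quantified implication for every pair (x, y)
        ok = all(not (interp[("F", (x, y))] and interp[("S", (x,))])
                 or interp[("S", (y,))]
                 for x in domain for y in domain)
        if ok:
            weight = 1.0
            for atom, truth in interp.items():
                weight *= w[atom[0]][0] if truth else w[atom[0]][1]
            total += weight
    return total

print(wfomc())  # → 112.0
```

The enumeration is exponential in the number of ground atoms; the point of domain-liftability results such as this paper's is precisely that the same quantity can be computed in time polynomial in the domain size.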

Lifted Inference in 2-Variable Markov Logic Networks with Function and Cardinality Constraints Using Discrete Fourier Transform

no code implementations · 4 Jun 2020 · Ondrej Kuzelka

In this paper we show that inference in 2-variable Markov logic networks (MLNs) with cardinality and function constraints is domain-liftable.

Complex Markov Logic Networks: Expressivity and Liftability

no code implementations · 24 Feb 2020 · Ondrej Kuzelka

We study the expressivity of Markov logic networks (MLNs).

Approximate Weighted First-Order Model Counting: Exploiting Fast Approximate Model Counters and Symmetry

no code implementations · 15 Jan 2020 · Timothy van Bremen, Ondrej Kuzelka

We study the symmetric weighted first-order model counting task and present ApproxWFOMC, a novel anytime method for efficiently bounding the weighted first-order model count in the presence of an unweighted first-order model counting oracle.

Domain-Liftability of Relational Marginal Polytopes

no code implementations · 15 Jan 2020 · Ondrej Kuzelka, Yuyi Wang

We study computational aspects of relational marginal polytopes, the statistical relational learning counterparts of the marginal polytopes well known from probabilistic graphical models.

Relational Reasoning

Knowledge Graph Embedding: A Probabilistic Perspective and Generalization Bounds

no code implementations · 25 Sep 2019 · Ondrej Kuzelka, Yuyi Wang

We study theoretical properties of embedding methods for knowledge graph completion under the missing completely at random assumption.

Generalization Bounds · Knowledge Graph Completion +2

Scalable Rule Learning in Probabilistic Knowledge Bases

1 code implementation · AKBC 2019 · Arcchit Jain, Tal Friedman, Ondrej Kuzelka, Guy Van Den Broeck, Luc De Raedt

In this paper, we present SafeLearner -- a scalable solution to probabilistic KB completion that performs probabilistic rule learning using lifted probabilistic inference as a faster alternative to grounding.

Quantified Markov Logic Networks

no code implementations · 3 Jul 2018 · Víctor Gutiérrez-Basulto, Jean Christoph Jung, Ondrej Kuzelka

Markov Logic Networks (MLNs) are well-suited for expressing statistics such as "with high probability a smoker knows another smoker" but not for expressing statements such as "there is a smoker who knows most other smokers", which is necessary for modeling, e.g., influencers in social networks.

VC-Dimension Based Generalization Bounds for Relational Learning

no code implementations · 17 Apr 2018 · Ondrej Kuzelka, Yuyi Wang, Steven Schockaert

In many applications of relational learning, the available data can be seen as a sample from a larger relational structure (e.g., we may be given a small fragment from some social network).

Generalization Bounds · Relational Reasoning

PAC-Reasoning in Relational Domains

no code implementations · 15 Mar 2018 · Ondrej Kuzelka, Yuyi Wang, Jesse Davis, Steven Schockaert

We consider the problem of predicting plausible missing facts in relational data, given a set of imperfect logical rules.

Stacked Structure Learning for Lifted Relational Neural Networks

no code implementations · 5 Oct 2017 · Gustav Sourek, Martin Svatos, Filip Zelezny, Steven Schockaert, Ondrej Kuzelka

Lifted Relational Neural Networks (LRNNs) describe relational domains using weighted first-order rules which act as templates for constructing feed-forward neural networks.
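As a rough illustration of the template idea, the toy sketch below grounds a single weighted rule over a handful of constants and evaluates the resulting small feed-forward computation. The rule, names, weight, and max-aggregation choice are all made up for the example, not taken from the LRNN papers.

```python
import math

# Illustrative toy grounding of one weighted rule template:
#   0.8 : smokes(X) :- friend(X, Y), smokes(Y)
# Each substitution for Y yields a ground "rule neuron"; an aggregation
# neuron then combines them into the activation of the head atom.
friend = {("alice", "bob"), ("alice", "carol")}   # hypothetical facts
smokes_evidence = {"bob": 1.0, "carol": 0.0}      # observed input neurons
w = 0.8                                           # learnable rule weight

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def smokes(person):
    # one ground rule neuron per substitution for Y
    body_activations = [
        sigmoid(w * smokes_evidence[y])
        for (x, y) in friend if x == person
    ]
    # max-aggregation over the groundings of the rule body
    return max(body_activations)

print(round(smokes("alice"), 3))  # → 0.69
```

A different domain (different `friend` facts) would ground the same template into a differently shaped network, which is the sense in which the weighted rules act as templates rather than fixed architectures.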

Relational Marginal Problems: Theory and Estimation

no code implementations · 18 Sep 2017 · Ondrej Kuzelka, Yuyi Wang, Jesse Davis, Steven Schockaert

In the propositional setting, the marginal problem is to find a (maximum-entropy) distribution that has some given marginals.
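A minimal propositional instance, using made-up numbers: when only single-variable marginals are prescribed, the maximum-entropy joint distribution over binary variables is simply the product of those marginals, which the snippet below constructs and checks.

```python
from itertools import product

# Toy illustration (not from the paper): max-entropy joint over two binary
# variables A, B given only their univariate marginals. With no higher-order
# constraints, the max-ent solution is the independent product distribution.
p_a = 0.7   # prescribed marginal P(A = 1), an assumed number
p_b = 0.4   # prescribed marginal P(B = 1)

joint = {(a, b): (p_a if a else 1 - p_a) * (p_b if b else 1 - p_b)
         for a, b in product([0, 1], repeat=2)}

# the constructed joint is a distribution and reproduces the marginals
marg_a = sum(p for (a, _), p in joint.items() if a == 1)
print(round(marg_a, 10))  # → 0.7
```

Once pairwise or relational marginals are prescribed as well, no closed form exists in general, which is where the estimation machinery studied in the paper comes in.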

Induction of Interpretable Possibilistic Logic Theories from Relational Data

no code implementations · 19 May 2017 · Ondrej Kuzelka, Jesse Davis, Steven Schockaert

Compared to Markov Logic Networks (MLNs), our method is faster and produces considerably more interpretable models.

Relational Reasoning

Stratified Knowledge Bases as Interpretable Probabilistic Models (Extended Abstract)

no code implementations · 18 Nov 2016 · Ondrej Kuzelka, Jesse Davis, Steven Schockaert

In this paper, we advocate the use of stratified logical theories for representing probabilistic models.

Learning Possibilistic Logic Theories from Default Rules

no code implementations · 18 Apr 2016 · Ondrej Kuzelka, Jesse Davis, Steven Schockaert

We introduce a setting for learning possibilistic logic theories from defaults of the form "if alpha then typically beta".

Learning Theory

Lifted Relational Neural Networks

1 code implementation · 20 Aug 2015 · Gustav Sourek, Vojtech Aschenbrenner, Filip Zelezny, Ondrej Kuzelka

We propose a method combining relational-logic representations with neural network learning.

Relational Reasoning

Encoding Markov Logic Networks in Possibilistic Logic

1 code implementation · 3 Jun 2015 · Ondrej Kuzelka, Jesse Davis, Steven Schockaert

Markov logic uses weighted formulas to compactly encode a probability distribution over possible worlds.
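Concretely, a Markov logic network assigns each possible world a probability proportional to exp of the summed weights of the formulas it satisfies. The tiny propositional sketch below illustrates this; the single formula and its weight are invented for the example.

```python
import math
from itertools import product

# Toy propositional MLN: P(world) ∝ exp(sum of weights of satisfied formulas).
# One made-up weighted formula:  1.5 : smokes -> cancer
formulas = [
    (1.5, lambda w: (not w["smokes"]) or w["cancer"]),
]

worlds = [dict(zip(["smokes", "cancer"], vals))
          for vals in product([False, True], repeat=2)]

def score(world):
    return math.exp(sum(wt for wt, f in formulas if f(world)))

Z = sum(score(w) for w in worlds)               # partition function
p = {tuple(w.values()): score(w) / Z for w in worlds}

# the world violating the formula (smokes, no cancer) is exponentially
# less likely than the others, but not impossible
print(round(p[(True, False)], 3))  # → 0.069
```

This soft, exponential penalization of violated formulas is exactly what makes a possibilistic-logic encoding of the same distribution, as studied in the paper, a non-trivial exercise.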
