Search Results for author: Emile van Krieken

Found 17 papers, 10 papers with code

Noise to the Rescue: Escaping Local Minima in Neurosymbolic Local Search

no code implementations3 Mar 2025 Alessandro Daniele, Emile van Krieken

We show that applying BP to Gödel logic, which represents conjunction and disjunction as min and max, is equivalent to a local search algorithm for SAT solving, enabling the optimisation of discrete Boolean formulas without sacrificing differentiability.
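As a rough illustration of the min/max (Gödel) semantics mentioned in the abstract, the sketch below evaluates a small CNF formula on relaxed Boolean variables in PyTorch and backpropagates through it; the formula, the assignment, and all names are invented for illustration and are not taken from the paper.

```python
# Hypothetical sketch: Gödel semantics of a small CNF formula on relaxed
# Boolean variables. Conjunction = min over clauses, disjunction = max inside
# each clause.
import torch

# (x0 OR NOT x1) AND (x1 OR x2): clauses as lists of (variable index, sign)
clauses = [[(0, +1), (1, -1)], [(1, +1), (2, +1)]]

x = torch.tensor([0.2, 0.9, 0.1], requires_grad=True)  # a relaxed assignment

def literal(v, sign):
    return v if sign > 0 else 1.0 - v

truth = torch.stack([torch.stack([literal(x[i], s) for i, s in c]).max()
                     for c in clauses]).min()
truth.backward()

# The min/max structure routes the entire gradient through a single literal
# (here x0), so the gradient effectively proposes a single-variable move,
# which is what connects backprop under Gödel semantics to local search.
print(truth.item(), x.grad)
```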

Self-Training Large Language Models for Tool-Use Without Demonstrations

no code implementations9 Feb 2025 Ne Luo, Aryo Pradipta Gema, Xuanli He, Emile van Krieken, Pietro Lesci, Pasquale Minervini

Large language models (LLMs) remain prone to factual inaccuracies and computational errors, including hallucinations and mistakes in mathematical reasoning.

GSM8K Mathematical Reasoning +2

Mixtures of In-Context Learners

no code implementations5 Nov 2024 Giwon Hong, Emile van Krieken, Edoardo Ponti, Nikolay Malkin, Pasquale Minervini

In-context learning (ICL) adapts LLMs by providing demonstrations without fine-tuning the model parameters; however, it does not differentiate between demonstrations, and it quadratically increases the complexity of Transformer LLMs, exhausting available memory.

In-Context Learning
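A minimal sketch of the general idea of mixing in-context learners: the same LLM is conditioned on different subsets of demonstrations, and the resulting next-token distributions are combined with learnable weights. `next_token_logprobs` is a placeholder for whatever inference call your LLM stack provides; this is not the paper's exact method or code.

```python
# Mixture over demonstration subsets: log p(token) = log sum_k w_k * p_k(token)
import torch

def moicl_predict(next_token_logprobs, demo_subsets, query, weights):
    # weights: unnormalised mixture logits, one per demonstration subset
    mix = torch.softmax(weights, dim=0)                       # (k,)
    expert_logps = torch.stack([next_token_logprobs(subset, query)
                                for subset in demo_subsets])  # (k, vocab)
    # Combine the experts in probability space with a log-sum-exp
    return torch.logsumexp(expert_logps + mix.log()[:, None], dim=0)
```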

ULLER: A Unified Language for Learning and Reasoning

no code implementations1 May 2024 Emile van Krieken, Samy Badreddine, Robin Manhaeve, Eleonora Giunchiglia

The field of neuro-symbolic artificial intelligence (NeSy), which combines learning and reasoning, has recently experienced significant growth.

On the Independence Assumption in Neurosymbolic Learning

no code implementations12 Apr 2024 Emile van Krieken, Pasquale Minervini, Edoardo M. Ponti, Antonio Vergari

Many such systems assume that the probabilities of the considered symbols are conditionally independent given the input to simplify learning and reasoning.

Uncertainty Quantification
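A small worked example of the independence issue described above, with invented numbers: suppose background knowledge says that exactly one of two symbols is true. A joint distribution can put all its mass on the two valid worlds, but a conditionally independent (factorised) model cannot do so without leaking probability onto invalid worlds.

```python
# Joint vs. factorised distributions over two binary symbols (y1, y2) under the
# constraint "exactly one of y1, y2 is true".
import itertools

# Joint model: 0.5 on (1,0) and 0.5 on (0,1) -> all mass on valid worlds.
joint = {(1, 0): 0.5, (0, 1): 0.5, (0, 0): 0.0, (1, 1): 0.0}

# Independent model with p(y1) = p(y2) = 0.5:
p1 = p2 = 0.5
independent = {(a, b): (p1 if a else 1 - p1) * (p2 if b else 1 - p2)
               for a, b in itertools.product([0, 1], repeat=2)}

valid = {(1, 0), (0, 1)}
print(sum(p for w, p in joint.items() if w in valid))        # 1.0
print(sum(p for w, p in independent.items() if w in valid))  # 0.5
```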

BEARS Make Neuro-Symbolic Models Aware of their Reasoning Shortcuts

1 code implementation19 Feb 2024 Emanuele Marconato, Samuele Bortolotti, Emile van Krieken, Antonio Vergari, Andrea Passerini, Stefano Teso

Neuro-Symbolic (NeSy) predictors that conform to symbolic knowledge - encoding, e.g., safety constraints - can be affected by Reasoning Shortcuts (RSs): they learn concepts consistent with the symbolic knowledge by exploiting unintended semantics.
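A tiny illustration of a reasoning shortcut, with invented numbers rather than anything from the paper: in an MNIST-addition-style task the knowledge only constrains the sum of the two predicted digit concepts, so a model can be perfectly label-accurate while its concepts are wrong.

```python
# Knowledge: predicted label = sum of the two digit concepts.
true_concepts = (1, 2)       # the digits actually shown
shortcut_concepts = (0, 3)   # different concepts the model could have learned

label = sum(true_concepts)                  # supervised target: 3
print(sum(shortcut_concepts) == label)      # True: the knowledge is satisfied
print(shortcut_concepts == true_concepts)   # False: the concepts are wrong
```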

Optimisation in Neurosymbolic Learning Systems

no code implementations19 Jan 2024 Emile van Krieken

How do we connect the symbolic and neural components to communicate this knowledge?

GRAPES: Learning to Sample Graphs for Scalable Graph Neural Networks

1 code implementation5 Oct 2023 Taraneh Younesian, Daniel Daza, Emile van Krieken, Thiviyan Thanapalasingam, Peter Bloem

To this end, we introduce GRAPES, an adaptive sampling method that learns to identify the set of nodes crucial for training a GNN.

Graph Sampling Node Classification
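The sketch below only illustrates the general flavour of learned, adaptive node sampling for GNN training (a learnable scorer assigns each candidate neighbour a logit and a fixed-size subset is drawn with Gumbel-top-k); it is an assumption-laden simplification, not the GRAPES algorithm itself.

```python
# Hypothetical adaptive node sampler: learn which candidate nodes to keep.
import torch

class NodeSampler(torch.nn.Module):
    def __init__(self, feat_dim):
        super().__init__()
        self.scorer = torch.nn.Linear(feat_dim, 1)  # learned inclusion scores

    def forward(self, candidate_feats, k):
        logits = self.scorer(candidate_feats).squeeze(-1)           # (n,)
        gumbel = torch.distributions.Gumbel(0.0, 1.0).sample(logits.shape)
        chosen = torch.topk(logits + gumbel, k).indices             # sampled subset
        return chosen, logits

# The chosen nodes feed the next GNN layer; because top-k selection is not
# differentiable, the sampler itself would be trained with a gradient
# estimator (e.g. a score-function-style objective).
```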

Refining neural network predictions using background knowledge

1 code implementation10 Jun 2022 Alessandro Daniele, Emile van Krieken, Luciano Serafini, Frank van Harmelen

Using a new algorithm called Iterative Local Refinement (ILR), we combine refinement functions to find refined predictions for logical formulas of any complexity.
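A minimal sketch of what a single refinement step can look like, restricted to Gödel operators and written from the abstract's description rather than the paper's code: given current fuzzy predictions and a target truth value for a formula, the step moves the inputs as little as possible so the formula reaches that target.

```python
# One refinement step for Gödel conjunction (min) and disjunction (max).
import numpy as np

def refine_and(values, target):
    # min(values) >= target requires every input to be at least `target`.
    return np.maximum(values, target)

def refine_or(values, target):
    # max(values) >= target: raising only the largest input is the smallest change.
    out = values.copy()
    out[np.argmax(values)] = max(values.max(), target)
    return out

preds = np.array([0.3, 0.8])
print(refine_and(preds, 0.9))  # [0.9 0.9]
print(refine_or(preds, 0.9))   # [0.3 0.9]
```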

Storchastic: A Framework for General Stochastic Automatic Differentiation

1 code implementation NeurIPS 2021 Emile van Krieken, Jakub M. Tomczak, Annette ten Teije

Stochastic AD extends AD to stochastic computation graphs with sampling steps, which arise when modelers handle the intractable expectations common in Reinforcement Learning and Variational Inference.

Variational Inference
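For context on the kind of stochastic node such a framework has to handle, here is a plain PyTorch sketch of a score-function (REINFORCE) surrogate for a single sampling step; this is a generic illustration, not Storchastic's own API.

```python
# Score-function surrogate: its gradient estimates d E[f(z)] / d logits.
import torch

logits = torch.zeros(3, requires_grad=True)           # categorical parameters
dist = torch.distributions.Categorical(logits=logits)
z = dist.sample()                                      # non-differentiable sample

cost = (z.float() - 2.0) ** 2                          # downstream cost f(z)
surrogate = dist.log_prob(z) * cost.detach()           # REINFORCE surrogate
surrogate.backward()
print(logits.grad)
```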

Analyzing Differentiable Fuzzy Implications

no code implementations4 Jun 2020 Emile van Krieken, Erman Acar, Frank van Harmelen

In this paper, we investigate how implications from the fuzzy logic literature behave in a differentiable setting.

Weakly-supervised Learning
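As background for the kind of analysis the abstract refers to, the sketch below writes a few standard fuzzy implications as differentiable operators and inspects their gradients when used as a loss; it is a generic illustration, not the paper's code.

```python
# Standard fuzzy implications I(a, b) as differentiable PyTorch operators.
import torch

def reichenbach(a, b):       # 1 - a + a*b, smooth everywhere
    return 1 - a + a * b

def kleene_dienes(a, b):     # max(1 - a, b)
    return torch.maximum(1 - a, b)

def goedel(a, b):            # 1 if a <= b else b
    return torch.where(a <= b, torch.ones_like(b), b)

a = torch.tensor(0.9, requires_grad=True)
b = torch.tensor(0.2, requires_grad=True)
loss = 1 - reichenbach(a, b)   # penalise violating "a implies b"
loss.backward()
print(a.grad, b.grad)          # gradients push a down and b up
```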

Analyzing Differentiable Fuzzy Logic Operators

1 code implementation14 Feb 2020 Emile van Krieken, Erman Acar, Frank van Harmelen

Finally, we empirically show that it is possible to use Differentiable Fuzzy Logics for semi-supervised learning, and compare how different operators behave in practice.

Weakly-supervised Learning

Semi-Supervised Learning using Differentiable Reasoning

1 code implementation13 Aug 2019 Emile van Krieken, Erman Acar, Frank van Harmelen

We introduce Differentiable Reasoning (DR), a novel semi-supervised learning technique which uses relational background knowledge to benefit from unlabeled data.
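A hypothetical sketch of turning relational background knowledge into a loss on unlabelled data, in the spirit of the abstract; the rule ("spouse(x, y) implies spouse(y, x)") and the predicate here are invented for illustration, and the violation is scored with product semantics so it stays differentiable.

```python
# Penalise violations of a symmetry rule over predicted relation probabilities.
import torch

def symmetry_loss(p_spouse):
    # p_spouse[i, j] = predicted probability that i is a spouse of j;
    # the rule is violated when p(i, j) is high but p(j, i) is low.
    violation = p_spouse * (1 - p_spouse.T)
    return violation.mean()

p = torch.rand(4, 4, requires_grad=True)   # predictions on unlabelled pairs
symmetry_loss(p).backward()                # gradients flow without any labels
```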
