no code implementations • 3 Mar 2025 • Alessandro Daniele, Emile van Krieken
We show that applying BP to Gödel logic, which represents conjunction and disjunction as min and max, is equivalent to a local search algorithm for SAT solving, enabling the optimisation of discrete Boolean formulas without sacrificing differentiability.
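A minimal sketch of the intuition (not the paper's code; all function names here are illustrative): in Gödel logic a clause is the max of its literals and a CNF formula is the min of its clauses, so the subgradient is nonzero for exactly one literal, and a gradient step updates a single variable, much like a flip in a local-search SAT solver. Note the sketch relies on Python's `min`/`max` returning the first maximal element on ties.

```python
# Illustrative sketch: Boolean formulas evaluated in Goedel fuzzy logic,
# where AND = min and OR = max over truth values in [0, 1].

def literal(x, var, positive):
    # truth value of a (possibly negated) variable
    return x[var] if positive else 1.0 - x[var]

def formula_value(x, cnf):
    # CNF: conjunction (min) of clauses, each a disjunction (max) of literals
    return min(max(literal(x, v, p) for v, p in clause) for clause in cnf)

def subgradient_step(x, cnf, lr=1.0):
    # The subgradient of min/max selects the minimizing clause and, within it,
    # the maximizing literal: only that one variable is updated, mirroring a
    # single-variable flip in local-search SAT solving.
    clause = min(cnf, key=lambda c: max(literal(x, v, p) for v, p in c))
    var, pos = max(clause, key=lambda vp: literal(x, *vp))
    x = dict(x)
    x[var] = min(1.0, x[var] + lr) if pos else max(0.0, x[var] - lr)
    return x

# (x2 OR x1) AND (NOT x1 OR x2): pushing x2 up satisfies both clauses at once
cnf = [[("x2", True), ("x1", True)], [("x1", False), ("x2", True)]]
x = {"x1": 0.0, "x2": 0.0}
x = subgradient_step(x, cnf)
print(formula_value(x, cnf))  # 1.0: one "flip" of x2 satisfies the formula
```

The single-variable update is what makes the connection to discrete local search: despite the continuous relaxation, each step behaves like flipping one Boolean variable.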
no code implementations • 9 Feb 2025 • Ne Luo, Aryo Pradipta Gema, Xuanli He, Emile van Krieken, Pietro Lesci, Pasquale Minervini
Large language models (LLMs) remain prone to factual inaccuracies and computational errors, including hallucinations and mistakes in mathematical reasoning.
no code implementations • 5 Nov 2024 • Giwon Hong, Emile van Krieken, Edoardo Ponti, Nikolay Malkin, Pasquale Minervini
In-context learning (ICL) adapts LLMs by providing demonstrations without fine-tuning the model parameters; however, it does not differentiate between demonstrations, and it quadratically increases the computational complexity of Transformer LLMs, exhausting memory.
2 code implementations • 14 Jun 2024 • Samuele Bortolotti, Emanuele Marconato, Tommaso Carraro, Paolo Morettin, Emile van Krieken, Antonio Vergari, Stefano Teso, Andrea Passerini
The advent of powerful neural classifiers has increased interest in problems that require both learning and reasoning.
3 code implementations • 6 Jun 2024 • Aryo Pradipta Gema, Joshua Ong Jun Leang, Giwon Hong, Alessio Devoto, Alberto Carlo Maria Mancino, Rohit Saxena, Xuanli He, Yu Zhao, Xiaotang Du, Mohammad Reza Ghasemi Madani, Claire Barale, Robert McHardy, Joshua Harris, Jean Kaddour, Emile van Krieken, Pasquale Minervini
For example, we find that 57% of the analysed questions in the Virology subset contain errors.
no code implementations • 1 May 2024 • Emile van Krieken, Samy Badreddine, Robin Manhaeve, Eleonora Giunchiglia
The field of neuro-symbolic artificial intelligence (NeSy), which combines learning and reasoning, has recently experienced significant growth.
no code implementations • 12 Apr 2024 • Emile van Krieken, Pasquale Minervini, Edoardo M. Ponti, Antonio Vergari
Many such systems assume that the probabilities of the considered symbols are conditionally independent given the input to simplify learning and reasoning.
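A small sketch of what the independence assumption costs (illustrative code, not from the paper): factorising P(c1, c2 | x) = P(c1 | x) P(c2 | x) makes learning tractable, but for a constraint like "exactly one of c1, c2 is true", an independent model can only put probability 1 on the constraint by becoming deterministic, while a joint distribution can spread mass over both satisfying worlds.

```python
from itertools import product

def xor(c1, c2):
    # constraint: exactly one of the two symbols is true
    return c1 != c2

def independent_prob(p1, p2):
    # probability the constraint holds under a factorised (independent) model
    total = 0.0
    for c1, c2 in product([0, 1], repeat=2):
        pc1 = p1 if c1 else 1 - p1
        pc2 = p2 if c2 else 1 - p2
        if xor(c1, c2):
            total += pc1 * pc2
    return total

# A joint distribution can be uniform over the two satisfying worlds...
joint = {(0, 1): 0.5, (1, 0): 0.5}
joint_prob = sum(p for world, p in joint.items() if xor(*world))
print(joint_prob)                  # 1.0

# ...but independent marginals lose mass on unsatisfying worlds,
print(independent_prob(0.5, 0.5))  # 0.5

# unless the marginals collapse to a deterministic choice.
print(independent_prob(1.0, 0.0))  # 1.0
```

This is the kind of expressivity gap the factorised parameterisation introduces: reaching probability 1 on the constraint forces the independent model into a single deterministic world.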
1 code implementation • 19 Feb 2024 • Emanuele Marconato, Samuele Bortolotti, Emile van Krieken, Antonio Vergari, Andrea Passerini, Stefano Teso
Neuro-Symbolic (NeSy) predictors that conform to symbolic knowledge (encoding, e.g., safety constraints) can be affected by Reasoning Shortcuts (RSs): they learn concepts consistent with the symbolic knowledge by exploiting unintended semantics.
no code implementations • 19 Jan 2024 • Emile van Krieken
How do we connect the symbolic and neural components to communicate this knowledge?
1 code implementation • 5 Oct 2023 • Taraneh Younesian, Daniel Daza, Emile van Krieken, Thiviyan Thanapalasingam, Peter Bloem
To this end, we introduce GRAPES, an adaptive sampling method that learns to identify the set of nodes crucial for training a GNN.
1 code implementation • 13 Jul 2023 • Thiviyan Thanapalasingam, Emile van Krieken, Peter Bloem, Paul Groth
However, Knowledge Graphs are not just sets of links but also have semantics underlying their structure.
1 code implementation • 23 Aug 2022 • Dimitrios Alivanistos, Selene Báez Santamaría, Michael Cochez, Jan-Christoph Kalo, Emile van Krieken, Thiviyan Thanapalasingam
ProP implements a multi-step approach that combines a variety of prompting techniques to achieve this.
1 code implementation • 10 Jun 2022 • Alessandro Daniele, Emile van Krieken, Luciano Serafini, Frank van Harmelen
Using a new algorithm called Iterative Local Refinement (ILR), we combine refinement functions to find refined predictions for logical formulas of any complexity.
1 code implementation • NeurIPS 2021 • Emile van Krieken, Jakub M. Tomczak, Annette ten Teije
Stochastic AD extends AD to stochastic computation graphs with sampling steps, which arise when modelers handle the intractable expectations common in Reinforcement Learning and Variational Inference.
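A sampling step breaks ordinary backpropagation because the expectation, not any single sample, depends on the parameters. One classic workaround such a framework must support is the score-function (REINFORCE) estimator, sketched below under simplifying assumptions (a single Bernoulli sampling step; function names are illustrative, not the paper's API): grad E[f(x)] = E[f(x) * grad log p(x; theta)], which is estimable from samples alone.

```python
import random

def grad_log_bernoulli(x, theta):
    # d/dtheta of log p(x; theta), where p(1) = theta and p(0) = 1 - theta
    return 1 / theta if x == 1 else -1 / (1 - theta)

def reinforce(f, theta, n=200_000, seed=0):
    # Monte Carlo estimate of grad_theta E[f(x)] for x ~ Bernoulli(theta)
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = 1 if rng.random() < theta else 0
        total += f(x) * grad_log_bernoulli(x, theta)
    return total / n

f = lambda x: float(x)      # E[f(x)] = theta, so the true gradient is 1
est = reinforce(f, theta=0.3)
print(est)                  # Monte Carlo estimate, close to 1.0
```

The large sample count hints at the practical problem: score-function estimates are unbiased but high-variance, which is why frameworks for stochastic computation graphs also support variance-reduced and higher-order estimators.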
no code implementations • 4 Jun 2020 • Emile van Krieken, Erman Acar, Frank van Harmelen
In this paper, we investigate how implications from the fuzzy logic literature behave in a differentiable setting.
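For concreteness, a sketch of three common fuzzy implications on truth values in [0, 1] (illustrative definitions from the standard fuzzy logic literature, not code from the paper): although they agree on crisp inputs {0, 1}, their shapes, and hence their (sub)gradients, differ sharply, which is what makes the choice of operator matter when the implication sits inside a loss function.

```python
def kleene_dienes(a, b):
    # S-implication max(1 - a, b): piecewise linear, gradient in one argument
    return max(1.0 - a, b)

def reichenbach(a, b):
    # 1 - a + a*b: smooth, informative gradients in both arguments
    return 1.0 - a + a * b

def goedel(a, b):
    # R-implication: 1 if a <= b else b; flat (zero gradient) almost everywhere
    return 1.0 if a <= b else b

a, b = 0.8, 0.3  # a confident antecedent, a weak consequent
print(kleene_dienes(a, b), reichenbach(a, b), goedel(a, b))
```

On this input the three operators already disagree (0.3, 0.44, 0.3), and only the Reichenbach implication passes gradient through both arguments.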
1 code implementation • 14 Feb 2020 • Emile van Krieken, Erman Acar, Frank van Harmelen
Finally, we empirically show that it is possible to use Differentiable Fuzzy Logics for semi-supervised learning, and compare how different operators behave in practice.
1 code implementation • 13 Aug 2019 • Emile van Krieken, Erman Acar, Frank van Harmelen
We introduce Differentiable Reasoning (DR), a novel semi-supervised learning technique which uses relational background knowledge to benefit from unlabeled data.