no code implementations • 23 Feb 2024 • Maurice Kraus, Felix Divo, David Steinmann, Devendra Singh Dhami, Kristian Kersting
Indeed, the common belief is that multi-dataset pretraining does not work for time series!
1 code implementation • 21 Feb 2024 • Hikaru Shindo, Manuel Brack, Gopika Sudhakaran, Devendra Singh Dhami, Patrick Schramowski, Kristian Kersting
To remedy this issue, we propose DeiSAM -- a combination of large pre-trained neural networks with differentiable logic reasoners -- for deictic promptable segmentation.
1 code implementation • 13 Feb 2024 • Antonia Wüst, Wolfgang Stammer, Quentin Delfosse, Devendra Singh Dhami, Kristian Kersting
The challenge in learning abstract concepts from images in an unsupervised fashion lies in the required integration of visual perception and generalizable relational reasoning.
1 code implementation • 24 Aug 2023 • Matej Zečević, Moritz Willig, Devendra Singh Dhami, Kristian Kersting
We conjecture that in the cases where LLMs succeed in doing causal inference, an underlying meta SCM exposed correlations between causal facts in the natural-language data on which the LLM was ultimately trained.
1 code implementation • ICCV 2023 • Gopika Sudhakaran, Devendra Singh Dhami, Kristian Kersting, Stefan Roth
Recent years have seen a growing interest in Scene Graph Generation (SGG), a comprehensive visual scene understanding task that aims to predict entity relationships using a relation encoder-decoder pipeline stacked on top of an object encoder-decoder backbone.
1 code implementation • 3 Jul 2023 • Hikaru Shindo, Viktor Pfanschilling, Devendra Singh Dhami, Kristian Kersting
However, due to memory intensity, most existing approaches do not fully exploit the expressivity of first-order logic, excluding the crucial ability to solve abstract visual reasoning, where agents need to reason by analogy over abstract concepts in different scenarios.
1 code implementation • 14 Jun 2023 • Arseny Skryagin, Daniel Ochs, Devendra Singh Dhami, Kristian Kersting
The goal of combining the robustness of neural networks and the expressiveness of symbolic methods has rekindled the interest in Neuro-Symbolic AI.
1 code implementation • 13 Jun 2023 • Lukas Helff, Wolfgang Stammer, Hikaru Shindo, Devendra Singh Dhami, Kristian Kersting
Despite the successes of recent developments in visual AI, various shortcomings remain: from a lack of exact logical reasoning, to limited abstract generalization abilities, to difficulties in understanding complex and noisy scenes.
no code implementations • 23 Dec 2022 • Matej Zečević, Moritz Willig, Devendra Singh Dhami, Kristian Kersting
Many researchers have voiced their support towards Pearl's counterfactual theory of causation as a stepping stone for AI/ML research's ultimate goal of intelligent systems.
no code implementations • 21 Nov 2022 • Zihan Ye, Hikaru Shindo, Devendra Singh Dhami, Kristian Kersting
To make deep learning do more from less, we propose the first neural meta-symbolic system (NEMESYS) for reasoning and learning: meta programming using differentiable forward-chaining reasoning in first-order logic.
no code implementations • 29 Aug 2022 • Björn Deiseroth, Patrick Schramowski, Hikaru Shindo, Devendra Singh Dhami, Kristian Kersting
Text-to-image models have recently achieved remarkable success with seemingly accurate samples in photo-realistic quality.
no code implementations • 24 Jun 2022 • Jonas Seng, Pooja Prasad, Martin Mundt, Devendra Singh Dhami, Kristian Kersting
Deep neural architectures have a profound impact on the performance achieved in many of today's AI tasks, yet their design still relies heavily on human prior knowledge and experience.
1 code implementation • 14 Jun 2022 • Moritz Willig, Matej Zečević, Devendra Singh Dhami, Kristian Kersting
Foundation models are subject to an ongoing heated debate, leaving open the question of progress towards AGI and dividing the community into two camps: those who see the arguably impressive results as evidence for the scaling hypothesis, and those who are worried about the lack of interpretability and reasoning capabilities.
no code implementations • 14 Jun 2022 • Florian Peter Busch, Matej Zečević, Kristian Kersting, Devendra Singh Dhami
We introduce an approach where we consider neural encodings for LPs that justify the application of attribution methods from explainable artificial intelligence (XAI) designed for neural learning systems.
no code implementations • 14 Jun 2022 • Salahedine Youssef, Matej Zečević, Devendra Singh Dhami, Kristian Kersting
Even though AI has advanced rapidly in recent years, displaying success in solving highly complex problems, the class of Bongard Problems (BPs) remains largely unsolved by modern ML techniques.
no code implementations • 14 Jun 2022 • Jonas Seng, Matej Zečević, Devendra Singh Dhami, Kristian Kersting
Simulations are ubiquitous in machine learning.
no code implementations • 14 Jun 2022 • David Steinmann, Matej Zečević, Devendra Singh Dhami, Kristian Kersting
In this work, we extend the attribution methods for explaining neural networks to linear programs.
1 code implementation • 29 Mar 2022 • Matej Zečević, Florian Peter Busch, Devendra Singh Dhami, Kristian Kersting
Linear Programs (LP) are celebrated widely, particularly so in machine learning where they have allowed for effectively solving probabilistic inference tasks or imposing structure on end-to-end learning systems.
no code implementations • 22 Oct 2021 • Moritz Willig, Matej Zečević, Devendra Singh Dhami, Kristian Kersting
Most algorithms in classical and contemporary machine learning focus on correlation-based dependence between features to drive performance.
no code implementations • 22 Oct 2021 • Matej Zečević, Devendra Singh Dhami, Kristian Kersting
More specifically, there are models capable of answering causal queries that are not SCMs, which we refer to as \emph{partially causal models} (PCMs).
1 code implementation • 18 Oct 2021 • Hikaru Shindo, Devendra Singh Dhami, Kristian Kersting
NSFR factorizes the raw inputs into object-centric representations, converts them into probabilistic ground atoms, and finally performs differentiable forward-chaining inference using weighted rules.
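The inference step described above can be illustrated with a minimal sketch: probabilistic ground atoms (e.g. from a perception module) are repeatedly combined through weighted rules to derive new atom probabilities. The atom and rule names, the use of `max` for amalgamation, and the product t-norm are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of differentiable forward-chaining inference with
# weighted rules. Atoms map to probabilities; each rule derives its
# head with probability weight * product(body probabilities).
def soft_forward_chain(atom_probs, rules, rule_weights, steps=2):
    probs = dict(atom_probs)
    for _ in range(steps):
        new = dict(probs)
        for (body, head), w in zip(rules, rule_weights):
            derived = w
            for atom in body:
                derived *= probs.get(atom, 0.0)
            # Amalgamate alternative derivations of the same head.
            new[head] = max(new.get(head, 0.0), derived)
        probs = new
    return probs

# Toy scene: probabilistic ground atoms for one object, and a single
# weighted rule "target(obj1) :- red(obj1), sphere(obj1)".
atoms = {"red(obj1)": 0.9, "sphere(obj1)": 0.8}
rules = [(("red(obj1)", "sphere(obj1)"), "target(obj1)")]
result = soft_forward_chain(atoms, rules, rule_weights=[1.0])
```

Because every operation is a product or a max, the derived probabilities remain (sub)differentiable in the rule weights, which is what allows the weights to be learned by gradient descent.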
no code implementations • 7 Oct 2021 • Arseny Skryagin, Wolfgang Stammer, Daniel Ochs, Devendra Singh Dhami, Kristian Kersting
The probability estimates resulting from NPPs act as the binding element between the logical program and raw input data, thereby allowing SLASH to answer task-dependent logical queries.
no code implementations • 5 Oct 2021 • Matej Zečević, Devendra Singh Dhami, Constantin A. Rothkopf, Kristian Kersting
We believe the question part on the user's end to be solved, since the user's mental model can provide the causal model.
no code implementations • 14 Sep 2021 • Zhongjie Yu, Devendra Singh Dhami, Kristian Kersting
Probabilistic circuits (PCs) have become the de-facto standard for learning and inference in probabilistic modeling.
no code implementations • 9 Sep 2021 • Matej Zečević, Devendra Singh Dhami, Petar Veličković, Kristian Kersting
Causality can be described in terms of a structural causal model (SCM) that carries information on the variables of interest and their mechanistic relations.
1 code implementation • 26 May 2021 • Matej Zečević, Devendra Singh Dhami, Kristian Kersting
The recent years have been marked by extended research on adversarial attacks, especially on deep neural networks.
no code implementations • 19 Mar 2021 • Devendra Singh Dhami, Siwen Yan, Gautam Kunapuli, David Page, Sriraam Natarajan
Predicting and discovering drug-drug interactions (DDIs) using machine learning has been studied extensively.
1 code implementation • NeurIPS 2021 • Matej Zečević, Devendra Singh Dhami, Athresh Karanam, Sriraam Natarajan, Kristian Kersting
While probabilistic models are an important tool for studying causality, doing so suffers from the intractability of inference.
no code implementations • 13 Feb 2021 • Devendra Singh Dhami, Siwen Yan, Sriraam Natarajan
We consider the problem of learning distance-based Graph Convolutional Networks (GCNs) for relational data.
no code implementations • NeurIPS Workshop ICBINB 2020 • Siwen Yan, Devendra Singh Dhami, Sriraam Natarajan
Reducing bias during learning and inference is an important requirement for achieving generalizable and better-performing models.
no code implementations • 2 Jan 2020 • Devendra Singh Dhami, Siwen Yan, Gautam Kunapuli, Sriraam Natarajan
We consider the problem of structure learning for Gaifman models and learn relational features that can be used to derive feature representations from a knowledge base.
no code implementations • 14 Nov 2019 • Devendra Singh Dhami, Siwen Yan, Gautam Kunapuli, David Page, Sriraam Natarajan
Predicting and discovering drug-drug interactions (DDIs) is an important problem that has been studied extensively from both medical and machine learning points of view.
no code implementations • 31 May 2019 • Mayukh Das, Devendra Singh Dhami, Yang Yu, Gautam Kunapuli, Sriraam Natarajan
Recently, deep models have had considerable success in several tasks, especially with low-level representations.
no code implementations • ICLR 2019 • Mayukh Das, Yang Yu, Devendra Singh Dhami, Gautam Kunapuli, Sriraam Natarajan
While deep models are extremely successful in several applications, especially with low-level representations, sparse and noisy samples and structured domains (with multiple objects and interactions) remain open challenges for most of them.
no code implementations • 15 Apr 2019 • Mayukh Das, Yang Yu, Devendra Singh Dhami, Gautam Kunapuli, Sriraam Natarajan
Recently, deep models have been successfully applied in several applications, especially with low-level representations.