no code implementations • 23 Feb 2024 • Muralikrishnna G. Sethuraman, Faramarz Fekri
Under the additive noise model, MissNODAGS learns the causal graph by alternating between imputing the missing data and maximizing the expected log-likelihood of the visible part of the data in each training step, following the principles of the expectation-maximization (EM) framework.
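The alternating scheme can be illustrated with a minimal NumPy sketch on a two-variable linear SEM (a toy stand-in, not the MissNODAGS implementation): missing entries are imputed from the current model (E-step), and the edge weight is refit by least squares, which maximizes the Gaussian log-likelihood (M-step).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: linear SEM x2 = 0.8*x1 + noise, with entries missing at random.
n = 200
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + 0.1 * rng.normal(size=n)
X = np.column_stack([x1, x2])
mask = rng.random((n, 2)) < 0.2          # True = missing
X_obs = np.where(mask, np.nan, X)

w = 0.0  # estimated edge weight x1 -> x2
for _ in range(20):
    # E-step (sketch): impute missing entries from the current model.
    Xi = X_obs.copy()
    Xi[np.isnan(Xi[:, 0]), 0] = 0.0      # impute x1 by its mean (0)
    m2 = np.isnan(Xi[:, 1])
    Xi[m2, 1] = w * Xi[m2, 0]            # impute x2 from the current model
    # M-step: least squares on the completed data.
    w = (Xi[:, 0] @ Xi[:, 1]) / (Xi[:, 0] @ Xi[:, 0])
```

After a few iterations the estimate converges near the true weight 0.8; the actual method operates on the expected log-likelihood of the visible entries rather than this naive mean/model imputation.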
1 code implementation • 19 Feb 2024 • Siheng Xiong, Yuan Yang, Faramarz Fekri, James Clayton Kerce
Compared with static knowledge graphs, temporal knowledge graphs (tKGs), which capture the evolution and change of information over time, are more realistic and general.
1 code implementation • 12 Jan 2024 • Siheng Xiong, Ali Payani, Ramana Kompella, Faramarz Fekri
To be specific, we first teach LLM to translate the context into a temporal graph (TG).
1 code implementation • 25 Dec 2023 • Siheng Xiong, Yuan Yang, Ali Payani, James C Kerce, Faramarz Fekri
We first convert TKGs into a temporal event knowledge graph (TEKG), which has a more explicit representation of time in terms of nodes of the graph.
1 code implementation • 24 May 2023 • Yuan Yang, Siheng Xiong, Ali Payani, Ehsan Shareghi, Faramarz Fekri
Translating natural language sentences to first-order logic (NL-FOL translation) is a longstanding challenge in the NLP and formal logic literature.
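The task can be illustrated with a couple of hypothetical sentence-formula pairs (invented for illustration, not drawn from the paper's data):

```python
# Hypothetical NL-FOL pairs illustrating the translation task.
examples = {
    "Every student passed some exam":
        "∀x (Student(x) → ∃y (Exam(y) ∧ Passed(x, y)))",
    "No cat is a dog":
        "∀x (Cat(x) → ¬Dog(x))",
}
for nl, fol in examples.items():
    print(nl, "↦", fol)
```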
1 code implementation • 4 Jan 2023 • Muralikrishnna G. Sethuraman, Romain Lopez, Rahul Mohan, Faramarz Fekri, Tommaso Biancalani, Jan-Christian Hütter
Learning causal relationships between variables is a well-studied problem in statistics, with many important applications in science.
no code implementations • 8 Dec 2022 • Duo Xu, Faramarz Fekri
In many real-world control and robotics applications, linear temporal logic (LTL) is a widely used task-specification language whose compositional grammar naturally induces temporally extended behaviours across tasks, including conditionals and alternative realizations.
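Finite-trace semantics for simple LTL operators can be sketched in a few lines (a toy illustration with hypothetical propositions, not the paper's specification machinery):

```python
def eventually(prop, trace):
    """LTL 'F prop': prop holds at some step of the finite trace."""
    return any(prop in step for step in trace)

def always(prop, trace):
    """LTL 'G prop': prop holds at every step of the finite trace."""
    return all(prop in step for step in trace)

# A trace is a sequence of sets of atomic propositions that hold at each step.
trace = [{"move"}, {"move"}, {"goal"}]
print(eventually("goal", trace), always("move", trace))
```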
no code implementations • 9 Jun 2022 • Yuan Yang, Siheng Xiong, James C Kerce, Faramarz Fekri
Inductive logic reasoning is one of the fundamental tasks on graphs, which seeks to generalize patterns from the data.
no code implementations • 6 May 2022 • Hang Zhang, Afshin Abdi, Faramarz Fekri
For the first time, we show that the correct graphical structure can be recovered under an indefinite sensing system ($d < p$) using insufficient samples ($n < p$).
no code implementations • 11 Apr 2022 • Hang Zhang, Afshin Abdi, Faramarz Fekri
This paper proposes a general framework to design a sparse sensing matrix $\ensuremath{\mathbf{A}}\in \mathbb{R}^{m\times n}$, in a linear measurement system $\ensuremath{\mathbf{y}} = \ensuremath{\mathbf{Ax}}^{\natural} + \ensuremath{\mathbf{w}}$, where $\ensuremath{\mathbf{y}} \in \mathbb{R}^m$, $\ensuremath{\mathbf{x}}^{\natural}\in \mathbb{R}^n$, and $\ensuremath{\mathbf{w}}$ denote the measurements, the signal with certain structures, and the measurement noise, respectively.
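The measurement model can be instantiated in a few lines of NumPy (a toy sketch; the dimensions, sparsity levels, and noise scale are illustrative choices, not the paper's design):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 50, 200

# Sparse sensing matrix A: each entry is nonzero with small probability.
A = rng.normal(size=(m, n)) * (rng.random((m, n)) < 0.1)

# k-sparse signal x and Gaussian measurement noise w.
x = np.zeros(n)
x[rng.choice(n, size=5, replace=False)] = rng.normal(size=5)
w = 0.01 * rng.normal(size=m)

y = A @ x + w   # the compressed measurements (m << n)
```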
no code implementations • 17 Mar 2022 • Muralikrishnna G. Sethuraman, Hang Zhang, Faramarz Fekri
In this paper, we propose a general framework for designing a sensing matrix $\boldsymbol{A} \in \mathbb{R}^{d\times p}$ for the estimation of a sparse covariance matrix from compressed measurements of the form $\boldsymbol{y} = \boldsymbol{A}\boldsymbol{x} + \boldsymbol{n}$, where $\boldsymbol{y}, \boldsymbol{n} \in \mathbb{R}^d$ and $\boldsymbol{x} \in \mathbb{R}^p$.
no code implementations • 24 Jan 2022 • Yashas Malur Saidutta, Afshin Abdi, Faramarz Fekri
Together, IoT devices generating enormous amounts of data and state-of-the-art machine learning techniques will revolutionize cyber-physical systems.
no code implementations • 8 Nov 2021 • Muralikrishnna G. Sethuraman, Ali Payani, Faramarz Fekri, J. Clayton Kerce
To achieve this, we take a symbolic reasoning based approach using the framework of formal logic.
no code implementations • 21 Jun 2021 • Duo Xu, Faramarz Fekri
In this work, we propose a new hierarchical framework via symbolic RL, leveraging a symbolic transition model to improve data efficiency and make the learned policy interpretable.
no code implementations • 22 Mar 2021 • Duo Xu, Faramarz Fekri
In this work, inspired by the previous use of Hamiltonian Monte Carlo (HMC) in variational inference (VI), we propose to integrate the policy network of actor-critic RL with HMC, which we term the {\it Hamiltonian Policy}.
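Leapfrog integration, the core of an HMC step, can be sketched as follows (a generic HMC building block applied to a standard-normal target, not the paper's actor-critic integration):

```python
import numpy as np

def leapfrog(q, p, grad_logp, step=0.1, n_steps=10):
    """Simulate Hamiltonian dynamics with the leapfrog integrator.

    q: position, p: momentum, grad_logp: gradient of the target log-density.
    """
    p = p + 0.5 * step * grad_logp(q)        # initial half-step on momentum
    for _ in range(n_steps - 1):
        q = q + step * p                     # full position step
        p = p + step * grad_logp(q)          # full momentum step
    q = q + step * p
    p = p + 0.5 * step * grad_logp(q)        # final half-step on momentum
    return q, p

# Standard-normal target: log p(q) = -q^2/2, so grad_logp(q) = -q.
q, p = np.array([1.0]), np.array([0.5])
q_new, p_new = leapfrog(q, p, lambda q: -q)
```

A key property visible even in this sketch is that the leapfrog integrator approximately conserves the Hamiltonian $H = \tfrac{1}{2}q^2 + \tfrac{1}{2}p^2$, which keeps HMC acceptance rates high.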
no code implementations • 19 Aug 2020 • Afshin Abdi, Saeed Rashidi, Faramarz Fekri, Tushar Krishna
In this paper, we consider the parallel implementation of an already-trained deep model on multiple processing nodes (a.k.a.
no code implementations • 30 Jun 2020 • Duo Xu, Mohit Agarwal, Ekansh Gupta, Faramarz Fekri, Raghupathy Sivakumar
Providing Reinforcement Learning (RL) agents with human feedback can dramatically improve various aspects of learning.
no code implementations • 23 Mar 2020 • Ali Payani, Faramarz Fekri
Most importantly, it allows expert knowledge to be incorporated into the learning, leading to much faster learning and better generalization compared to standard deep reinforcement learning.
no code implementations • ICLR 2020 • Duo Xu, Mohit Agarwal, Raghupathy Sivakumar, Faramarz Fekri
Building atop the baseline, we then make the following novel contributions in our work: (i) We argue that the definition of error-potentials is generalizable across different environments; specifically we show that error-potentials of an observer can be learned for a specific game, and the definition used as-is for another game without requiring re-learning of the error-potentials.
1 code implementation • 8 Jun 2019 • Ali Payani, Faramarz Fekri
In particular, we show that our proposed method outperforms state-of-the-art ILP solvers on classification tasks for the Mutagenesis, Cora, and IMDB datasets.
no code implementations • ICLR 2019 • Afshin Abdi, Faramarz Fekri
In distributed training, the communication cost due to the transmission of gradients or the parameters of the deep model is a major bottleneck in scaling up the number of processing nodes.
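A common way to reduce this cost is to compress the gradients before transmission; the sketch below shows generic top-k sparsification (a standard compression technique shown for illustration, not necessarily the scheme proposed in this paper):

```python
import numpy as np

def topk_sparsify(grad, k):
    """Keep only the k largest-magnitude entries of the gradient.

    Only k values and their indices need to be transmitted,
    instead of the full dense gradient vector.
    """
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    sparse = np.zeros_like(grad)
    sparse[idx] = grad[idx]
    return sparse

g = np.array([0.1, -2.0, 0.05, 3.0, -0.4])
print(topk_sparsify(g, 2))   # only the entries -2.0 and 3.0 survive
```

In practice such schemes are often paired with error feedback, where the discarded residual is added back into the next step's gradient to preserve convergence.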
no code implementations • 2 Apr 2019 • Ali Payani, Faramarz Fekri
In particular, we propose a new framework for learning the inductive logic programming (ILP) problems by exploiting the explicit representational power of NLN.