Search Results for author: Faramarz Fekri

Found 22 papers, 6 papers with code

Learning Cyclic Causal Models from Incomplete Data

no code implementations23 Feb 2024 Muralikrishnna G. Sethuraman, Faramarz Fekri

Under the additive noise model, MissNODAGS learns the causal graph by alternating between imputing the missing data and maximizing the expected log-likelihood of the visible part of the data in each training step, following the principles of the expectation-maximization (EM) framework.

Causal Discovery Imputation
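The alternation described in the snippet — impute missing entries, then maximize the expected log-likelihood of the observed part — can be sketched in a toy EM-style loop. This is a hypothetical, heavily simplified linear-Gaussian stand-in for MissNODAGS, not the paper's actual algorithm: the M-step here is a crude per-variable least-squares fit rather than true likelihood maximization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear additive-noise model x = W^T x + e over p variables,
# with entries missing at random (illustrative data, not the paper's).
p, n = 3, 500
W_true = np.array([[0.0, 0.8, 0.0],
                   [0.0, 0.0, 0.5],
                   [0.0, 0.0, 0.0]])
E = rng.normal(size=(n, p))
X = E @ np.linalg.inv(np.eye(p) - W_true).T
mask = rng.random((n, p)) > 0.2          # True = observed entry

W = np.zeros((p, p))                      # edge-weight estimate
X_imp = np.where(mask, X, 0.0)            # start with zero-imputation
for step in range(50):
    # "M-step" (crude): fit each variable on the others by least squares,
    # standing in for maximizing the expected log-likelihood.
    for j in range(p):
        others = [k for k in range(p) if k != j]
        coef, *_ = np.linalg.lstsq(X_imp[:, others], X_imp[:, j], rcond=None)
        W[others, j] = coef
    # "E-step" (crude): re-impute missing entries from the current model.
    X_imp = np.where(mask, X, X_imp @ W)
```

The point of the sketch is only the alternating structure; the real method handles nonlinear cyclic models and uses proper expected-likelihood objectives.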

TILP: Differentiable Learning of Temporal Logical Rules on Knowledge Graphs

1 code implementation19 Feb 2024 Siheng Xiong, Yuan Yang, Faramarz Fekri, James Clayton Kerce

Compared with static knowledge graphs, temporal knowledge graphs (tKG), which can capture the evolution and change of information over time, are more realistic and general.

Knowledge Graphs

Large Language Models Can Learn Temporal Reasoning

1 code implementation12 Jan 2024 Siheng Xiong, Ali Payani, Ramana Kompella, Faramarz Fekri

Specifically, we first teach the LLM to translate the context into a temporal graph (TG).

Data Augmentation Text Generation
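A temporal graph of the kind the snippet mentions can be illustrated with a minimal structure of time-stamped facts plus one temporal-reasoning primitive. Everything here (the `TemporalEdge` class, the example facts, the `overlaps` check) is a hypothetical illustration of what an LLM-extracted TG might contain, not the paper's actual representation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TemporalEdge:
    """One time-stamped fact extracted from a context paragraph."""
    subject: str
    relation: str
    obj: str
    start: int  # e.g. year the fact begins to hold
    end: int    # year it stops holding

# Facts an LLM might extract from a context paragraph (illustrative only).
tg = [
    TemporalEdge("Alice", "worked_at", "AcmeCorp", 2015, 2019),
    TemporalEdge("Alice", "studied_at", "TechUniversity", 2011, 2015),
]

def overlaps(a: TemporalEdge, b: TemporalEdge) -> bool:
    """Temporal reasoning primitive: do two facts' intervals overlap?"""
    return a.start < b.end and b.start < a.end

print(overlaps(tg[0], tg[1]))  # adjacent intervals meet at 2015, so: False
```

Reasoning over such a graph (ordering, overlap, duration) is what makes the intermediate TG representation useful for temporal questions.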

TEILP: Time Prediction over Knowledge Graphs via Logical Reasoning

1 code implementation25 Dec 2023 Siheng Xiong, Yuan Yang, Ali Payani, James C Kerce, Faramarz Fekri

We first convert TKGs into a temporal event knowledge graph (TEKG), which has a more explicit representation of time in terms of the nodes of the graph.

Knowledge Graphs Logical Reasoning
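The TKG-to-TEKG conversion the snippet describes amounts to making timestamps first-class nodes rather than edge annotations. The sketch below is a hypothetical minimal version of that idea (the node and relation names are made up for illustration):

```python
# TKG facts as (subject, relation, object, timestamp) quadruples.
quadruples = [
    ("Alice", "visited", "Paris", 2020),
    ("Alice", "visited", "Rome", 2022),
]

nodes, edges = set(), []
for s, r, o, t in quadruples:
    event = f"{s}-{r}-{o}"          # one node per temporal event
    time_node = f"t:{t}"             # time made explicit as its own node
    nodes.update({s, o, event, time_node})
    edges += [(s, "subject_of", event),
              (event, "object", o),
              (event, "occurs_at", time_node)]

print(sorted(n for n in nodes if n.startswith("t:")))  # ['t:2020', 't:2022']
```

With time as graph nodes, time prediction becomes a reasoning problem over graph structure rather than regression over edge labels.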

Harnessing the Power of Large Language Models for Natural Language to First-Order Logic Translation

1 code implementation24 May 2023 Yuan Yang, Siheng Xiong, Ali Payani, Ehsan Shareghi, Faramarz Fekri

Translating natural language sentences to first-order logic (NL-FOL translation) is a longstanding challenge in the NLP and formal logic literature.

Formal Logic Sentence +1

NODAGS-Flow: Nonlinear Cyclic Causal Structure Learning

1 code implementation4 Jan 2023 Muralikrishnna G. Sethuraman, Romain Lopez, Rahul Mohan, Faramarz Fekri, Tommaso Biancalani, Jan-Christian Hütter

Learning causal relationships between variables is a well-studied problem in statistics, with many important applications in science.

Generalizing LTL Instructions via Future Dependent Options

no code implementations8 Dec 2022 Duo Xu, Faramarz Fekri

In many real-world applications of control systems and robotics, linear temporal logic (LTL) is a widely used task-specification language whose compositional grammar naturally induces temporally extended behaviours across tasks, including conditionals and alternative realizations.

Temporal Inductive Logic Reasoning

no code implementations9 Jun 2022 Yuan Yang, Siheng Xiong, James C Kerce, Faramarz Fekri

Inductive logic reasoning is one of the fundamental tasks on graphs; it seeks to generalize patterns from the data.

Inductive logic programming Knowledge Graphs

Structure Learning in Graphical Models from Indirect Observations

no code implementations6 May 2022 Hang Zhang, Afshin Abdi, Faramarz Fekri

For the first time, we show that the correct graphical structure can be recovered under the indefinite sensing system ($d < p$) using insufficient samples ($n < p$).

A General Compressive Sensing Construct using Density Evolution

no code implementations11 Apr 2022 Hang Zhang, Afshin Abdi, Faramarz Fekri

This paper proposes a general framework to design a sparse sensing matrix $\mathbf{A}\in \mathbb{R}^{m\times n}$ in a linear measurement system $\mathbf{y} = \mathbf{A}\mathbf{x}^{\natural} + \mathbf{w}$, where $\mathbf{y} \in \mathbb{R}^m$, $\mathbf{x}^{\natural}\in \mathbb{R}^n$, and $\mathbf{w}$ denote the measurements, the signal with certain structures, and the measurement noise, respectively.

Compressive Sensing
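The measurement model $\mathbf{y} = \mathbf{A}\mathbf{x}^{\natural} + \mathbf{w}$ with a sparse sensing matrix is easy to simulate. The snippet below only illustrates the model itself; the paper's contribution is designing $\mathbf{A}$ via density evolution, whereas here a random sparse matrix is drawn as a stand-in.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 50, 200

# Sparse sensing matrix A: random here, density-evolution-designed in the paper.
density = 0.05
A = rng.normal(size=(m, n)) * (rng.random((m, n)) < density)

# k-sparse signal x^natural and noisy measurements y = A x + w.
k = 5
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.normal(size=k)
w = 0.01 * rng.normal(size=m)
y = A @ x + w

print(y.shape)  # (50,) -- m measurements of an n-dimensional signal
```

Note m < n: the system is underdetermined, and recovery relies on the sparsity of both the signal and, in this line of work, the sensing matrix.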

A Density Evolution framework for Preferential Recovery of Covariance and Causal Graphs from Compressed Measurements

no code implementations17 Mar 2022 Muralikrishnna G. Sethuraman, Hang Zhang, Faramarz Fekri

In this paper, we propose a general framework for designing sensing matrix $\boldsymbol{A} \in \mathbb{R}^{d\times p}$, for estimation of sparse covariance matrix from compressed measurements of the form $\boldsymbol{y} = \boldsymbol{A}\boldsymbol{x} + \boldsymbol{n}$, where $\boldsymbol{y}, \boldsymbol{n} \in \mathbb{R}^d$, and $\boldsymbol{x} \in \mathbb{R}^p$.

Retrieval

A Machine Learning Framework for Distributed Functional Compression over Wireless Channels in IoT

no code implementations24 Jan 2022 Yashas Malur Saidutta, Afshin Abdi, Faramarz Fekri

IoT devices generating enormous amounts of data, together with state-of-the-art machine learning techniques, will revolutionize cyber-physical systems.

Autonomous Driving BIG-bench Machine Learning +1

Interpretable Model-based Hierarchical Reinforcement Learning using Inductive Logic Programming

no code implementations21 Jun 2021 Duo Xu, Faramarz Fekri

In this work, we propose a new hierarchical framework via symbolic RL, leveraging a symbolic transition model to improve data-efficiency and make the learned policy interpretable.

Hierarchical Reinforcement Learning Inductive logic programming +2

Improving Actor-Critic Reinforcement Learning via Hamiltonian Monte Carlo Method

no code implementations22 Mar 2021 Duo Xu, Faramarz Fekri

In this work, inspired by the previous use of Hamiltonian Monte Carlo (HMC) in VI, we propose to integrate the policy network of actor-critic RL with HMC, which we term the Hamiltonian Policy.

Continuous Control reinforcement-learning +2
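For readers unfamiliar with the HMC ingredient, here is a minimal leapfrog HMC step on a standard Gaussian target. This is generic textbook HMC as a point of reference, not the paper's Hamiltonian Policy (which couples HMC with an actor-critic policy network); step size and trajectory length are arbitrary illustrative choices.

```python
import numpy as np

def grad_log_p(theta):
    """Gradient of log N(0, I) -- the toy target density."""
    return -theta

def hmc_step(theta, rng, eps=0.1, n_leapfrog=20):
    p = rng.normal(size=theta.shape)                     # sample momentum
    theta_new, p_new = theta.copy(), p.copy()
    p_new = p_new + 0.5 * eps * grad_log_p(theta_new)    # half momentum step
    for _ in range(n_leapfrog - 1):
        theta_new = theta_new + eps * p_new
        p_new = p_new + eps * grad_log_p(theta_new)
    theta_new = theta_new + eps * p_new
    p_new = p_new + 0.5 * eps * grad_log_p(theta_new)    # final half step
    # Metropolis accept/reject on the Hamiltonian H = U + K.
    H_old = 0.5 * theta @ theta + 0.5 * p @ p
    H_new = 0.5 * theta_new @ theta_new + 0.5 * p_new @ p_new
    return theta_new if rng.random() < np.exp(H_old - H_new) else theta

rng = np.random.default_rng(0)
theta = np.zeros(2)
samples = np.array([theta := hmc_step(theta, rng) for _ in range(500)])
print(samples.shape)  # (500, 2)
```

The gradient-guided leapfrog trajectories are what make HMC proposals far less random-walk-like than vanilla Metropolis updates, which is the property the paper exploits for policy exploration.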

Restructuring, Pruning, and Adjustment of Deep Models for Parallel Distributed Inference

no code implementations19 Aug 2020 Afshin Abdi, Saeed Rashidi, Faramarz Fekri, Tushar Krishna

In this paper, we consider the parallel implementation of an already-trained deep model on multiple processing nodes (a.k.a.

Accelerating Reinforcement Learning Agent with EEG-based Implicit Human Feedback

no code implementations30 Jun 2020 Duo Xu, Mohit Agarwal, Ekansh Gupta, Faramarz Fekri, Raghupathy Sivakumar

Providing Reinforcement Learning (RL) agents with human feedback can dramatically improve various aspects of learning.

Autonomous Driving EEG +3

Incorporating Relational Background Knowledge into Reinforcement Learning via Differentiable Inductive Logic Programming

no code implementations23 Mar 2020 Ali Payani, Faramarz Fekri

Most importantly, it allows for incorporating expert knowledge into the learning, and hence leads to much faster learning and better generalization compared to standard deep reinforcement learning.

Inductive logic programming reinforcement-learning +2

Deep Reinforcement Learning with Implicit Human Feedback

no code implementations ICLR 2020 Duo Xu, Mohit Agarwal, Raghupathy Sivakumar, Faramarz Fekri

Building atop the baseline, we then make the following novel contributions in our work: (i) We argue that the definition of error-potentials is generalizable across different environments; specifically we show that error-potentials of an observer can be learned for a specific game, and the definition used as-is for another game without requiring re-learning of the error-potentials.

Atari Games EEG +2

Inductive Logic Programming via Differentiable Deep Neural Logic Networks

1 code implementation8 Jun 2019 Ali Payani, Faramarz Fekri

In particular, we show that our proposed method outperforms the state of the art ILP solvers in classification tasks for Mutagenesis, Cora and IMDB datasets.

General Classification Inductive logic programming

Nested Dithered Quantization for Communication Reduction in Distributed Training

no code implementations ICLR 2019 Afshin Abdi, Faramarz Fekri

In distributed training, the communication cost due to the transmission of gradients or the parameters of the deep model is a major bottleneck in scaling up the number of processing nodes.

Quantization

Learning Algorithms via Neural Logic Networks

no code implementations2 Apr 2019 Ali Payani, Faramarz Fekri

In particular, we propose a new framework for learning inductive logic programming (ILP) problems by exploiting the explicit representational power of NLN.

Inductive logic programming
