Search Results for author: Lu Feng

Found 15 papers, 5 papers with code

Safe POMDP Online Planning via Shielding

no code implementations · 19 Sep 2023 · Shili Sheng, David Parker, Lu Feng

POMDP online planning algorithms such as Partially Observable Monte-Carlo Planning (POMCP) can solve very large POMDPs with the goal of maximizing the expected return.

Autonomous Driving · Decision Making · +1
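Shielding, as used in this line of work, interposes a safety filter between the planner and the environment so that only actions passing a separate safety check are executed. A minimal illustrative sketch (not the paper's implementation; the grid world, hazard set, and function names are hypothetical):

```python
# Minimal sketch of action shielding: a shield filters the actions proposed
# by a planner, allowing only those a separate safety check deems safe.

def shielded_action(state, ranked_actions, is_safe):
    """Return the highest-ranked action that passes the safety check.

    ranked_actions: actions ordered by the planner's preference.
    is_safe: callable (state, action) -> bool, e.g. backed by model checking.
    """
    for action in ranked_actions:
        if is_safe(state, action):
            return action
    raise RuntimeError("no safe action available in this state")

# Hypothetical usage: in a grid world, block moves into a hazard cell.
hazards = {(1, 1)}

def is_safe(state, action):
    moves = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}
    dx, dy = moves[action]
    return (state[0] + dx, state[1] + dy) not in hazards

print(shielded_action((1, 0), ["up", "right"], is_safe))  # "up" would reach (1,1): blocked, so "right"
```

The planner remains free to optimize expected return; the shield only overrides it when the preferred action would violate safety.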

CARNA: Characterizing Advanced heart failure Risk and hemodyNAmic phenotypes using learned multi-valued decision diagrams

1 code implementation · 11 Jun 2023 · Josephine Lamp, Yuxin Wu, Steven Lamp, Prince Afriyie, Kenneth Bilchick, Lu Feng, Sula Mazimba

To address these limitations, this paper presents CARNA, a hemodynamic risk stratification and phenotyping framework for advanced HF that takes advantage of the explainability and expressivity of machine learned Multi-Valued Decision Diagrams (MVDDs).

Decision Making · Descriptive

Explainable Multi-Agent Reinforcement Learning for Temporal Queries

1 code implementation · 17 May 2023 · Kayla Boggess, Sarit Kraus, Lu Feng

As multi-agent reinforcement learning (MARL) systems are increasingly deployed throughout society, it is imperative yet challenging for users to understand the emergent behaviors of MARL agents in complex environments.

Multi-agent Reinforcement Learning · reinforcement-learning

Towards Developing Safety Assurance Cases for Learning-Enabled Medical Cyber-Physical Systems

no code implementations · 23 Nov 2022 · Maryam Bagheri, Josephine Lamp, Xugui Zhou, Lu Feng, Homa Alemzadeh

In this paper, we develop a safety assurance case for ML controllers in learning-enabled MCPS, with an emphasis on establishing confidence in the ML-based predictions.

Logic-based Reward Shaping for Multi-Agent Reinforcement Learning

1 code implementation · 17 Jun 2022 · Ingy Elsayed-Aly, Lu Feng

We present a novel method for semi-centralized logic-based MARL reward shaping that is scalable in the number of agents and evaluate it in multiple scenarios.

Multi-agent Reinforcement Learning · reinforcement-learning · +1
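Logic-based reward shaping typically rewards progress through a task automaton derived from a logic specification. A minimal illustrative sketch using classic potential-based shaping (the automaton, potentials, and function names are hypothetical, not taken from the paper):

```python
# Illustrative sketch of logic-based reward shaping: progress through a task
# automaton earns a shaped bonus via potential-based shaping, which preserves
# the optimal policy of the underlying MDP.

def shaped_reward(env_reward, prev_aut_state, aut_state, potential, gamma=0.99):
    """Potential-based shaping: r' = r + gamma * phi(s') - phi(s)."""
    return env_reward + gamma * potential[aut_state] - potential[prev_aut_state]

# Hypothetical automaton for "visit A, then B": states 0 -> 1 -> 2 (accepting).
potential = {0: 0.0, 1: 0.5, 2: 1.0}

# An agent advancing from automaton state 0 to 1 receives a shaping bonus.
print(shaped_reward(0.0, 0, 1, potential))  # 0.99 * 0.5 - 0.0 = 0.495
```

In a semi-centralized multi-agent setting, the same idea applies per agent or per sub-task automaton, which is what makes this kind of shaping scalable in the number of agents.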

Toward Policy Explanations for Multi-Agent Reinforcement Learning

1 code implementation · 26 Apr 2022 · Kayla Boggess, Sarit Kraus, Lu Feng

Advances in multi-agent reinforcement learning (MARL) enable sequential decision making for a range of exciting multi-agent applications such as cooperative AI and autonomous driving.

Autonomous Driving · Decision Making · +3

Multi-Objective Controller Synthesis with Uncertain Human Preferences

no code implementations · 10 May 2021 · Shenghui Chen, Kayla Boggess, David Parker, Lu Feng

Complex real-world applications of cyber-physical systems give rise to the need for multi-objective controller synthesis, which concerns the problem of computing an optimal controller subject to multiple (possibly conflicting) criteria.

Safe Multi-Agent Reinforcement Learning via Shielding

no code implementations · 27 Jan 2021 · Ingy Elsayed-Aly, Suda Bharadwaj, Christopher Amato, Rüdiger Ehlers, Ufuk Topcu, Lu Feng

Multi-agent reinforcement learning (MARL) has been increasingly used in a wide range of safety-critical applications, which require guaranteed safety (e.g., no unsafe states are ever visited) during the learning process. Unfortunately, current MARL methods do not have safety guarantees.

Multi-agent Reinforcement Learning · reinforcement-learning · +1

DeepTake: Prediction of Driver Takeover Behavior using Multimodal Data

no code implementations · 31 Dec 2020 · Erfan Pakdamanian, Shili Sheng, Sonia Baee, Seongkook Heo, Sarit Kraus, Lu Feng

Nevertheless, automated vehicles may still need to occasionally hand the control back to drivers due to technology limitations and legal requirements.

STLnet: Signal Temporal Logic Enforced Multivariate Recurrent Neural Networks

no code implementations · NeurIPS 2020 · Meiyi Ma, Ji Gao, Lu Feng, John Stankovic

In this paper, we develop a new temporal logic-based learning framework, STLnet, which guides the RNN learning process with auxiliary knowledge of model properties, and produces a more robust model for improved future predictions.
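Signal Temporal Logic (STL) comes with quantitative robustness semantics, which is what makes logic properties usable as a training signal. A minimal illustrative sketch for one property (STLnet's framework is far more general; the property, traces, and function names here are hypothetical):

```python
# Minimal sketch of STL robustness for the property "always x > 0" over a
# trace: robustness is the minimum margin min_t (x_t - 0); a positive value
# means the property is satisfied, a negative value means it is violated.

def robustness_always_positive(trace):
    return min(x - 0.0 for x in trace)

# A violation can then be penalized as an auxiliary loss term during training.
def stl_penalty(trace, weight=1.0):
    return weight * max(0.0, -robustness_always_positive(trace))

print(robustness_always_positive([0.5, 1.2, 0.3]))   # 0.3 (satisfied)
print(robustness_always_positive([0.5, -0.1, 0.3]))  # -0.1 (violated)
print(stl_penalty([0.5, -0.1, 0.3]))                 # 0.1
```

Adding such a penalty to the prediction loss nudges an RNN toward outputs that satisfy the specified model properties, the intuition behind guiding learning with auxiliary logic knowledge.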

Towards Personalized Explanation of Robot Path Planning via User Feedback

no code implementations · 1 Nov 2020 · Kayla Boggess, Shenghui Chen, Lu Feng

Prior studies have found that explaining robot decisions and actions helps to increase system transparency, improve user understanding, and enable effective human-robot collaboration.

Question Answering · Specificity

Predictive Monitoring with Logic-Calibrated Uncertainty for Cyber-Physical Systems

no code implementations · 31 Oct 2020 · Meiyi Ma, John Stankovic, Ezio Bartocci, Lu Feng

We develop a novel approach for monitoring sequential predictions generated from Bayesian Recurrent Neural Networks (RNNs) that can capture the inherent uncertainty in CPS, drawing on insights from our study of real-world CPS datasets.

Decision Making

Towards Transparent Robotic Planning via Contrastive Explanations

no code implementations · 16 Mar 2020 · Shenghui Chen, Kayla Boggess, Lu Feng

Providing explanations of chosen robotic actions can help to increase the transparency of robotic planning and improve users' trust.

MEDIRL: Predicting the Visual Attention of Drivers via Maximum Entropy Deep Inverse Reinforcement Learning

2 code implementations · ICCV 2021 · Sonia Baee, Erfan Pakdamanian, Inki Kim, Lu Feng, Vicente Ordonez, Laura Barnes

Inspired by human visual attention, we propose a novel inverse reinforcement learning formulation using Maximum Entropy Deep Inverse Reinforcement Learning (MEDIRL) for predicting the visual attention of drivers in accident-prone situations.

Autonomous Vehicles · reinforcement-learning · +1

Counterexamples for Robotic Planning Explained in Structured Language

no code implementations · 23 Mar 2018 · Lu Feng, Mahsa Ghasemi, Kai-Wei Chang, Ufuk Topcu

Automated techniques such as model checking have been used to verify models of robotic mission plans based on Markov decision processes (MDPs) and generate counterexamples that may help diagnose requirement violations.
