no code implementations • 19 Sep 2023 • Shili Sheng, David Parker, Lu Feng
POMDP online planning algorithms such as Partially Observable Monte-Carlo Planning (POMCP) can solve very large POMDPs with the goal of maximizing the expected return.
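The core idea behind Monte-Carlo planning for POMDPs can be illustrated with a flat rollout planner: sample states from a particle belief, simulate each candidate action with a generative model, and pick the action with the best average return. This is only a sketch of the idea; real POMCP additionally builds a UCB1-guided search tree, which is omitted here. The `step` generative model and action set are assumptions for illustration.

```python
import random

def rollout(state, step, actions, depth, gamma=0.95):
    """Estimate the return of a random playout from `state`
    using a generative model `step(state, action) -> (state, reward)`."""
    total, discount = 0.0, 1.0
    for _ in range(depth):
        state, reward = step(state, random.choice(actions))
        total += discount * reward
        discount *= gamma
    return total

def mc_plan(belief_particles, step, actions, n_sims=100, depth=10, gamma=0.95):
    """Pick the action with the highest average simulated return
    over states sampled from the particle belief."""
    best_action, best_value = None, float("-inf")
    for a in actions:
        value = 0.0
        for _ in range(n_sims):
            s = random.choice(belief_particles)  # sample a state from the belief
            s2, r = step(s, a)                   # one simulated step
            value += r + gamma * rollout(s2, step, actions, depth - 1)
        value /= n_sims
        if value > best_value:
            best_action, best_value = a, value
    return best_action
```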
1 code implementation • 11 Jun 2023 • Josephine Lamp, Yuxin Wu, Steven Lamp, Prince Afriyie, Kenneth Bilchick, Lu Feng, Sula Mazimba
To address these limitations, this paper presents CARNA, a hemodynamic risk stratification and phenotyping framework for advanced HF that takes advantage of the explainability and expressivity of machine-learned Multi-Valued Decision Diagrams (MVDDs).
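An MVDD generalizes a binary decision diagram by letting each internal node branch on several values of a discretized feature, which is what makes the resulting classifier directly readable. The following is a minimal sketch of that data structure; the feature names, branch values, and risk labels are hypothetical and are not CARNA's learned diagram.

```python
class MVDDNode:
    """One node of a multi-valued decision diagram (sketch)."""
    def __init__(self, var=None, branches=None, label=None):
        self.var = var            # feature tested at this internal node
        self.branches = branches  # dict: discretized feature value -> child node
        self.label = label        # risk label at a leaf (None for internal nodes)

    def classify(self, measurements):
        node = self
        while node.label is None:  # walk branches until a leaf is reached
            node = node.branches[measurements[node.var]]
        return node.label

# Hypothetical two-level diagram over discretized hemodynamic features.
low = MVDDNode(label="low risk")
med = MVDDNode(label="medium risk")
high = MVDDNode(label="high risk")
inner = MVDDNode(var="wedge_pressure",
                 branches={"low": med, "normal": low, "high": high})
root = MVDDNode(var="cardiac_index",
                branches={"low": inner, "normal": low, "high": med})
```

A prediction is a single root-to-leaf walk, so the path itself serves as the explanation.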
1 code implementation • 17 May 2023 • Kayla Boggess, Sarit Kraus, Lu Feng
As multi-agent reinforcement learning (MARL) systems are increasingly deployed throughout society, it is imperative yet challenging for users to understand the emergent behaviors of MARL agents in complex environments.
no code implementations • 23 Nov 2022 • Maryam Bagheri, Josephine Lamp, Xugui Zhou, Lu Feng, Homa Alemzadeh
In this paper, we develop a safety assurance case for ML controllers in learning-enabled MCPS, with an emphasis on establishing confidence in the ML-based predictions.
1 code implementation • 17 Jun 2022 • Ingy Elsayed-Aly, Lu Feng
We present a novel method for semi-centralized logic-based MARL reward shaping that is scalable in the number of agents and evaluate it in multiple scenarios.
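One common way to realize logic-based reward shaping is to compile the specification into a small automaton and pay agents a bonus whenever the automaton advances toward acceptance. The sketch below uses a hypothetical two-step spec ("eventually reach A, then eventually reach B"); it illustrates the general idea, not the paper's exact semi-centralized construction.

```python
# Hypothetical spec automaton: state -> {event: next_state}; state 2 accepts.
SPEC_AUTOMATON = {
    0: {"at_A": 1},
    1: {"at_B": 2},
}

def shaped_reward(env_reward, aut_state, event, bonus=1.0):
    """Return (shaped reward, new automaton state).
    A bonus is added only when `event` advances the spec automaton."""
    nxt = SPEC_AUTOMATON.get(aut_state, {}).get(event)
    if nxt is not None:
        return env_reward + bonus, nxt  # progress toward satisfying the spec
    return env_reward, aut_state
```

Because the automaton state is shared, the shaping signal stays a single small object regardless of how many agents are trained, which is one route to scalability in the number of agents.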
1 code implementation • 26 Apr 2022 • Kayla Boggess, Sarit Kraus, Lu Feng
Advances in multi-agent reinforcement learning (MARL) enable sequential decision making for a range of exciting multi-agent applications such as cooperative AI and autonomous driving.
no code implementations • 10 May 2021 • Shenghui Chen, Kayla Boggess, David Parker, Lu Feng
Complex real-world applications of cyber-physical systems give rise to the need for multi-objective controller synthesis, which concerns the problem of computing an optimal controller subject to multiple (possibly conflicting) criteria.
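With multiple, possibly conflicting criteria there is generally no single best controller, only a Pareto front of non-dominated trade-offs. As a minimal illustration (not the synthesis algorithm itself), the following filters candidate controllers scored on two objectives, both assumed here to be maximized:

```python
def pareto_front(points):
    """Return the points not strictly dominated by any other point.
    Each point is (objective1, objective2), both to be maximized."""
    front = []
    for p in points:
        dominated = any(q != p and q[0] >= p[0] and q[1] >= p[1] for q in points)
        if not dominated:
            front.append(p)
    return front
```

Objectives to be minimized (e.g., expected cost) can be handled by negating them before the comparison.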
no code implementations • 27 Jan 2021 • Ingy Elsayed-Aly, Suda Bharadwaj, Christopher Amato, Rüdiger Ehlers, Ufuk Topcu, Lu Feng
Multi-agent reinforcement learning (MARL) has been increasingly used in a wide range of safety-critical applications, which require guaranteed safety (e.g., no unsafe states are ever visited) during the learning process. Unfortunately, current MARL methods do not have safety guarantees.
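A standard mechanism for enforcing safety during learning is a shield: a monitor that intercepts each proposed joint action and substitutes a precomputed safe one whenever the proposal would violate the safety condition. The sketch below uses a hypothetical collision rule on a grid (two agents may not enter the same cell); a real shield would be synthesized from a formal safety specification.

```python
# Grid moves: action name -> (dx, dy).
MOVES = {"up": (0, 1), "down": (0, -1), "left": (-1, 0),
         "right": (1, 0), "stay": (0, 0)}

def shield(joint_action, positions, safe_action=("stay", "stay")):
    """If the proposed joint action would move two agents into the same
    cell (hypothetical unsafe condition), substitute a safe fallback."""
    targets = [(x + MOVES[a][0], y + MOVES[a][1])
               for (x, y), a in zip(positions, joint_action)]
    if len(set(targets)) < len(targets):  # collision detected
        return safe_action
    return joint_action
```

The learners still receive rewards for the executed (corrected) actions, so exploration continues while unsafe states are never visited.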
no code implementations • 31 Dec 2020 • Erfan Pakdamanian, Shili Sheng, Sonia Baee, Seongkook Heo, Sarit Kraus, Lu Feng
Nevertheless, automated vehicles may still need to occasionally hand the control back to drivers due to technology limitations and legal requirements.
no code implementations • NeurIPS 2020 • Meiyi Ma, Ji Gao, Lu Feng, John Stankovic
In this paper, we develop a new temporal logic-based learning framework, STLnet, which guides the RNN learning process with auxiliary knowledge of model properties, and produces a more robust model for improved future predictions.
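The key ingredient in logic-guided training is the quantitative (robustness) semantics of a temporal-logic property: a signed score that is positive exactly when a predicted trace satisfies the property, so violations can be penalized during learning. Below is a sketch for the single property "always x < c" with a hinge-style penalty; STLnet's actual loss and property language are richer than this.

```python
def robustness_always_below(trace, c):
    """STL robustness of 'always x < c' over a trace:
    min over time of (c - x_t); positive iff the property holds."""
    return min(c - x for x in trace)

def stl_penalty(trace, c):
    """Hinge-style auxiliary loss: zero when the property is satisfied,
    otherwise proportional to the worst violation (illustrative sketch)."""
    return max(0.0, -robustness_always_below(trace, c))
```

Adding such a penalty to the usual prediction loss pushes the learned model toward traces that satisfy the stated properties.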
no code implementations • 1 Nov 2020 • Kayla Boggess, Shenghui Chen, Lu Feng
Prior studies have found that explaining robot decisions and actions helps to increase system transparency, improve user understanding, and enable effective human-robot collaboration.
no code implementations • 31 Oct 2020 • Meiyi Ma, John Stankovic, Ezio Bartocci, Lu Feng
We develop a novel approach for monitoring sequential predictions generated from Bayesian Recurrent Neural Networks (RNNs) that can capture the inherent uncertainty in CPS, drawing on insights from our study of real-world CPS datasets.
no code implementations • 16 Mar 2020 • Shenghui Chen, Kayla Boggess, Lu Feng
Providing explanations of chosen robotic actions can help to increase the transparency of robotic planning and improve users' trust.
2 code implementations • ICCV 2021 • Sonia Baee, Erfan Pakdamanian, Inki Kim, Lu Feng, Vicente Ordonez, Laura Barnes
Inspired by human visual attention, we propose a novel inverse reinforcement learning formulation using Maximum Entropy Deep Inverse Reinforcement Learning (MEDIRL) for predicting the visual attention of drivers in accident-prone situations.
no code implementations • 23 Mar 2018 • Lu Feng, Mahsa Ghasemi, Kai-Wei Chang, Ufuk Topcu
Automated techniques such as model checking have been used to verify models of robotic mission plans based on Markov decision processes (MDPs) and generate counterexamples that may help diagnose requirement violations.
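The quantity a probabilistic model checker computes for such MDP models is, at its core, the maximum probability of reaching a set of states, obtainable by value iteration. The toy sketch below illustrates that computation; a tool like PRISM performs it exactly and at scale, and additionally extracts counterexamples.

```python
def max_reach_prob(transitions, targets, n_states, iters=100):
    """Value iteration for the max probability of eventually reaching
    `targets` in an MDP. `transitions[s]` is a list of actions; each
    action is a list of (probability, next_state) pairs."""
    v = [1.0 if s in targets else 0.0 for s in range(n_states)]
    for _ in range(iters):
        v = [1.0 if s in targets else
             max(sum(p * v[t] for p, t in act) for act in transitions[s])
             for s in range(n_states)]
    return v
```

For verification of a safety requirement, the same iteration over the unsafe states gives the worst-case probability of violating it.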