Search Results for author: Julie A. Shah

Found 15 papers, 4 papers with code

Explainable deep learning improves human mental models of self-driving cars

no code implementations • 27 Nov 2024 • Eoin M. Kenny, Akshay Dharmavaram, Sang Uk Lee, Tung Phan-Minh, Shreyas Rajesh, Yunqing Hu, Laura Major, Momchil S. Tomov, Julie A. Shah

We anticipate our method could be applied to other safety-critical systems with a human in the loop, such as autonomous drones and robotic surgeons.

Self-Driving Cars

Enhancing Preference-based Linear Bandits via Human Response Time

no code implementations • 9 Sep 2024 • Shen Li, Yuyang Zhang, Zhaolin Ren, Claire Liang, Na Li, Julie A. Shah

Theoretical and empirical analyses show that for queries with strong preferences, response times complement choices by providing extra information about preference strength, leading to significantly improved utility estimation.
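
Below is a minimal illustrative sketch of the general idea of letting response times inform preference strength; the reciprocal-time weighting and the Bradley-Terry update are assumptions chosen for illustration, not the paper's estimator.

```python
# Sketch (not the paper's algorithm): combine binary choices with response
# times to estimate per-arm utilities. Assumption: faster responses signal
# stronger preferences, so updates are weighted by 1 / response_time.
import numpy as np

def estimate_utilities(n_arms, queries, choices, response_times, lr=0.1, n_iters=200):
    """queries: list of (i, j) arm pairs shown to the human.
    choices: 1 if arm i was chosen over arm j, else 0.
    response_times: seconds taken to answer each query."""
    u = np.zeros(n_arms)
    for _ in range(n_iters):
        for (i, j), c, rt in zip(queries, choices, response_times):
            # Bradley-Terry probability of choosing arm i over arm j.
            p_i = 1.0 / (1.0 + np.exp(-(u[i] - u[j])))
            grad = c - p_i                    # gradient of the choice log-likelihood
            weight = 1.0 / max(rt, 1e-3)      # faster answer -> larger update (assumption)
            u[i] += lr * weight * grad
            u[j] -= lr * weight * grad
    return u - u.mean()                       # utilities identifiable only up to a constant

# Toy usage: arm 0 is strongly preferred, and those choices come back quickly.
queries = [(0, 1), (0, 2), (1, 2)]
choices = [1, 1, 1]
response_times = [0.8, 0.9, 3.0]
print(estimate_utilities(3, queries, choices, response_times))
```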

Set-based State Estimation with Probabilistic Consistency Guarantee under Epistemic Uncertainty

no code implementations • 18 Oct 2021 • Shen Li, Theodoros Stouraitis, Michael Gienger, Sethu Vijayakumar, Julie A. Shah

Consistent state estimation is challenging, especially under the epistemic uncertainties arising from learned (nonlinear) dynamic and observation models.
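
The snippet below is a toy sketch of set-based estimation on a scalar linear system with bounded model error; the interval set representation and the specific bounds are assumptions for illustration, not the paper's formulation.

```python
# Sketch (hypothetical, not the paper's method): interval-based set propagation
# for a scalar system x' = a*x + w with measurement y = x + v, where the model
# error w and measurement noise v are only known to lie in bounded sets.
def propagate(interval, a=1.0, w_bound=0.1):
    lo, hi = interval
    candidates = [a * lo, a * hi]
    return (min(candidates) - w_bound, max(candidates) + w_bound)

def update(interval, y, v_bound=0.2):
    # Intersect the predicted set with the measurement-consistent set [y - v, y + v].
    lo, hi = interval
    new_lo, new_hi = max(lo, y - v_bound), min(hi, y + v_bound)
    if new_lo > new_hi:                 # empty intersection -> inconsistency detected
        raise ValueError("measurement inconsistent with predicted set")
    return (new_lo, new_hi)

# Toy usage: the true state is guaranteed to stay inside the returned interval
# as long as the assumed bounds on w and v are valid.
x_set = (-0.5, 0.5)
for y in [0.1, 0.15, 0.05]:
    x_set = update(propagate(x_set), y)
    print(x_set)
```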

Towards an AI Coach to Infer Team Mental Model Alignment in Healthcare

no code implementations • 17 Feb 2021 • Sangwon Seo, Lauren R. Kennedy-Metz, Marco A. Zenati, Julie A. Shah, Roger D. Dias, Vaibhav V. Unhelkar

Shared mental models are critical to team success; however, in practice, team members may have misaligned models due to a variety of factors.

Learning Household Task Knowledge from WikiHow Descriptions

1 code implementation • WS 2019 • Yilun Zhou, Julie A. Shah, Steven Schockaert

Commonsense procedural knowledge is important for AI agents and robots that operate in a human environment.

On Memory Mechanism in Multi-Agent Reinforcement Learning

no code implementations • 11 Sep 2019 • Yilun Zhou, Derrik E. Asher, Nicholas R. Waytowich, Julie A. Shah

Multi-agent reinforcement learning (MARL) extends (single-agent) reinforcement learning (RL) by introducing additional agents and (potentially) partial observability of the environment.

Multi-agent Reinforcement Learning • reinforcement-learning +2
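
As a toy illustration of why memory matters under partial observability, the sketch below gives each agent a recurrent hidden state that summarizes its observation history; the architecture and dimensions are arbitrary placeholders, not the paper's experimental setup.

```python
# Sketch (illustrative only): a recurrent policy for a partially observable
# agent. Each agent sees only a local observation, so it keeps a hidden memory
# state that folds in the history of past observations.
import numpy as np

class RecurrentAgent:
    def __init__(self, obs_dim, hidden_dim, n_actions, seed=0):
        rng = np.random.default_rng(seed)
        self.W_o = rng.normal(scale=0.1, size=(hidden_dim, obs_dim))
        self.W_h = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
        self.W_a = rng.normal(scale=0.1, size=(n_actions, hidden_dim))
        self.h = np.zeros(hidden_dim)

    def act(self, obs):
        # Memory update: the hidden state summarizes the observation history.
        self.h = np.tanh(self.W_o @ obs + self.W_h @ self.h)
        logits = self.W_a @ self.h
        return int(np.argmax(logits))

# Two agents acting on partial observations of a shared (hidden) environment state.
agents = [RecurrentAgent(obs_dim=2, hidden_dim=8, n_actions=3, seed=i) for i in range(2)]
obs = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
print([agent.act(o) for agent, o in zip(agents, obs)])
```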

Predicting ConceptNet Path Quality Using Crowdsourced Assessments of Naturalness

1 code implementation • 21 Feb 2019 • Yilun Zhou, Steven Schockaert, Julie A. Shah

In this paper we instead propose to learn to predict path quality from crowdsourced human assessments.

Knowledge Graphs
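
A minimal sketch of the general recipe of regressing crowdsourced naturalness ratings onto path features; the hand-crafted features and toy data here are hypothetical, not the paper's model.

```python
# Sketch (hypothetical features, not the paper's model): fit a linear regressor
# from simple ConceptNet path features (length, edge weights) to crowdsourced
# naturalness ratings, then score an unseen path.
import numpy as np

def featurize(path):
    # path: list of (relation, weight) edges between concepts.
    weights = [w for _, w in path]
    return np.array([len(path), np.mean(weights), min(weights), 1.0])  # last entry is a bias term

# Toy training data: paths paired with crowd ratings in [0, 1].
paths = [
    [("RelatedTo", 2.0), ("IsA", 3.5)],
    [("RelatedTo", 0.5), ("RelatedTo", 0.4), ("AtLocation", 0.6)],
    [("UsedFor", 4.0)],
]
ratings = np.array([0.7, 0.2, 0.9])

X = np.stack([featurize(p) for p in paths])
coef, *_ = np.linalg.lstsq(X, ratings, rcond=None)    # least-squares fit

new_path = [("IsA", 1.5), ("HasProperty", 2.0)]
print(float(featurize(new_path) @ coef))              # predicted naturalness score
```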

Bayesian Inference of Temporal Task Specifications from Demonstrations

no code implementations • NeurIPS 2018 • Ankit Shah, Pritish Kamath, Julie A. Shah, Shen Li

When observing task demonstrations, human apprentices are able to identify whether a given task is executed correctly long before they gain expertise in actually performing that task.

Probabilistic Programming
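
The sketch below illustrates the general idea of Bayesian inference over candidate temporal specifications from demonstrations; the specification templates and the noise model are assumptions for illustration, not the paper's probabilistic program.

```python
# Sketch (not the paper's inference model): maintain a posterior over a small
# set of candidate temporal specifications, assuming each demonstration
# satisfies the true specification with high probability.
import numpy as np

# Candidate specifications as predicates over a trace, i.e. a list of sets of
# atomic propositions that hold at each timestep (templates are assumptions).
specs = {
    "eventually(goal)": lambda tr: any("goal" in step for step in tr),
    "always(safe)":     lambda tr: all("safe" in step for step in tr),
    "always(not goal)": lambda tr: not any("goal" in step for step in tr),
}

def posterior(demos, p_satisfy=0.95):
    names = list(specs)
    log_post = np.log(np.full(len(names), 1.0 / len(names)))  # uniform prior
    for demo in demos:
        for k, name in enumerate(names):
            sat = specs[name](demo)
            # Likelihood: a demo satisfies the true spec with probability p_satisfy.
            log_post[k] += np.log(p_satisfy if sat else 1.0 - p_satisfy)
    post = np.exp(log_post - log_post.max())
    return dict(zip(names, post / post.sum()))

# Two demonstrations that stay safe and eventually reach the goal.
demos = [
    [{"safe"}, {"safe"}, {"safe", "goal"}],
    [{"safe"}, {"safe", "goal"}],
]
print(posterior(demos))
```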

Pose consensus based on dual quaternion algebra with application to decentralized formation control of mobile manipulators

1 code implementation • 21 Oct 2018 • Heitor J. Savino, Luciano C. A. Pimenta, Julie A. Shah, Bruno V. Adorno

The dual quaternion algebra is used to model the agents' poses and also in the distributed control laws, making the proposed technique easily applicable to time-varying formation control of general robotic systems.

Robotics • Optimization and Control
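
For readers unfamiliar with the representation, the sketch below shows how unit dual quaternions encode and compose rigid-body poses, the algebraic building block behind such consensus laws; it is illustrative only and not the paper's controller.

```python
# Sketch (illustrative only): unit dual quaternions for representing and
# composing rigid-body poses (rotation + translation).
import numpy as np

def qmul(a, b):
    """Hamilton product of two quaternions stored as [w, x, y, z]."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([
        aw*bw - ax*bx - ay*by - az*bz,
        aw*bx + ax*bw + ay*bz - az*by,
        aw*by - ax*bz + ay*bw + az*bx,
        aw*bz + ax*by - ay*bx + az*bw,
    ])

def pose_to_dq(rotation_q, translation):
    """Build a unit dual quaternion (real, dual) from a rotation quaternion and a translation."""
    t = np.array([0.0, *translation])       # translation as a pure quaternion
    dual = 0.5 * qmul(t, rotation_q)
    return rotation_q, dual

def dq_mul(a, b):
    """Compose poses: (ar + eps*ad)(br + eps*bd) = ar*br + eps*(ar*bd + ad*br)."""
    ar, ad = a
    br, bd = b
    return qmul(ar, br), qmul(ar, bd) + qmul(ad, br)

# Toy usage: a 90-degree yaw followed by a 1 m translation along x.
yaw90 = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
pose_a = pose_to_dq(yaw90, [0.0, 0.0, 0.0])
pose_b = pose_to_dq(np.array([1.0, 0.0, 0.0, 0.0]), [1.0, 0.0, 0.0])
print(dq_mul(pose_a, pose_b))
```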

Fairness in Multi-Agent Sequential Decision-Making

no code implementations • NeurIPS 2014 • Chongjie Zhang, Julie A. Shah

We develop a simple linear programming approach and a more scalable game-theoretic approach for computing an optimal fairness policy.

Decision Making • Fairness +1
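
The sketch below conveys the flavor of the linear programming approach through a max-min fair resource allocation; the utility model and constraints are placeholders, not the paper's sequential decision-making formulation.

```python
# Sketch (not the paper's formulation): max-min fairness via linear programming,
# maximizing the worst-off agent's utility under a shared resource budget.
import numpy as np
from scipy.optimize import linprog

rates = np.array([1.0, 0.5, 2.0])   # utility per unit of resource for each agent (assumed)
budget = 10.0
n = len(rates)

# Decision variables: [x_1, ..., x_n, z]; objective: maximize z  ->  minimize -z.
c = np.zeros(n + 1)
c[-1] = -1.0

# Constraints z - rates[i] * x_i <= 0, i.e. z is a lower bound on every agent's utility.
A_ub = np.zeros((n, n + 1))
for i in range(n):
    A_ub[i, i] = -rates[i]
    A_ub[i, -1] = 1.0
b_ub = np.zeros(n)

# Budget constraint: the allocations must sum to the available resource.
A_eq = np.zeros((1, n + 1))
A_eq[0, :n] = 1.0
b_eq = [budget]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * n + [(None, None)])
print("allocation:", res.x[:n], "worst-case utility:", res.x[-1])
```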
