Search Results for author: Yantian Zha

Found 9 papers, 2 papers with code

NatSGD: A Dataset with Speech, Gestures, and Demonstrations for Robot Learning in Natural Human-Robot Interaction

no code implementations · 4 Mar 2024 · Snehesh Shrestha, Yantian Zha, Saketh Banagiri, Ge Gao, Yiannis Aloimonos, Cornelia Fermüller

NatSGD serves as a foundational resource at the intersection of machine learning and HRI research, and we demonstrate its effectiveness in training robots to understand tasks through multimodal human commands, emphasizing the significance of jointly considering speech and gestures.
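
Since the abstract stresses jointly modeling speech and gestures, a minimal late-fusion sketch may make that concrete. All module names, dimensions, and the fusion scheme below are illustrative assumptions, not the NatSGD baseline:

# Hypothetical late-fusion model for speech + gesture task understanding.
# Dimensions and the fusion scheme are illustrative assumptions,
# not the NatSGD reference implementation.
import torch
import torch.nn as nn

class SpeechGestureFusion(nn.Module):
    def __init__(self, speech_dim=128, gesture_dim=64, hidden=256, n_tasks=10):
        super().__init__()
        self.speech_enc = nn.GRU(speech_dim, hidden, batch_first=True)
        self.gesture_enc = nn.GRU(gesture_dim, hidden, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, n_tasks)
        )

    def forward(self, speech_feats, gesture_feats):
        # Encode each modality, keep the final hidden state, then fuse.
        _, h_s = self.speech_enc(speech_feats)
        _, h_g = self.gesture_enc(gesture_feats)
        fused = torch.cat([h_s[-1], h_g[-1]], dim=-1)
        return self.head(fused)  # logits over task labels

model = SpeechGestureFusion()
speech = torch.randn(4, 50, 128)    # batch of 4, 50 speech frames
gesture = torch.randn(4, 30, 64)    # batch of 4, 30 skeleton frames
print(model(speech, gesture).shape)  # torch.Size([4, 10])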

"Task Success" is not Enough: Investigating the Use of Video-Language Models as Behavior Critics for Catching Undesirable Agent Behaviors

no code implementations · 6 Feb 2024 · Lin Guan, Yifan Zhou, Denis Liu, Yantian Zha, Heni Ben Amor, Subbarao Kambhampati

Large-scale generative models are shown to be useful for sampling meaningful candidate solutions, yet they often overlook task constraints and user preferences.

Tasks: Automated Theorem Proving, Game of Go
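
The sample-then-verify pattern the abstract alludes to can be sketched as a loop in which candidate rollouts are screened by a video-language model acting as a behavior critic, judging more than task success alone. Every name and the verdict format here are hypothetical stand-ins, not an API from the paper:

import random

def sample_policy_rollout(task):
    # Dummy stand-in: a real agent would execute in simulation and
    # return the plan plus rendered video frames.
    plan = [f"{task}-step-{i}" for i in range(3)]
    frames = [f"frame-{i}" for i in range(30)]
    return plan, frames

def query_vlm_critic(frames, instruction):
    # Dummy stand-in for a video-language model judging the rollout.
    # A real critic would look for constraint violations and
    # undesirable side effects, not just task completion.
    return {"acceptable": random.random() > 0.5,
            "critique": "checked against: " + instruction}

def refine_until_acceptable(task, instruction, max_iters=5):
    for _ in range(max_iters):
        plan, frames = sample_policy_rollout(task)
        verdict = query_vlm_critic(frames, instruction)
        if verdict["acceptable"]:
            return plan, verdict
    return None, None  # no acceptable behavior found within budget

print(refine_until_acceptable("pour-water", "do not spill on the counter"))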

Learning from Ambiguous Demonstrations with Self-Explanation Guided Reinforcement Learning

1 code implementation · 11 Oct 2021 · Yantian Zha, Lin Guan, Subbarao Kambhampati

Our main contribution is to propose the Self-Explanation for RL from Demonstrations (SERLfD) framework, which overcomes key limitations of traditional RLfD approaches.

Tasks: Reinforcement Learning (RL)
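
A rough sense of how self-explanations could guide RL from demonstrations: a learned explainer scores how well each transition matches the predicates that distinguish successful demonstrations, and that score shapes the sparse task reward. The sketch below hard-codes a toy explainer; in SERLfD it is learned jointly with the policy:

# Minimal sketch of self-explanation-guided reward shaping in the
# spirit of SERLfD. The explainer is a fixed toy function standing in
# for a learned model of which grounded predicates matter.
def self_explainer(state, action):
    # Toy stand-in: reward progress toward x = 10 (a hypothetical
    # grounded predicate such as "holding the correct object").
    next_x = state + action
    return 1.0 if abs(10 - next_x) < abs(10 - state) else -1.0

def shaped_reward(env_reward, state, action, weight=0.1):
    # Dense shaping term from the self-explanation, added to the
    # (typically sparse) environment reward.
    return env_reward + weight * self_explainer(state, action)

state, total = 0, 0.0
for _ in range(20):
    action = 1  # a fixed toy policy stepping toward the goal
    env_reward = 1.0 if state + action == 10 else 0.0
    total += shaped_reward(env_reward, state, action)
    state += action
print(f"return with shaping: {total:.1f}")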

Symbols as a Lingua Franca for Bridging Human-AI Chasm for Explainable and Advisable AI Systems

no code implementations · 21 Sep 2021 · Subbarao Kambhampati, Sarath Sreedharan, Mudit Verma, Yantian Zha, Lin Guan

The jury is still out on whether AI systems will need to use symbols in their internal reasoning to achieve general intelligence capabilities.

Contrastively Learning Visual Attention as Affordance Cues from Demonstrations for Robotic Grasping

1 code implementation · 2 Apr 2021 · Yantian Zha, Siddhant Bhambri, Lin Guan

In this work, our goal is instead to fill the gap between affordance discovery and affordance-based policy learning by integrating the two objectives in an end-to-end imitation learning framework based on deep neural networks.

Tasks: Contrastive Learning, Imitation Learning, +1
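
One way to read "contrastively learning visual attention as affordance cues" is an InfoNCE-style objective that pulls together attention features from demonstrations of the same grasp and pushes apart others. The sketch below uses toy feature vectors and a standard InfoNCE loss; it is not the paper's architecture:

# Contrastive objective over attention features as affordance cues.
# Toy vectors stand in for encoded attention maps from demonstrations.
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, negatives, temperature=0.1):
    # anchor, positive: (d,); negatives: (n, d)
    a = F.normalize(anchor, dim=0)
    p = F.normalize(positive, dim=0)
    n = F.normalize(negatives, dim=1)
    logits = torch.cat([(a * p).sum().view(1), n @ a]) / temperature
    # The positive pair sits at index 0 of the logits.
    return F.cross_entropy(logits.unsqueeze(0), torch.zeros(1, dtype=torch.long))

d = 32
anchor = torch.randn(d, requires_grad=True)
positive = anchor.detach() + 0.05 * torch.randn(d)   # same affordance
negatives = torch.randn(8, d)                        # other grasps
loss = info_nce(anchor, positive, negatives)
loss.backward()
print(float(loss))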

Plan-Recognition-Driven Attention Modeling for Visual Recognition

no code implementations · 2 Dec 2018 · Yantian Zha, Yikang Li, Tianshu Yu, Subbarao Kambhampati, Baoxin Li

We build an event recognition system, ER-PRN, which takes Pixel Dynamics Network as a subroutine, to recognize events based on observations augmented by plan-recognition-driven attention.
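
A toy sketch of the plan-recognition-driven attention idea: a belief over which plan is underway forms a query that reweights per-frame features before event recognition. The shapes and the softmax form are assumptions, not ER-PRN's exact design:

# Plan-recognition-driven attention, toy version: the plan belief
# builds a query vector that scores and reweights frame features.
import numpy as np

rng = np.random.default_rng(0)
T, d, n_plans = 8, 16, 3

frames = rng.normal(size=(T, d))                 # per-frame visual features
plan_prior = np.array([0.7, 0.2, 0.1])           # plan recognizer's belief
plan_templates = rng.normal(size=(n_plans, d))   # what each plan "looks for"

# Attention: score each frame against the belief-weighted plan template.
query = plan_prior @ plan_templates              # (d,)
scores = frames @ query                          # (T,)
attn = np.exp(scores) / np.exp(scores).sum()     # softmax over frames
augmented = attn[:, None] * frames               # attention-weighted frames
print(attn.round(3), augmented.shape)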

Discovering Underlying Plans Based on Shallow Models

no code implementations · 4 Mar 2018 · Hankz Hankui Zhuo, Yantian Zha, Subbarao Kambhampati

Specifically, we propose two approaches, DUP and RNNPlanner, to discover target plans based on vector representations of actions.
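
The vector-representation idea can be illustrated with a toy gap-filling step in the spirit of DUP: a missing action in an observed plan is recovered as the vocabulary action closest to the embedding of its context. Embeddings here are random stand-ins; the paper learns them from plan corpora (word2vec-style, or with an RNN in RNNPlanner):

# Toy plan completion via action embeddings: fill an unobserved step
# with the action most similar to the average of its neighbors.
import numpy as np

rng = np.random.default_rng(1)
vocab = ["pick", "move", "place", "open", "close"]
emb = {a: rng.normal(size=8) for a in vocab}   # random stand-in vectors

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def fill_gap(prev_action, next_action):
    # Context vector: average of the neighbors around the gap.
    context = (emb[prev_action] + emb[next_action]) / 2
    return max(vocab, key=lambda a: cosine(emb[a], context))

observed = ["pick", None, "place"]   # a plan with one unobserved step
print("recovered action:", fill_gap(observed[0], observed[2]))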

Recognizing Plans by Learning Embeddings from Observed Action Distributions

no code implementations · 5 Dec 2017 · Yantian Zha, Yikang Li, Sriram Gopalakrishnan, Baoxin Li, Subbarao Kambhampati

The first involves resampling the distribution sequences to single action sequences, from which we could learn an action affinity model based on learned action (word) embeddings for plan recognition.

Tasks: Activity Recognition, Word Embeddings
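
The resampling step described above can be sketched directly: each timestep's action distribution is sampled into concrete single-action sequences, which a standard word-embedding learner could then consume. The action vocabulary and distributions are illustrative:

# Resampling distribution sequences (e.g., from a noisy activity
# recognizer) into single-action sequences for embedding training.
import numpy as np

rng = np.random.default_rng(2)
actions = ["chop", "stir", "pour", "serve"]
# One distribution over actions per timestep (rows sum to 1).
dist_seq = np.array([
    [0.7, 0.1, 0.1, 0.1],
    [0.2, 0.6, 0.1, 0.1],
    [0.1, 0.1, 0.7, 0.1],
])

def resample(dist_seq, n_samples=5):
    # Each sample is one plausible single-action sequence.
    return [[actions[rng.choice(len(actions), p=row)] for row in dist_seq]
            for _ in range(n_samples)]

for seq in resample(dist_seq):
    print(seq)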

Explicability as Minimizing Distance from Expected Behavior

no code implementations · 16 Nov 2016 · Anagha Kulkarni, Yantian Zha, Tathagata Chakraborti, Satya Gautam Vadlamudi, Yu Zhang, Subbarao Kambhampati

In order to have effective human-AI collaboration, it is necessary to address how the AI agent's behavior is being perceived by the humans-in-the-loop.
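
A toy rendering of explicability as distance minimization: among candidate robot plans, pick the one minimizing plan cost plus a weighted distance from the plan the human expects. The distance measure and the weights are illustrative, not the paper's formulation:

# Choosing an explicable plan: trade plan cost against a simple
# per-step mismatch distance from the human-expected plan.
def mismatch_distance(plan, expected):
    pad = max(len(plan), len(expected))
    p = plan + [None] * (pad - len(plan))
    e = expected + [None] * (pad - len(expected))
    return sum(a != b for a, b in zip(p, e))

def pick_explicable_plan(candidates, expected, alpha=1.0):
    # Score = cost + alpha * distance-from-expected-behavior.
    return min(candidates,
               key=lambda c: c["cost"] + alpha * mismatch_distance(c["plan"], expected))

expected = ["goto-shelf", "pick-cup", "goto-table", "place-cup"]
candidates = [
    {"plan": ["goto-shelf", "pick-cup", "goto-table", "place-cup"], "cost": 5},
    {"plan": ["push-cup-off-shelf", "goto-table"], "cost": 3},  # cheap but opaque
]
print(pick_explicable_plan(candidates, expected)["plan"])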
