Search Results for author: David Hsu

Found 35 papers, 13 papers with code

End-to-End Partially Observable Visual Navigation in a Diverse Environment

no code implementations16 Sep 2021 Bo Ai, Wei Gao, Vinay, David Hsu

We propose a novel neural network (NN) architecture to represent a local controller and leverage the flexibility of the end-to-end approach to learn a powerful policy.

Visual Navigation
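
A minimal sketch of what an end-to-end local controller of this kind might look like, assuming a PyTorch implementation; the layer sizes, the GRU memory, and the two-channel velocity output are illustrative choices, not the architecture from the paper.

```python
# Hypothetical end-to-end local controller: camera image + relative goal -> velocity command.
# Layer sizes and the recurrent memory are illustrative, not the paper's architecture.
import torch
import torch.nn as nn

class LocalControllerNet(nn.Module):
    def __init__(self, hidden_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(              # encode the camera image
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.rnn = nn.GRUCell(32 + 2, hidden_dim)  # fuse image features with a 2-D goal vector
        self.head = nn.Linear(hidden_dim, 2)       # output (linear, angular) velocity

    def forward(self, image, goal, h):
        x = torch.cat([self.encoder(image), goal], dim=-1)
        h = self.rnn(x, h)
        return torch.tanh(self.head(h)), h         # bounded velocity command, new memory

net = LocalControllerNet()
h = torch.zeros(1, 128)
cmd, h = net(torch.rand(1, 3, 64, 64), torch.tensor([[1.0, 0.2]]), h)
print(cmd)  # two control values in [-1, 1]
```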

Ab Initio Particle-based Object Manipulation

no code implementations19 Jul 2021 Siwei Chen, Xiao Ma, Yunfan Lu, David Hsu

Like the model-based analytic approaches to manipulation, the particle representation enables the robot to reason about the object's geometry and dynamics in order to choose suitable manipulation actions.
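
A toy illustration of the general idea of reasoning over a particle set, in NumPy: candidate pushes are scored by where they move the particle centroid. The kinematic push model and scoring rule are made up for illustration and are not the paper's method.

```python
# Toy illustration of reasoning over a particle representation of an object:
# score candidate pushes by how far they move the particle centroid toward a goal.
import numpy as np

rng = np.random.default_rng(0)
particles = rng.uniform(-0.05, 0.05, size=(500, 2))   # object sampled as 2-D particles (metres)
goal = np.array([0.30, 0.10])                          # desired object position

def apply_push(points, direction, distance=0.05):
    """Crude rigid-translation model of a push along a unit direction."""
    return points + distance * direction

candidates = [np.array([np.cos(a), np.sin(a)]) for a in np.linspace(0, 2 * np.pi, 16, endpoint=False)]
scores = []
for d in candidates:
    moved = apply_push(particles, d)
    scores.append(-np.linalg.norm(moved.mean(axis=0) - goal))  # closer centroid = better

best = candidates[int(np.argmax(scores))]
print("chosen push direction:", best)
```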

Differentiable SLAM-net: Learning Particle SLAM for Visual Navigation

no code implementations CVPR 2021 Peter Karkus, Shaojun Cai, David Hsu

We introduce the Differentiable SLAM Network (SLAM-net) along with a navigation architecture to enable planar robot navigation in previously unseen indoor environments.

Robot Navigation Simultaneous Localization and Mapping +1

Learning Latent Graph Dynamics for Deformable Object Manipulation

no code implementations25 Apr 2021 Xiao Ma, David Hsu, Wee Sun Lee

To tackle the challenge of many DoFs and complex dynamics, G-DOOM approximates a deformable object as a sparse set of interacting keypoints and learns a graph neural network that abstractly captures the geometry and interaction dynamics of the keypoints.

Contrastive Learning Deformable Object Manipulation +1
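
One round of message passing over a keypoint graph can be sketched as below in PyTorch; the dimensions, edge/node MLPs, and random graph are illustrative, not the G-DOOM architecture.

```python
# One round of message passing over a keypoint graph (illustrative dimensions):
# edges carry messages, nodes update their state from aggregated messages.
import torch
import torch.nn as nn

K, D = 8, 16                                  # number of keypoints, feature size
nodes = torch.randn(K, D)                     # keypoint features (e.g. positions + appearance)
adj = (torch.rand(K, K) < 0.3).float()        # sparse interaction graph
adj.fill_diagonal_(0)

edge_mlp = nn.Sequential(nn.Linear(2 * D, D), nn.ReLU(), nn.Linear(D, D))
node_mlp = nn.Sequential(nn.Linear(2 * D, D), nn.ReLU(), nn.Linear(D, D))

# message from node j to node i is a function of both endpoint features
pairs = torch.cat([nodes.unsqueeze(1).expand(K, K, D),
                   nodes.unsqueeze(0).expand(K, K, D)], dim=-1)
messages = edge_mlp(pairs) * adj.unsqueeze(-1)        # zero out non-edges
aggregated = messages.sum(dim=1)                      # sum incoming messages per node
nodes_next = node_mlp(torch.cat([nodes, aggregated], dim=-1))
print(nodes_next.shape)  # torch.Size([8, 16]) -- predicted next keypoint state
```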

Closing the Planning-Learning Loop with Application to Autonomous Driving in a Crowd

no code implementations11 Jan 2021 Panpan Cai, David Hsu

To achieve real-time performance for large-scale planning, this work introduces Learning from Tree Search for Driving (LeTS-Drive), which integrates planning and learning in a closed loop.

Autonomous Driving Robotics
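
The closed-loop structure can be sketched roughly as follows: a learned prior guides a shallow lookahead search, and the planner's choices feed back as supervision for the learner. Everything here, from the toy dynamics to the one-step "planner", is a stand-in for illustration, not the LeTS-Drive algorithm.

```python
# Structural sketch of a planning-learning loop: a learned prior guides a shallow
# lookahead search, and the planner's choices are used to retrain the learner.
import random

ACTIONS = [-1, 0, +1]                               # e.g. steer left / keep lane / steer right
policy = {a: 1.0 / len(ACTIONS) for a in ACTIONS}   # learned prior, here a simple table

def simulate(state, action):
    """Toy dynamics + reward: stay near lane centre 0."""
    nxt = state + action + random.choice([-1, 0, 1])
    return nxt, -abs(nxt)

def plan(state, prior, rollouts=20):
    """Shallow search: evaluate each action by sampled rollouts, biased by the prior."""
    scores = {}
    for a in ACTIONS:
        returns = [simulate(state, a)[1] for _ in range(rollouts)]
        scores[a] = sum(returns) / rollouts + 0.1 * prior[a]
    return max(scores, key=scores.get)

# Closed loop: act with the planner, then move the learned prior toward its choices.
counts = {a: 1 for a in ACTIONS}
state = 3
for _ in range(200):
    a = plan(state, policy)
    counts[a] += 1
    total = sum(counts.values())
    policy = {b: counts[b] / total for b in ACTIONS}   # "learning" = imitating the planner
    state, _ = simulate(state, a)

print(policy)   # the prior has shifted toward actions the planner prefers
```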

MAGIC: Learning Macro-Actions for Online POMDP Planning

1 code implementation7 Nov 2020 Yiyuan Lee, Panpan Cai, David Hsu

The partially observable Markov decision process (POMDP) is a principled general framework for robot decision making under uncertainty, but POMDP planning suffers from high computational complexity when long-term planning is required.

Decision Making Decision Making Under Uncertainty
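
The reason macro-actions help can be seen in a few lines: the planner branches once per macro instead of once per primitive step, so the search tree for a fixed horizon shrinks drastically. The macro set and toy environment below are illustrative only, not the learned macro-actions from MAGIC.

```python
# Why macro-actions shrink the search: branching happens once per macro instead of
# once per primitive step.  The macro set and toy environment here are illustrative.
PRIMITIVES = ["forward", "left", "right", "stop"]
MACROS = {                       # a macro-action is a short open-loop primitive sequence
    "go_straight": ["forward"] * 5,
    "turn_left":   ["left", "forward", "forward"],
    "turn_right":  ["right", "forward", "forward"],
}

def execute_macro(env_step, state, macro):
    """Run a whole macro before the planner branches again."""
    total_reward = 0.0
    for primitive in MACROS[macro]:
        state, reward = env_step(state, primitive)
        total_reward += reward
    return state, total_reward

def toy_step(state, primitive):
    return state + 1, -0.1                        # dummy transition and step cost

print(execute_macro(toy_step, 0, "turn_left"))    # (3, about -0.3)

# Rough tree-size comparison for a 10-step horizon:
primitive_nodes = len(PRIMITIVES) ** 10           # branch every primitive step
macro_nodes = len(MACROS) ** 2                    # two "go_straight" macros span the horizon
print(primitive_nodes, "vs", macro_nodes)         # 1048576 vs 9
```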

Contrastive Variational Reinforcement Learning for Complex Observations

1 code implementation6 Aug 2020 Xiao Ma, Siwei Chen, David Hsu, Wee Sun Lee

This paper presents Contrastive Variational Reinforcement Learning (CVRL), a model-based method that tackles complex visual observations in DRL.

Atari Games Continuous Control +2
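
The contrastive part can be illustrated with a standard InfoNCE objective between predicted latent states and observation embeddings; this is a generic stand-in for the contrastive term, not the full CVRL model.

```python
# Generic InfoNCE-style contrastive objective between predicted latent states and
# encoded observations; a stand-in for the contrastive term, not the full CVRL model.
import torch
import torch.nn.functional as F

B, D = 32, 64
z_pred = torch.randn(B, D)            # latent states predicted by the dynamics model
obs_emb = torch.randn(B, D)           # embeddings of the corresponding observations
temperature = 0.1

logits = z_pred @ obs_emb.t() / temperature        # similarity of every (latent, observation) pair
labels = torch.arange(B)                           # the matching observation is the positive
loss = F.cross_entropy(logits, labels)             # push matched pairs together, others apart
print(loss.item())
```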

DinerDash Gym: A Benchmark for Policy Learning in High-Dimensional Action Space

1 code implementation13 Jul 2020 Siwei Chen, Xiao Ma, David Hsu

It has been arduous to assess the progress of a policy learning algorithm in the domain of hierarchical tasks with high-dimensional action spaces due to the lack of a commonly accepted benchmark.

Atari Games

Discriminative Particle Filter Reinforcement Learning for Complex Partial Observations

1 code implementation ICLR 2020 Xiao Ma, Peter Karkus, David Hsu, Wee Sun Lee, Nan Ye

The particle filter maintains a belief using learned discriminative update, which is trained end-to-end for decision making.

Atari Games Decision Making +1

SUMMIT: A Simulator for Urban Driving in Massive Mixed Traffic

2 code implementations11 Nov 2019 Panpan Cai, Yiyuan Lee, Yuanfu Luo, David Hsu

Autonomous driving in an unregulated urban crowd is an outstanding challenge, especially in the presence of many aggressive, high-speed traffic participants.

Robotics Multiagent Systems

Robot Capability and Intention in Trust-based Decisions across Tasks

no code implementations3 Sep 2019 Yaqi Xie, Indu P Bodala, Desmond C. Ong, David Hsu, Harold Soh

In this paper, we present results from a human-subject study designed to explore two facets of human mental models of robots, inferred capability and intention, and their relationship to overall trust and eventual decisions.

Hindsight Trust Region Policy Optimization

1 code implementation29 Jul 2019 Hanbo Zhang, Site Bai, Xuguang Lan, David Hsu, Nanning Zheng

We propose Hindsight Trust Region Policy Optimization (HTRPO), a new RL algorithm that extends the highly successful TRPO algorithm with hindsight to tackle the challenge of sparse rewards.

Atari Games Policy Gradient Methods
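
The hindsight idea can be sketched on its own, independently of the trust-region update: a failed goal-conditioned trajectory is relabelled with a goal that was actually reached, so it yields useful reward signal. This is only the relabelling step; HTRPO combines it with TRPO-style policy updates, which are omitted here.

```python
# The hindsight idea in isolation: relabel a failed goal-conditioned trajectory with a
# goal that was actually reached, so it yields non-zero sparse reward.
def relabel(trajectory):
    """trajectory: list of (state, action, achieved_goal) tuples for one episode."""
    new_goal = trajectory[-1][2]                  # pretend the final achieved goal was intended
    relabelled = []
    for state, action, achieved in trajectory:
        reward = 1.0 if achieved == new_goal else 0.0   # sparse reward w.r.t. the new goal
        relabelled.append((state, action, new_goal, reward))
    return relabelled

episode = [((0, 0), "right", (1, 0)), ((1, 0), "up", (1, 1)), ((1, 1), "up", (1, 2))]
for transition in relabel(episode):
    print(transition)      # the last transition now carries reward 1.0
```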

GAMMA: A General Agent Motion Model for Autonomous Driving

1 code implementation4 Jun 2019 Yuanfu Luo, Panpan Cai, Yiyuan Lee, David Hsu

Further, the computational efficiency and the flexibility of GAMMA enable (i) simulation of mixed urban traffic at many locations worldwide and (ii) planning for autonomous driving in dense traffic with uncertain driver behaviors, both in real time.

Autonomous Driving motion prediction

Particle Filter Recurrent Neural Networks

1 code implementation30 May 2019 Xiao Ma, Peter Karkus, David Hsu, Wee Sun Lee

Recurrent neural networks (RNNs) have been extraordinarily successful for prediction with sequential data.

General Classification Stock Price Prediction +1

LeTS-Drive: Driving in a Crowd by Learning from Tree Search

no code implementations29 May 2019 Panpan Cai, Yuanfu Luo, Aseem Saxena, David Hsu, Wee Sun Lee

LeTS-Drive leverages the robustness of planning and the runtime efficiency of learning to enhance the performance of both.

Autonomous Driving Imitation Learning

Differentiable Algorithm Networks for Composable Robot Learning

no code implementations28 May 2019 Peter Karkus, Xiao Ma, David Hsu, Leslie Pack Kaelbling, Wee Sun Lee, Tomas Lozano-Perez

This paper introduces the Differentiable Algorithm Network (DAN), a composable architecture for robot learning systems.

Factored Contextual Policy Search with Bayesian Optimization

no code implementations26 Apr 2019 Robert Pinsler, Peter Karkus, Andras Kupcsik, David Hsu, Wee Sun Lee

Our key observation is that experience can be directly generalized over target contexts.

Active Learning

Integrating Algorithmic Planning and Deep Learning for Partially Observable Navigation

no code implementations17 Jul 2018 Peter Karkus, David Hsu, Wee Sun Lee

We propose to take a novel approach to robot system design where each building block of a larger system is represented as a differentiable program, i.e., a deep neural network.

Robot Navigation

Push-Net: Deep Planar Pushing for Objects with Unknown Physical Properties

1 code implementation Robotics: Science and Systems 2018 Jue Kun Li, David Hsu, Wee Sun Lee

This paper introduces Push-Net, a deep recurrent neural network model, which enables a robot to push objects of unknown physical properties for re-positioning and re-orientation, using only visual camera images as input.

Interactive Visual Grounding of Referring Expressions for Human-Robot Interaction

no code implementations11 Jun 2018 Mohit Shridhar, David Hsu

The first stage uses a neural network to generate visual descriptions of objects, compares them with the input language expression, and identifies a set of candidate objects.

Human robot interaction Question Generation +1
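
A crude stand-in for the comparison stage is shown below: generated object descriptions are scored against the user's expression by word overlap and the best candidates are kept. The real system performs this comparison with a neural model; this sketch only illustrates the candidate-selection logic.

```python
# Crude stand-in for the comparison stage: score generated object descriptions against
# the user's expression by word overlap and keep the best candidates.
def score(description, expression):
    d, e = set(description.lower().split()), set(expression.lower().split())
    return len(d & e) / max(len(e), 1)

descriptions = {
    "obj_1": "red cup on the table",
    "obj_2": "blue cup near the sink",
    "obj_3": "red book on the shelf",
}
expression = "the red cup"

candidates = sorted(descriptions, key=lambda k: score(descriptions[k], expression), reverse=True)
print(candidates[:2])     # top candidates; ambiguity would trigger a clarifying question
```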

Solving the Perspective-2-Point Problem for Flying-Camera Photo Composition

no code implementations CVPR 2018 Ziquan Lan, David Hsu, Gim Hee Lee

The user, instead of holding a camera in hand and manually searching for a viewpoint, will interact directly with image contents in the viewfinder through simple gestures, and the flying camera will achieve the desired viewpoint through the autonomous flying capability of the drone.

PORCA: Modeling and Planning for Autonomous Driving among Many Pedestrians

no code implementations30 May 2018 Yuanfu Luo, Panpan Cai, Aniket Bera, David Hsu, Wee Sun Lee, Dinesh Manocha

Our planning system combines a POMDP algorithm with the pedestrian motion model and runs in near real time.

Robotics

Particle Filter Networks with Application to Visual Localization

2 code implementations23 May 2018 Peter Karkus, David Hsu, Wee Sun Lee

Particle filtering is a powerful approach to sequential state estimation and finds application in many domains, including robot localization and object tracking.

Object Tracking Visual Localization
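
The classic bootstrap particle filter that PF-net builds on can be written in a few lines of NumPy; PF-net makes the observation model and resampling differentiable and learned, while this sketch shows only the underlying algorithm on a 1-D localization toy problem.

```python
# Classic bootstrap particle filter for 1-D localization (NumPy):
# predict with the motion model, weight by the observation, resample.
import numpy as np

rng = np.random.default_rng(1)
N = 1000
particles = rng.uniform(0, 10, N)            # belief over the robot's 1-D position
true_pos = 2.0

for step in range(20):
    # motion update: robot moves +0.3 with noise
    true_pos += 0.3
    particles += 0.3 + rng.normal(0, 0.05, N)

    # observation update: a noisy position measurement weights the particles
    z = true_pos + rng.normal(0, 0.2)
    weights = np.exp(-0.5 * ((particles - z) / 0.2) ** 2)
    weights /= weights.sum()

    # resample particles in proportion to their weights
    particles = rng.choice(particles, size=N, p=weights)

print("estimate:", particles.mean(), "truth:", true_pos)
```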

HyP-DESPOT: A Hybrid Parallel Algorithm for Online Planning under Uncertainty

1 code implementation17 Feb 2018 Panpan Cai, Yuanfu Luo, David Hsu, Wee Sun Lee

Planning under uncertainty is critical for robust robot performance in uncertain, dynamic environments, but it incurs high computational cost.

Trust-Aware Decision Making for Human-Robot Collaboration: Model Learning and Planning

no code implementations12 Jan 2018 Min Chen, Stefanos Nikolaidis, Harold Soh, David Hsu, Siddhartha Srinivasa

The trust-POMDP model provides a principled approach for the robot to (i) infer the trust of a human teammate through interaction, (ii) reason about the effect of its own actions on human trust, and (iii) choose actions that maximize team performance over the long term.

Decision Making

Intention-Net: Integrating Planning and Deep Learning for Goal-Directed Autonomous Navigation

2 code implementations16 Oct 2017 Wei Gao, David Hsu, Wee Sun Lee, ShengMei Shen, Karthikk Subramanian

How can a delivery robot navigate reliably to a destination in a new office building, with minimal prior information?

Autonomous Navigation

Grounding Spatio-Semantic Referring Expressions for Human-Robot Interaction

no code implementations18 Jul 2017 Mohit Shridhar, David Hsu

A core issue for the system is semantic and spatial grounding, which is to infer objects and their spatial relationships from images and natural language expressions.

Human robot interaction

QMDP-Net: Deep Learning for Planning under Partial Observability

2 code implementations NeurIPS 2017 Peter Karkus, David Hsu, Wee Sun Lee

It is a recurrent policy network, but it represents a policy for a parameterized set of tasks by connecting a model with a planning algorithm that solves the model, thus embedding the solution structure of planning in a network learning architecture.
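
The plain QMDP approximation that the network's planning module is modelled on fits in a few lines: solve the fully observable MDP for Q(s, a), then score each action under the current belief b by the belief-weighted Q-values. The tabular toy problem below illustrates that classic approximation, not the learned QMDP-net itself.

```python
# Plain QMDP: value-iterate the underlying MDP, then pick argmax_a sum_s b(s) Q(s, a).
import numpy as np

S, A, gamma = 3, 2, 0.95
T = np.array([[[0.9, 0.1, 0.0], [0.1, 0.9, 0.0]],     # T[s, a, s']
              [[0.0, 0.9, 0.1], [0.0, 0.1, 0.9]],
              [[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]]])
R = np.array([[0.0, 0.0], [0.0, 0.0], [1.0, 1.0]])    # reward R[s, a]: reach state 2

Q = np.zeros((S, A))
for _ in range(200):                                  # value iteration on the MDP
    V = Q.max(axis=1)
    Q = R + gamma * np.einsum("sap,p->sa", T, V)

belief = np.array([0.6, 0.4, 0.0])                    # uncertain between states 0 and 1
action_values = belief @ Q                            # QMDP action scores
print(action_values, "-> choose action", int(action_values.argmax()))
```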

Factored Contextual Policy Search with Bayesian Optimization

no code implementations6 Dec 2016 Peter Karkus, Andras Kupcsik, David Hsu, Wee Sun Lee

Scarce data is a major challenge to scaling robot learning to truly complex tasks, as we need to generalize locally learned policies over different "contexts".

Active Learning

DESPOT: Online POMDP Planning with Regularization

no code implementations NeurIPS 2013 Nan Ye, Adhiraj Somani, David Hsu, Wee Sun Lee

We show that the best policy obtained from a DESPOT is near-optimal, with a regret bound that depends on the representation size of the optimal policy.

Autonomous Driving

POMDP-lite for Robust Robot Planning under Uncertainty

no code implementations16 Feb 2016 Min Chen, Emilio Frazzoli, David Hsu, Wee Sun Lee

We show that a POMDP-lite is equivalent to a set of fully observable Markov decision processes indexed by a hidden parameter and is useful for modeling a variety of interesting robotic tasks.

Exploration in Interactive Personalized Music Recommendation: A Reinforcement Learning Approach

no code implementations6 Nov 2013 Xinxi Wang, Yi Wang, David Hsu, Ye Wang

Current music recommender systems typically act in a greedy fashion by recommending songs with the highest user ratings.

Bayesian Inference Recommendation Systems +1
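
The exploration-exploitation trade-off behind this work maps naturally onto a bandit view: instead of always recommending the currently highest-rated song, sample from a posterior over how much the user likes each one. The Thompson-sampling sketch below is a simple Bayesian-exploration analogue, not the paper's full reinforcement-learning model.

```python
# Bandit-style analogue of the exploration problem: Thompson sampling over songs with
# Beta posteriors on the probability the user likes each one.
import random

songs = ["song_a", "song_b", "song_c"]
true_like_prob = {"song_a": 0.2, "song_b": 0.7, "song_c": 0.5}   # unknown to the recommender
posterior = {s: [1, 1] for s in songs}                           # Beta(alpha, beta) per song

for _ in range(500):
    # sample a plausible like-probability for every song, recommend the best sample
    samples = {s: random.betavariate(*posterior[s]) for s in songs}
    choice = max(samples, key=samples.get)

    liked = random.random() < true_like_prob[choice]             # simulated user feedback
    posterior[choice][0 if liked else 1] += 1                    # Bayesian posterior update

print({s: round(posterior[s][0] / sum(posterior[s]), 2) for s in songs})
```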

Monte Carlo Bayesian Reinforcement Learning

no code implementations27 Jun 2012 Yi Wang, Kok Sung Won, David Hsu, Wee Sun Lee

Bayesian reinforcement learning (BRL) encodes prior knowledge of the world in a model and represents uncertainty in model parameters by maintaining a probability distribution over them.

Monte Carlo Value Iteration with Macro-Actions

no code implementations NeurIPS 2011 Zhan Lim, Lee Sun, David Hsu

The recently introduced Monte Carlo Value Iteration (MCVI) can tackle POMDPs with very large discrete state spaces or continuous state spaces, but its performance degrades when faced with long planning horizons.
