Search Results for author: Hao-Tien Lewis Chiang

Found 9 papers, 1 paper with code

Towards Inferring Users' Impressions of Robot Performance in Navigation Scenarios

no code implementations • 17 Oct 2023 • Qiping Zhang, Nathan Tsoi, Booyeon Choi, Jie Tan, Hao-Tien Lewis Chiang, Marynel Vázquez

As a more scalable and cost-effective alternative, we study the possibility of predicting people's impressions of robot behavior using non-verbal behavioral cues and machine learning techniques.

Binary Classification
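The abstract frames impression prediction as supervised learning over non-verbal behavioral cues. Below is a minimal sketch of that setup using scikit-learn; the feature names (gaze ratio, interpersonal distance, speech rate, head nods) and the synthetic data are assumptions for illustration, not the paper's actual features or dataset.

```python
# Hypothetical sketch: binary classification of "positive vs. negative impression"
# from non-verbal behavioral cues. Features and labels are invented stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Each row: [gaze-toward-robot ratio, person-robot distance (m),
#            speech rate (words/s), head-nod count] -- assumed cues.
X = rng.normal(size=(200, 4))
# Synthetic labels: 1 = "positive impression of the robot", 0 = "negative".
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```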

Language to Rewards for Robotic Skill Synthesis

no code implementations • 14 Jun 2023 • Wenhao Yu, Nimrod Gileadi, Chuyuan Fu, Sean Kirmani, Kuang-Huei Lee, Montse Gonzalez Arenas, Hao-Tien Lewis Chiang, Tom Erez, Leonard Hasenclever, Jan Humplik, Brian Ichter, Ted Xiao, Peng Xu, Andy Zeng, Tingnan Zhang, Nicolas Heess, Dorsa Sadigh, Jie Tan, Yuval Tassa, Fei Xia

However, since low-level robot actions are hardware-dependent and underrepresented in LLM training corpora, existing efforts in applying LLMs to robotics have largely treated LLMs as semantic planners or relied on human-engineered control primitives to interface with the robot.

In-Context Learning · Logical Reasoning
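Per the title, the interface the paper proposes is language-to-reward: an LLM turns an instruction into a reward function, and a low-level optimizer maximizes that reward rather than the LLM emitting robot actions directly. The toy sketch below illustrates only that interface; the hand-written reward stands in for LLM-generated code and a random-search loop stands in for the real controller, so every name and number here is an assumption.

```python
# Illustrative sketch only: the reward mimics what an LLM might emit for
# "raise the end-effector to 0.5 m"; the real system's optimizer is not shown.
import numpy as np

def llm_generated_reward(state: np.ndarray) -> float:
    """Stand-in for LLM-emitted reward code: state[2] is end-effector height."""
    target_height = 0.5
    return -abs(state[2] - target_height)

def optimize(reward_fn, dim=3, iters=500, seed=0):
    """Toy random-search optimizer standing in for the low-level controller."""
    rng = np.random.default_rng(seed)
    best_state, best_r = None, -np.inf
    for _ in range(iters):
        state = rng.uniform(-1.0, 1.0, size=dim)
        r = reward_fn(state)
        if r > best_r:
            best_state, best_r = state, r
    return best_state, best_r

state, r = optimize(llm_generated_reward)
print("best state:", state, "reward:", r)
```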

RL-RRT: Kinodynamic Motion Planning via Learning Reachability Estimators from RL Policies

no code implementations • 10 Jul 2019 • Hao-Tien Lewis Chiang, Jasmine Hsu, Marek Fiser, Lydia Tapia, Aleksandra Faust

By combining sampling-based planning, using a Rapidly-exploring Random Tree (RRT), with an efficient learned kinodynamic motion planner, we propose an efficient solution to long-range kinodynamic motion planning.

Motion Planning
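A hedged sketch of the combination described above: a standard RRT loop in which the usual straight-line steering step is replaced by a learned local policy, and a reachability estimate filters which tree nodes may attempt an extension. The 2-D point environment, the hand-coded "policy", and the distance-threshold "reachability estimator" are placeholders, not the paper's learned components.

```python
# Minimal RRT-style loop with stand-ins for the RL local planner (steer step)
# and the learned reachability estimator (parent selection filter).
import numpy as np

rng = np.random.default_rng(0)
START, GOAL, STEP = np.array([0.0, 0.0]), np.array([9.0, 9.0]), 1.0

def policy_steer(q_from, q_to):
    """Stand-in for the RL local planner: move STEP toward q_to."""
    d = q_to - q_from
    n = np.linalg.norm(d)
    return q_to if n < STEP else q_from + STEP * d / n

def reachable(q_from, q_to, horizon=3.0):
    """Stand-in for the learned reachability estimator used to pick parents."""
    return np.linalg.norm(q_to - q_from) <= horizon

tree = [START]
parents = {0: None}
for _ in range(2000):
    q_rand = GOAL if rng.random() < 0.1 else rng.uniform(0, 10, size=2)
    # Only consider parents the estimator predicts can reach the sample.
    candidates = [i for i, q in enumerate(tree) if reachable(q, q_rand)]
    if not candidates:
        continue
    i_near = min(candidates, key=lambda i: np.linalg.norm(tree[i] - q_rand))
    q_new = policy_steer(tree[i_near], q_rand)
    parents[len(tree)] = i_near
    tree.append(q_new)
    if np.linalg.norm(q_new - GOAL) < 0.5:
        print("goal reached after", len(tree), "nodes")
        break
```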

Long-Range Indoor Navigation with PRM-RL

no code implementations • 25 Feb 2019 • Anthony Francis, Aleksandra Faust, Hao-Tien Lewis Chiang, Jasmine Hsu, J. Chase Kew, Marek Fiser, Tsang-Wei Edward Lee

Long-range indoor navigation requires guiding robots with noisy sensors and controls through cluttered environments along paths that span a variety of buildings.

Navigate · reinforcement-learning +2
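PRM-RL, per the title, builds a probabilistic roadmap whose edges are kept only if an RL local planner can actually navigate between the two sampled waypoints. The toy sketch below shows only that roadmap-construction idea; the distance-plus-noise check stands in for rolling out the learned policy, and the environment and thresholds are assumptions.

```python
# Toy PRM-style roadmap where edge validity is decided by a stand-in for
# "can the RL local planner drive between these two waypoints?". The real
# system would answer that with policy rollouts in simulation.
import numpy as np

rng = np.random.default_rng(1)
samples = rng.uniform(0, 50, size=(100, 2))   # candidate waypoints (meters)

def rl_local_planner_succeeds(a, b):
    """Placeholder for Monte Carlo rollouts of the navigation policy."""
    return np.linalg.norm(a - b) < 8.0 and rng.random() > 0.2

edges = []
for i in range(len(samples)):
    for j in range(i + 1, len(samples)):
        if rl_local_planner_succeeds(samples[i], samples[j]):
            edges.append((i, j))

print(f"roadmap: {len(samples)} nodes, {len(edges)} policy-verified edges")
```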

Learning Navigation Behaviors End-to-End with AutoRL

no code implementations • 26 Sep 2018 • Hao-Tien Lewis Chiang, Aleksandra Faust, Marek Fiser, Anthony Francis

The policies are trained in small, static environments with AutoRL, an evolutionary automation layer around Reinforcement Learning (RL) that searches for a deep RL reward and neural network architecture with large-scale hyper-parameter optimization.

Motion Planning · reinforcement-learning +1
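As described above, AutoRL wraps RL training in an evolutionary outer loop that searches over reward weights and network hyper-parameters. The compressed sketch below shows only that outer loop; `evaluate()` is a synthetic stand-in for "train an RL policy with these hyper-parameters and report its success rate", and the parameter names and ranges are invented.

```python
# Sketch of an evolutionary search over reward weights and network size,
# in the spirit of AutoRL; no actual RL training happens here.
import random

random.seed(0)
SEARCH_SPACE = {
    "w_goal":      (0.0, 2.0),    # reward weight: progress to goal (assumed)
    "w_collision": (-2.0, 0.0),   # reward weight: collision penalty (assumed)
    "hidden":      (32, 512),     # policy network width (assumed)
}

def sample():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in SEARCH_SPACE.items()}

def mutate(p, scale=0.2):
    return {k: min(max(p[k] + random.gauss(0, scale * (hi - lo)), lo), hi)
            for k, (lo, hi) in SEARCH_SPACE.items()}

def evaluate(p):
    """Placeholder objective; the real system trains and evaluates a policy."""
    return p["w_goal"] + 0.5 * p["w_collision"] - abs(p["hidden"] - 256) / 512

population = [sample() for _ in range(8)]
for gen in range(10):
    scored = sorted(population, key=evaluate, reverse=True)
    parents = scored[:4]                      # keep the best half
    population = parents + [mutate(random.choice(parents)) for _ in range(4)]
print("best hyper-parameters:", max(population, key=evaluate))
```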

Deep Neural Networks for Swept Volume Prediction Between Configurations

no code implementations • 29 May 2018 • Hao-Tien Lewis Chiang, Aleksandra Faust, Lydia Tapia

Swept Volume (SV), the volume displaced by an object when it is moving along a trajectory, is considered a useful metric for motion planning.

Motion Planning
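The idea in this paper is to train a network that maps a pair of robot configurations to the swept volume of moving between them, so planners can use the prediction as a cheap metric. The small sketch below uses an invented 2-DOF "arm", a synthetic swept-volume proxy as the regression target, and scikit-learn's MLP; none of these come from the paper.

```python
# Sketch: regress swept volume from a configuration pair with a small MLP.
# The label is a synthetic proxy (joint displacement weighted by link length);
# the paper's robot, ground-truth SV computation, and network are not shown.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
LINK_LENGTHS = np.array([1.0, 0.6])           # assumed link lengths (m)

q_start = rng.uniform(-np.pi, np.pi, size=(5000, 2))
q_end   = rng.uniform(-np.pi, np.pi, size=(5000, 2))
X = np.hstack([q_start, q_end])               # input: start and end configuration
# Synthetic stand-in for the true swept volume of the straight-line motion.
y = np.abs(q_end - q_start) @ LINK_LENGTHS

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
net.fit(X_tr, y_tr)
print("R^2 on held-out pairs:", net.score(X_te, y_te))
```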
