Search Results for author: Xuesu Xiao

Found 20 papers, 4 papers with code

Dyna-LfLH: Learning Agile Navigation in Dynamic Environments from Learned Hallucination

no code implementations • 25 Mar 2024 • Saad Abdul Ghani, Zizhao Wang, Peter Stone, Xuesu Xiao

In our new Dynamic Learning from Learned Hallucination (Dyna-LfLH), we design and learn a novel latent distribution and sample dynamic obstacles from it, so the generated training data can be used to learn a motion planner to navigate in dynamic environments.

Hallucination · Imitation Learning · +2
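The abstract above describes sampling dynamic obstacles from a learned latent distribution to generate training data. A minimal sketch of that sampling step, with a random linear map standing in for the learned decoder (all names and dimensions here are hypothetical, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a learned decoder: maps a latent vector to an obstacle
# trajectory (T timesteps of 2D positions). In Dyna-LfLH the decoder would
# be learned; here its weights are random for illustration only.
T, LATENT_DIM = 8, 4
W = rng.normal(size=(LATENT_DIM, T * 2))

def sample_dynamic_obstacle(n_samples=1):
    """Draw latent codes from the prior and decode them into trajectories."""
    z = rng.normal(size=(n_samples, LATENT_DIM))  # z ~ N(0, I)
    trajectories = z @ W                          # hypothetical decoder
    return trajectories.reshape(n_samples, T, 2)  # (N, T, 2) positions

# Each sampled trajectory can populate a training scene for the planner.
obstacles = sample_dynamic_obstacle(n_samples=5)
```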

VANP: Learning Where to See for Navigation with Self-Supervised Vision-Action Pre-Training

no code implementations • 12 Mar 2024 • Mohammad Nazeri, Junzhe Wang, Amirreza Payandeh, Xuesu Xiao

However, most robotic visual navigation methods rely on deep learning models pre-trained on vision tasks, which prioritize salient objects that are not necessarily relevant to navigation and can be misleading.

Self-Supervised Learning · Visual Navigation

Dexterous Legged Locomotion in Confined 3D Spaces with Reinforcement Learning

no code implementations • 6 Mar 2024 • Zifan Xu, Amir Hossain Raj, Xuesu Xiao, Peter Stone

To address the inefficiency of tracking distant navigation goals, we introduce a hierarchical locomotion controller that combines a classical planner tasked with planning waypoints to reach a faraway global goal location, and an RL-based policy trained to follow these waypoints by generating low-level motion commands.

Navigate · reinforcement-learning · +1
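The hierarchical controller described above combines a classical waypoint planner with a learned low-level policy. A minimal sketch of that two-level loop, using a straight-line planner and a proportional controller as stand-ins for the paper's planner and RL policy (all parameters are hypothetical):

```python
import math

def plan_waypoints(start, goal, spacing=1.0):
    """High-level classical planner (straight-line stand-in): place
    waypoints every `spacing` meters from start toward the global goal."""
    dx, dy = goal[0] - start[0], goal[1] - start[1]
    dist = math.hypot(dx, dy)
    n = max(1, int(dist // spacing))
    return [(start[0] + dx * i / n, start[1] + dy * i / n) for i in range(1, n + 1)]

def follow(pose, waypoint, gain=0.5):
    """Stand-in for the learned low-level policy: a proportional
    controller emitting a velocity command toward the current waypoint."""
    return (gain * (waypoint[0] - pose[0]), gain * (waypoint[1] - pose[1]))

def hierarchical_step(pose, waypoints, reach_tol=0.2):
    """Pop waypoints once reached, then command motion toward the next one."""
    while waypoints and math.hypot(waypoints[0][0] - pose[0],
                                   waypoints[0][1] - pose[1]) < reach_tol:
        waypoints.pop(0)
    if not waypoints:
        return (0.0, 0.0)
    return follow(pose, waypoints[0])

waypoints = plan_waypoints((0.0, 0.0), (4.0, 0.0))
pose = (0.0, 0.0)
for _ in range(200):  # integrate the commanded velocity with dt = 0.1
    v = hierarchical_step(pose, waypoints)
    pose = (pose[0] + 0.1 * v[0], pose[1] + 0.1 * v[1])
```

The division of labor mirrors the abstract: the planner handles the faraway goal, while the low-level controller only ever tracks a nearby waypoint.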

Building Minimal and Reusable Causal State Abstractions for Reinforcement Learning

no code implementations • 23 Jan 2024 • Zizhao Wang, Caroline Wang, Xuesu Xiao, Yuke Zhu, Peter Stone

Two desiderata of reinforcement learning (RL) algorithms are the ability to learn from relatively little experience and the ability to learn policies that generalize to a range of problem specifications.

reinforcement-learning · Reinforcement Learning (RL)

A Study on Learning Social Robot Navigation with Multimodal Perception

1 code implementation • 22 Sep 2023 • Bhabaranjan Panigrahi, Amir Hossain Raj, Mohammad Nazeri, Xuesu Xiao

Autonomous mobile robots need to perceive the environments with their onboard sensors (e.g., LiDARs and RGB cameras) and then make appropriate navigation decisions.

Decision Making · Navigate · +1
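The abstract above concerns multimodal perception from LiDAR and RGB cameras. A minimal sketch of one common baseline for combining such modalities, late fusion by feature concatenation (feature sizes and the fusion scheme are hypothetical; the paper's learned encoders and fusion may differ):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical pre-extracted per-modality features; in a real system these
# would come from learned encoders over LiDAR scans and RGB images.
lidar_feat = rng.normal(size=(16,))
rgb_feat = rng.normal(size=(32,))

def late_fusion(features):
    """Late-fusion baseline: concatenate per-modality feature vectors
    into one input for a downstream navigation policy head."""
    return np.concatenate(features)

fused = late_fusion([lidar_feat, rgb_feat])
```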

How susceptible are LLMs to Logical Fallacies?

1 code implementation • 18 Aug 2023 • Amirreza Payandeh, Dan Pluth, Jordan Hosier, Xuesu Xiao, Vijay K. Gurbani

Then, it evaluates the debater's performance in logical reasoning by contrasting the scenario where the persuader employs logical fallacies against one where logical reasoning is used.

Logical Fallacies · Logical Reasoning

Causal Dynamics Learning for Task-Independent State Abstraction

1 code implementation • 27 Jun 2022 • Zizhao Wang, Xuesu Xiao, Zifan Xu, Yuke Zhu, Peter Stone

Learning dynamics models accurately is an important goal for Model-Based Reinforcement Learning (MBRL), but most MBRL methods learn a dense dynamics model which is vulnerable to spurious correlations and therefore generalizes poorly to unseen states.

Model-based Reinforcement Learning
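The abstract above contrasts a dense dynamics model with one that respects causal structure. A minimal sketch of the masked, per-variable idea: each state variable's next value depends only on its causal parents (the mask and weights here are hand-fixed toy values; the paper learns them):

```python
import numpy as np

# Hypothetical causal mask: entry [i, j] = 1 means variable j is a causal
# parent of variable i's next value. A dense model is the all-ones mask.
mask = np.array([[1, 0, 0],
                 [1, 1, 0],
                 [0, 0, 1]])
weights = np.array([[0.9, 0.5, 0.3],
                    [0.2, 0.8, 0.4],
                    [0.7, 0.1, 0.6]])

def masked_dynamics(x):
    """Predict the next state from causal parents only: contributions of
    spuriously correlated inputs are zeroed out by the mask."""
    return (weights * mask) @ x

x = np.array([1.0, 2.0, 3.0])
x_next = masked_dynamics(x)
```

Zeroing the masked-out weights is what gives the sparse model its robustness: a spurious correlation in the training data cannot enter a variable's prediction unless the mask admits it.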

High-Speed Accurate Robot Control using Learned Forward Kinodynamics and Non-linear Least Squares Optimization

no code implementations • 16 Jun 2022 • Pranav Atreya, Haresh Karnan, Kavan Singh Sikand, Xuesu Xiao, Sadegh Rabiee, Joydeep Biswas

However, the types of control problems these approaches can be applied to are limited to following pre-computed, kinodynamically feasible trajectories.
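The title's recipe, solving for controls by least squares against a learned forward model, can be sketched as follows. A fixed linear map stands in for the learned kinodynamic model (all values hypothetical); since the toy model is linear, one least-squares solve replaces the iterated Gauss-Newton steps a neural model would need:

```python
import numpy as np

# Toy stand-in for a learned forward kinodynamic model: given state x and
# control u, predict the next state. A real model would be a neural net.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])

def forward_model(x, u):
    return x + A @ u  # hypothetical learned next-state prediction

def solve_control(x, x_desired):
    """Least squares over controls: minimize
    ||forward_model(x, u) - x_desired||^2. With a learned non-linear model
    this becomes iterated Gauss-Newton with autodiff Jacobians."""
    residual = x_desired - x
    u, *_ = np.linalg.lstsq(A, residual, rcond=None)
    return u

x = np.array([0.0, 0.0])
x_desired = np.array([1.0, 0.5])
u = solve_control(x, x_desired)
```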

VI-IKD: High-Speed Accurate Off-Road Navigation using Learned Visual-Inertial Inverse Kinodynamics

no code implementations • 30 Mar 2022 • Haresh Karnan, Kavan Singh Sikand, Pranav Atreya, Sadegh Rabiee, Xuesu Xiao, Garrett Warnell, Peter Stone, Joydeep Biswas

In this paper, we hypothesize that to enable accurate high-speed off-road navigation using a learned IKD model, in addition to inertial information from the past, one must also anticipate the kinodynamic interactions of the vehicle with the terrain in the future.

Socially Compliant Navigation Dataset (SCAND): A Large-Scale Dataset of Demonstrations for Social Navigation

no code implementations • 28 Mar 2022 • Haresh Karnan, Anirudh Nair, Xuesu Xiao, Garrett Warnell, Soeren Pirk, Alexander Toshev, Justin Hart, Joydeep Biswas, Peter Stone

Social navigation is the capability of an autonomous agent, such as a robot, to navigate in a 'socially compliant' manner in the presence of other intelligent agents such as humans.

Imitation Learning · Navigate · +1

Visual Representation Learning for Preference-Aware Path Planning

no code implementations • 18 Sep 2021 • Kavan Singh Sikand, Sadegh Rabiee, Adam Uccello, Xuesu Xiao, Garrett Warnell, Joydeep Biswas

We introduce Visual Representation Learning for Preference-Aware Path Planning (VRL-PAP), an alternative approach that overcomes all three limitations: VRL-PAP leverages unlabeled human demonstrations of navigation to autonomously generate triplets for learning visual representations of terrain that are viewpoint invariant and encode terrain types in a continuous representation space.

Representation Learning · Semantic Segmentation
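The abstract above mentions generating triplets to learn viewpoint-invariant terrain representations. A minimal sketch of the standard margin-based triplet loss such training would optimize (the embeddings below are made-up placeholders, not VRL-PAP outputs):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Margin-based triplet loss: pull the anchor toward a same-terrain
    patch (positive) and push it away from a different-terrain patch
    (negative). VRL-PAP mines such triplets automatically from unlabeled
    human navigation demonstrations."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

anchor   = np.array([0.1, 0.9])    # hypothetical terrain embedding
positive = np.array([0.12, 0.88])  # same terrain, different viewpoint
negative = np.array([0.9, 0.1])    # different terrain
loss = triplet_loss(anchor, positive, negative)
```

When the positive is already much closer than the negative by more than the margin, the loss is zero and that triplet stops shaping the representation.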

Conflict Avoidance in Social Navigation -- a Survey

no code implementations • 23 Jun 2021 • Reuth Mirsky, Xuesu Xiao, Justin Hart, Peter Stone

This survey aims to bridge this gap by introducing such a common language, using it to survey existing work, and highlighting open problems.

Social Navigation

APPLD: Adaptive Planner Parameter Learning from Demonstration

no code implementations • 31 Mar 2020 • Xuesu Xiao, Bo Liu, Garrett Warnell, Jonathan Fink, Peter Stone

Existing autonomous robot navigation systems allow robots to move from one point to another in a collision-free manner.

Robot Navigation

Tethered Aerial Visual Assistance

no code implementations • 15 Jan 2020 • Xuesu Xiao, Jan Dufek, Robin R. Murphy

In this paper, an autonomous tethered Unmanned Aerial Vehicle (UAV) is developed into a visual assistant in a marsupial co-robots team, collaborating with a tele-operated Unmanned Ground Vehicle (UGV) for robot operations in unstructured or confined environments.

Navigate

Autonomous Visual Assistance for Robot Operations Using a Tethered UAV

no code implementations • 29 Mar 2019 • Xuesu Xiao, Jan Dufek, Robin R. Murphy

This paper develops an autonomous tethered aerial visual assistant for robot operations in unstructured or confined environments.

Robotics

Explicit-risk-aware Path Planning with Reward Maximization

no code implementations • 7 Mar 2019 • Xuesu Xiao, Jan Dufek, Robin Murphy

Without requiring manual assignment of the negative impact that risk imposes on the planner, this planner takes a pre-established viewpoint quality map and simultaneously plans the target location and the path leading to it, maximizing overall reward along the entire path while minimizing risk.
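The joint "target and path" choice described above can be sketched with a small dynamic program: each cell's value is the best accumulated quality-minus-risk along any path reaching it, so the most rewarding target falls out of the same computation as the path. The maps, the right/down motion model, and the per-cell reward are hypothetical simplifications of the paper's formulation:

```python
import numpy as np

# Hypothetical per-cell viewpoint quality and risk maps (the paper takes a
# pre-established viewpoint quality map as input; these values are made up).
quality = np.array([[0.2, 0.5, 0.1],
                    [0.3, 0.8, 0.4],
                    [0.1, 0.6, 0.9]])
risk = np.array([[0.0, 0.1, 0.5],
                 [0.1, 0.2, 0.1],
                 [0.4, 0.1, 0.2]])

def best_target_and_value(quality, risk):
    """DP over right/down moves from (0, 0): value[r, c] is the best
    accumulated (quality - risk) along any path to (r, c); the target
    location and the path to it are therefore chosen jointly."""
    reward = quality - risk
    rows, cols = reward.shape
    value = np.full((rows, cols), -np.inf)
    value[0, 0] = reward[0, 0]
    for r in range(rows):
        for c in range(cols):
            if r > 0:
                value[r, c] = max(value[r, c], value[r - 1, c] + reward[r, c])
            if c > 0:
                value[r, c] = max(value[r, c], value[r, c - 1] + reward[r, c])
    target = np.unravel_index(np.argmax(value), value.shape)
    return target, value[target]

target, value = best_target_and_value(quality, risk)
```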
