no code implementations • 17 Oct 2022 • Junhong Xu, Durgakant Pushp, Kai Yin, Lantao Liu
Using both simulated and real-world experiments in multi-robot navigation tasks, we demonstrate that the resulting framework allows the robots to reason about different levels of rational behavior in other agents and to compute a reasonable strategy under their computational constraints.
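Reasoning about "different levels of rational behaviors of other agents" is in the spirit of level-k (bounded-rationality) models. A minimal sketch of that idea, using a toy 2x2 coordination game rather than the paper's navigation task (the payoff matrix and recursion depth here are illustrative assumptions):

```python
import numpy as np

# Toy coordination game: both agents prefer to match actions, and
# action 0 pays more. Payoffs are assumed for illustration only.
payoff = np.array([[3.0, 0.0],
                   [0.0, 2.0]])  # row player's payoff; symmetric game

def level_k_strategy(k):
    """Row player's action distribution at reasoning level k.

    Level 0 acts uniformly at random; level k best-responds to a
    level-(k-1) model of the other agent.
    """
    if k == 0:
        return np.ones(2) / 2
    opponent = level_k_strategy(k - 1)        # model of the other agent
    best = int(np.argmax(payoff @ opponent))  # best response to that model
    strategy = np.zeros(2)
    strategy[best] = 1.0
    return strategy

print(level_k_strategy(0))  # uniform: [0.5, 0.5]
print(level_k_strategy(2))  # deeper reasoning commits to the better equilibrium
```

The recursion depth `k` plays the role of the computational budget: a more constrained agent stops at a shallower level of reasoning.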
no code implementations • 8 Sep 2020 • Junhong Xu, Kai Yin, Lantao Liu
We first predict the future state distributions of other vehicles to account for their uncertain behaviors affected by the time-varying disturbances.
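Predicting a future state distribution under time-varying disturbances can be sketched with the standard mean/covariance propagation for linear-Gaussian dynamics; the constant-velocity model, time step, and noise levels below are assumptions for illustration, not the paper's actual prediction model:

```python
import numpy as np

dt = 0.1
A = np.array([[1.0, dt],
              [0.0, 1.0]])       # constant-velocity dynamics: state [x, vx]
Qn = np.diag([1e-4, 1e-2])       # additive disturbance covariance (assumed)

mean = np.array([0.0, 5.0])      # vehicle at x = 0 moving at 5 m/s
cov = np.diag([0.01, 0.04])      # initial state uncertainty

for _ in range(10):              # predict 1 second ahead
    mean = A @ mean              # mean propagates through the dynamics
    cov = A @ cov @ A.T + Qn     # covariance grows with each disturbance

print(mean)  # position advances to ~5 m; velocity unchanged
```

The growing covariance is what makes the other vehicles' behavior "uncertain" to the planner: downstream decisions are made against the predicted distribution, not a single trajectory.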
no code implementations • 3 Jun 2020 • Junhong Xu, Kai Yin, Lantao Liu
We propose a principled kernel-based policy iteration algorithm to solve continuous-state Markov Decision Processes (MDPs).
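The core idea, sketched on a toy 1-D problem: represent the value function on a set of support states and use kernel regression to evaluate it at arbitrary continuous states inside policy iteration. Everything here (the MDP, the Gaussian kernel, the bandwidth, the optimistic evaluation sweep) is an illustrative assumption, not the paper's algorithm:

```python
import numpy as np

# Toy continuous-state MDP: s in [0, 1], actions move s by ±0.1,
# reward peaks at s = 0.5.
GAMMA = 0.9
ACTIONS = [-0.1, 0.1]

def transition(s, a):
    return float(np.clip(s + a, 0.0, 1.0))

def reward(s):
    return -abs(s - 0.5)

support = np.linspace(0.0, 1.0, 21)  # states where V is stored

def kernel_eval(v, s, bandwidth=0.05):
    """Nadaraya-Watson kernel estimate of V(s) from the support values."""
    w = np.exp(-0.5 * ((support - s) / bandwidth) ** 2)
    return float(np.dot(w, v) / w.sum())

def policy_iteration(n_iters=50):
    v = np.zeros_like(support)
    policy = np.zeros_like(support)
    for _ in range(n_iters):
        # Greedy improvement fused with an evaluation sweep
        # (optimistic policy iteration).
        for i, s in enumerate(support):
            q = [reward(s) + GAMMA * kernel_eval(v, transition(s, a))
                 for a in ACTIONS]
            best = int(np.argmax(q))
            policy[i] = ACTIONS[best]
            v[i] = q[best]
    return v, policy

v, policy = policy_iteration()
print(policy[0], policy[-1])  # both ends steer toward the reward peak at 0.5
```

The kernel regression is what lifts tabular policy iteration to continuous states: successor states never need to coincide with the support grid.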
no code implementations • 22 May 2019 • Junhong Xu, Kai Yin, Lantao Liu
We propose a solution to a time-varying variant of Markov Decision Processes which can be used to address decision-theoretic planning problems for autonomous systems operating in unstructured outdoor environments.
no code implementations • 14 Aug 2018 • Junhong Xu, Qiwei Liu, Hanqing Guo, Aaron Kageza, Saeed AlQarni, Shaoen Wu
Deep imitation learning enables robots to learn from expert demonstrations to perform tasks such as lane following or obstacle avoidance.
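At its core, imitation learning by behavior cloning is supervised regression from observations to expert actions. A minimal sketch with an assumed lane-following setup and a linear policy standing in for the deep network (the expert gains, observation layout, and data sizes are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical expert: steer proportionally to lateral offset and heading
# error. A real system would record a human driver instead.
def expert_action(obs):
    offset, heading = obs
    return -1.5 * offset - 0.8 * heading

# Collect (observation, expert action) demonstration pairs.
X = rng.uniform(-1, 1, size=(500, 2))
y = np.array([expert_action(o) for o in X])

# Behavior cloning = supervised fit to the demonstrations; least squares
# here, where deep imitation learning would train a neural network.
W, *_ = np.linalg.lstsq(X, y, rcond=None)

cloned = X @ W
print(np.max(np.abs(cloned - y)))  # near zero: the clone matches the expert
```

The same recipe applies to obstacle avoidance: only the observation vector and the recorded expert actions change, not the training loop.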
no code implementations • 22 Sep 2017 • Junhong Xu, Shangyue Zhu, Hanqing Guo, Shaoen Wu
This solution includes a suboptimal sensor policy, based on sensor fusion, that automatically labels the states a robot encounters, avoiding the need for human supervision during training.
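The idea of fusing sensors to auto-label states can be sketched as inverse-variance fusion of two noisy range readings followed by a threshold; the sensor variances, threshold, and label semantics below are assumptions for illustration, not the paper's policy:

```python
def fuse(z1, var1, z2, var2):
    """Inverse-variance weighted fusion of two noisy measurements."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    return (w1 * z1 + w2 * z2) / (w1 + w2)

def auto_label(lidar_m, sonar_m, threshold=1.0):
    """Label a state 1 ('avoid') if the fused range is under the threshold.

    Assumed variances trust the lidar more than the sonar; no human
    annotation is involved.
    """
    fused = fuse(lidar_m, 0.01, sonar_m, 0.09)
    return int(fused < threshold)

print(auto_label(0.8, 1.1))  # obstacle close: labeled 1
print(auto_label(3.0, 2.8))  # clear ahead: labeled 0
```

Labels produced this way are "suboptimal" in the sense that the fixed threshold rule is cruder than a human annotator, but they are free to generate at every state the robot visits.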