no code implementations • 11 Nov 2022 • Xinyu Zhao, Razvan C. Fetecau, Mo Chen
To improve the stability of the learning-based policy and the efficiency of exploration, we utilize an imitation loss based on a state-of-the-art classical control policy.
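As a hedged sketch of how such a combined objective is commonly written (the weighting β, the expectation over visited states, and the squared-error form are standard assumptions, not details taken from the paper):

```latex
\mathcal{L}(\theta) \;=\; \mathcal{L}_{\mathrm{RL}}(\theta)
  \;+\; \beta \, \mathbb{E}_{s \sim \mathcal{D}}
  \big[\, \lVert \pi_\theta(s) - \pi_{\mathrm{classical}}(s) \rVert^2 \,\big],
```

where the second term penalizes deviation of the learned policy from the classical control policy on visited states.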
no code implementations • 23 Oct 2022 • Pedram Agand, Mo Chen, Hamid D. Taghirad
We also compare our method with recursive least squares and the particle filter, and show that our technique yields significantly more accurate point estimates and a lower tracking error for the value of interest.
1 code implementation • 23 Oct 2022 • Pedram Agand, Michael Chang, Mo Chen
Using a single camera to estimate the distances of objects reduces costs compared to stereo vision and LiDAR.
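For background only (the paper's particular method is not detailed here), the classic pinhole-camera relation shows why monocular distance estimation needs extra cues such as object size: for an object of physical height H imaged at h pixels with focal length f,

```latex
d \;\approx\; \frac{f \, H}{h}.
```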
1 code implementation • 15 Sep 2022 • Mohammad Mahdavian, Payam Nikdel, Mahdi TaherAhmadi, Mo Chen
The proposed architecture divides human motion prediction into two parts: 1) the human trajectory, i.e., the 3D position of the hip joint over time, and 2) the human pose, i.e., the 3D positions of all other joints over time with respect to a fixed hip joint.
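A minimal sketch of this trajectory/pose split, assuming joint positions stored as a (frames, joints, 3) array with the hip at index 0 (names and shapes are illustrative, not the paper's code):

```python
import numpy as np

def split_trajectory_and_pose(joints, hip_idx=0):
    """Split a motion sequence into a global hip trajectory and a
    hip-relative pose sequence.

    joints: array of shape (T, J, 3) -- T frames, J joints, 3D positions.
    Returns (trajectory, pose): trajectory has shape (T, 3), pose has
    shape (T, J, 3) with the hip joint fixed at the origin.
    """
    trajectory = joints[:, hip_idx, :]          # global hip path over time
    pose = joints - trajectory[:, None, :]      # joint positions relative to the hip
    return trajectory, pose

# Example with random data standing in for real motion-capture frames.
motion = np.random.rand(100, 22, 3)
traj, pose = split_trajectory_and_pose(motion)
assert np.allclose(pose[:, 0], 0.0)             # hip is the origin in the pose stream
```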
no code implementations • 13 Sep 2022 • Payam Nikdel, Mohammad Mahdavian, Mo Chen
We show that our system outperforms the state-of-the-art in human motion prediction while also predicting diverse multi-motion future trajectories with hip movements.
no code implementations • 15 Aug 2022 • Saba Akhyani, Mehryar Abbasi Boroujeni, Mo Chen, Angelica Lim
Robots and artificial agents that interact with humans should be able to do so without bias and inequity, but facial perception systems have notoriously been found to perform worse for certain groups of people than for others.
1 code implementation • 12 Apr 2022 • Minh Bui, George Giovanis, Mo Chen, Arrvindh Shriraman
This paper introduces OptimizedDP, a high-performance software library that solves time-dependent Hamilton-Jacobi partial differential equations (PDEs), computes backward reachable sets with applications in robotics, and implements value iteration for continuous action-state space Markov Decision Processes (MDPs), all while leveraging the user-friendliness of Python for problem specification without sacrificing the efficiency of the core computation.
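For context on the MDP component, here is a generic tabular value-iteration sketch over a discretized state-action space; it illustrates the algorithm, not the OptimizedDP API:

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-6):
    """Tabular value iteration for a finite MDP.

    P: transitions, shape (A, S, S), with P[a, s, s2] = Pr(s2 | s, a)
    R: rewards, shape (S, A)
    Returns the optimal value function V, shape (S,).
    """
    V = np.zeros(R.shape[0])
    while True:
        # Q[s, a] = R[s, a] + gamma * sum_s2 P[a, s, s2] * V[s2]
        Q = R + gamma * np.einsum("asn,n->sa", P, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new

# Tiny random example: 4 states, 2 actions, row-normalized transitions.
rng = np.random.default_rng(0)
P = rng.random((2, 4, 4)); P /= P.sum(axis=-1, keepdims=True)
R = rng.random((4, 2))
print(value_iteration(P, R))
```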
no code implementations • 29 Mar 2022 • Xubo Lyu, Amin Banitalebi-Dehkordi, Mo Chen, Yong Zhang
Multi-agent policy gradient methods have demonstrated success in games and robotics but are often limited to problems with low-level action spaces.
Hierarchical Reinforcement Learning
Multi-agent Reinforcement Learning
no code implementations • 29 Sep 2021 • Pedram Agand, Mo Chen, Hamid Taghirad
Our method shows at least a 70% improvement in parameter point-estimation accuracy and approximately a 55% reduction in tracking error of the value of interest compared to recursive least squares and conventional MCMC.
no code implementations • 29 Sep 2021 • Payam Jome Yazdian, Mo Chen, Angelica Lim
We propose a vector-quantized variational autoencoder structure as well as training techniques to learn a rigorous representation of gesture sequences.
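A minimal sketch of the vector-quantization step at the heart of a VQ-VAE (the codebook size, dimensions, and straight-through trick shown here are standard choices, not details taken from the paper):

```python
import torch

def quantize(z, codebook):
    """Nearest-neighbour vector-quantization step of a VQ-VAE encoder output.

    z:        continuous latents, shape (B, D)
    codebook: learnable code vectors, shape (K, D)
    Returns the quantized latents and the selected code indices.
    """
    dists = torch.cdist(z, codebook)        # (B, K) pairwise distances
    idx = dists.argmin(dim=1)               # closest code per latent
    z_q = codebook[idx]                     # (B, D) quantized latents
    # Straight-through estimator: copy gradients from z_q back to z so the
    # encoder can be trained through the non-differentiable lookup.
    z_q = z + (z_q - z).detach()
    return z_q, idx

# Example: 8 latents of dimension 16 against a 32-entry codebook.
z_q, idx = quantize(torch.randn(8, 16), torch.randn(32, 16))
```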
no code implementations • 1 Jan 2021 • Pedram Agand, Mo Chen, Hamid D. Taghirad
We demonstrate our approach on a challenging benchmark: estimation of parameters in the Hunt-Crossley dynamic model, which models on/off contact forces applied to soft materials.
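For reference, one common form of the Hunt-Crossley model writes the contact force in terms of the penetration depth x (the exact parameterization used in the paper may differ):

```latex
f(x, \dot{x}) \;=\;
\begin{cases}
  k\,x^{n} \;+\; \lambda\,x^{n}\,\dot{x}, & x \ge 0 \quad \text{(in contact)}\\[2pt]
  0, & x < 0 \quad \text{(no contact)},
\end{cases}
```

where k is a stiffness, λ a damping coefficient, and n a shape exponent; the switch between the two branches is what makes the estimation problem non-smooth.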
no code implementations • 5 Nov 2020 • Payam Nikdel, Richard Vaughan, Mo Chen
Our deep RL module implicitly estimates human trajectory and produces short-term navigational goals to guide the robot.
no code implementations • 4 Nov 2020 • Xubo Lyu, Site Li, Seth Siriya, Ye Pu, Mo Chen
On the other hand, "classical methods" such as optimal control generate solutions without collecting data, but they assume that an accurate model of the system and environment is known and are mostly limited to problems with low-dimensional (lo-dim) state spaces.
no code implementations • 28 Oct 2020 • Zhitian Zhang, Jimin Rhim, Taher Ahmadi, Kefan Yang, Angelica Lim, Mo Chen
This article describes a dataset collected in a set of experiments that involves human participants and a robot.
no code implementations • L4DC 2020 • Anjian Li, Somil Bansal, Georgios Giovanis, Varun Tolani, Claire Tomlin, Mo Chen
In Bansal et al. (2019), a novel visual navigation framework that combines learning-based and model-based approaches was proposed.
no code implementations • WS 2018 • Kaiyin Zhou, Sheng Zhang, Xiangyu Meng, Qi Luo, Yuxing Wang, Ke Ding, Yukun Feng, Mo Chen, Kevin Cohen, Jingbo Xia
Sequence labeling of biomedical entities, e.g., side effects or phenotypes, has been a long-standing task in the BioNLP and MedNLP communities.
1 code implementation • 16 Jun 2018 • Boris Ivanovic, James Harrison, Apoorva Sharma, Mo Chen, Marco Pavone
Our Backward Reachability Curriculum (BaRC) begins policy training from states that require a small number of actions to accomplish the task, and expands the initial state distribution backwards in a dynamically-consistent manner once the policy optimization algorithm demonstrates sufficient performance.
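A schematic of such a curriculum loop, with the training, evaluation, and backward-expansion routines left abstract (function names and the success threshold are illustrative assumptions, not the authors' implementation):

```python
def barc_style_curriculum(train_policy, evaluate, expand_backwards,
                          goal_starts, n_expansions=10, success_threshold=0.8):
    """Curriculum loop in the spirit of a backward reachability curriculum.

    train_policy(starts, policy):  one round of policy optimization from `starts`
    evaluate(policy, starts):      success rate of `policy` from `starts`
    expand_backwards(starts):      grow the start-state set backwards in time
                                   in a dynamically consistent way
    goal_starts:                   initial start states close to the goal
    """
    starts, policy = goal_starts, None
    for _ in range(n_expansions):
        # Train until the policy is reliable from the current start distribution.
        policy = train_policy(starts, policy)
        while evaluate(policy, starts) < success_threshold:
            policy = train_policy(starts, policy)
        # Once performance is sufficient, push the start states further from
        # the goal so earlier progress bootstraps the harder instances.
        starts = expand_backwards(starts)
    return policy
```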
no code implementations • 21 Sep 2017 • Somil Bansal, Mo Chen, Sylvia Herbert, Claire J. Tomlin
Hamilton-Jacobi (HJ) reachability analysis is an important formal verification method for guaranteeing performance and safety properties of dynamical systems; it has been applied to many small-scale systems in the past decade.
Systems and Control • Dynamical Systems • Optimization and Control
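As standard background for this line of work, the backward reachable tube is usually characterized as the sub-zero level set of a value function obtained from a Hamilton-Jacobi variational inequality (sign and min/max conventions vary with the problem setup):

```latex
V(x, T) \;=\; l(x), \qquad
\min\!\Big\{ \tfrac{\partial V}{\partial t}(x,t)
  \;+\; \min_{u}\max_{d}\, \nabla V(x,t)\cdot f(x,u,d),\;\;
  l(x) - V(x,t) \Big\} \;=\; 0,
```

where l is an implicit surface function for the target set and the reachable tube at time t is the set of states with V(x, t) ≤ 0.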
no code implementations • 21 Mar 2017 • Sylvia L. Herbert, Mo Chen, SooJean Han, Somil Bansal, Jaime F. Fisac, Claire J. Tomlin
We propose a new algorithm, FaSTrack: Fast and Safe Tracking for high-dimensional systems.
Robotics
no code implementations • 10 Nov 2016 • Frank Jiang, Glen Chou, Mo Chen, Claire J. Tomlin
To sidestep the curse of dimensionality when computing solutions to Hamilton-Jacobi-Bellman partial differential equations (HJB PDEs), we propose an algorithm that leverages a neural network to approximate the value function.
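A minimal sketch of the general idea, fitting a small multilayer perceptron to minimize an HJ-style residual on sampled states; the toy single-integrator dynamics, loss terms, and architecture below are illustrative assumptions rather than the paper's method:

```python
import torch
import torch.nn as nn

# Small MLP approximating a value function V(x, t) on a toy 2D problem.
value_net = nn.Sequential(nn.Linear(3, 64), nn.Tanh(),
                          nn.Linear(64, 64), nn.Tanh(),
                          nn.Linear(64, 1))
optimizer = torch.optim.Adam(value_net.parameters(), lr=1e-3)

def target_fn(x):
    # Signed distance to a unit disk at the origin, used as the target l(x).
    return x.norm(dim=-1, keepdim=True) - 1.0

for step in range(2000):
    x = (torch.rand(256, 2) - 0.5) * 8.0              # states sampled in [-4, 4]^2
    t = torch.rand(256, 1)                            # times sampled in [0, 1]
    xt = torch.cat([x, t], dim=-1).requires_grad_(True)
    V = value_net(xt)
    grad = torch.autograd.grad(V.sum(), xt, create_graph=True)[0]
    dVdx, dVdt = grad[:, :2], grad[:, 2:]
    # Single-integrator dynamics x' = u with |u| <= 1; the minimizing control
    # gives Hamiltonian -||dV/dx||.
    hamiltonian = -dVdx.norm(dim=-1, keepdim=True)
    # Residual of the HJ variational inequality min{V_t + H, l(x) - V} = 0.
    residual = torch.minimum(dVdt + hamiltonian, target_fn(x) - V)
    # Terminal condition V(x, T) = l(x) at T = 1.
    terminal = value_net(torch.cat([x, torch.ones_like(t)], dim=-1)) - target_fn(x)
    loss = residual.pow(2).mean() + terminal.pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```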