no code implementations • 14 Jun 2022 • Hyoshin Park, Justice Darko, Niharika Deshpande, Venktesh Pandey, Hui Su, Masahiro Ono, Dedrick Barkely, Larkin Folsom, Derek Posselt, Steve Chien
We introduce temporal multimodal multivariate learning, a new family of decision-making models that can indirectly learn and transfer online information from simultaneous observations of a probability distribution with more than one peak, or with more than one outcome variable, from one time stage to another.
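As a toy illustration of a distribution with more than one peak (a multimodal distribution — this is not the paper's model), a two-component Gaussian mixture can be sampled with the standard library:

```python
import random

def sample_bimodal(n, seed=0):
    """Draw n samples from a toy two-peak (multimodal) distribution:
    an equal-weight mixture of N(-2, 0.5) and N(+2, 0.5)."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        # Pick a mixture component at random, then sample from it.
        mean = -2.0 if rng.random() < 0.5 else 2.0
        samples.append(rng.gauss(mean, 0.5))
    return samples
```

A histogram of the result shows two clearly separated peaks near -2 and +2, which is the qualitative shape the abstract refers to.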
no code implementations • 18 Mar 2022 • Larry Matthies, Shreyansh Daftry, Scott Tepsuporn, Yang Cheng, Deegan Atha, R. Michael Swan, Sanjna Ravichandar, Masahiro Ono
At the end of each drive, a ground-in-the-loop (GITL) interaction is used to get a position update from human operators in a more global reference frame, by matching images or local maps from onboard the rover to orbital reconnaissance images or maps of a large region around the rover's current position.
no code implementations • 9 Mar 2022 • Shreyansh Daftry, Neil Abcouwer, Tyler del Sesto, Siddarth Venkatraman, Jialin Song, Lucas Igel, Amos Byon, Ugo Rosolia, Yisong Yue, Masahiro Ono
We present MLNav, a learning-enhanced path planning framework for safety-critical and resource-limited systems operating in complex environments, such as rovers navigating on Mars.
no code implementations • 11 Nov 2020 • Neil Abcouwer, Shreyansh Daftry, Siddarth Venkatraman, Tyler del Sesto, Olivier Toupet, Ravi Lanka, Jialin Song, Yisong Yue, Masahiro Ono
Enhanced AutoNav (ENav), the baseline surface navigation software for NASA's Perseverance rover, sorts a list of candidate paths for the rover to traverse, then uses the Approximate Clearance Evaluation (ACE) algorithm to evaluate whether the most highly ranked paths are safe.
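The sort-then-evaluate pattern described above can be sketched as follows. The cost model and the clearance check here are hypothetical stand-ins, not the flight software's implementation:

```python
# Minimal sketch of an ENav-style "rank candidates, then safety-check the
# top ones" loop. `is_safe` stands in for an ACE-like clearance evaluation,
# which is assumed to be expensive relative to the heuristic ranking.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Path:
    waypoints: list        # sequence of (x, y) cells the rover would traverse
    heuristic_cost: float  # cheap-to-compute ranking score (lower is better)

def select_path(candidates: List[Path],
                is_safe: Callable[[Path], bool],
                max_checks: int) -> Optional[Path]:
    """Rank candidates by heuristic cost, then run the expensive safety
    evaluation only on the top-ranked paths until one passes."""
    ranked = sorted(candidates, key=lambda p: p.heuristic_cost)
    for path in ranked[:max_checks]:
        if is_safe(path):   # stand-in for the clearance evaluation
            return path
    return None             # no safe path found within the check budget
```

The `max_checks` budget reflects the resource limit: only a few of the most promising candidates receive the full evaluation.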
no code implementations • 27 Sep 2019 • Mohamadreza Ahmadi, Masahiro Ono, Michel D. Ingham, Richard M. Murray, Aaron D. Ames
We consider the problem of designing policies for partially observable Markov decision processes (POMDPs) with dynamic coherent risk objectives.
1 code implementation • 3 Jul 2019 • Jialin Song, Ravi Lanka, Yisong Yue, Masahiro Ono
We study the problem of learning sequential decision-making policies in settings with multiple state-action representations.
no code implementations • 3 Apr 2018 • Jialin Song, Ravi Lanka, Albert Zhao, Aadyot Bhatnagar, Yisong Yue, Masahiro Ono
We study the problem of learning a good search policy for combinatorial search spaces.
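A search policy over a combinatorial space can be sketched as a best-first search whose expansion order is chosen by a scoring function. In the learned setting the score would come from a trained model; here it is any hand-written stand-in, which is an assumption of this sketch rather than the paper's method:

```python
import heapq
from typing import Callable, Optional

def guided_search(root,
                  expand: Callable,    # node -> iterable of child nodes
                  is_goal: Callable,   # node -> bool
                  score: Callable,     # node -> float; the "policy" priority
                  budget: int = 1000) -> Optional[object]:
    """Best-first search: repeatedly expand the highest-scoring frontier
    node, up to `budget` expansions."""
    frontier = [(-score(root), 0, root)]  # negate score for a max-heap
    counter = 1                           # tie-breaker so nodes never compare
    for _ in range(budget):
        if not frontier:
            return None
        _, _, node = heapq.heappop(frontier)
        if is_goal(node):
            return node
        for child in expand(node):
            heapq.heappush(frontier, (-score(child), counter, child))
            counter += 1
    return None
```

For example, with bit-string expansion and a prefix-match score against a target string, the search greedily follows the best-scoring branch at each depth.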
no code implementations • 6 Jul 2016 • Masahiro Ono, Mahmoud El Chamie, Marco Pavone, Behcet Acikmese
We found that the same result holds for stochastic optimal control problems with continuous state and action spaces. Furthermore, we show that randomizing the control input can reduce the cost when the optimization problem is nonconvex, and that the cost reduction equals the duality gap.
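The duality-gap claim can be stated compactly; the notation below is our own shorthand, not taken from the abstract. Let $p^{*}$ be the optimal cost over deterministic control inputs and $d^{*} \le p^{*}$ the optimal value of the Lagrangian dual. The claim is that the best randomized (mixed) control strategy attains cost $d^{*}$, so

```latex
\underbrace{p^{*}}_{\text{best deterministic cost}}
\;-\;
\underbrace{J^{*}_{\mathrm{rand}}}_{\text{best randomized cost}}
\;=\; p^{*} - d^{*}
\qquad \text{(the duality gap)},
```

which vanishes under strong duality (e.g., in the convex case), consistent with the statement that randomization helps only when the problem is nonconvex.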
no code implementations • 4 Feb 2014 • Masahiro Ono, Brian C. Williams, L. Blackmore
The second capability is essential for the planner to solve problems with a continuous state space, such as vehicle path planning.