Search Results for author: Masahiro Ono

Found 9 papers, 1 paper with code

Temporal Multimodal Multivariate Learning

no code implementations14 Jun 2022 Hyoshin Park, Justice Darko, Niharika Deshpande, Venktesh Pandey, Hui Su, Masahiro Ono, Dedrick Barkely, Larkin Folsom, Derek Posselt, Steve Chien

We introduce temporal multimodal multivariate learning, a new family of decision-making models that can indirectly learn and transfer online information, from one time stage to another, from simultaneous observations of a probability distribution with more than one peak (multimodal) or more than one outcome variable (multivariate).

Decision Making

Lunar Rover Localization Using Craters as Landmarks

no code implementations18 Mar 2022 Larry Matthies, Shreyansh Daftry, Scott Tepsuporn, Yang Cheng, Deegan Atha, R. Michael Swan, Sanjna Ravichandar, Masahiro Ono

At the end of each drive, a ground-in-the-loop (GITL) interaction is used to get a position update from human operators in a more global reference frame, by matching images or local maps from onboard the rover to orbital reconnaissance images or maps of a large region around the rover's current position.

Visual Odometry

MLNav: Learning to Safely Navigate on Martian Terrains

no code implementations9 Mar 2022 Shreyansh Daftry, Neil Abcouwer, Tyler del Sesto, Siddarth Venkatraman, Jialin Song, Lucas Igel, Amos Byon, Ugo Rosolia, Yisong Yue, Masahiro Ono

We present MLNav, a learning-enhanced path planning framework for safety-critical and resource-limited systems operating in complex environments, such as rovers navigating on Mars.


Machine Learning Based Path Planning for Improved Rover Navigation (Pre-Print Version)

no code implementations11 Nov 2020 Neil Abcouwer, Shreyansh Daftry, Siddarth Venkatraman, Tyler del Sesto, Olivier Toupet, Ravi Lanka, Jialin Song, Yisong Yue, Masahiro Ono

Enhanced AutoNav (ENav), the baseline surface navigation software for NASA's Perseverance rover, sorts a list of candidate paths for the rover to traverse, then uses the Approximate Clearance Evaluation (ACE) algorithm to evaluate whether the most highly ranked paths are safe.
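The abstract describes a two-stage pattern: rank candidate paths with a cheap heuristic, then spend an expensive safety evaluation (ACE in ENav) only on the top-ranked candidates. A minimal sketch of that pattern, with entirely hypothetical names and a path-length heuristic and hazard-clearance check standing in for the real ranking and ACE algorithm:

```python
import math

def heuristic_cost(path):
    """Cheap ranking heuristic: total path length (illustrative only)."""
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

def clearance_is_safe(path, hazards, min_clearance=1.0):
    """Stand-in for a clearance check (not ACE): every waypoint keeps
    at least min_clearance distance from every known hazard."""
    return all(math.dist(p, h) >= min_clearance for p in path for h in hazards)

def select_path(candidates, hazards, max_evaluations=3):
    """Sort candidates by heuristic cost, then run the expensive safety
    check only on the top-ranked few; return the first safe path or None."""
    ranked = sorted(candidates, key=heuristic_cost)
    for path in ranked[:max_evaluations]:
        if clearance_is_safe(path, hazards):
            return path
    return None
```

The point of the structure is that the safety check runs on at most `max_evaluations` paths, not the whole candidate list, which is what makes learning-based re-ranking (as in the follow-up MLNav work) worthwhile.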

BIG-bench Machine Learning

Risk-Averse Planning Under Uncertainty

no code implementations27 Sep 2019 Mohamadreza Ahmadi, Masahiro Ono, Michel D. Ingham, Richard M. Murray, Aaron D. Ames

We consider the problem of designing policies for partially observable Markov decision processes (POMDPs) with dynamic coherent risk objectives.

Co-training for Policy Learning

1 code implementation3 Jul 2019 Jialin Song, Ravi Lanka, Yisong Yue, Masahiro Ono

We study the problem of learning sequential decision-making policies in settings with multiple state-action representations.

Combinatorial Optimization Continuous Control +1

Learning to Search via Retrospective Imitation

no code implementations3 Apr 2018 Jialin Song, Ravi Lanka, Albert Zhao, Aadyot Bhatnagar, Yisong Yue, Masahiro Ono

We study the problem of learning a good search policy for combinatorial search spaces.

Imitation Learning

Mixed Strategy for Constrained Stochastic Optimal Control

no code implementations6 Jul 2016 Masahiro Ono, Mahmoud El Chamie, Marco Pavone, Behcet Acikmese

We found that the same result holds for stochastic optimal control problems with continuous state and action spaces. Furthermore, we show that randomizing the control input can reduce cost when the optimization problem is nonconvex, and that the cost reduction equals the duality gap.
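The claimed relationship can be sketched as follows; the notation here is assumed for illustration, not taken from the paper. Writing $p^*$ for the optimal value of the nonconvex deterministic problem and $d^*$ for the optimal value of its Lagrangian dual, a mixed strategy optimizes over probability distributions on the control set and attains the dual optimum:

```latex
\begin{align*}
  p^* &= \min_{u \in \mathcal{U}} J(u)
        \quad \text{s.t. } g(u) \le 0
        && \text{(deterministic strategy)} \\
  m^* &= \min_{\pi \in \mathcal{P}(\mathcal{U})}
        \mathbb{E}_{u \sim \pi}\!\left[J(u)\right]
        \quad \text{s.t. } \mathbb{E}_{u \sim \pi}\!\left[g(u)\right] \le 0
        && \text{(mixed strategy)} \\
  m^* &= d^* \le p^*,
        \qquad p^* - m^* = p^* - d^*
        && \text{(reduction = duality gap)}
\end{align*}
```

When the problem is convex, strong duality gives $p^* = d^*$ and randomization brings no benefit; the gain appears exactly when the duality gap is nonzero.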


Probabilistic Planning for Continuous Dynamic Systems under Bounded Risk

no code implementations4 Feb 2014 Masahiro Ono, Brian C. Williams, L. Blackmore

The second capability is essential for the planner to solve problems with a continuous state space such as vehicle path planning.

