no code implementations • 27 Feb 2025 • Ashkan Jasour, Guglielmo Daddi, Masafumi Endo, Tiago S. Vaquero, Michael Paton, Marlin P. Strub, Sabrina Corpino, Michel Ingham, Masahiro Ono, Rohan Thakker
Snake robots enable mobility through extreme terrains and confined environments in terrestrial and space applications.
1 code implementation • 14 Nov 2024 • Masahiro Ono
In this study, we introduce rigorous algorithms for a data categorization method, designated the Tocky locus approach, which uses normalized, trigonometrically transformed Fluorescent Timer data suitable for quantitative analysis.
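The abstract's "trigonometric-transformed Fluorescent Timer data" can be pictured as converting each cell's blue/red Timer readout into an angle-and-intensity representation. The sketch below is an illustrative assumption of that idea, not the paper's actual algorithm: the function name, the degree convention, and the use of `arctan2` are all choices made here for clarity.

```python
import numpy as np

def tocky_transform(blue, red, eps=1e-9):
    """Illustrative angle/intensity transform of Fluorescent Timer data.

    Assumes blue/red channel values are already normalized; the exact
    normalization used in the Tocky locus approach may differ.
    """
    blue = np.asarray(blue, dtype=float)
    red = np.asarray(red, dtype=float)
    # Timer angle: 0 deg = pure blue (recent transcription),
    # 90 deg = pure red (persistent/past transcription).
    angle = np.degrees(np.arctan2(red, blue + eps))
    # Timer intensity: overall magnitude of Timer expression.
    intensity = np.sqrt(blue**2 + red**2)
    return angle, intensity

# Pure-blue, pure-red, and equal-mix cells map to ~0, ~90, ~45 degrees.
angle, intensity = tocky_transform([1.0, 0.0, 1.0], [0.0, 1.0, 1.0])
```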
2 code implementations • 6 Nov 2024 • Masahiro Ono
In this study, we introduce an R package that automates the preprocessing of Timer fluorescence data from flow cytometry experiments for quantitative analysis at the single-cell level.
no code implementations • 31 Oct 2024 • Masahiro Ono
Advancements in cytometry technologies have led to a remarkable increase in the number of markers that can be analyzed simultaneously, presenting significant challenges in data analysis.
no code implementations • 2 May 2024 • Deegan Atha, R. Michael Swan, Abhishek Cauligi, Anne Bettens, Edwin Goh, Dima Kogan, Larry Matthies, Masahiro Ono
Autonomously determining the pose of a rover in an inertial frame is a crucial capability for the next generation of surface rover missions on other planetary bodies.
no code implementations • 14 Jun 2022 • Hyoshin Park, Justice Darko, Niharika Deshpande, Venktesh Pandey, Hui Su, Masahiro Ono, Dedrick Barkely, Larkin Folsom, Derek Posselt, Steve Chien
We introduce temporal multimodal multivariate learning, a new family of decision-making models that can indirectly learn and transfer online information, from one time stage to another, from simultaneous observations of a probability distribution with more than one peak or more than one outcome variable.
no code implementations • 18 Mar 2022 • Larry Matthies, Shreyansh Daftry, Scott Tepsuporn, Yang Cheng, Deegan Atha, R. Michael Swan, Sanjna Ravichandar, Masahiro Ono
At the end of each drive, a ground-in-the-loop (GITL) interaction is used to get a position update from human operators in a more global reference frame, by matching images or local maps from onboard the rover to orbital reconnaissance images or maps of a large region around the rover's current position.
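The GITL step above hinges on matching a local map from the rover against an orbital map. A minimal way to sketch that matching is exhaustive zero-mean normalized cross-correlation (ZNCC); everything below (map sizes, contents, the ZNCC choice itself) is an illustrative assumption, not the system's actual matcher.

```python
import numpy as np

def match_local_map(orbital, local):
    """Exhaustive ZNCC match of a small local map against an orbital map.

    Returns the (row, col) offset with the highest correlation score.
    Purely a sketch: real GITL matching handles scale, rotation,
    lighting, and uncertainty, none of which are modeled here.
    """
    H, W = orbital.shape
    h, w = local.shape
    lz = local - local.mean()
    ln = np.linalg.norm(lz)
    best, best_rc = -np.inf, (0, 0)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            patch = orbital[r:r + h, c:c + w]
            pz = patch - patch.mean()
            denom = np.linalg.norm(pz) * ln
            score = (pz * lz).sum() / denom if denom > 0 else -np.inf
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc, best

# Synthetic example: the "local map" is a patch cut from the orbital map,
# so the matcher should recover its true offset with score ~1.0.
rng = np.random.default_rng(0)
orbital = rng.standard_normal((64, 64))
local = orbital[20:30, 40:50].copy()
(r, c), score = match_local_map(orbital, local)
```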
no code implementations • 9 Mar 2022 • Shreyansh Daftry, Neil Abcouwer, Tyler del Sesto, Siddarth Venkatraman, Jialin Song, Lucas Igel, Amos Byon, Ugo Rosolia, Yisong Yue, Masahiro Ono
We present MLNav, a learning-enhanced path planning framework for safety-critical and resource-limited systems operating in complex environments, such as rovers navigating on Mars.
no code implementations • 11 Nov 2020 • Neil Abcouwer, Shreyansh Daftry, Siddarth Venkatraman, Tyler del Sesto, Olivier Toupet, Ravi Lanka, Jialin Song, Yisong Yue, Masahiro Ono
Enhanced AutoNav (ENav), the baseline surface navigation software for NASA's Perseverance rover, sorts a list of candidate paths for the rover to traverse, then uses the Approximate Clearance Evaluation (ACE) algorithm to evaluate whether the most highly ranked paths are safe.
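The "sort candidates, then verify safety" pattern described above can be sketched in a few lines. The heuristic cost, the hazard model standing in for ACE, and the candidate paths are all hypothetical stand-ins chosen for illustration; only the rank-then-lazily-verify structure reflects the entry's description.

```python
def heuristic_cost(path):
    # Illustrative stand-in for ENav's ranking: prefer paths
    # whose endpoint is closer to an assumed goal at (10, 0).
    goal = (10.0, 0.0)
    x, y = path[-1]
    return (x - goal[0]) ** 2 + (y - goal[1]) ** 2

def is_safe(path, hazards):
    # Stand-in for the (expensive) ACE check: reject paths that
    # pass within unit distance of any hazard.
    return all((x - hx) ** 2 + (y - hy) ** 2 > 1.0
               for x, y in path for hx, hy in hazards)

def select_path(candidates, hazards):
    # Rank all candidates cheaply, then run the expensive safety
    # check only on the most highly ranked paths, stopping early.
    for path in sorted(candidates, key=heuristic_cost):
        if is_safe(path, hazards):
            return path
    return None  # no safe candidate: stop and replan

candidates = [
    [(0, 0), (5, 0), (10, 0)],   # straight to the goal (ranked first)
    [(0, 0), (5, 2), (10, 2)],   # detour (ranked second)
]
hazards = [(5.0, 0.0)]           # a rock blocking the straight path
best = select_path(candidates, hazards)
```

The straight path ranks first but fails the safety check, so the detour is returned; the expensive check never runs on paths that a higher-ranked safe path would preempt.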
no code implementations • 27 Sep 2019 • Mohamadreza Ahmadi, Masahiro Ono, Michel D. Ingham, Richard M. Murray, Aaron D. Ames
We consider the problem of designing policies for partially observable Markov decision processes (POMDPs) with dynamic coherent risk objectives.
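A standard example of a coherent risk measure, of the kind such objectives are built from, is conditional value-at-risk (CVaR): the expected cost over the worst alpha-fraction of outcomes. The discrete-distribution sketch below is generic background, not the paper's specific dynamic risk formulation.

```python
import numpy as np

def cvar(costs, probs, alpha):
    """CVaR of a discrete cost distribution: average cost over the
    worst alpha probability mass. A textbook coherent risk measure;
    the dynamic (time-consistent) composition used for POMDP policy
    design is more involved than this one-shot sketch."""
    order = np.argsort(costs)[::-1]              # worst outcomes first
    c = np.asarray(costs, dtype=float)[order]
    p = np.asarray(probs, dtype=float)[order]
    mass, total = 0.0, 0.0
    for ci, pi in zip(c, p):
        take = min(pi, alpha - mass)             # clip the tail mass
        total += take * ci
        mass += take
        if mass >= alpha:
            break
    return total / alpha

# Uniform distribution over costs 1..10: CVaR at alpha = 0.2 averages
# the two worst outcomes (10 and 9).
risk = cvar(list(range(1, 11)), [0.1] * 10, alpha=0.2)
```

Note that CVaR with alpha = 1 reduces to the plain expectation, while small alpha focuses the objective on rare, high-cost outcomes.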
1 code implementation • 3 Jul 2019 • Jialin Song, Ravi Lanka, Yisong Yue, Masahiro Ono
We study the problem of learning sequential decision-making policies in settings with multiple state-action representations.
no code implementations • 3 Apr 2018 • Jialin Song, Ravi Lanka, Albert Zhao, Aadyot Bhatnagar, Yisong Yue, Masahiro Ono
We study the problem of learning a good search policy for combinatorial search spaces.
no code implementations • 6 Jul 2016 • Masahiro Ono, Mahmoud El Chamie, Marco Pavone, Behcet Acikmese
We find that the same result holds for stochastic optimal control problems with continuous state and action spaces. Furthermore, we show that randomizing the control input can reduce cost when the optimization problem is nonconvex, and that the cost reduction equals the duality gap.
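The randomization claim above admits a tiny worked illustration. The nonconvex cost f(u) = u²(u − 1)² and the expectation constraint E[u] = 0.5 are assumptions chosen for this toy example, not the paper's problem; they merely show how mixing two inputs can beat every deterministic input by exactly the gap between the cost and its convexification.

```python
def f(u):
    # Nonconvex cost with minima at u = 0 and u = 1 (toy example).
    return u**2 * (u - 1.0) ** 2

# Deterministic policy: the only input satisfying E[u] = 0.5 is u = 0.5.
det_cost = f(0.5)

# Randomized policy: u = 0 or u = 1, each with probability 0.5.
# The constraint E[u] = 0.5 still holds, but the expected cost is 0.
rand_cost = 0.5 * f(0.0) + 0.5 * f(1.0)

# The improvement matches the duality gap at u = 0.5, i.e. the distance
# between f and its convex envelope (which is 0 on [0, 1]).
gap = det_cost - rand_cost
```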
no code implementations • 4 Feb 2014 • Masahiro Ono, Brian C. Williams, L. Blackmore
The second capability is essential for the planner to solve problems with a continuous state space such as vehicle path planning.