no code implementations • 1 Mar 2024 • Noriaki Hirose, Dhruv Shah, Kyle Stachowicz, Ajay Sridhar, Sergey Levine
Specifically, SELFI stabilizes the online learning process by incorporating the same model-based learning objective from offline pre-training into the Q-values learned with online model-free reinforcement learning.
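The snippet above describes folding an offline model-based objective into online Q-learning. The following is a minimal toy sketch of that general idea, not the SELFI implementation: the model-based term (`model_based_objective`), the linear blending weight `alpha`, and the tabular setting are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def model_based_objective(state, action):
    # Stand-in for a model-based score learned during offline
    # pre-training; here just a fixed synthetic function that
    # prefers actions matching the state.
    return -0.1 * (state - action) ** 2

# Tabular Q-learning on a toy 1-D problem. The bootstrapped
# target is augmented with the model-based objective, loosely
# mirroring the idea of combining both signals in one value.
n_states, n_actions = 5, 5
Q = np.zeros((n_states, n_actions))
gamma, lr, alpha = 0.9, 0.5, 1.0

for _ in range(200):
    s = rng.integers(n_states)
    a = rng.integers(n_actions)
    r = 1.0 if a == s else 0.0          # toy reward
    s_next = rng.integers(n_states)
    target = (r + alpha * model_based_objective(s, a)
              + gamma * Q[s_next].max())
    Q[s, a] += lr * (target - Q[s, a])
```

The blended target keeps the value estimates anchored to the offline objective while the model-free term adapts online; the actual stabilization mechanism in the paper may differ.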
no code implementations • 26 Jun 2023 • Dhruv Shah, Ajay Sridhar, Nitish Dashora, Kyle Stachowicz, Kevin Black, Noriaki Hirose, Sergey Levine
In this paper, we describe the Visual Navigation Transformer (ViNT), a foundation model that aims to bring the success of general-purpose pre-trained models to vision-based robotic navigation.
no code implementations • 2 Jun 2023 • Noriaki Hirose, Dhruv Shah, Ajay Sridhar, Sergey Levine
By minimizing this counterfactual perturbation, we can induce robots to behave in ways that do not alter the natural behavior of humans in the shared space.
no code implementations • 14 Oct 2022 • Noriaki Hirose, Dhruv Shah, Ajay Sridhar, Sergey Levine
Machine learning techniques rely on large and diverse datasets for generalization.
1 code implementation • 7 Oct 2022 • Dhruv Shah, Ajay Sridhar, Arjun Bhorkar, Noriaki Hirose, Sergey Levine
Learning provides a powerful tool for vision-based navigation, but the capabilities of learning-based policies are constrained by limited training data.
no code implementations • 24 Mar 2022 • Shun Taguchi, Noriaki Hirose
Monocular camera re-localization is the task of estimating the absolute camera pose from a single image in a known environment; it has been studied intensively as an alternative localization method for GPS-denied environments.
no code implementations • 20 Oct 2021 • Noriaki Hirose, Kosuke Tahara
This framework is attractive for researchers because the depth and pose networks can be trained from time-sequence images alone, without ground-truth depth or poses.
no code implementations • 24 Nov 2020 • Noriaki Hirose, Shun Taguchi, Keisuke Kawano, Satoshi Koide
Self-supervised learning for monocular depth estimation is widely investigated as an alternative to supervised approaches, which require large amounts of ground-truth depth data.
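The common self-supervised objective compares a target frame against a frame reconstructed from a neighboring view using the predicted depth and pose. A simplified stand-in, with the view-warping step omitted and an L1 penalty assumed (the paper's loss may differ):

```python
import numpy as np

def photometric_loss(target, reconstructed):
    """L1 photometric loss between a target frame and a frame
    reconstructed by warping a neighboring view with the
    predicted depth and camera pose (warping omitted here)."""
    return np.abs(target - reconstructed).mean()

img = np.random.default_rng(1).random((8, 8, 3))
zero_loss = photometric_loss(img, img)          # identical frames
shift_loss = photometric_loss(img, img + 0.1)   # perturbed frame
```

When the predicted depth and pose are accurate, the reconstruction matches the target and the loss vanishes, which is what lets the networks train without ground truth.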
no code implementations • 3 Jun 2020 • Noriaki Hirose, Satoshi Koide, Keisuke Kawano, Ruho Kondo
We propose a novel objective for penalizing geometric inconsistencies to improve the depth and pose estimation performance of monocular camera images.
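One generic way to penalize geometric inconsistency is to compare the depth predicted for one view against the depth of a second view projected into it. The normalized form below is a common choice in the self-supervised depth literature, offered as an assumption rather than the paper's exact objective:

```python
import numpy as np

def depth_consistency_penalty(depth_a, depth_b_warped):
    """Normalized absolute difference between the depth map of
    one view and a second view's depth warped into the same
    frame. Zero when the two geometries agree."""
    diff = np.abs(depth_a - depth_b_warped)
    return (diff / (depth_a + depth_b_warped + 1e-8)).mean()

uniform = np.full((4, 4), 2.0)
consistent = depth_consistency_penalty(uniform, uniform)
inconsistent = depth_consistency_penalty(uniform, uniform * 2.0)
```

The normalization keeps the penalty scale-invariant, so near and far regions contribute comparably.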
no code implementations • 22 Jun 2018 • Noriaki Hirose, Amir Sadeghian, Fei Xia, Roberto Martin-Martin, Silvio Savarese
We present VUNet, a novel view (VU) synthesis method for mobile robots in dynamic environments, and its application to the estimation of future traversability.
1 code implementation • CVPR 2019 • Amir Sadeghian, Vineet Kosaraju, Ali Sadeghian, Noriaki Hirose, S. Hamid Rezatofighi, Silvio Savarese
The social attention component, in turn, aggregates information across the different agent interactions and extracts the most important trajectory information from the surrounding neighbors.
Ranked #4 on Trajectory Prediction on Stanford Drone (ADE (8/12) @K=5 metric)
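Attention-based aggregation over neighbor features can be sketched with standard scaled dot-product attention. This is a generic illustration, not the SoPhie architecture; the feature dimensions and pooling form are assumptions:

```python
import numpy as np

def social_attention(query, neighbor_feats):
    """Scaled dot-product attention pooling: weight each
    neighbor's trajectory feature by its relevance to the
    query agent, then return the weighted sum."""
    d = query.shape[-1]
    scores = neighbor_feats @ query / np.sqrt(d)   # (N,)
    weights = np.exp(scores - scores.max())        # stable softmax
    weights /= weights.sum()
    return weights @ neighbor_feats                # (d,)

feats = np.ones((3, 4))        # 3 identical neighbor features
query = np.ones(4)
pooled = social_attention(query, feats)
```

Because the weights form a softmax, the pooled vector is a convex combination of the neighbor features, so the most relevant neighbors dominate the summary.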
no code implementations • 8 Mar 2018 • Noriaki Hirose, Amir Sadeghian, Marynel Vázquez, Patrick Goebel, Silvio Savarese
We present semi-supervised deep learning approaches for traversability estimation from fisheye images.
no code implementations • 16 Sep 2017 • Noriaki Hirose, Amir Sadeghian, Patrick Goebel, Silvio Savarese
As they navigate dynamic environments, robots must be able to decide whether they can pass through a given space.