no code implementations • 28 Mar 2024 • Avinash Ummadisingu, Jongkeum Choi, Koki Yamane, Shimpei Masuda, Naoki Fukaya, Kuniyuki Takahashi
Acquiring accurate depth information of transparent objects using off-the-shelf RGB-D cameras is a well-known challenge in Computer Vision and Robotics.
no code implementations • 10 Mar 2022 • Avinash Ummadisingu, Kuniyuki Takahashi, Naoki Fukaya
To address this problem, we propose a method that trains purely on synthetic data and transfers successfully to the real world via sim2real techniques, creating datasets of filled food trays from high-quality 3D models of real food pieces to train instance segmentation models.
1 code implementation • 27 Sep 2019 • Kuniyuki Takahashi, Kenta Yonekura
An invisible marker is imperceptible under visible (regular) light, but becomes visible under invisible light, such as ultraviolet (UV) light.
1 code implementation • 9 Mar 2018 • Kuniyuki Takahashi, Jethro Tan
Estimation of tactile properties from vision, such as slipperiness or roughness, is important to effectively interact with the environment.
no code implementations • 17 Oct 2017 • Ayaka Kume, Eiichi Matsumoto, Kuniyuki Takahashi, Wilson Ko, Jethro Tan
To solve this problem, we propose Map-based Multi-Policy Reinforcement Learning (MMPRL), which searches for and stores multiple policies that encode different behavioral features while maximizing the expected reward, in advance of any environment change.
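The idea of storing many behaviorally diverse policies in a map, then selecting the best-performing one after the environment changes, can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the class name `PolicyArchive`, the binning scheme, and the assumption that behavior descriptors lie in [0, 1] are all hypothetical.

```python
# Hypothetical minimal sketch of a map-based policy archive in the spirit of
# MMPRL: policies are stored in a map keyed by a discretized behavior
# descriptor, and each cell keeps only the highest-reward policy seen so far.

class PolicyArchive:
    def __init__(self, bins_per_dim=5):
        self.bins = bins_per_dim
        self.cells = {}  # behavior-descriptor cell -> (reward, policy)

    def _cell(self, descriptor):
        # Discretize each behavior dimension (assumed in [0, 1]) into bins.
        return tuple(min(int(d * self.bins), self.bins - 1) for d in descriptor)

    def add(self, policy, descriptor, reward):
        # Keep the policy only if its cell is empty or it beats the incumbent.
        key = self._cell(descriptor)
        if key not in self.cells or reward > self.cells[key][0]:
            self.cells[key] = (reward, policy)

    def best_for(self, evaluate):
        # After an environment change, re-evaluate the stored policies and
        # return the one that performs best under the new conditions.
        return max(self.cells.values(), key=lambda rp: evaluate(rp[1]))[1]


archive = PolicyArchive(bins_per_dim=5)
archive.add("walk",  (0.10, 0.20), reward=1.0)
archive.add("crawl", (0.12, 0.22), reward=2.0)  # same cell as "walk": replaces it
archive.add("hop",   (0.90, 0.90), reward=1.5)  # distinct cell

# Simulate an environment change where "hop" suddenly performs best.
best = archive.best_for(lambda p: {"crawl": 0.5, "hop": 3.0}[p])
```

Filling many cells before deployment is what provides robustness: when the environment changes, the agent does not retrain from scratch but re-scores its stored repertoire and switches to whichever policy now works.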
1 code implementation • 17 Oct 2017 • Jun Hatori, Yuta Kikuchi, Sosuke Kobayashi, Kuniyuki Takahashi, Yuta Tsuboi, Yuya Unno, Wilson Ko, Jethro Tan
In this paper, we propose the first comprehensive system that can handle unconstrained spoken language and is able to effectively resolve ambiguity in spoken instructions.