Search Results for author: Kuniyuki Takahashi

Found 6 papers, 3 papers with code

SAID-NeRF: Segmentation-AIDed NeRF for Depth Completion of Transparent Objects

no code implementations • 28 Mar 2024 • Avinash Ummadisingu, Jongkeum Choi, Koki Yamane, Shimpei Masuda, Naoki Fukaya, Kuniyuki Takahashi

Acquiring accurate depth information of transparent objects using off-the-shelf RGB-D cameras is a well-known challenge in Computer Vision and Robotics.

Depth Completion, Depth Estimation, +3

Cluttered Food Grasping with Adaptive Fingers and Synthetic-Data Trained Object Detection

no code implementations • 10 Mar 2022 • Avinash Ummadisingu, Kuniyuki Takahashi, Naoki Fukaya

To address this problem, we propose a method that trains purely on synthetic data and transfers successfully to the real world via sim2real techniques, creating datasets of filled food trays from high-quality 3D models of real pieces of food to train instance segmentation models.

Instance Segmentation, Object Detection, +2
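
The cluttered food grasping paper above builds its training set entirely from rendered 3D models of food pieces. As a loose illustration only (stand-in elliptical "pieces" instead of rendered models, and no sim2real machinery), the sketch below shows how cluttered tray scenes with per-instance masks can be composed synthetically for training an instance segmentation model.

```python
# Minimal sketch (not the authors' pipeline): composing cluttered synthetic
# tray scenes with per-instance masks. Food "pieces" are stand-in ellipses;
# the paper renders high-quality 3D models of real food instead.
import numpy as np

RNG = np.random.default_rng(0)
H, W = 256, 256

def make_scene(n_pieces=12):
    """Return an RGB image of a filled tray and one binary mask per piece."""
    image = np.full((H, W, 3), 200, dtype=np.uint8)   # plain tray background
    masks = []
    yy, xx = np.mgrid[0:H, 0:W]
    for _ in range(n_pieces):
        cy, cx = RNG.integers(20, H - 20), RNG.integers(20, W - 20)
        ry, rx = RNG.integers(8, 24), RNG.integers(8, 24)
        mask = ((yy - cy) / ry) ** 2 + ((xx - cx) / rx) ** 2 <= 1.0
        color = RNG.integers(0, 255, size=3)
        image[mask] = color                            # later pieces occlude earlier ones
        masks = [m & ~mask for m in masks]             # keep only the still-visible parts
        masks.append(mask)
    return image, [m for m in masks if m.any()]

image, masks = make_scene()
print(image.shape, len(masks), "instance masks")       # feed into an instance-segmentation trainer
```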

Invisible Marker: Automatic Annotation of Segmentation Masks for Object Manipulation

1 code implementation • 27 Sep 2019 • Kuniyuki Takahashi, Kenta Yonekura

The invisible marker is invisible under visible (regular) light, but becomes visible under invisible light such as ultraviolet (UV) light.

Segmentation, Semantic Segmentation
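
The invisible-marker idea above amounts to capturing an aligned image pair, one under regular light and one under UV light, and thresholding the glowing marker in the UV frame to obtain a mask that labels the regular-light frame for free. The sketch below is a minimal, assumption-laden illustration of that step (dummy arrays, a plain brightness threshold), not the authors' annotation pipeline.

```python
# Minimal sketch: a fluorescent marker painted on the target is invisible in
# the regular-light image but glows in the UV-light image, so thresholding the
# UV image yields a segmentation mask that annotates the regular-light image.
import numpy as np
import cv2

def mask_from_uv(uv_bgr, brightness_thresh=180):
    """Binary mask of pixels where the marker glows under UV light."""
    gray = cv2.cvtColor(uv_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, brightness_thresh, 255, cv2.THRESH_BINARY)
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # drop speckle noise

# Dummy stand-ins for an aligned (regular-light, UV-light) image pair.
regular = np.zeros((240, 320, 3), np.uint8)
uv = np.zeros((240, 320, 3), np.uint8)
uv[100:160, 120:220] = 255                                   # glowing marker region

mask = mask_from_uv(uv)
# (regular, mask) is now one automatically labelled training example.
print(mask.dtype, int((mask > 0).sum()), "annotated pixels")
```

Capturing both frames from a fixed camera keeps the pair pixel-aligned, so the mask obtained under UV light transfers directly to the regular-light image.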

Deep Visuo-Tactile Learning: Estimation of Tactile Properties from Images

1 code implementation • 9 Mar 2018 • Kuniyuki Takahashi, Jethro Tan

Estimation of tactile properties from vision, such as slipperiness or roughness, is important to effectively interact with the environment.

Robotics

Map-based Multi-Policy Reinforcement Learning: Enhancing Adaptability of Robots by Deep Reinforcement Learning

no code implementations • 17 Oct 2017 • Ayaka Kume, Eiichi Matsumoto, Kuniyuki Takahashi, Wilson Ko, Jethro Tan

To solve this problem, we propose Map-based Multi-Policy Reinforcement Learning (MMPRL), which searches for and stores multiple policies that encode different behavioral features while maximizing the expected reward, in advance of any environment change.

Bayesian Optimization, Reinforcement Learning, +2
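
The MMPRL summary above has two phases: fill a map with behaviorally diverse, high-reward policies before deployment, then search that map for a policy that still works once the environment changes. The paper trains the policies with deep reinforcement learning and performs the search with Bayesian optimization; the toy sketch below replaces both with stand-ins purely to illustrate the map-then-search structure, and is not the authors' implementation.

```python
# Toy sketch of the two-phase idea: (1) MAP-Elites-style map of the best
# policy per behaviour cell, built in advance; (2) after the environment
# changes, search the stored policies for one that still performs well.
import numpy as np

rng = np.random.default_rng(0)

def behaviour(policy):               # toy behaviour descriptor: mean action, binned
    return int(np.clip(policy.mean() * 5 + 5, 0, 9))

def reward(policy, damaged=False):   # toy reward; "damage" changes what works
    target = -0.5 if damaged else 0.5
    return -np.abs(policy.mean() - target)

# Phase 1: build the map in advance, keeping the best policy per cell.
policy_map = {}
for _ in range(2000):
    policy = rng.uniform(-1, 1, size=8)
    cell = behaviour(policy)
    if cell not in policy_map or reward(policy) > reward(policy_map[cell]):
        policy_map[cell] = policy

# Phase 2: environment changed; pick the stored policy that adapts best.
# (Stand-in for Bayesian optimization: exhaustively test each cell once.)
best_cell = max(policy_map, key=lambda c: reward(policy_map[c], damaged=True))
print("selected cell", best_cell, "reward", reward(policy_map[best_cell], damaged=True))
```

In the paper, the exhaustive test in phase 2 would instead be a Bayesian optimization loop over the map's behaviour descriptors, which needs far fewer real-world trials to find a well-performing stored policy.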

Interactively Picking Real-World Objects with Unconstrained Spoken Language Instructions

1 code implementation • 17 Oct 2017 • Jun Hatori, Yuta Kikuchi, Sosuke Kobayashi, Kuniyuki Takahashi, Yuta Tsuboi, Yuya Unno, Wilson Ko, Jethro Tan

In this paper, we propose the first comprehensive system that can handle unconstrained spoken language and is able to effectively resolve ambiguity in spoken instructions.

Object Detection
