Search Results for author: Shigeki SUGANO

Found 5 papers, 1 paper with code

Realtime Motion Generation with Active Perception Using Attention Mechanism for Cooking Robot

no code implementations26 Sep 2023 Namiko Saito, Mayu Hiramoto, Ayuna Kubo, Kanata Suzuki, Hiroshi Ito, Shigeki SUGANO, Tetsuya OGATA

We tackled the task of cooking scrambled eggs using real ingredients, in which the robot needs to perceive the state of the egg and adjust its stirring movements in real time while the egg is heated and its state changes continuously.
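The abstract does not describe the network itself, so the following is only a minimal sketch of the general idea named in the title (attention-based realtime motion generation): a small CNN encoder with spatial soft attention feeding a recurrent cell that predicts the next joint command. All layer sizes, names, and the control-loop step are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' model): CNN encoder + spatial soft
# attention + LSTM cell predicting the next joint command from a camera image.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveMotionGenerator(nn.Module):
    def __init__(self, n_joints=7, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(            # image -> feature map
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
        )
        self.attn = nn.Conv2d(32, 1, 1)          # per-location attention score
        self.rnn = nn.LSTMCell(32 + n_joints, hidden)
        self.out = nn.Linear(hidden, n_joints)   # next joint command

    def forward(self, image, joints, state):
        feat = self.encoder(image)                                # (B, 32, H, W)
        B, C, H, W = feat.shape
        weights = F.softmax(self.attn(feat).view(B, -1), dim=1)   # soft attention over locations
        attended = (feat.view(B, C, -1) * weights.unsqueeze(1)).sum(-1)  # (B, 32)
        h, c = self.rnn(torch.cat([attended, joints], dim=1), state)
        return self.out(h), (h, c)

# One step of a (hypothetical) realtime control loop:
model = AttentiveMotionGenerator()
state = (torch.zeros(1, 128), torch.zeros(1, 128))
image, joints = torch.randn(1, 3, 64, 64), torch.zeros(1, 7)
next_joints, state = model(image, joints, state)
```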

How to select and use tools? : Active Perception of Target Objects Using Multimodal Deep Learning

no code implementations4 Jun 2021 Namiko Saito, Tetsuya OGATA, Satoshi Funabashi, Hiroki Mori, Shigeki SUGANO

We also examine the contributions of images, force, and tactile data and show that learning a variety of multimodal information results in rich perception for tool use.
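As a rough illustration of combining image, force, and tactile data (the snippet above does not give the actual architecture), the sketch below encodes each modality separately and concatenates the embeddings before a decision head. Every dimension, module name, and output here is an assumption for illustration only.

```python
# Hypothetical sketch (not the paper's architecture): separate encoders for
# image, force, and tactile streams, fused by concatenation for a tool choice.
import torch
import torch.nn as nn

class MultimodalEncoder(nn.Module):
    def __init__(self, force_dim=6, tactile_dim=16, n_tools=3):
        super().__init__()
        self.image_enc = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 32),
        )
        self.force_enc = nn.Sequential(nn.Linear(force_dim, 32), nn.ReLU())
        self.tactile_enc = nn.Sequential(nn.Linear(tactile_dim, 32), nn.ReLU())
        self.head = nn.Linear(32 * 3, n_tools)   # e.g. which tool to select

    def forward(self, image, force, tactile):
        fused = torch.cat(
            [self.image_enc(image), self.force_enc(force), self.tactile_enc(tactile)],
            dim=1,
        )
        return self.head(fused)

logits = MultimodalEncoder()(
    torch.randn(1, 3, 64, 64), torch.randn(1, 6), torch.randn(1, 16)
)
```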


Rethinking Self-Driving: Multi-Task Knowledge for Better Generalization and Accident Explanation Ability

no code implementations ICLR 2019 Zhihao LI, Toshiyuki MOTOYOSHI, Kazuma Sasaki, Tetsuya OGATA, Shigeki SUGANO

Current end-to-end deep learning driving models have two problems: (1) poor generalization to unobserved driving environments when the diversity of the training dataset is limited, and (2) a lack of accident explanation ability when the driving model does not work as expected.

Rethinking Self-driving: Multi-task Knowledge for Better Generalization and Accident Explanation Ability

1 code implementation28 Sep 2018 Zhihao Li, Toshiyuki Motoyoshi, Kazuma Sasaki, Tetsuya OGATA, Shigeki SUGANO

Current end-to-end deep learning driving models have two problems: (1) poor generalization to unobserved driving environments when the diversity of the training dataset is limited, and (2) a lack of accident explanation ability when the driving model does not work as expected.
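As a loose illustration of the multi-task idea (a shared encoder whose auxiliary perception head can be inspected when the driving head misbehaves), here is a minimal sketch; the heads, layer sizes, and outputs are assumptions, not the paper's network.

```python
# Hypothetical sketch: shared image backbone with a segmentation-style
# perception head (inspectable for explanation) and a driving-command head.
import torch
import torch.nn as nn

class MultiTaskDriver(nn.Module):
    def __init__(self, n_classes=5, n_commands=2):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(32, n_classes, 1)        # perception task
        self.drive_head = nn.Sequential(                    # control task
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_commands)
        )

    def forward(self, image):
        feat = self.backbone(image)
        return self.seg_head(feat), self.drive_head(feat)   # per-pixel classes, [steer, throttle]

seg_logits, command = MultiTaskDriver()(torch.randn(1, 3, 128, 128))
```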

Detecting Features of Tools, Objects, and Actions from Effects in a Robot using Deep Learning

no code implementations23 Sep 2018 Namiko Saito, Kitae Kim, Shingo Murata, Tetsuya OGATA, Shigeki SUGANO

We confirm that the robot is capable of detecting features of tools, objects, and actions by learning the effects and executing the task.
