Search Results for author: Kanata Suzuki

Found 10 papers, 2 papers with code

Realtime Motion Generation with Active Perception Using Attention Mechanism for Cooking Robot

no code implementations · 26 Sep 2023 · Namiko Saito, Mayu Hiramoto, Ayuna Kubo, Kanata Suzuki, Hiroshi Ito, Shigeki Sugano, Tetsuya Ogata

We tackled the task of cooking scrambled eggs using real ingredients, in which the robot must perceive the state of the egg and adjust its stirring movement in real time while the egg is heated and its state changes continuously.
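A minimal sketch of the kind of attention-based realtime motion generation this entry describes (assumptions mine, not the authors' implementation): a CNN encodes the camera image, a learned spatial attention weights task-relevant regions such as the egg, and an LSTM predicts the next joint command from the attended features and the current joint state.

```python
import torch
import torch.nn as nn

class AttentionMotionNet(nn.Module):
    def __init__(self, joint_dim=7, feat_ch=32, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(             # image -> feature map
            nn.Conv2d(3, feat_ch, 5, stride=2), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 5, stride=2), nn.ReLU(),
        )
        self.attn = nn.Conv2d(feat_ch, 1, 1)      # per-location attention logits
        self.rnn = nn.LSTMCell(feat_ch + joint_dim, hidden)
        self.head = nn.Linear(hidden, joint_dim)  # next joint command

    def forward(self, image, joints, state):
        f = self.encoder(image)                          # (B, C, H, W)
        w = torch.softmax(self.attn(f).flatten(2), -1)   # attention over H*W
        ctx = (f.flatten(2) * w).sum(-1)                 # attended feature (B, C)
        h, c = self.rnn(torch.cat([ctx, joints], -1), state)
        return self.head(h), (h, c)

# One step of a hypothetical realtime control loop:
net = AttentionMotionNet()
state = (torch.zeros(1, 128), torch.zeros(1, 128))
cmd, state = net(torch.rand(1, 3, 64, 64), torch.zeros(1, 7), state)
```

Because the recurrent state persists across steps, the same forward call can run in a closed perception-action loop as the egg's state evolves.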

Learning Bidirectional Translation between Descriptions and Actions with Small Paired Data

no code implementations · 8 Mar 2022 · Minori Toyoda, Kanata Suzuki, Yoshihiko Hayashi, Tetsuya Ogata

We experimentally evaluated our method using a paired dataset consisting of motion-captured actions and descriptions.

Translation
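One plausible reading of "bidirectional translation with small paired data" is a pair of sequence autoencoders with a shared latent space, where a binding loss pulls the latents of a paired description and action together. The sketch below illustrates that idea; the architecture and dimensions are assumptions, not the paper's code.

```python
import torch
import torch.nn as nn

class SeqAE(nn.Module):
    """Simple GRU autoencoder over fixed-length feature sequences."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.enc = nn.GRU(dim, hidden, batch_first=True)
        self.dec = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, dim)

    def encode(self, x):
        _, h = self.enc(x)
        return h[-1]                          # latent: (B, hidden)

    def decode(self, z, steps):
        zs = z.unsqueeze(1).expand(-1, steps, -1)
        y, _ = self.dec(zs)
        return self.out(y)

desc_ae, act_ae = SeqAE(dim=300), SeqAE(dim=7)   # word vectors / joint angles
mse = nn.MSELoss()

def paired_loss(desc, act):
    zd, za = desc_ae.encode(desc), act_ae.encode(act)
    rec = mse(desc_ae.decode(zd, desc.size(1)), desc) \
        + mse(act_ae.decode(za, act.size(1)), act)
    bind = mse(zd, za)                        # align paired latents
    return rec + bind

# Translation at test time: encode a description, decode an action.
z = desc_ae.encode(torch.rand(1, 5, 300))
action = act_ae.decode(z, steps=30)           # (1, 30, 7)
```

Because each autoencoder also trains on its own reconstruction, only the binding term needs paired data, which is one way such a model can get by with a small paired dataset.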

Three approaches to facilitate DNN generalization to objects in out-of-distribution orientations and illuminations

1 code implementation · 30 Oct 2021 · Akira Sakai, Taro Sunagawa, Spandan Madan, Kanata Suzuki, Takashi Katoh, Hiromichi Kobashi, Hanspeter Pfister, Pawan Sinha, Xavier Boix, Tomotake Sasaki

While humans have a remarkable capability to recognize objects in out-of-distribution (OoD) orientations and illuminations, Deep Neural Networks (DNNs) suffer severely in this setting, even when large numbers of training examples are available.

Embodying Pre-Trained Word Embeddings Through Robot Actions

no code implementations · 17 Apr 2021 · Minori Toyoda, Kanata Suzuki, Hiroki Mori, Yoshihiko Hayashi, Tetsuya Ogata

These embeddings allow the robot to properly generate actions from unseen words that are not paired with actions in a dataset.

Translation, Word Embeddings
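The generalization claim in this entry, generating actions for unseen words, follows from conditioning the action generator on pre-trained word embeddings rather than one-hot word IDs. A hedged toy sketch (the embeddings, words, and decoder here are illustrative stand-ins, not the paper's setup):

```python
import torch
import torch.nn as nn

EMB = {  # toy stand-ins for pre-trained embeddings (e.g. word2vec/GloVe)
    "push": torch.tensor([1.0, 0.0, 0.1]),
    "pull": torch.tensor([-1.0, 0.0, 0.1]),
    "turn": torch.tensor([0.0, 1.0, 0.2]),
    "rotate": torch.tensor([0.05, 0.95, 0.2]),  # unseen at training time
}

decoder = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 30 * 7))

def generate_action(word):
    traj = decoder(EMB[word])                 # embedding -> flat trajectory
    return traj.view(30, 7)                   # 30 steps x 7 joint angles

# After training on ("push", "pull", "turn"), "rotate" still yields a
# turn-like trajectory because its embedding lies near "turn".
print(generate_action("rotate").shape)        # torch.Size([30, 7])
```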

In-air Knotting of Rope using Dual-Arm Robot based on Deep Learning

1 code implementation · 17 Mar 2021 · Kanata Suzuki, Momomi Kanamura, Yuki Suga, Hiroki Mori, Tetsuya Ogata

However, it is difficult to manually describe, in advance, appropriate robot motions corresponding to all possible object states.

A Multi-task Learning Framework for Grasping-Position Detection and Few-Shot Classification

no code implementations · 12 Mar 2020 · Yasuto Yokota, Kanata Suzuki, Yuzi Kanazawa, Tomoyoshi Takebayashi

However, the DNN used to detect grasping positions has two problems with respect to extracting feature vectors from a layer for shape classification: (1) because each layer of the grasping-position detection DNN is activated by all objects in the input image, the features must be refined for each grasping position.

Robotics
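Problem (1) in the snippet, a detection feature map that mixes every object in the image, is commonly addressed by cropping the features around each detected grasp. A hypothetical illustration using ROI-Align (the coordinates, strides, and sizes below are assumptions, not the paper's values):

```python
import torch
from torchvision.ops import roi_align

feat = torch.rand(1, 256, 32, 32)             # backbone feature map (stride 8)
grasps = torch.tensor([[80.0, 96.0],          # detected grasp centers (pixels)
                       [160.0, 40.0]])

# Build a small box around each grasp center, in image coordinates.
half = 16.0
boxes = torch.cat([grasps - half, grasps + half], dim=1)   # (N, 4) x1,y1,x2,y2
rois = torch.cat([torch.zeros(len(boxes), 1), boxes], 1)   # prepend batch index

crops = roi_align(feat, rois, output_size=(4, 4), spatial_scale=1 / 8)
vecs = crops.mean(dim=(2, 3))                 # one 256-d feature per grasp
print(vecs.shape)                             # torch.Size([2, 256])
```

Each grasp then owns a feature vector influenced mainly by its local region, which is what a downstream few-shot shape classifier needs.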

Online Self-Supervised Learning for Object Picking: Detecting Optimum Grasping Position using a Metric Learning Approach

no code implementations · 8 Mar 2020 · Kanata Suzuki, Yasuto Yokota, Yuzi Kanazawa, Tomoyoshi Takebayashi

The system consists of two DNNs: an SSD that detects the grasping position of an object, and Siamese networks (SNs) that evaluate a trial sample using the similarity of two inputs in the feature space.

Metric Learning, Object
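The Siamese-network half of that pipeline reduces to a shared encoder and a distance in feature space. A minimal sketch under my own assumptions (patch size, encoder, and distance are illustrative, not the authors' code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Siamese(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(          # shared weights for both inputs
            nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 64),
        )

    def forward(self, a, b):
        za, zb = self.encoder(a), self.encoder(b)
        return F.pairwise_distance(za, zb)     # small distance = similar grasps

net = Siamese()
trial = torch.rand(1, 3, 64, 64)               # patch around a new grasp trial
success = torch.rand(1, 3, 64, 64)             # patch from a past success
print(net(trial, success))                     # similarity score

# Training would use a contrastive loss: pull success/success pairs
# together and push success/failure pairs at least a margin apart.
```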
