Search Results for author: Tetsuya OGATA

Found 25 papers, 5 papers with code

A Peg-in-hole Task Strategy for Holes in Concrete

no code implementations · 29 Mar 2024 · André Yuji Yasutomi, Hiroki Mori, Tetsuya OGATA

Since the displacement increases as the peg gets closer to the hole (due to the chamfered shape of holes in concrete), it is a useful input parameter for the DNN.

Friction
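
The listing ships no code, but the idea of feeding the measured displacement to a network alongside other sensing can be sketched generically. This is a minimal, hypothetical PyTorch sketch; the layer sizes, the force/torque input, and the discrete search-action output are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Hypothetical sketch, not the paper's published model: a small MLP
# that takes the peg displacement together with 6-axis force/torque
# readings and scores candidate search motions.
class HoleSearchNet(nn.Module):
    def __init__(self, n_ft=6, n_actions=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1 + n_ft, 64),   # 1 displacement value + F/T vector
            nn.ReLU(),
            nn.Linear(64, n_actions),  # e.g. four lateral search directions
        )

    def forward(self, displacement, ft):
        # The displacement grows near the chamfered hole edge, so the
        # network can exploit it as a proximity cue.
        return self.net(torch.cat([displacement, ft], dim=-1))
```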

Visual Spatial Attention and Proprioceptive Data-Driven Reinforcement Learning for Robust Peg-in-Hole Task Under Variable Conditions

no code implementations · 27 Dec 2023 · André Yuji Yasutomi, Hideyuki Ichiwara, Hiroshi Ito, Hiroki Mori, Tetsuya OGATA

In this study, we introduce a vision and proprioceptive data-driven robot control model for this task that is robust to challenging lighting and hole surface conditions.

Realtime Motion Generation with Active Perception Using Attention Mechanism for Cooking Robot

no code implementations · 26 Sep 2023 · Namiko Saito, Mayu Hiramoto, Ayuna Kubo, Kanata Suzuki, Hiroshi Ito, Shigeki SUGANO, Tetsuya OGATA

We tackled the task of cooking scrambled eggs using real ingredients, in which the robot needs to perceive the state of the egg and adjust its stirring movements in real time while the egg is heated and its state changes continuously.

Deep Active Visual Attention for Real-time Robot Motion Generation: Emergence of Tool-body Assimilation and Adaptive Tool-use

no code implementations · 29 Jun 2022 · Hyogo Hiruma, Hiroshi Ito, Hiroki Mori, Tetsuya OGATA

The model incorporates a state-driven active top-down visual attention module, which acquires attention points that can actively shift targets based on task states.
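
The abstract does not specify the module's internals, but a state-driven top-down spatial attention can be sketched generically: a task-state vector produces a query that scores each location of the visual feature map, so the attended target can move as the task state changes. All shapes and names below are illustrative, not the paper's module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Generic sketch of state-driven top-down spatial attention; the
# actual module in the paper may differ.
class TopDownSpatialAttention(nn.Module):
    def __init__(self, feat_dim=32, state_dim=16):
        super().__init__()
        self.query = nn.Linear(state_dim, feat_dim)

    def forward(self, fmap, state):
        # fmap: (B, C, H, W) visual features; state: (B, state_dim).
        b, c, h, w = fmap.shape
        q = self.query(state)                            # (B, C) query
        scores = torch.einsum('bchw,bc->bhw', fmap, q)   # per-location score
        attn = F.softmax(scores.view(b, -1), dim=-1).view(b, h, w)
        # Attention-weighted feature vector ("glimpse") for the policy.
        glimpse = torch.einsum('bchw,bhw->bc', fmap, attn)
        return glimpse, attn
```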

Learning Bidirectional Translation between Descriptions and Actions with Small Paired Data

no code implementations · 8 Mar 2022 · Minori Toyoda, Kanata Suzuki, Yoshihiko Hayashi, Tetsuya OGATA

We experimentally evaluated our method using a paired dataset consisting of motion-captured actions and descriptions.

Translation

Emergence of sensory attenuation based upon the free-energy principle

1 code implementation · 4 Nov 2021 · Hayato Idei, Wataru Ohata, Yuichi Yamashita, Tetsuya OGATA, Jun Tani

Consequently, shifts between the two sensorimotor contexts triggered transitions from one free-energy state to another in the network via executive control, which caused shifts between attenuating and amplifying prediction-error-induced responses in the sensory areas.
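
For readers unfamiliar with the framework: under the usual Gaussian assumptions, the free energy of one sensory channel reduces to a precision-weighted prediction error. This is the textbook form, not the paper's exact network equations:

```latex
% Gaussian free energy of one sensory channel (illustrative):
% prediction error weighted by the estimated precision 1/\sigma^2.
F \;\approx\; \frac{(s - \hat{s})^{2}}{2\sigma^{2}} + \frac{1}{2}\ln\sigma^{2}
```

Raising the estimated variance σ² (sensory attenuation) down-weights the error term so it drives responses less; lowering it amplifies prediction-error-induced responses, matching the two contexts described above.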

How to select and use tools?: Active Perception of Target Objects Using Multimodal Deep Learning

no code implementations · 4 Jun 2021 · Namiko Saito, Tetsuya OGATA, Satoshi Funabashi, Hiroki Mori, Shigeki SUGANO

We also examine the contributions of images, force, and tactile data and show that learning a variety of multimodal information results in rich perception for tool use.

Multimodal Deep Learning · Object

Embodying Pre-Trained Word Embeddings Through Robot Actions

no code implementations · 17 Apr 2021 · Minori Toyoda, Kanata Suzuki, Hiroki Mori, Yoshihiko Hayashi, Tetsuya OGATA

These embeddings allow the robot to properly generate actions from unseen words that are not paired with actions in a dataset.

Translation · Word Embeddings
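
One simple way to realize the "unseen word" behavior described above is to encode the new word with the same pretrained embedding table and reuse the action of its nearest trained neighbour. This is an illustrative baseline, not the paper's method; every name below is hypothetical.

```python
import numpy as np

# Illustrative nearest-neighbour baseline, not the paper's model:
# an unseen word is mapped into the shared embedding space and the
# action of the most similar trained word is reused.
def action_for_word(word, embed, trained_words, actions):
    v = embed[word]                     # pretrained embedding vector
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    nearest = max(trained_words, key=lambda w: cos(v, embed[w]))
    return actions[nearest]             # reuse the neighbour's action
```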

In-air Knotting of Rope using Dual-Arm Robot based on Deep Learning

1 code implementation · 17 Mar 2021 · Kanata Suzuki, Momomi Kanamura, Yuki Suga, Hiroki Mori, Tetsuya OGATA

However, it is difficult to prepare in advance a manual description of the appropriate robot motions for every object state.

Transferable Task Execution from Pixels through Deep Planning Domain Learning

no code implementations · 8 Mar 2020 · Kei Kase, Chris Paxton, Hammad Mazhar, Tetsuya OGATA, Dieter Fox

On the other hand, symbolic planning methods such as STRIPS have long been able to solve new problems given only a domain definition and a symbolic goal, but these approaches often struggle on real-world robotic tasks due to the challenge of grounding these symbols from sensor data in a partially observable world.
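
To make the contrast concrete: a STRIPS-style domain is just a set of operators with symbolic preconditions and effects, and the hard part on a robot is producing those ground predicates from pixels. A toy operator, with predicate names invented for illustration:

```python
from dataclasses import dataclass

# Toy STRIPS-style operator; predicate names are invented.
@dataclass(frozen=True)
class Operator:
    name: str
    preconditions: frozenset
    add_effects: frozenset
    delete_effects: frozenset

pick = Operator(
    name="pick(block)",
    preconditions=frozenset({"clear(block)", "handempty"}),
    add_effects=frozenset({"holding(block)"}),
    delete_effects=frozenset({"clear(block)", "handempty"}),
)

def apply(op, state):
    # A state is a set of ground predicates; the operator fires only
    # if every precondition is satisfied.
    if op.preconditions <= state:
        return (state - op.delete_effects) | op.add_effects
    return None
```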

RETHINKING SELF-DRIVING: MULTI-TASK KNOWLEDGE FOR BETTER GENERALIZATION AND ACCIDENT EXPLANATION ABILITY

no code implementations · ICLR 2019 · Zhihao LI, Toshiyuki MOTOYOSHI, Kazuma Sasaki, Tetsuya OGATA, Shigeki SUGANO

Current end-to-end deep learning driving models have two problems: (1) poor generalization ability to unobserved driving environments when the diversity of the training driving dataset is limited, and (2) lack of accident explanation ability when driving models don't work as expected.

CNN-based MultiChannel End-to-End Speech Recognition for everyday home environments

no code implementations · 7 Nov 2018 · Nelson Yalta, Shinji Watanabe, Takaaki Hori, Kazuhiro Nakadai, Tetsuya OGATA

By employing a convolutional neural network (CNN)-based multichannel end-to-end speech recognition system, this study attempts to overcome the difficulties present in everyday environments.

Automatic Speech Recognition · Automatic Speech Recognition (ASR) +1

Rethinking Self-driving: Multi-task Knowledge for Better Generalization and Accident Explanation Ability

1 code implementation · 28 Sep 2018 · Zhihao Li, Toshiyuki Motoyoshi, Kazuma Sasaki, Tetsuya OGATA, Shigeki SUGANO

Current end-to-end deep learning driving models have two problems: (1) poor generalization ability to unobserved driving environments when the diversity of the training driving dataset is limited, and (2) lack of accident explanation ability when driving models don't work as expected.

Detecting Features of Tools, Objects, and Actions from Effects in a Robot using Deep Learning

no code implementations · 23 Sep 2018 · Namiko Saito, Kitae Kim, Shingo Murata, Tetsuya OGATA, Shigeki SUGANO

We confirm that the robot is capable of detecting features of tools, objects, and actions by learning the effects and executing the task.

Weakly Supervised Deep Recurrent Neural Networks for Basic Dance Step Generation

3 code implementations · 3 Jul 2018 · Nelson Yalta, Shinji Watanabe, Kazuhiro Nakadai, Tetsuya OGATA

However, applying DNNs to generate dance for a piece of music is challenging, because 1) DNNs need to generate long sequences while mapping the music input, 2) the DNN needs to constrain the motion beat to the music, and 3) DNNs require a considerable amount of hand-crafted data.

Motion Estimation

Encoding Longer-term Contextual Multi-modal Information in a Predictive Coding Model

no code implementations · 17 Apr 2018 · Junpei Zhong, Tetsuya OGATA, Angelo Cangelosi

On the other hand, the incoming sensory information corrects such higher-level predictions of events via novel or surprising signals.

AFA-PredNet: The action modulation within predictive coding

no code implementations · 11 Apr 2018 · Junpei Zhong, Angelo Cangelosi, Xinzheng Zhang, Tetsuya OGATA

The predictive processing (PP) hypothesis states that the predictive inference of our sensorimotor system is encoded implicitly in the regularities between perception and action.

Causal Inference
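
In generic predictive-coding terms (illustrative notation, not the exact AFA-PredNet equations), each layer predicts the activity below it and propagates the residual error upward; "action modulation" means the motor command enters the top-down prediction:

```latex
% Prediction error at layer l, with the top-down prediction
% additionally conditioned on the action a_t (the modulation
% AFA-PredNet adds); notation is illustrative.
\varepsilon_t^{(l)} = x_t^{(l)} - \hat{x}_t^{(l)}, \qquad
\hat{x}_t^{(l)} = f^{(l)}\!\bigl(r_t^{(l+1)},\, a_t\bigr)
```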

General problem solving with category theory

no code implementations · 14 Sep 2017 · Francisco J. Arjonilla, Tetsuya OGATA

Generators realize cognitive operations over a system by grouping morphisms, whilst evaluators group objects as a way to generalize outsets and goals to partially defined states.

Toward Abstraction from Multi-modal Data: Empirical Studies on Multiple Time-scale Recurrent Models

no code implementations · 7 Feb 2017 · Junpei Zhong, Angelo Cangelosi, Tetsuya OGATA

This was done by conducting two studies based on a smaller dataset (two-dimensional time sequences from non-linear functions) and a relatively large dataset (43-dimensional time sequences from iCub manipulation tasks with multi-modal data).

Robot Manipulation · Text Generation
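
The "multiple time-scale" mechanism referred to here is the standard MTRNN leaky-integrator update, in which each unit's time constant τ sets how quickly it tracks its synaptic input: large-τ units change slowly and carry long-term context, small-τ units track fine detail. A minimal sketch (parameter shapes are illustrative):

```python
import numpy as np

# Minimal MTRNN-style leaky-integrator step; shapes are illustrative.
def mtrnn_step(u, x, W_rec, W_in, b, tau):
    # u: (N,) internal states; x: (D,) input; tau: (N,) time constants.
    a = np.tanh(u)                                  # unit activations
    z = W_rec @ a + W_in @ x + b                    # synaptic input
    # Units with large tau change slowly (context); small tau, quickly.
    return (1.0 - 1.0 / tau) * u + (1.0 / tau) * z
```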

Sensorimotor Input as a Language Generalisation Tool: A Neurorobotics Model for Generation and Generalisation of Noun-Verb Combinations with Sensorimotor Inputs

no code implementations · 11 May 2016 · Junpei Zhong, Martin Peniak, Jun Tani, Tetsuya OGATA, Angelo Cangelosi

The paper presents a neurorobotics cognitive model to explain the understanding and generalisation of noun-verb combinations when a vocal command consisting of a verb-noun sentence is provided to a humanoid robot.

Language Acquisition · Sentence

Symbol Emergence in Robotics: A Survey

no code implementations · 29 Sep 2015 · Tadahiro Taniguchi, Takayuki Nagai, Tomoaki Nakamura, Naoto Iwahashi, Tetsuya OGATA, Hideki Asoh

Humans can learn the use of language through physical interaction with their environment and semiotic communication with other people.
