no code implementations • 29 Mar 2024 • André Yuji Yasutomi, Hiroki Mori, Tetsuya OGATA
Since the displacement increases as the peg approaches the hole (owing to the chamfered shape of holes in concrete), it is a useful input feature for the DNN.
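To make the idea concrete, here is a minimal sketch (not the paper's architecture; all layer sizes and feature names are hypothetical) of a policy network that consumes the scalar displacement alongside other sensor features:

```python
# Minimal sketch, assuming a scalar displacement plus 6 extra sensor
# features; the displacement grows near the hole because of the chamfer,
# so it implicitly signals proximity to the goal.
import torch
import torch.nn as nn

class PegInsertionPolicy(nn.Module):
    def __init__(self, n_extra_features: int = 6, n_actions: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1 + n_extra_features, 64),  # +1 input for displacement
            nn.ReLU(),
            nn.Linear(64, 64),
            nn.ReLU(),
            nn.Linear(64, n_actions),             # e.g. (dx, dy, dz) correction
        )

    def forward(self, displacement: torch.Tensor, extra: torch.Tensor):
        return self.net(torch.cat([displacement, extra], dim=-1))

policy = PegInsertionPolicy()
action = policy(torch.rand(1, 1), torch.rand(1, 6))  # dummy inputs
```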
no code implementations • 27 Dec 2023 • André Yuji Yasutomi, Hideyuki Ichiwara, Hiroshi Ito, Hiroki Mori, Tetsuya OGATA
In this study, we introduce a vision and proprioceptive data-driven robot control model for this task that is robust to challenging lighting and hole surface conditions.
no code implementations • 26 Sep 2023 • Namiko Saito, Mayu Hiramoto, Ayuna Kubo, Kanata Suzuki, Hiroshi Ito, Shigeki SUGANO, Tetsuya OGATA
We tackled the task of cooking scrambled eggs using real ingredients, in which the robot must perceive the state of the egg and adjust its stirring motion in real time while the egg is heated and its state changes continuously.
no code implementations • 30 Aug 2023 • Kazuki Hori, Kanata Suzuki, Tetsuya OGATA
The application of large language models (LLMs) to robot action planning has been actively studied.
no code implementations • 29 Jun 2022 • Hyogo Hiruma, Hiroshi Ito, Hiroki Mori, Tetsuya OGATA
The model incorporates a state-driven active top-down visual attention module, which learns attention that can actively shift its target according to the task state.
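A minimal sketch of the general idea (not the authors' model; shapes and names are assumed): a task-state vector generates a spatial attention map that re-weights CNN features, so the attended region can shift as the task state changes.

```python
import torch
import torch.nn as nn

class StateDrivenAttention(nn.Module):
    def __init__(self, state_dim: int = 16, channels: int = 32):
        super().__init__()
        self.query = nn.Linear(state_dim, channels)  # state -> channel query

    def forward(self, feat: torch.Tensor, state: torch.Tensor):
        # feat: (B, C, H, W) visual features; state: (B, state_dim)
        q = self.query(state)                               # (B, C)
        scores = torch.einsum("bchw,bc->bhw", feat, q)      # per-pixel score
        attn = torch.softmax(scores.flatten(1), dim=1).view_as(scores)
        return feat * attn.unsqueeze(1)                     # attended features

attn = StateDrivenAttention()
out = attn(torch.rand(2, 32, 8, 8), torch.rand(2, 16))
```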
no code implementations • 8 Mar 2022 • Minori Toyoda, Kanata Suzuki, Yoshihiko Hayashi, Tetsuya OGATA
We experimentally evaluated our method using a paired dataset consisting of motion-captured actions and descriptions.
1 code implementation • CVPR 2022 • Ryosuke Yamada, Hirokatsu Kataoka, Naoya Chiba, Yukiyasu Domae, Tetsuya OGATA
Moreover, the PC-FractalDB pre-trained model is especially effective when training data is limited.
Ranked #18 on 3D Object Detection on SUN-RGBD val (using extra training data)
1 code implementation • 4 Nov 2021 • Hayato Idei, Wataru Ohata, Yuichi Yamashita, Tetsuya OGATA, Jun Tani
Consequently, shifts between the two sensorimotor contexts triggered transitions from one free-energy state to another in the network via executive control, which caused shifts between attenuating and amplifying prediction-error-induced responses in the sensory areas.
no code implementations • 4 Jun 2021 • Namiko Saito, Tetsuya OGATA, Satoshi Funabashi, Hiroki Mori, Shigeki SUGANO
We also examine the contributions of images, force, and tactile data and show that learning a variety of multimodal information results in rich perception for tool use.
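As a rough illustration of this kind of multimodal fusion (assumed encoders and dimensions, not the paper's network), each modality can be embedded separately and the embeddings concatenated so downstream layers can exploit image, force, and tactile information jointly:

```python
import torch
import torch.nn as nn

class MultimodalEncoder(nn.Module):
    def __init__(self, img_dim=128, force_dim=6, tactile_dim=24, out_dim=64):
        super().__init__()
        self.img_fc = nn.Linear(img_dim, out_dim)       # image features
        self.force_fc = nn.Linear(force_dim, out_dim)   # wrench readings
        self.tact_fc = nn.Linear(tactile_dim, out_dim)  # tactile array
        self.fuse = nn.Linear(3 * out_dim, out_dim)     # joint representation

    def forward(self, img, force, tactile):
        z = torch.cat([self.img_fc(img), self.force_fc(force),
                       self.tact_fc(tactile)], dim=-1)
        return torch.relu(self.fuse(z))

enc = MultimodalEncoder()
z = enc(torch.rand(1, 128), torch.rand(1, 6), torch.rand(1, 24))
```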
no code implementations • 17 Apr 2021 • Minori Toyoda, Kanata Suzuki, Hiroki Mori, Yoshihiko Hayashi, Tetsuya OGATA
These embeddings allow the robot to properly generate actions from unseen words that are not paired with actions in a dataset.
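A toy sketch of the underlying mechanism (hypothetical words, vectors, and action ids): if word embeddings and actions share a space, an unseen word can be mapped to the action of its nearest embedded neighbour among the words seen during training.

```python
import numpy as np

trained = {  # word -> (embedding, action id), toy values
    "grasp": (np.array([1.0, 0.0]), 0),
    "push":  (np.array([0.0, 1.0]), 1),
}

def action_for(word_vec: np.ndarray) -> int:
    """Return the action paired with the nearest trained word embedding."""
    best = min(trained.values(),
               key=lambda ev: np.linalg.norm(ev[0] - word_vec))
    return best[1]

print(action_for(np.array([0.9, 0.2])))  # a "hold"-like vector -> 0 (grasp)
```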
1 code implementation • 17 Mar 2021 • Kanata Suzuki, Momomi Kanamura, Yuki Suga, Hiroki Mori, Tetsuya OGATA
However, it is difficult to manually prepare, in advance, descriptions of appropriate robot motions for every object state.
no code implementations • 18 Jan 2021 • Kanata Suzuki, Tetsuya OGATA
The learning instability caused by these unstable signals is a problem that remains to be solved in DRL.
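One common stabilization technique in DRL, shown here only as background and not necessarily the authors' remedy, is to keep a slowly moving target network via soft "Polyak" updates so the bootstrapped learning signal changes smoothly:

```python
import copy
import torch
import torch.nn as nn

q_net = nn.Linear(4, 2)            # toy Q-network
target_net = copy.deepcopy(q_net)  # slow copy used to compute targets

def soft_update(target: nn.Module, source: nn.Module, tau: float = 0.005):
    """Blend a small fraction tau of the source weights into the target."""
    with torch.no_grad():
        for t, s in zip(target.parameters(), source.parameters()):
            t.mul_(1.0 - tau).add_(tau * s)

soft_update(target_net, q_net)  # typically called after each optimisation step
```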
no code implementations • 31 Mar 2020 • Pin-Chu Yang, Mohammed Al-Sada, Chang-Chieh Chiu, Kevin Kuo, Tito Pradhono Tomo, Kanata Suzuki, Nelson Yalta, Kuo-Hao Shu, Tetsuya OGATA
Although numerous robots have been developed, few have focused on otaku culture or on embodying anime character figurines.
Action Generation • Cultural Vocal Bursts Intensity Prediction +1
no code implementations • 8 Mar 2020 • Kei Kase, Chris Paxton, Hammad Mazhar, Tetsuya OGATA, Dieter Fox
On the other hand, symbolic planning methods such as STRIPS have long been able to solve new problems given only a domain definition and a symbolic goal, but these approaches often struggle on real-world robotic tasks due to the challenges of grounding these symbols from sensor data in a partially observable world.
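To illustrate what "domain definition plus symbolic goal" means in general (this is a generic toy, not the paper's system), a minimal STRIPS-style planner can search over symbolic states, where each action has preconditions and add/delete effects:

```python
from collections import deque

# Domain definition: action -> (preconditions, add effects, delete effects)
actions = {
    "pick(block)":  ({"on_table(block)", "hand_empty"},
                     {"holding(block)"}, {"on_table(block)", "hand_empty"}),
    "place(block)": ({"holding(block)"},
                     {"on_table(block)", "hand_empty"}, {"holding(block)"}),
}

def plan(state: frozenset, goal: set):
    """Breadth-first search from the initial state to any goal-satisfying state."""
    queue, seen = deque([(state, [])]), {state}
    while queue:
        s, path = queue.popleft()
        if goal <= s:
            return path
        for name, (pre, add, delete) in actions.items():
            if pre <= s:
                nxt = frozenset((s - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [name]))

print(plan(frozenset({"on_table(block)", "hand_empty"}), {"holding(block)"}))
# -> ['pick(block)']
```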
no code implementations • ICLR 2019 • Zhihao LI, Toshiyuki MOTOYOSHI, Kazuma Sasaki, Tetsuya OGATA, Shigeki SUGANO
Current end-to-end deep learning driving models have two problems: (1) poor generalization to unobserved driving environments when the diversity of the training dataset is limited, and (2) a lack of accident-explanation ability when the driving models don't work as expected.
no code implementations • 7 Nov 2018 • Nelson Yalta, Shinji Watanabe, Takaaki Hori, Kazuhiro Nakadai, Tetsuya OGATA
By employing a convolutional neural network (CNN)-based multichannel end-to-end speech recognition system, this study attempts to overcome the difficulties present in everyday environments.
Automatic Speech Recognition (ASR) +2
1 code implementation • 28 Sep 2018 • Zhihao Li, Toshiyuki Motoyoshi, Kazuma Sasaki, Tetsuya OGATA, Shigeki SUGANO
Current end-to-end deep learning driving models have two problems: (1) poor generalization to unobserved driving environments when the diversity of the training dataset is limited, and (2) a lack of accident-explanation ability when the driving models don't work as expected.
no code implementations • 23 Sep 2018 • Namiko Saito, Kitae Kim, Shingo Murata, Tetsuya OGATA, Shigeki SUGANO
We confirm that the robot is capable of detecting features of tools, objects, and actions by learning the effects and executing the task.
3 code implementations • 3 Jul 2018 • Nelson Yalta, Shinji Watanabe, Kazuhiro Nakadai, Tetsuya OGATA
However, applying DNNs to generate dance for a piece of music is challenging because 1) the DNN needs to generate long sequences while mapping the music input, 2) it must constrain the motion beat to the music, and 3) it requires a considerable amount of hand-crafted data.
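A minimal sketch of such a music-to-motion sequence model (hypothetical feature and pose dimensions, not the paper's network): a recurrent network maps per-frame audio features to an equally long sequence of pose vectors.

```python
import torch
import torch.nn as nn

class Music2Dance(nn.Module):
    def __init__(self, audio_dim=40, pose_dim=51, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(audio_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, pose_dim)  # e.g. 17 joints x 3D coords

    def forward(self, audio_feats):             # (B, T, audio_dim)
        h, _ = self.lstm(audio_feats)
        return self.out(h)                      # (B, T, pose_dim)

model = Music2Dance()
poses = model(torch.rand(1, 200, 40))  # 200 audio frames -> 200 poses
```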
no code implementations • 17 Apr 2018 • Junpei Zhong, Tetsuya OGATA, Angelo Cangelosi
On the other hand, the incoming sensory information corrects such higher-level predictions of events through novelty or surprise signals.
no code implementations • 11 Apr 2018 • Junpei Zhong, Angelo Cangelosi, Xinzheng Zhang, Tetsuya OGATA
The predictive processing (PP) framework hypothesizes that the predictive inference of our sensorimotor system is encoded implicitly in the regularities between perception and action.
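For reference, a standard textbook formulation of this idea (from the free-energy literature generally, not taken from the paper): perception updates an internal state to minimize a precision-weighted prediction error.

```latex
% s: sensory input; g(\mu): prediction generated from internal state \mu;
% \Pi: precision (inverse variance); F: free energy, F = \tfrac{1}{2}\Pi\varepsilon^2.
\[
  \varepsilon = s - g(\mu), \qquad
  \dot{\mu} \propto -\frac{\partial F}{\partial \mu}
            = \Pi \, \varepsilon \, \frac{\partial g}{\partial \mu}
\]
```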
no code implementations • 14 Sep 2017 • Francisco J. Arjonilla, Tetsuya OGATA
Generators realize cognitive operations over a system by grouping morphisms, whilst evaluators group objects as a way to generalize outsets and goals to partially defined states.
no code implementations • 7 Feb 2017 • Junpei Zhong, Angelo Cangelosi, Tetsuya OGATA
This was done by conducting two studies based on a smaller dataset (two-dimensional time sequences from non-linear functions) and a relatively large dataset (43-dimensional time sequences from iCub manipulation tasks with multimodal data).
no code implementations • 11 May 2016 • Junpei Zhong, Martin Peniak, Jun Tani, Tetsuya OGATA, Angelo Cangelosi
The paper presents a neurorobotics cognitive model to explain the understanding and generalisation of noun-verb combinations when a vocal command consisting of a verb-noun sentence is given to a humanoid robot.
no code implementations • 29 Sep 2015 • Tadahiro Taniguchi, Takayuki Nagai, Tomoaki Nakamura, Naoto Iwahashi, Tetsuya OGATA, Hideki Asoh
Humans can learn the use of language through physical interaction with their environment and semiotic communication with other people.