no code implementations • 15 Jan 2024 • Heecheol Kim, Yoshiyuki Ohmura, Yasuo Kuniyoshi
Additionally, this dataset includes visual attention signals, dual-action labels (a signal that separates actions into a robust reaching trajectory and a precise interaction with objects), and language instructions, enabling robust and precise object manipulation.
no code implementations • 16 Oct 2023 • Yoshiyuki Ohmura, Yasuo Kuniyoshi
A configurational force is a novel type of force that arises in certain aggregates and is not generated by any pair of elementary particles.
no code implementations • 5 Oct 2023 • Takayuki Komatsu, Yoshiyuki Ohmura, Yasuo Kuniyoshi
Based on this result, we hypothesize that it is important to maximize the coverage of each attention mask over the image region that is best represented by the single latent vector corresponding to that mask.
no code implementations • 31 May 2023 • Yoshiyuki Ohmura, Wataru Shimaya, Yasuo Kuniyoshi
In addition, we show that a brain model trained to satisfy algebraic independence between neural networks separates the latent space into multiple metric spaces corresponding to qualia types, suggesting that our theory can contribute to the further development of the mathematical theory of consciousness.
no code implementations • 18 Mar 2022 • Heecheol Kim, Yoshiyuki Ohmura, Yasuo Kuniyoshi
Long-horizon dexterous robot manipulation of deformable objects, such as banana peeling, is a challenging task because of the difficulty of object modeling and the lack of knowledge about stable, dexterous manipulation skills.
no code implementations • 19 Feb 2022 • Heecheol Kim, Yoshiyuki Ohmura, Akihiko Nagakubo, Yasuo Kuniyoshi
In this study, deep imitation learning is applied to tasks that require force feedback.
no code implementations • 10 Feb 2022 • Heecheol Kim, Yoshiyuki Ohmura, Yasuo Kuniyoshi
We propose that gaze prediction from sequential visual input enables the robot to perform a manipulation task that requires memory.
no code implementations • 1 Aug 2021 • Heecheol Kim, Yoshiyuki Ohmura, Yasuo Kuniyoshi
Deep imitation learning is promising for solving dexterous manipulation tasks because it does not require an environment model and pre-programmed robot behavior.
no code implementations • 2 Feb 2021 • Heecheol Kim, Yoshiyuki Ohmura, Yasuo Kuniyoshi
The results of this study demonstrate that a deep imitation learning-based method, inspired by the gaze-based dual-resolution visuomotor control system in humans, can solve the needle-threading task.
no code implementations • 26 Aug 2020 • Izumi Karino, Yoshiyuki Ohmura, Yasuo Kuniyoshi
Our results also demonstrate that the identified critical states are intuitively interpretable with respect to the crucial role of action selection.