no code implementations • 6 Mar 2024 • Yoshia Abe, Tatsuya Daikoku, Yasuo Kuniyoshi
It has recently been recognized that large language models perform well on a wide range of intellectual tasks.
no code implementations • 15 Jan 2024 • Heecheol Kim, Yoshiyuki Ohmura, Yasuo Kuniyoshi
Additionally, this dataset includes visual attention signals, dual-action labels (a signal that separates each action into a robust reaching trajectory and a precise interaction with the object), and language instructions, enabling robust and precise object manipulation.
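The dual-action idea can be illustrated with a minimal sketch: label each timestep of a trajectory as a coarse "reach" or a precise "interact" action depending on the end-effector's distance to the object. The field names and the distance threshold here are my assumptions for illustration, not the dataset's actual schema.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Step:
    ee_pos: np.ndarray      # end-effector position (hypothetical field)
    gaze: np.ndarray        # gaze point in image coordinates (hypothetical)
    instruction: str        # language instruction for the episode

def label_dual_action(steps, obj_pos, near=0.05):
    """Label 'reach' far from the object, 'interact' within `near` meters."""
    return ["interact" if np.linalg.norm(s.ee_pos - obj_pos) < near else "reach"
            for s in steps]

obj = np.array([0.5, 0.0, 0.1])
traj = [Step(np.array([0.0, 0.0, 0.3]), np.zeros(2), "pick up the banana"),
        Step(np.array([0.4, 0.0, 0.15]), np.zeros(2), "pick up the banana"),
        Step(np.array([0.49, 0.0, 0.11]), np.zeros(2), "pick up the banana")]
labels = label_dual_action(traj, obj)
print(labels)  # ['reach', 'reach', 'interact']
```

The split lets a policy use a robust, low-precision controller for the reaching phase and a precise one near the object.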
no code implementations • 16 Oct 2023 • Yoshiyuki Ohmura, Yasuo Kuniyoshi
A configurational force is a novel type of force exerted by certain kinds of aggregates that is not generated by any pair of elementary particles.
no code implementations • 5 Oct 2023 • Takayuki Komatsu, Yoshiyuki Ohmura, Yasuo Kuniyoshi
Based on this result, we hypothesize that it is important to maximize the attention mask over the image region that is best represented by the single latent vector corresponding to that mask.
no code implementations • 31 May 2023 • Yoshiyuki Ohmura, Wataru Shimaya, Yasuo Kuniyoshi
In addition, we show that a brain model trained to satisfy algebraic independence between neural networks separates the latent space into multiple metric spaces corresponding to qualia types. This suggests that our theory can contribute to the further development of the mathematical theory of consciousness.
no code implementations • 1 Apr 2022 • Mitsumasa Nakajima, Katsuma Inoue, Kenji Tanaka, Yasuo Kuniyoshi, Toshikazu Hashimoto, Kohei Nakajima
In addition, we can emulate and accelerate the computation for this training on a simple and scalable physical system.
no code implementations • 18 Mar 2022 • Heecheol Kim, Yoshiyuki Ohmura, Yasuo Kuniyoshi
Long-horizon dexterous robot manipulation of deformable objects, such as banana peeling, is challenging because of the difficulty of object modeling and the lack of knowledge about stable, dexterous manipulation skills.
no code implementations • 19 Feb 2022 • Heecheol Kim, Yoshiyuki Ohmura, Akihiko Nagakubo, Yasuo Kuniyoshi
In this study, deep imitation learning is applied to tasks that require force feedback.
no code implementations • 10 Feb 2022 • Heecheol Kim, Yoshiyuki Ohmura, Yasuo Kuniyoshi
We propose that gaze prediction from sequential visual input enables the robot to perform a manipulation task that requires memory.
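A minimal sketch of this idea (my assumption, not the paper's architecture): a tiny recurrent model whose hidden state carries memory across frames, so the predicted gaze can depend on past visual input rather than only the current frame.

```python
import numpy as np

rng = np.random.default_rng(0)
FEAT, HID = 8, 16                               # illustrative sizes
W_in = rng.standard_normal((HID, FEAT)) * 0.1
W_h = rng.standard_normal((HID, HID)) * 0.1
W_out = rng.standard_normal((2, HID)) * 0.1     # gaze (x, y)

def predict_gaze(frames):
    """Predict a gaze point per frame from sequential visual features."""
    h = np.zeros(HID)
    gazes = []
    for f in frames:
        h = np.tanh(W_in @ f + W_h @ h)         # hidden state = memory
        gazes.append(W_out @ h)
    return gazes

frames = [rng.standard_normal(FEAT) for _ in range(5)]
gazes = predict_gaze(frames)
print(len(gazes), gazes[0].shape)  # 5 (2,)
```

Because the hidden state persists across timesteps, the gaze prediction at frame t can reflect objects seen earlier and since occluded, which is the kind of memory a manipulation task may require.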
no code implementations • 1 Aug 2021 • Heecheol Kim, Yoshiyuki Ohmura, Yasuo Kuniyoshi
Deep imitation learning is promising for solving dexterous manipulation tasks because it does not require an environment model and pre-programmed robot behavior.
no code implementations • 25 Jul 2021 • Ryoya Ogishima, Izumi Karino, Yasuo Kuniyoshi
Reinforcement Learning (RL) requires a large amount of exploration, especially in sparse-reward settings.
no code implementations • 6 Jun 2021 • Katsuma Inoue, Soh Ohara, Yasuo Kuniyoshi, Kohei Nakajima
A Lite BERT (ALBERT) is, as the name suggests, a lightweight version of BERT, in which the number of parameters is reduced by repeatedly applying the same neural network, the Transformer encoder layer.
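The parameter-sharing effect can be sketched with toy dense layers standing in for encoder layers (the sizes are illustrative, not ALBERT's): sharing one layer across all depths keeps the parameter count constant in depth.

```python
import numpy as np

HIDDEN = 64

def make_layer(rng):
    """One toy 'encoder layer': a single dense weight matrix."""
    return rng.standard_normal((HIDDEN, HIDDEN)) / np.sqrt(HIDDEN)

def run_stack(x, layers):
    for w in layers:
        x = np.tanh(x @ w)
    return x

rng = np.random.default_rng(0)
depth = 12

# BERT-style: independent weights at every depth.
bert_layers = [make_layer(rng) for _ in range(depth)]
# ALBERT-style: the same layer object applied `depth` times.
shared = make_layer(rng)
albert_layers = [shared] * depth

bert_params = sum(w.size for w in bert_layers)
albert_params = len({id(w) for w in albert_layers}) * shared.size
print(bert_params, albert_params)  # 49152 4096
```

Both stacks have the same depth and compute, but the shared stack stores only one layer's weights, which is the core of ALBERT's parameter reduction.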
no code implementations • 2 Feb 2021 • Heecheol Kim, Yoshiyuki Ohmura, Yasuo Kuniyoshi
The results of this study demonstrate that a deep imitation learning based method, inspired by the gaze-based dual resolution visuomotor control system in humans, can solve the needle threading task.
no code implementations • 1 Jan 2021 • Ryoya Ogishima, Izumi Karino, Yasuo Kuniyoshi
Imitation Learning (IL) and Reinforcement Learning (RL) from high-dimensional sensory inputs are often treated as separate problems. A more realistic setting, however, is how to merge the two techniques so that the agent reduces exploration costs by partially imitating experts while maximizing its return.
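One simple way to merge the two objectives (an illustration of the general idea, not the paper's method) is a weighted sum of an imitation term and a return term, with the imitation weight annealed over training.

```python
import numpy as np

def combined_objective(action, expert_action, ret, w_imitate):
    """Weighted IL+RL objective to minimize (hypothetical formulation)."""
    imitation_loss = float(np.sum((action - expert_action) ** 2))
    # Maximizing return is equivalent to minimizing its negative.
    return w_imitate * imitation_loss - (1.0 - w_imitate) * ret

# Early in training, imitation dominates; later, the return dominates.
early = combined_objective(np.array([0.0]), np.array([1.0]), ret=5.0, w_imitate=0.9)
late = combined_objective(np.array([0.0]), np.array([1.0]), ret=5.0, w_imitate=0.1)
print(early)  # 0.9*1 - 0.1*5 = 0.4
print(late)   # 0.1*1 - 0.9*5 = -4.4
```

The annealing schedule controls how long the agent leans on the expert before exploration driven by return takes over.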
no code implementations • 26 Aug 2020 • Izumi Karino, Yoshiyuki Ohmura, Yasuo Kuniyoshi
Our results also demonstrate that the identified critical states are intuitively interpretable in terms of how crucial the action selection is at those states.
no code implementations • 18 Sep 2018 • Izumi Karino, Kazutoshi Tanaka, Ryuma Niiyama, Yasuo Kuniyoshi
Moreover, this method switches between isotropic and directional exploration in parameter space depending on the rewards obtained.
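A minimal sketch of this switching idea (my assumption of the mechanism, not the paper's algorithm): perturb the parameters isotropically until a perturbation improves the reward, then explore along that improving direction, falling back to isotropic noise when improvement stalls.

```python
import numpy as np

def reward(theta):
    """Toy objective: reward peaks at the optimum [1, -1]."""
    return -np.sum((theta - np.array([1.0, -1.0])) ** 2)

rng = np.random.default_rng(1)
theta = np.zeros(2)
best = reward(theta)
direction = None  # None => isotropic mode

for _ in range(200):
    if direction is None:
        step = 0.1 * rng.standard_normal(2)                      # isotropic
    else:
        step = 0.1 * direction + 0.02 * rng.standard_normal(2)   # directional
    cand = theta + step
    r = reward(cand)
    if r > best:
        direction = step / np.linalg.norm(step)  # keep exploring this way
        theta, best = cand, r
    else:
        direction = None                         # reward stalled: go isotropic
```

The directional mode exploits a promising direction once found, while the isotropic mode keeps the search from getting stuck when that direction stops paying off.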
no code implementations • NeurIPS 2012 • Tatsuya Harada, Yasuo Kuniyoshi
This paper proposes a novel image representation called a Graphical Gaussian Vector, which is a counterpart of the codebook and local feature matching approaches.