no code implementations • 20 Dec 2024 • Henrique Oyama, Jun Tani
The current study investigates possible neural mechanisms underlying autonomous shifts between the focus state and mind-wandering by conducting model simulation experiments.
no code implementations • 2 Oct 2024 • Alex Baranski, Jun Tani
We tackle the challenge of rapidly adapting an agent's behavior to solve spatiotemporally continuous problems in novel settings.
no code implementations • 13 May 2024 • Theodore Jerome Tinker, Kenji Doya, Jun Tani
Two rewards that encourage efficient exploration are the entropy of the action policy and curiosity for information gain.
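As a minimal illustrative sketch only (not the paper's implementation), such an exploration bonus might combine a policy-entropy term with a prediction-error proxy for curiosity; the coefficients and the forward-model error used here are assumptions for illustration.

```python
import numpy as np

def exploration_bonus(action_probs, predicted_next_obs, next_obs,
                      entropy_coef=0.01, curiosity_coef=0.1):
    """Illustrative only: entropy bonus plus a curiosity proxy."""
    # Entropy of the action policy: higher for more stochastic policies.
    entropy = -np.sum(action_probs * np.log(action_probs + 1e-8))
    # Curiosity approximated by forward-model prediction error
    # (an assumption; information gain can be estimated in other ways).
    curiosity = np.mean((predicted_next_obs - next_obs) ** 2)
    return entropy_coef * entropy + curiosity_coef * curiosity
```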
1 code implementation • 29 Mar 2024 • Prasanna Vijayaraghavan, Jeffrey Frederic Queisser, Sergio Verduzco Flores, Jun Tani
Our results show that generalization to unlearned verb-noun compositions is significantly enhanced when the variation of task compositions used in training is increased.
no code implementations • 15 Nov 2023 • Rui Fukushima, Jun Tani
However, this raises a crucial question about the Transformer's generalization-in-learning (GIL) capacity.
1 code implementation • 11 Apr 2023 • Dongqi Han, Kenji Doya, Dongsheng Li, Jun Tani
Habitual behavior is generated using the prior distribution of intention, which is goal-less, while goal-directed behavior is generated using the posterior distribution of intention, which is conditioned on the goal.
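A minimal sketch of this idea, assuming a Gaussian latent "intention" variable with a learned, goal-less prior and a goal-conditioned posterior; the module names and shapes are illustrative, not the paper's code.

```python
import torch
import torch.nn as nn

class IntentionModel(nn.Module):
    """Illustrative sketch: habitual vs. goal-directed intention."""
    def __init__(self, state_dim, goal_dim, z_dim):
        super().__init__()
        self.prior = nn.Linear(state_dim, 2 * z_dim)                 # goal-less
        self.posterior = nn.Linear(state_dim + goal_dim, 2 * z_dim)  # goal-conditioned

    def _sample(self, params):
        mu, log_std = params.chunk(2, dim=-1)
        return mu + log_std.exp() * torch.randn_like(mu)

    def habitual(self, state):
        # Habitual behavior: intention sampled from the prior (no goal).
        return self._sample(self.prior(state))

    def goal_directed(self, state, goal):
        # Goal-directed behavior: intention from the goal-conditioned posterior.
        return self._sample(self.posterior(torch.cat([state, goal], dim=-1)))
```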
1 code implementation • 5 May 2022 • Fabien C. Y. Benureau, Jun Tani
We propose to make the physical characteristics of a robot oscillate while it learns to improve its behavioral performance.
no code implementations • 26 Feb 2022 • Vsevolod Nikulin, Jun Tani
Robot kinematics data, despite being a high-dimensional process, is highly correlated, especially when motions are grouped into certain primitives.
1 code implementation • 21 Feb 2022 • Takazumi Matsumoto, Wataru Ohata, Fabien C. Y. Benureau, Jun Tani
We show that goal-directed action planning and generation in a teleological framework can be formulated using the free energy principle.
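In generic terms, "planning as free-energy minimization" means inferring latent plan variables that make the desired future observation most plausible under the model; the following is a generic sketch of such an objective, not the paper's specific formulation (the notation for latents x, z and the goal observation is assumed here).

```latex
% Generic sketch: goal-directed planning as free-energy minimization.
% o^{goal} is the desired (goal) observation; z are latent plan variables.
\min_{z}\; F(z)
  \;=\;
  \min_{z}\;\Big(
    D_{\mathrm{KL}}\big(q(x \mid z)\,\|\,p(x)\big)
    \;-\;
    \mathbb{E}_{q(x \mid z)}\big[\log p(o^{\mathrm{goal}} \mid x)\big]
  \Big)
```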
no code implementations • 3 Dec 2021 • Pablo Lanillos, Cristian Meo, Corrado Pezzato, Ajith Anil Meera, Mohamed Baioumy, Wataru Ohata, Alexander Tschantz, Beren Millidge, Martijn Wisse, Christopher L. Buckley, Jun Tani
Active inference is a mathematical framework which originated in computational neuroscience as a theory of how the brain implements action, perception and learning.
1 code implementation • 4 Nov 2021 • Hayato Idei, Wataru Ohata, Yuichi Yamashita, Tetsuya OGATA, Jun Tani
Consequently, shifts between the two sensorimotor contexts triggered transitions from one free-energy state to another in the network via executive control, which caused shifts between attenuating and amplifying prediction-error-induced responses in the sensory areas.
no code implementations • 18 Jun 2021 • Dongqi Han, Kenji Doya, Jun Tani
Habitual behavior, which is obtained from the prior distribution of ${z}$, is acquired by reinforcement learning.
no code implementations • 3 Mar 2021 • Nadine Wirkuttis, Jun Tani
This study investigated how social interaction among robotic agents changes dynamically depending on each agent's individual belief about action intention.
no code implementations • 18 Feb 2021 • Siqing Hou, Dongqi Han, Jun Tani
This paper builds on the idea of replaying demonstrations for memory-dependent continuous control, by proposing a novel algorithm, Recurrent Actor-Critic with Demonstration and Experience Replay (READER).
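A minimal sketch of the replay idea, assuming two episode buffers (demonstrations and the agent's own experience) that are mixed when sampling training batches; the function name and the 25% demonstration ratio are illustrative assumptions, not details of READER.

```python
import random

def sample_batch(demo_episodes, self_episodes, batch_size, demo_ratio=0.25):
    """Illustrative sketch: mix replayed demonstrations with self-collected
    experience. Whole episodes are sampled because a recurrent actor-critic
    needs temporal context."""
    n_demo = min(int(batch_size * demo_ratio), len(demo_episodes))
    batch = random.sample(demo_episodes, n_demo)
    batch += random.sample(self_episodes, batch_size - n_demo)
    random.shuffle(batch)
    return batch
```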
1 code implementation • 28 Oct 2020 • Fabien C. Y. Benureau, Jun Tani
Evolution and development operate at different timescales; generations for the one, a lifetime for the other.
no code implementations • 29 Jun 2020 • Hendry F. Chame, Ahmadreza Ahmadi, Jun Tani
Human-robot interaction is becoming an interesting area of research in cognitive science, notably for the study of social cognition.
1 code implementation • 27 May 2020 • Takazumi Matsumoto, Jun Tani
It is crucial to ask how agents can achieve goals by generating action plans using only partial models of the world acquired through habituated sensory-motor experiences.
1 code implementation • ICLR 2020 • Dongqi Han, Kenji Doya, Jun Tani
In partially observable (PO) environments, deep reinforcement learning (RL) agents often suffer from unsatisfactory performance, since two problems need to be tackled together: how to extract information from the raw observations to solve the task, and how to improve the policy.
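One common way to separate these two problems is to let a recurrent encoder compress the observation history into a belief-like summary on which the policy acts; the sketch below illustrates that generic pattern only, under assumed module names, and is not the architecture proposed in the paper.

```python
import torch
import torch.nn as nn

class RecurrentPolicy(nn.Module):
    """Illustrative sketch for partially observable RL: a GRU extracts
    information from the observation history, and a policy head maps the
    summary to actions (policy improvement is done by any RL algorithm)."""
    def __init__(self, obs_dim, act_dim, hidden_dim=128):
        super().__init__()
        self.encoder = nn.GRU(obs_dim, hidden_dim, batch_first=True)
        self.policy_head = nn.Linear(hidden_dim, act_dim)

    def forward(self, obs_seq, h0=None):
        summary, h = self.encoder(obs_seq, h0)       # belief-like state
        return self.policy_head(summary[:, -1]), h   # action logits/means
```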
no code implementations • 5 Nov 2019 • Hendry Ferreira Chame, Jun Tani
Embodiment and subjective experience in human-robot interaction are important aspects to consider when studying both natural cognition and adaptive robotics to human environments.
no code implementations • 12 Mar 2019 • Minju Jung, Takazumi Matsumoto, Jun Tani
Furthermore, our analysis of comparative experiments indicated that introducing visual working memory and an inference mechanism using variational Bayes predictive coding significantly improves performance in planning adequate goal-directed actions.
1 code implementation • 29 Jan 2019 • Dongqi Han, Kenji Doya, Jun Tani
Furthermore, we show that the self-developed compositionality of the network enables faster re-learning when adapting to a new task that is a re-composition of previously learned sub-goals than when learning from scratch.
1 code implementation • 4 Nov 2018 • Ahmadreza Ahmadi, Jun Tani
The model introduces a weighting parameter, the meta-prior, to balance the optimization pressure placed on two terms of a lower bound on the marginal likelihood of the sequential data.
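In the standard variational formulation, such a weighting amounts to rescaling one term of the lower bound on the marginal likelihood; the generic sketch below writes the meta-prior as w, and the exact placement of the weight in the paper's objective may differ.

```latex
% Generic weighted lower bound on the marginal likelihood of sequence data X
% (illustrative; the meta-prior w rebalances reconstruction vs. regularization).
\mathcal{L}_w(X) =
  \underbrace{\mathbb{E}_{q_\phi(z \mid X)}\!\big[\log p_\theta(X \mid z)\big]}_{\text{reconstruction}}
  \;-\; w \,
  \underbrace{D_{\mathrm{KL}}\!\big(q_\phi(z \mid X)\,\|\,p_\theta(z)\big)}_{\text{regularization}}
```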
1 code implementation • 28 May 2018 • German I. Parisi, Jun Tani, Cornelius Weber, Stefan Wermter
Both growing networks can expand in response to novel sensory experience: the episodic memory learns fine-grained spatiotemporal representations of object instances in an unsupervised fashion while the semantic memory uses task-relevant signals to regulate structural plasticity levels and develop more compact representations from episodic experience.
no code implementations • 15 May 2018 • Jungsik Hwang, Jun Tani
The results also showed that different ways of learning the basic actions induced the self-organization of memory structures with different characteristics, resulting in the generation of different levels of creative actions.
no code implementations • 7 Mar 2018 • Minkyu Choi, Takazumi Matsumoto, Minju Jung, Jun Tani
The current paper presents how a predictive-coding-type deep recurrent neural network can generate vision-based goal-directed plans based on prior learning experience, by examining experimental results obtained with a real arm robot.
no code implementations • 2 Aug 2017 • Minkyu Choi, Jun Tani
The paper examines how model performance during pattern generation as well as predictive imitation varies depending on the stage of learning.
no code implementations • 30 Jun 2017 • Ahmadreza Ahmadi, Jun Tani
We examined how this weighting can affect the development of different types of information processing while learning fluctuating temporal patterns.
no code implementations • 8 Jun 2017 • Jungsik Hwang, Jinhyung Kim, Ahmadreza Ahmadi, Minkyu Choi, Jun Tani
This study presents a dynamic neural network model based on the predictive coding framework for perceiving and predicting dynamic visuo-proprioceptive patterns.
no code implementations • 8 Jun 2017 • Jungsik Hwang, Jun Tani
This study investigates how adequate coordination among the different cognitive processes of a humanoid robot can be developed through end-to-end learning of direct perception of the visuomotor stream.
no code implementations • 24 May 2017 • Minju Jung, Haanvid Lee, Jun Tani
In this paper, inspired by normalization and detrending methods, we propose adaptive detrending (AD) for temporal normalization in order to accelerate the training of ConvRNNs, especially the convolutional gated recurrent unit (ConvGRU).
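One plausible reading of "detrending" as a form of temporal normalization is to subtract a slowly updated running mean from each unit's activation over time; the sketch below is a guess at that general mechanism under an assumed exponential-moving-average trend estimate, not the paper's exact formulation.

```python
import numpy as np

class AdaptiveDetrender:
    """Illustrative guess: remove a slowly varying per-unit trend from
    activations over time. The decay rate is an assumed hyperparameter."""
    def __init__(self, shape, decay=0.99):
        self.trend = np.zeros(shape)
        self.decay = decay

    def __call__(self, activation):
        # Update the per-unit trend estimate, then subtract it.
        self.trend = self.decay * self.trend + (1.0 - self.decay) * activation
        return activation - self.trend
```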
no code implementations • 6 Jun 2016 • Minkyu Choi, Jun Tani
The current paper presents a novel recurrent neural network model, the predictive multiple spatio-temporal scales RNN (P-MSTRNN), which can generate as well as recognize dynamic visual patterns in the predictive coding framework.
no code implementations • 11 May 2016 • Junpei Zhong, Martin Peniak, Jun Tani, Tetsuya OGATA, Angelo Cangelosi
The paper presents a neurorobotics cognitive model to explain the understanding and generalisation of noun-verb combinations when a vocal command consisting of a verb-noun sentence is provided to a humanoid robot.
no code implementations • 5 Feb 2016 • Haanvid Lee, Minju Jung, Jun Tani
The analysis of the internal representations obtained through learning with the dataset clarifies what sort of functional hierarchy can be developed by extracting the essential compositionality underlying the dataset.
no code implementations • 9 Jul 2015 • Jungsik Hwang, Minju Jung, Naveen Madapana, Jinhyung Kim, Minkyu Choi, Jun Tani
The current study examines how adequate coordination among different cognitive processes, including visual recognition, attention switching, and action preparation and generation, can be developed via robot learning, by introducing a novel model: the Visuo-Motor Deep Dynamic Neural Network (VMDNN).