Search Results for author: Ryota Kanai

Found 22 papers, 8 papers with code

Remembering Transformer for Continual Learning

no code implementations · 11 Apr 2024 · Yuwei Sun, Ippei Fujisawa, Arthur Juliani, Jun Sakuma, Ryota Kanai

Neural networks encounter the challenge of Catastrophic Forgetting (CF) in continual learning, where new task knowledge interferes with previously learned knowledge.

Continual Learning

Stimulation technology for brain and nerves, now and future

no code implementations · 29 Feb 2024 · Masaru Kuwabara, Ryota Kanai

For individuals living with conditions such as paralysis, brain-computer interfaces (BCIs) have begun to significantly improve quality of life.

Brain Computer Interface

Associative Transformer

1 code implementation · 22 Sep 2023 · Yuwei Sun, Hideya Ochiai, Zhirong Wu, Stephen Lin, Ryota Kanai

Existing studies such as the Coordination method employ iterative cross-attention mechanisms with a bottleneck to enable the sparse association of inputs.

Artificial Global Workspace · Inductive Bias · +2
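The cross-attention bottleneck mentioned in the snippet can be sketched as a small set of latent slots attending to a much larger set of input tokens, so all information must pass through a narrow channel. This is a minimal Perceiver-style toy, not the paper's actual Associative Transformer; all names and sizes are illustrative.

```python
import numpy as np

def softmax(a, axis=-1):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_bottleneck(inputs, slots):
    # A few latent slots attend over many input tokens, forcing the
    # inputs' information through a bottleneck of n_slots vectors.
    scores = slots @ inputs.T / np.sqrt(slots.shape[1])  # (n_slots, n_inputs)
    return softmax(scores, axis=-1) @ inputs             # (n_slots, d)

rng = np.random.default_rng(0)
inputs = rng.normal(size=(64, 8))   # 64 input tokens of dimension 8
slots = rng.normal(size=(4, 8))     # bottleneck of only 4 latent slots
out = cross_attention_bottleneck(inputs, slots)
```

The output has one vector per slot regardless of how many inputs there are, which is what makes the association sparse.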

Logical Tasks for Measuring Extrapolation and Rule Comprehension

1 code implementation · 14 Nov 2022 · Ippei Fujisawa, Ryota Kanai

Furthermore, we discuss the relevance of logical tasks to concepts such as extrapolation, explainability, and inductive bias.

Inductive Bias · Logical Reasoning

On the link between conscious function and general intelligence in humans and machines

no code implementations · 24 Mar 2022 · Arthur Juliani, Kai Arulkumaran, Shuntaro Sasai, Ryota Kanai

In popular media, a connection is often drawn between the advent of awareness in artificial agents and those same agents simultaneously achieving human- or superhuman-level intelligence.

AI agents for facilitating social interactions and wellbeing

no code implementations · 26 Feb 2022 · Hiro Taiyo Hamada, Ryota Kanai

While social relationships within groups are a critical factor for wellbeing, research on wellbeing AI for social interactions remains relatively scarce.

Experimental Evidence that Empowerment May Drive Exploration in Sparse-Reward Environments

no code implementations · 14 Jul 2021 · Francesco Massari, Martin Biehl, Lisa Meeden, Ryota Kanai

A possible countermeasure is to endow RL agents with an intrinsic reward function, or 'intrinsic motivation', which rewards the agent based on certain features of the current sensor state.

Reinforcement Learning (RL)
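A classic instance of an intrinsic reward of the kind the snippet describes, one based only on features of the current sensor state, is a count-based exploration bonus. This is a generic illustration of intrinsic motivation, not the empowerment measure the paper studies; the class name and `beta` parameter are invented for the sketch.

```python
from collections import defaultdict

class CountBonus:
    # Intrinsic reward = beta / sqrt(N(s)): novel states pay a large
    # bonus, frequently visited states pay almost nothing.
    def __init__(self, beta=1.0):
        self.counts = defaultdict(int)
        self.beta = beta

    def reward(self, state):
        self.counts[state] += 1
        return self.beta / self.counts[state] ** 0.5

bonus = CountBonus()
first = bonus.reward("s0")        # first visit: full bonus
for _ in range(98):
    bonus.reward("s0")
later = bonus.reward("s0")        # 100th visit: bonus has decayed
```

In practice such a bonus is simply added to the extrinsic reward before the RL update.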

Deep Learning and the Global Workspace Theory

no code implementations · 4 Dec 2020 · Rufin VanRullen, Ryota Kanai

Recent advances in deep learning have allowed Artificial Intelligence (AI) to reach near human-level performance in many sensory, perceptual, linguistic or cognitive tasks.

Translation

Non-trivial informational closure of a Bayesian hyperparameter

no code implementations · 5 Oct 2020 · Martin Biehl, Ryota Kanai

On the other hand, we attempt to establish a connection between the information gain, a quantity that arises from interpreting the hyperparameter as a model, and the one-step pointwise NTIC, a quantity that does not depend on this interpretation.
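The "information gain" of one Bayesian update can be illustrated as the KL divergence from prior to posterior over the hyperparameter. The toy below uses a Beta(2, 2) hyperparameter over a coin's bias updated on a single head, evaluated on a grid; it illustrates the general quantity only, not the paper's NTIC computation.

```python
import numpy as np

# Grid over the coin bias theta; densities kept as normalised weights.
theta = np.linspace(1e-6, 1 - 1e-6, 10_000)
prior = theta * (1 - theta)          # Beta(2, 2) density, unnormalised
prior /= prior.sum()

posterior = prior * theta            # multiply by likelihood of one head
posterior /= posterior.sum()

# Information gain of the update = KL(posterior || prior), in bits.
gain_bits = float(np.sum(posterior * np.log2(posterior / prior)))
```

The gain is strictly positive whenever the observation actually moves the hyperparameter.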

A Technical Critique of Some Parts of the Free Energy Principle

no code implementations · 12 Jan 2020 · Martin Biehl, Felix A. Pollock, Ryota Kanai

Additionally, we highlight that the variational densities presented in newer formulations of the free energy principle and lemma are parameterised by different variables than in older works, leading to a substantially different interpretation of the theory.

Bayesian Inference LEMMA

A unified strategy for implementing curiosity and empowerment driven reinforcement learning

no code implementations · 18 Jun 2018 · Ildefons Magrans de Abril, Ryota Kanai

Curiosity reward informs the agent about the relevance of a recent agent action, whereas empowerment is implemented as the opposite information flow from the agent to the environment that quantifies the agent's potential of controlling its own future.

reinforcement-learning · Reinforcement Learning (RL)

Boredom-driven curious learning by Homeo-Heterostatic Value Gradients

1 code implementation · 5 Jun 2018 · Yen Yu, Acer Y. C. Chang, Ryota Kanai

This paper presents the Homeo-Heterostatic Value Gradients (HHVG) algorithm as a formal account on the constructive interplay between boredom and curiosity which gives rise to effective exploration and superior forward model learning.

Being curious about the answers to questions: novelty search with learned attention

1 code implementation · 1 Jun 2018 · Nicholas Guttenberg, Martin Biehl, Nathaniel Virgo, Ryota Kanai

We investigate the use of attentional neural network layers in order to learn a 'behavior characterization' which can be used to drive novelty search and curiosity-based policies.
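Once a behavior characterization (BC) exists, the standard novelty-search score is the mean distance to the k nearest BCs in an archive of previously seen behaviors. The sketch below uses raw 2-vectors as BCs; in the paper the BC is produced by learned attention layers.

```python
import numpy as np

def novelty(bc, archive, k=3):
    # Novelty = mean Euclidean distance from this behavior
    # characterization to its k nearest neighbours in the archive.
    dists = np.linalg.norm(archive - bc, axis=1)
    return float(np.sort(dists)[:k].mean())

archive = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1]])
near = novelty(np.array([0.05, 0.05]), archive)   # behavior like the archive
far = novelty(np.array([2.0, 2.0]), archive)      # behavior unlike anything seen
```

Policies producing high-novelty BCs are favored, which drives exploration without any task reward.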

Learning to generate classifiers

1 code implementation · 30 Mar 2018 · Nicholas Guttenberg, Ryota Kanai

We train a network to generate mappings between training sets and classification policies (a 'classifier generator') by conditioning on the entire training set via an attentional mechanism.

General Classification
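Conditioning a classification policy on an entire training set via attention can be sketched in a matching-networks style: attention weights come from query-support similarity, and the support labels are pooled by those weights. This is a minimal stand-in for the idea, not the paper's classifier generator; the function names and toy data are invented.

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def attention_classifier(query, support_x, support_y, n_classes):
    # Attend over the whole support (training) set, then pool labels
    # by attention weight to get class probabilities for the query.
    scores = support_x @ query          # dot-product similarity
    weights = softmax(scores)           # attention over support items
    probs = np.zeros(n_classes)
    for w, y in zip(weights, support_y):
        probs[y] += w
    return probs

support_x = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
support_y = np.array([0, 0, 1, 1])
probs = attention_classifier(np.array([0.95, 0.05]), support_x, support_y, 2)
pred = int(np.argmax(probs))
```

Swapping in a different support set immediately yields a different classifier, with no retraining of weights, which is the appeal of generating classifiers by conditioning.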

Curiosity-driven reinforcement learning with homeostatic regulation

no code implementations · 23 Jan 2018 · Ildefons Magrans de Abril, Ryota Kanai

We propose a curiosity reward based on information theory principles and consistent with the animal instinct to maintain certain critical parameters within a bounded range.

reinforcement-learning · Reinforcement Learning (RL)

Efficient Algorithms for Searching the Minimum Information Partition in Integrated Information Theory

no code implementations · 19 Dec 2017 · Jun Kitazono, Ryota Kanai, Masafumi Oizumi

In this study, we empirically explore to what extent the algorithm can be applied to the non-submodular measures of $\Phi$ by evaluating the accuracy of the algorithm in simulated data and real neural data.
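The minimum information partition (MIP) problem can be illustrated on a small Gaussian system by exhaustively scoring every bipartition with a simple integration measure, here the mutual information between the two parts. This toy uses plain mutual information and brute force, whereas IIT's Φ measures and the paper's Queyranne-based speedup are more involved.

```python
import itertools
import numpy as np

def gaussian_mi(cov, part):
    # Mutual information (nats) between a subset of a Gaussian system
    # and its complement: 0.5 * log(det(S_A) * det(S_B) / det(S)).
    rest = [i for i in range(len(cov)) if i not in part]
    det = np.linalg.det
    return 0.5 * np.log(det(cov[np.ix_(part, part)]) *
                        det(cov[np.ix_(rest, rest)]) / det(cov))

# Two tightly coupled pairs (0,1) and (2,3), weakly coupled across.
cov = np.array([[1.0, 0.9, 0.1, 0.1],
                [0.9, 1.0, 0.1, 0.1],
                [0.1, 0.1, 1.0, 0.9],
                [0.1, 0.1, 0.9, 1.0]])

# Brute-force search over all bipartitions of the 4 elements.
parts = [list(p) for r in (1, 2) for p in itertools.combinations(range(4), r)]
mip = min(parts, key=lambda p: gaussian_mi(cov, p))
```

As expected, the MIP cuts between the two weakly coupled pairs rather than through either pair.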

Learning body-affordances to simplify action spaces

1 code implementation · 15 Aug 2017 · Nicholas Guttenberg, Martin Biehl, Ryota Kanai

Controlling embodied agents with many actuated degrees of freedom is a challenging task.

A description length approach to determining the number of k-means clusters

no code implementations · 28 Feb 2017 · Hiromitsu Mizutani, Ryota Kanai

Here we report two types of compression ratio based on two ways to quantify the description length of data after compression.

Clustering · Data Compression · +1
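The general idea of picking k by description length can be sketched with a two-part code: bits to transmit the k centroids plus a Gaussian code for the residuals around each point's centroid. This is a common MDL form chosen for the sketch; the paper's two specific compression ratios may differ.

```python
import numpy as np

def kmeans1d(x, k, iters=50):
    # Deterministic init: spread initial centroids over the sorted data.
    c = np.sort(x)[np.linspace(0, len(x) - 1, k).astype(int)]
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - c[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                c[j] = x[labels == j].mean()
    labels = np.argmin(np.abs(x[:, None] - c[None, :]), axis=1)
    return c, labels

def description_length(x, k, centroid_bits=32):
    # Two-part code: model bits for k centroids + Gaussian code for
    # the residuals (0.5 * n * log2(2*pi*e*mse) bits).
    c, labels = kmeans1d(x, k)
    mse = np.mean((x - c[labels]) ** 2) + 1e-12
    return k * centroid_bits + 0.5 * len(x) * np.log2(2 * np.pi * np.e * mse)

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0.0, 0.3, 20), rng.normal(10.0, 0.3, 20)])
dl = {k: description_length(x, k) for k in (1, 2, 3)}
best_k = min(dl, key=dl.get)
```

On two well-separated clusters, k = 2 minimizes the total description length: k = 1 pays heavily in residual bits, and k = 3 pays for a third centroid that barely shrinks the residuals.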

Counterfactual Control for Free from Generative Models

2 code implementations · 22 Feb 2017 · Nicholas Guttenberg, Yen Yu, Ryota Kanai

In this method, the problem of action selection is reduced to gradient descent on the latent space of the generative model, with the model itself providing the means of evaluating outcomes and finding the gradient, much as the reward network in Deep Q-Networks (DQN) provides gradient information for the action generator.

counterfactual
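The core loop, gradient descent on a latent code until the model's decoded outcome matches a target, can be shown with a linear stand-in for the trained generative model so the gradient is analytic. An orthogonal matrix keeps the toy well conditioned; the paper of course uses learned deep generative models.

```python
import numpy as np

rng = np.random.default_rng(1)
A, _ = np.linalg.qr(rng.normal(size=(4, 4)))   # orthogonal "decoder"

def decode(z):
    # Stand-in for a generative model's decoder (linear for the sketch).
    return A @ z

target = rng.normal(size=4)    # desired (counterfactual) outcome
z = np.zeros(4)                # initial latent code

for _ in range(500):
    residual = decode(z) - target
    z -= 0.05 * (2.0 * A.T @ residual)   # gradient of ||decode(z) - target||^2

loss = float(np.sum((decode(z) - target) ** 2))
```

After a few hundred steps the decoded outcome matches the target, i.e. action selection has been reduced to latent-space optimization.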

Permutation-equivariant neural networks applied to dynamics prediction

2 code implementations · 14 Dec 2016 · Nicholas Guttenberg, Nathaniel Virgo, Olaf Witkowski, Hidetoshi Aoki, Ryota Kanai

The introduction of convolutional layers greatly advanced the performance of neural networks on image tasks due to innately capturing a way of encoding and learning translation-invariant operations, matching one of the underlying symmetries of the image domain.

Translation
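Just as convolutions bake in translation symmetry, a permutation-equivariant layer bakes in the object-exchange symmetry of dynamics prediction: permuting the input objects permutes the outputs identically. Below is a minimal Deep-Sets-style equivariant layer as an illustration of the property, not necessarily the paper's exact layer.

```python
import numpy as np

def perm_equivariant_layer(x, w_self, w_pool, b):
    # x: (n_objects, d_in). Each object gets its own linear transform
    # plus a shared term from the permutation-invariant mean pooling,
    # so permuting the rows of x permutes the output rows identically.
    pooled = x.mean(axis=0, keepdims=True)      # (1, d_in), invariant
    return x @ w_self + pooled @ w_pool + b     # (n_objects, d_out)

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))
w_self = rng.normal(size=(3, 4))
w_pool = rng.normal(size=(3, 4))
b = rng.normal(size=(4,))

perm = rng.permutation(5)
out = perm_equivariant_layer(x, w_self, w_pool, b)
out_perm = perm_equivariant_layer(x[perm], w_self, w_pool, b)
```

Reordering the objects before the layer gives the same result as reordering its outputs, which is the equivariance property being exploited.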

Neural Coarse-Graining: Extracting slowly-varying latent degrees of freedom with neural networks

no code implementations · 1 Sep 2016 · Nicholas Guttenberg, Martin Biehl, Ryota Kanai

We present a loss function for neural networks that encompasses an idea of trivial versus non-trivial predictions, such that the network jointly determines its own prediction goals and learns to satisfy them.

Time Series · Time Series Analysis
