no code implementations • 20 Jul 2024 • Yoshiki Ito, Taro Toyoizumi
Here, we employ interacting sequential and context-inference modules to drive model-based learning as a means to better understand experimental neuronal activity data, lesion studies, and clinical research.
1 code implementation • 15 Apr 2024 • Yuri Kinoshita, Taro Toyoizumi
While neural networks enjoy outstanding flexibility and exhibit unprecedented performance, the mechanisms behind their behavior are still not well understood.
no code implementations • 17 Nov 2023 • Zhengqi He, Taro Toyoizumi
These models have proven to be invaluable tools for studying another complex system known to process human language: the brain.
no code implementations • 6 Apr 2023 • Kensuke Yoshida, Taro Toyoizumi
Here, we review recent theoretical approaches investigating their roles in learning and discuss the possibility that non-rapid eye movement (NREM) sleep selectively consolidates memory, and rapid eye movement (REM) sleep reorganizes the representations of memories.
no code implementations • 9 Feb 2023 • Louis Kang, Taro Toyoizumi
We obtain our results by deriving macroscopic mean-field equations that yield capacity formulas for sparse examples, dense examples, and dense concepts.
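For orientation, the classic mean-field capacity results for Hopfield-type networks read as follows; these are textbook reference points, not the example/concept capacity formulas derived in this paper:

```latex
% Textbook mean-field reference points (not this paper's derived formulas):
\[
  p_{\max} \approx 0.138\,N
  \quad \text{(dense random patterns; Amit--Gutfreund--Sompolinsky)},
\]
\[
  p_{\max} \sim \frac{N}{a\,\lvert\ln a\rvert}
  \quad \text{(sparse patterns with mean activity } a \ll 1\text{)}.
\]
```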
no code implementations • 13 Oct 2022 • Zhengqi He, Taro Toyoizumi
Given the side effects of large foundation language models, such as deployment cost, availability issues, and environmental cost, there is growing interest in exploring alternative directions, such as a divide-and-conquer scheme.
no code implementations • 27 Aug 2021 • Ho Ka Chan, Taro Toyoizumi
When making decisions under risk, people often exhibit behaviors that classical economic theories cannot explain.
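For context, the classical benchmark that such behaviors violate is expected-utility maximization; this is a textbook formula, not the model proposed in this paper:

```latex
% Classical expected-utility rule (textbook benchmark, not this paper's model):
% an agent facing outcomes x_i with probabilities p_i chooses the option maximizing
\[
  \mathrm{EU} = \sum_i p_i \, u(x_i),
\]
% whereas observed choices (e.g., the Allais paradox) systematically violate
% the independence axiom that maximizing this quantity implies.
```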
no code implementations • 8 Jan 2021 • Zhengqi He, Taro Toyoizumi
We look at this problem from a new perspective: the interpretation of task solving is synthesized by quantifying how much, and what, previously unused information is exploited beyond the information used to solve earlier tasks.
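As a rough illustration of that idea, one can measure the previously unused information exploited by a task with a conditional mutual information. This is a hypothetical proxy chosen for exposition, not necessarily the paper's measure; the variables `x_old` and `x_new` are made up:

```python
# Sketch: quantify how much *additional* task information a new feature carries
# beyond an old one, via the conditional mutual information I(Y; X_new | X_old).
# Illustrative proxy only -- not necessarily the measure used in the paper.
import numpy as np

def entropy(counts):
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def joint(*vars_):
    """Joint histogram of binary variables, flattened for bincount."""
    idx = np.ravel_multi_index(vars_, [2] * len(vars_))
    return np.bincount(idx, minlength=2 ** len(vars_)).astype(float)

rng = np.random.default_rng(0)
n = 10_000
x_old = rng.integers(0, 2, n)   # feature already used by earlier tasks
x_new = rng.integers(0, 2, n)   # feature so far unused
y = x_old ^ x_new               # new task (XOR) needs both features

# I(Y; X_new | X_old) = H(Y,X_old) + H(X_new,X_old) - H(Y,X_new,X_old) - H(X_old)
cmi = (entropy(joint(y, x_old)) + entropy(joint(x_new, x_old))
       - entropy(joint(y, x_new, x_old)) - entropy(joint(x_old)))
print(cmi)  # ~1 bit: the task exploits one bit of previously unused information
```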
1 code implementation • 1 Mar 2020 • Takuya Isomura, Taro Toyoizumi
Generalization in time series prediction remains an important open issue in machine learning; earlier methods suffer from either large generalization errors or local minima.
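As a minimal illustration of what generalization error means here (a toy autoregressive fit, not the paper's method), one can compare in-sample and out-of-sample prediction error:

```python
# Sketch: fit a linear autoregressive predictor on the first half of a noisy
# series and measure the train/test error gap (the generalization error).
import numpy as np

rng = np.random.default_rng(0)
T, order = 500, 5
t = np.arange(T)
x = np.sin(0.2 * t) + 0.1 * rng.normal(size=T)  # noisy periodic series

# Lagged design matrix: predict x[t] from the previous `order` values.
X = np.stack([x[i:T - order + i] for i in range(order)], axis=1)
y = x[order:]
split = len(y) // 2
w, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)  # train on first half

train_mse = np.mean((X[:split] @ w - y[:split]) ** 2)
test_mse = np.mean((X[split:] @ w - y[split:]) ** 2)
print(train_mse, test_mse)  # the gap measures how well the fit generalizes
```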
1 code implementation • 30 Nov 2019 • Yoshiki Ito, Taro Toyoizumi
Traveling waves are commonly observed across the brain.
1 code implementation • 2 Aug 2018 • Takuya Isomura, Taro Toyoizumi
This work theoretically validates that a cascade of linear PCA and ICA can accurately solve a nonlinear BSS problem when the sensory inputs are generated from hidden sources via nonlinear mappings with sufficient dimensionality.
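A minimal sketch of the cascade under the stated assumptions (independent non-Gaussian sources, nonlinear mixing into many sensors); the toy generative setup below is illustrative, not the paper's exact model:

```python
# Sketch: recover hidden sources from nonlinearly mixed, high-dimensional
# observations with linear PCA followed by linear ICA.
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)
n_samples, n_sources, n_sensors = 10_000, 2, 100  # high sensor dimensionality

# Hidden sources: independent and non-Gaussian (uniform).
S = rng.uniform(-1, 1, size=(n_samples, n_sources))

# Nonlinear mixing: random linear map followed by an element-wise nonlinearity.
A = rng.normal(size=(n_sources, n_sensors))
X = np.tanh(S @ A) + 0.01 * rng.normal(size=(n_samples, n_sensors))

# Linear PCA compresses the redundant sensor dimensions ...
Z = PCA(n_components=n_sources).fit_transform(X)
# ... and ICA unmixes the compressed signals into independent components.
S_hat = FastICA(n_components=n_sources, random_state=0).fit_transform(Z)

# Check recovery up to permutation and sign via absolute correlations.
corr = np.abs(np.corrcoef(S.T, S_hat.T)[:n_sources, n_sources:])
print(corr.round(2))  # ideally each row/column contains one value near 1
```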
no code implementations • 27 Jan 2017 • Haiping Huang, Taro Toyoizumi
Therefore, it is highly desirable to design an efficient algorithm to escape from these saddle points and reach a parameter region of better generalization capabilities.
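One generic strategy of this kind, shown below purely for illustration (it is not claimed to be the authors' algorithm), is to inject a small random perturbation whenever the gradient nearly vanishes:

```python
# Sketch: perturbed gradient descent escapes a strict saddle point by adding
# noise when the gradient is (near-)zero.
import numpy as np

rng = np.random.default_rng(1)

def grad(p):
    """Gradient of f(x, y) = x**2 - y**2, which has a strict saddle at the origin."""
    return np.array([2 * p[0], -2 * p[1]])

p = np.array([0.0, 0.0])            # start exactly at the saddle point
lr, kick = 0.1, 1e-3
for _ in range(50):
    g = grad(p)
    if np.linalg.norm(g) < 1e-6:    # stuck at a critical point: random kick
        p = p + kick * rng.normal(size=2)
        g = grad(p)
    p = p - lr * g

print(p)  # |y| grows: descent has moved along the negative-curvature direction
```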
no code implementations • 12 Aug 2016 • Haiping Huang, Taro Toyoizumi
This study deepens our understanding of unsupervised learning from a finite number of data, and may provide insights into its role in training deep networks.
no code implementations • 1 Feb 2015 • Haiping Huang, Taro Toyoizumi
Learning in a restricted Boltzmann machine is typically hard due to the computation of the gradients of the log-likelihood function.
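For reference, the standard workaround for this intractable gradient is contrastive divergence (CD-1); the sketch below shows that baseline, not the approach developed in this paper:

```python
# Sketch: CD-1 training of a small binary RBM, approximating the intractable
# log-likelihood gradient with a single Gibbs step (biases omitted for brevity).
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, lr = 20, 10, 0.05
W = 0.01 * rng.normal(size=(n_visible, n_hidden))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0):
    """One CD-1 weight update from a batch of binary visible vectors."""
    ph0 = sigmoid(v0 @ W)                               # P(h=1 | v0)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)    # sample hidden units
    v1 = (rng.random((len(v0), n_visible)) < sigmoid(h0 @ W.T)).astype(float)
    ph1 = sigmoid(v1 @ W)
    # Positive phase minus one-step negative phase ~ log-likelihood gradient.
    return lr * (v0.T @ ph0 - v1.T @ ph1) / len(v0)

data = (rng.random((100, n_visible)) < 0.3).astype(float)  # toy binary data
for _ in range(200):
    W += cd1_update(data)
```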