no code implementations • 11 May 2023 • Tomohiro Nabika, Kenji Nagata, Shun Katakami, Masaichiro Mizumaki, Masato Okada
Therefore, we applied Bayesian inference-based data analysis using the exchange Monte Carlo method to realize a sequential experimental design with general parametric models.
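The exchange Monte Carlo method (also known as replica exchange or parallel tempering) runs several chains at different inverse temperatures and occasionally swaps neighbouring replicas, letting the cold chain escape local modes. A minimal sketch, assuming a toy bimodal target, temperature ladder, and step size that are illustrative only and not the paper's setup:

```python
import numpy as np

def log_target(x):
    # Toy bimodal "posterior": mixture of two unit Gaussians (illustrative only).
    return np.logaddexp(-0.5 * (x - 3.0) ** 2, -0.5 * (x + 3.0) ** 2)

def exchange_mc(n_steps=5000, betas=(0.1, 0.3, 0.6, 1.0), step=1.0, seed=0):
    rng = np.random.default_rng(seed)
    betas = np.asarray(betas)            # inverse temperatures, hot -> cold
    x = rng.normal(size=len(betas))      # one chain per temperature
    samples = np.empty(n_steps)
    for t in range(n_steps):
        # Metropolis update within each replica at its own temperature.
        prop = x + step * rng.normal(size=len(betas))
        log_acc = betas * (log_target(prop) - log_target(x))
        accept = np.log(rng.random(len(betas))) < log_acc
        x[accept] = prop[accept]
        # Exchange attempt between a random adjacent pair of replicas.
        i = rng.integers(len(betas) - 1)
        log_r = (betas[i] - betas[i + 1]) * (log_target(x[i + 1]) - log_target(x[i]))
        if np.log(rng.random()) < log_r:
            x[i], x[i + 1] = x[i + 1], x[i]
        samples[t] = x[-1]               # record only the beta = 1 (cold) chain
    return samples

samples = exchange_mc()
```

The hot chains cross the barrier between the two modes easily, and swaps propagate those crossings down to the cold chain, so `samples` covers both modes where a single Metropolis chain would often stay trapped in one.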
no code implementations • 16 May 2021 • Haruka Asanuma, Shiro Takagi, Yoshihiro Nagano, Yuki Yoshida, Yasuhiko Igarashi, Masato Okada
Teacher-student learning is a framework in which we introduce two neural networks: one acts as the target function in supervised learning (the teacher), and the other learns to approximate it (the student).
1 code implementation • NeurIPS 2019 • Yuki Yoshida, Masato Okada
The plateau phenomenon, wherein the loss value stops decreasing during the process of learning, has been reported by various researchers.
no code implementations • 25 Sep 2019 • Shiro Takagi, Yoshihiro Nagano, Yuki Yoshida, Masato Okada
Model-agnostic meta-learning (MAML) is known as a powerful meta-learning method.
no code implementations • 25 Sep 2019 • Yoshihiro Nagano, Shiro Takagi, Yuki Yoshida, Masato Okada
The local learning approach extracts semantic representations for these datasets by training the embedding model from scratch for each local neighborhood.
no code implementations • 11 Dec 2018 • Kenji Nagata, Yoh-ichi Mototake, Rei Muraoka, Takehiko Sasaki, Masato Okada
Since the measurement time is strongly related to the signal-to-noise ratio under the Poisson noise model, Bayesian measurement with a Poisson noise model enables us to clarify the relationship between the measurement time and the limit of estimation.
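The stated link between measurement time and signal-to-noise ratio follows directly from Poisson statistics: counts accumulated over time t have mean λt and standard deviation √(λt), so SNR = √(λt) and doubling the measurement time improves SNR only by √2. A minimal numerical check (the rate λ and time t below are arbitrary illustrative values):

```python
import numpy as np

def poisson_snr(lam, t):
    # For Poisson counts with rate lam over time t: mean = lam * t,
    # std = sqrt(lam * t), hence SNR = mean / std = sqrt(lam * t).
    return np.sqrt(lam * t)

rng = np.random.default_rng(0)
lam, t = 50.0, 4.0
counts = rng.poisson(lam * t, size=100_000)
empirical_snr = counts.mean() / counts.std()
# empirical_snr should be close to the theoretical sqrt(lam * t)
```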
no code implementations • 29 May 2018 • Tomoyuki Obuchi, Yoshinori Nakanishi-Ohno, Masato Okada, Yoshiyuki Kabashima
The analysis is conducted by evaluating the entropy, i.e., the exponential rate of the number of combinations of variables that give a specific value of fit error to given data, which are assumed to be generated from a linear process using the design matrix.
no code implementations • 12 Dec 2017 • Yoshihiro Nagano, Ryo Karakida, Masato Okada
Our study demonstrated that the transient dynamics of inference first approach a concept and then move close to a memory.
no code implementations • 7 Jul 2017 • Yasuhiko Igarashi, Hikaru Takenaka, Yoshinori Nakanishi-Ohno, Makoto Uemura, Shiro Ikeda, Masato Okada
By collecting the results of exhaustively computing ES-K, various approximate methods for selecting sparse variables can be summarized in terms of a density of states.
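ES-K here denotes exhaustive search over all K-variable subsets. A minimal sketch of that idea, assuming ordinary least squares as the fitter and a synthetic dataset that is purely illustrative (not the authors' code or data): every K-column subset is fitted, and the resulting fit errors are histogrammed into a density of states.

```python
import itertools
import numpy as np

def es_k_density_of_states(X, y, K, bins=20):
    """Exhaustively fit every K-column subset of X by least squares and
    histogram the resulting mean squared fit errors (a density of states)."""
    errors = []
    for subset in itertools.combinations(range(X.shape[1]), K):
        coef, *_ = np.linalg.lstsq(X[:, subset], y, rcond=None)
        resid = y - X[:, subset] @ coef
        errors.append(resid @ resid / len(y))   # mean squared fit error
    counts, edges = np.histogram(errors, bins=bins)
    return np.array(errors), counts, edges

rng = np.random.default_rng(0)
n, p, K = 50, 10, 3
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:K] = [2.0, -1.5, 1.0]                # only 3 variables are active
y = X @ beta_true + 0.1 * rng.normal(size=n)
errors, counts, edges = es_k_density_of_states(X, y, K)
```

Because `itertools.combinations` enumerates subsets in lexicographic order, the true support {0, 1, 2} is the first subset, and it attains the minimum fit error; the histogram over all C(p, K) errors is the density of states against which approximate selection methods can be compared.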
no code implementations • 20 Jun 2017 • Kazuyuki Hara, Kentaro Katahira, Masato Okada
The value of the objective function for an unperturbed output is called a baseline.
no code implementations • 26 Jul 2016 • Satoru Tokuda, Kenji Nagata, Masato Okada
The heuristic identification of peaks from noisy complex spectra often leads to misunderstanding of the physical and chemical properties of matter.
no code implementations • NeurIPS 2012 • Hiroki Terashima, Masato Okada
The computational modelling of the primary auditory cortex (A1) has been less fruitful than that of the primary visual cortex (V1) due to the less organized properties of A1.
no code implementations • NeurIPS 2010 • Ken Takiyama, Masato Okada
Synthetic data analysis reveals the high performance of our algorithm in estimating state transitions, the number of neural states, and nonstationary firing rates compared to previous methods.
no code implementations • NeurIPS 2010 • Kentaro Katahira, Kazuo Okanoya, Masato Okada
Loewenstein & Seung (2006) demonstrated that matching behavior is a steady state of learning in neural networks if the synaptic weights change proportionally to the covariance between reward and neural activities.
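The covariance-based update can be sketched on a toy two-choice bandit. This is a hedged illustration, not Loewenstein & Seung's exact model: the softmax choice rule, Bernoulli rewards with fixed probabilities, and running-average reward baseline are assumptions made for the sketch.

```python
import numpy as np

def covariance_rule_bandit(q=(0.8, 0.2), n_trials=5000, eta=0.1, seed=0):
    rng = np.random.default_rng(seed)
    w = np.zeros(2)        # synaptic weights, one per action
    r_bar = 0.0            # running average reward (baseline)
    for _ in range(n_trials):
        p = np.exp(w - w.max()); p /= p.sum()          # choice probabilities
        a = np.zeros(2); a[rng.choice(2, p=p)] = 1.0   # one-hot neural activity
        r = float(rng.random() < q[int(a.argmax())])   # Bernoulli reward
        # Covariance rule: weight change proportional to the product of the
        # reward fluctuation (r - r_bar) and the activity fluctuation (a - p),
        # i.e., a sample of the reward-activity covariance.
        w += eta * (r - r_bar) * (a - p)
        r_bar += 0.05 * (r - r_bar)                    # slowly track mean reward
    return p

p_final = covariance_rule_bandit()
```

With fixed per-choice reward probabilities as assumed here, the covariance rule drives choice strongly toward the richer option; reproducing the matching law proper would require concurrent variable-interval schedules as in the original experiments.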
no code implementations • NeurIPS 2008 • Masafumi Oizumi, Toshiyuki Ishii, Kazuya Ishibashi, Toshihiko Hosoya, Masato Okada
Then, we compute how much information is lost when information is decoded using the simplified models, i.e., "mismatched decoders".