no code implementations • 7 Feb 2021 • Zhen Liang, Rushuang Zhou, Li Zhang, Linling Li, Gan Huang, Zhiguo Zhang, Shin Ishii
The deep, low-dimensional features extracted by EEGFuseNet are carefully evaluated in an unsupervised emotion recognition application on three public emotion databases.
no code implementations • 2 Aug 2019 • Henrik Skibbe, Akiya Watakabe, Ken Nakae, Carlos Enrique Gutierrez, Hiromichi Tsukada, Junichi Hata, Takashi Kawase, Rui Gong, Alexander Woodward, Kenji Doya, Hideyuki Okano, Tetsuo Yamamori, Shin Ishii
Understanding the connectivity in the brain is an important prerequisite for understanding how the brain processes information.
1 code implementation • bioRxiv Neuroscience 2019 • Hidetoshi Urakubo, Torsten Bullmann, Yoshiyuki Kubota, Shigeyuki Oba, Shin Ishii
The spatial scale of the 3D reconstruction grows rapidly owing to deep neural networks (DNNs) that enable automated image segmentation.
no code implementations • ICLR 2018 • Sotetsu Koyamada, Yuta Kikuchi, Atsunori Kanemura, Shin-ichi Maeda, Shin Ishii
Neural sequence generation is commonly approached by using maximum-likelihood (ML) estimation or reinforcement learning (RL).
no code implementations • CVPR 2018 • Kourosh Meshgi, Shigeyuki Oba, Shin Ishii
To remove this redundancy and achieve effective ensemble learning, it is critical for the committee to include consistent hypotheses that differ from one another, covering the version space with minimal overlap.
1 code implementation • 30 Jun 2017 • Sotetsu Koyamada, Yuta Kikuchi, Atsunori Kanemura, Shin-ichi Maeda, Shin Ishii
We propose a new neural sequence model training method in which the objective function is defined by $\alpha$-divergence.
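The α-divergence family interpolates between the two KL directions that underlie ML and RL training. As an illustration only (the paper's exact training objective is not reproduced here), a minimal numpy sketch of Amari's α-divergence between two discrete distributions:

```python
import numpy as np

def alpha_divergence(p, q, alpha, tiny=1e-12):
    """Amari alpha-divergence between discrete distributions p and q.

    D_alpha(p||q) = (1 - sum_i p_i^alpha * q_i^(1-alpha)) / (alpha * (1-alpha)).
    As alpha -> 1 it approaches KL(p||q); as alpha -> 0, KL(q||p).
    Illustrative only; the paper's objective applies the divergence to
    sequence models, which is not shown here.
    """
    p = np.clip(np.asarray(p, dtype=float), tiny, None)
    q = np.clip(np.asarray(q, dtype=float), tiny, None)
    if np.isclose(alpha, 1.0):
        return float(np.sum(p * np.log(p / q)))   # KL(p||q) limit
    if np.isclose(alpha, 0.0):
        return float(np.sum(q * np.log(q / p)))   # KL(q||p) limit
    return float((1.0 - np.sum(p**alpha * q**(1.0 - alpha)))
                 / (alpha * (1.0 - alpha)))
```

Varying alpha between 0 and 1 moves the divergence continuously between the two KL directions, which is the sense in which such an objective can interpolate between ML-style and RL-style training.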
no code implementations • 28 Apr 2017 • Kourosh Meshgi, Maryam Sadat Mirzaei, Shigeyuki Oba, Shin Ishii
However, when the whole ensemble is updated with a shared set of samples and their final labels, this diversity is lost, or reduced to that provided by the underlying features or the internal classifiers' dynamics.
12 code implementations • 13 Apr 2017 • Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Shin Ishii
In our experiments, we applied VAT to supervised and semi-supervised learning tasks on multiple benchmark datasets.
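VAT regularizes a model by penalizing the change in its output distribution under a small adversarial perturbation of the input. A minimal numpy sketch for a logistic model, using finite differences in place of backpropagation; the function names and hyperparameter values here are illustrative, not the paper's:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def kl_bernoulli(p, q, tiny=1e-12):
    # KL divergence between Bernoulli(p) and Bernoulli(q)
    p = np.clip(p, tiny, 1 - tiny)
    q = np.clip(q, tiny, 1 - tiny)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def vat_loss(w, x, epsilon=0.1, xi=1e-2, h=1e-4, n_power=1, rng=None):
    """Virtual adversarial loss for a logistic model p(y=1|x) = sigmoid(w.x).

    The adversarial direction is found by power iteration on the KL
    divergence, with finite differences standing in for backprop.
    epsilon, xi, h, and n_power are illustrative values.
    """
    rng = rng or np.random.default_rng(0)
    p = sigmoid(w @ x)                  # prediction at the clean input
    d = rng.standard_normal(len(x))     # random initial direction
    d /= np.linalg.norm(d)
    for _ in range(n_power):
        # finite-difference gradient of d -> KL(p || p(x + xi*d))
        base = kl_bernoulli(p, sigmoid(w @ (x + xi * d)))
        grad = np.zeros_like(d)
        for i in range(len(d)):
            d_h = d.copy()
            d_h[i] += h
            grad[i] = (kl_bernoulli(p, sigmoid(w @ (x + xi * d_h))) - base) / h
        norm = np.linalg.norm(grad)
        if norm > 0:
            d = grad / norm             # power-iteration update
    r_adv = epsilon * d                 # virtual adversarial perturbation
    return float(kl_bernoulli(p, sigmoid(w @ (x + r_adv))))
```

The loss is "virtual" in that it needs no labels, only the model's own predictions, which is what makes it usable for the semi-supervised setting.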
no code implementations • 2 Apr 2017 • Kourosh Meshgi, Shigeyuki Oba, Shin Ishii
To cope with variations of the target shape and appearance, the classifier is updated online with different samples of the target and the background.
no code implementations • 31 Mar 2017 • Kourosh Meshgi, Maryam Sadat Mirzaei, Shigeyuki Oba, Shin Ishii
We also introduce a budgeting mechanism which prevents the unbounded growth in the number of examples in the first detector to maintain its rapid response.
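As an illustration of a budgeting mechanism of this kind, a minimal sketch that caps the example store at a fixed budget; FIFO removal is an assumption here, and the paper's actual removal criterion may differ:

```python
from collections import deque

class BudgetedExampleStore:
    """Fixed-budget example store: once the budget is reached, adding a
    new example discards the oldest one, so lookup cost stays bounded.
    (FIFO eviction is an illustrative choice, not the paper's criterion.)
    """

    def __init__(self, budget):
        self.examples = deque(maxlen=budget)  # deque drops the oldest item

    def add(self, example):
        self.examples.append(example)

    def __len__(self):
        return len(self.examples)
```

Because membership never exceeds the budget, the detector's per-frame cost stays constant regardless of how long tracking runs.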
5 code implementations • 2 Jul 2015 • Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, Shin Ishii
We propose local distributional smoothness (LDS), a new notion of smoothness for statistical models that can be used as a regularization term to promote the smoothness of the model distribution.
no code implementations • 31 Jan 2015 • Sotetsu Koyamada, Yumi Shikauchi, Ken Nakae, Masanori Koyama, Shin Ishii
Our PSA successfully visualized the subject-independent features contributing to the subject-transferability of the trained decoder.
no code implementations • 21 Dec 2014 • Sotetsu Koyamada, Masanori Koyama, Ken Nakae, Shin Ishii
We then visualize the PSMs to demonstrate the PSA's ability to decompose the knowledge acquired by the trained classifiers.